The Breakthrough Hiring Show: Recruiting and Talent Acquisition Conversations
Welcome to The Breakthrough Hiring Show! We are on a mission to help leaders make hiring a competitive advantage.
Join our host, James Mackey, and guests as they discuss various topics, with episodes ranging from high-level thought leadership to the tactical implementation of process and technology.
You will learn how to:
- Shift your team’s culture to a talent-first organization.
- Develop a step-by-step guide to hiring and empowering top talent.
- Leverage data, process, and technology to achieve hiring success.
Thank you to our sponsor, SecureVision, for making this show possible!
EP 162: Leveraging AI for High-Volume Hiring with Steve Bartel, CEO of Gem
James Mackey and Steve Bartel explore how AI is transforming high-volume hiring. Learn how companies can use AI to enhance sourcing, streamline processes, and navigate compliance while balancing innovation with human oversight.
Our host James Mackey
Follow us:
https://www.linkedin.com/company/82436841/
#1 Rated Embedded Recruitment Firm on G2!
https://www.g2.com/products/securevision/reviews
Thanks for listening!
Hey, welcome to the Breakthrough Hiring Show. I'm your host, James Mackey. We've got Steve Bartel back with us today, founder and CEO of Gem. What's going on, Steve?
Speaker 2:Hey, it's great to be here. Thanks for having me, James.
Speaker 1:Yeah, of course, looking forward to today. As always, we have a lot of fun recording. So, to start us off, we wanted to talk about AI applications in hiring, as you put it: what's real and what's not, what's actually driving value. I think particularly since LLMs came out, I guess ChatGPT. It's been a couple of years, right? Has it been over two years?
Speaker 2:Yeah, it's been probably just a little over two years now.
Speaker 1:That's wild. Time doesn't even make sense to me anymore. I can't tell if something's been nine months or two years. It's all a blur.
Speaker 1:In tech, everything just moves so fast, right? Yeah, I guess it has been a couple of years. But anyways, I think we initially saw a fair amount of companies get funding, or startups pop up, even bootstrapped ones, with some use cases that probably weren't as valuable as folks initially thought they would be, and you see some of those companies just disappear, right?
Speaker 1:Really, they might have even been more of a product feature, not necessarily a full point solution, and certainly not a platform like Gem is creating. So I think this is a great place to start: let's talk about different AI use cases in hiring and try to share some insight with our audience on what we feel is most valuable versus what is not really going to help companies achieve their goals, what might be more of a headache to implement than it's worth, or what is really just a small feature that's nice to have as part of a bigger product purchase. So maybe you could start us off here. What are your thoughts on this topic?
Speaker 2:Yeah, absolutely, and this is like a super timely and important topic in terms of what's real and what's not with AI, because so many folks are looking to deploy AI, they've realized that the technology has gotten a lot better, but it's not entirely clear, like, which use cases are real and which ones aren't. And, to your point, like, there've been a lot of things that have looked pretty promising but maybe have fallen short, but there's also dozens more that are popping up every single week, right, and so it's really hard to separate what's real from the noise. Maybe, zooming out a little bit, a lot of industries, when there's major tech disruptions, they go through what Gartner calls the hype cycle. Have you heard of this one, james? The hype cycle.
Speaker 1:I have not heard of that.
Speaker 2:Essentially it's this idea that markets enter four different phases when it comes to really disruptive technology. And the first phase is what's called the peak of inflated expectations and that's when excitement's running really high. People are really excited about all the different use cases but aren't really sure which ones are going to be real or not. It isn't really backed by actual customers having driven value over an extended period of time. So that's where we are today.
Speaker 2:In my mind it's hard to separate what's real from the noise, and unfortunately almost every major technology disruption from there enters the trough of disillusionment, because buyers and the market hit a phase where all of those lofty expectations (they've been sold dozens of different things from all different angles) aren't met, and the market perception of the new technology pivots to be overly negative compared to the overly positive hype of the inflated-expectations part of the cycle. Gradually things start to rebound, in what's called the slope of enlightenment, as the products get better, the use cases get more defined, and the industry really understands what's real and what's not, and then things eventually level off in a much better place at the end of all this. But I would say we're at that peak-of-inflated-expectations moment, where people are so excited about the possibilities but it's not 100% clear which of the use cases will be real. Does that resonate in terms of how these markets go through cycles?
Speaker 1:That definitely resonates with me. It's nice to hear it laid out; it sounds very intuitive and it makes sense. I don't know if I'd really spent time thinking it through, but yes, it's certainly aligned with what I'm seeing in the market. At this point I'm seeing teams seriously considering implementing AI into their workflows in 2025, and I'm starting to see more of those conversations occur in the past few weeks or so, talking with teams as they're doing planning. So maybe we're entering that next phase, the slope of enlightenment. I don't know, maybe not.
Speaker 1:But I am also starting to see folks overcorrect, thinking, oh, AI is really maybe not going to give us a lift, or this is really just note-taking.
Speaker 1:It seems like all we're doing is organizing feedback and summaries, and I think recruiting people in particular are getting a little bit like, okay, this is just another AI note-taker, and that was basically it.
Speaker 1:People just felt like the market was flooded with that kind of content creation, which of course has its place, but it was flooded with it. Now, though, people are starting to dial in, and some of the top organizations, maybe not across the entire market, are starting to identify the use cases that make sense for their business. Maybe that's also what makes this a more nuanced conversation, and I think we can get into our products later on: based on the business, the same use case may not be valuable for two different companies. So I think it's also been a learning curve, not only on which use cases are real, but on which use case is actually valuable for an individual business, and there's a lot more nuance there than folks, including myself, maybe realized.
Speaker 1:I thought, okay, where's the most disrupted place going to be across every type of company? And I don't think that's actually the case anymore. I think the top use case for a growth-stage SaaS startup is going to be very different from the top use case for a staffing company.
Speaker 2:Yeah, totally agreed, a hundred percent, because these different types of customers need different things and care about different things. So it's interesting in terms of what's real and what's not. I think there's just been this proliferation of tons of different companies doing things with AI.
Speaker 2:I've tried to think deeply about how, as a buyer, you could understand at a glance whether something's going to work or not, and I have a few litmus tests that I think are really helpful. The first: the thing that's gotten a lot better with LLM technology is anything text-based, and so there are a lot of applications, actually a lot of really exciting ones, in recruiting as it relates to things that involve text. The first and most obvious place is resumes, right? Because at the end of the day, somebody's resume or online profile, their experience, their education history, the things they've worked on, all of that is just text, and so there's a ton of really great applications on top of resumes. We can dive into those in a sec.
Speaker 2:But I think the other place that's just really rich when it comes to text and human language is the interface between companies and candidates: the messages that are sent, but also, maybe even more excitingly, the conversations that are being had. I do think that's why a lot of the AI note-takers have spun up, and they are driving real value. But taking that a step further, there's probably something pretty interesting to be done in the conversational interface between recruiting teams and candidates, maybe even assessments and things like that. So the interesting thing is, in terms of use cases being real: if a use case in recruiting doesn't involve natural language or text, there actually isn't anything that's fundamentally changed about the technology that would make those other AI applications 10x better than they were two years ago. Does that make sense as an easy litmus test for which use cases could be real?
Speaker 1:Yeah, for sure. Cool. What I wanted to dial in on, too: I think you mentioned a product or feature that you recently released at Gem that incorporates AI as well, right? You talked about a couple; which one was the first one we discussed?
Speaker 2:Yeah, totally. So we have three awesome new products leveraging LLMs and the next generation of AI technology, all focused on use cases that have to do with resumes. Essentially, we've built a matching and ranking layer leveraging the new LLMs, where recruiters, hiring managers, and others can enter the criteria they care about for a role, and then we use the new LLM technology to match and rank resumes against that criteria. We're applying that to a few different, really valuable use cases, and that's where those three products come in, but all of it is built on that same underlying matching and ranking technology, built on top of the new advancements in LLMs. The first product we launched, back in the summer, was around AI sourcing: you tell us what criteria you care about for a role, and a sourcing bot gets to work for you, scanning the hundreds of millions of public profiles out there and surfacing folks who could be a good fit for the role you opened up. It's super complementary to all of the amazing sourcing and CRM technology Gem had already built, so it felt like a really natural place for us to start. Then we took that same technology and applied it to your inbound applicants. Again, tell us the objective criteria you care about for a role; better yet, we'll ingest your job description and the spec doc you've maybe collaborated on with a hiring manager, and we'll create that criteria for you as a first pass, using best practices embedded directly into the AI. Then the AI helps you identify the applicants who might be the best fit, so you can get to them first. This is really impactful because something like 20% of our customers have thousands of applications for a single role, and we've seen inbound applications increase more than 3x year over year across our customer base. That's a tremendous amount of inbound for companies to deal with, especially given that so many teams these days are being asked to do more with less.
Speaker 2:The third thing, coming in the next few months for us, is that same AI technology applied to candidate rediscovery on top of your ATS and on top of your CRM. And when you combine all three of these together, the vision is that for every role you open up, you enter that clear, objective criteria once.
Speaker 2:AI will get to work immediately on your current inbound applications for that role. Then, as soon as you've exhausted that inbound queue, it'll go and find the people who could be a good fit from your CRM and your ATS. And then finally, if you still need help filling that role, an AI sourcing bot will get to work for you to find external candidates. That's how all these things come together. The great thing is you configure that criteria once and it works across all the different channels, and I think it's going to have a huge impact in terms of helping companies fill those roles faster and with the most qualified candidates.
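To make that flow concrete, here's a minimal sketch of the configure-once, cascade-across-channels idea being described. Everything here (the helper names, the keyword stand-in for the LLM scorer, the 0.7 threshold) is an illustrative assumption, not Gem's actual implementation:

```python
# Hypothetical sketch of "enter criteria once, apply across channels".
# In a real system, score_candidate would be an LLM call looking for
# evidence of each criterion in the resume; a keyword check stands in here.

ROLE_CRITERIA = {
    "must_have": ["python", "backend"],   # weighted more heavily
    "nice_to_have": ["startup"],
}

def score_candidate(resume_text: str, criteria: dict) -> float:
    text = resume_text.lower()
    must_hits = sum(kw in text for kw in criteria["must_have"])
    nice_hits = sum(kw in text for kw in criteria["nice_to_have"])
    # Must-haves count double, mirroring the must-have/nice-to-have split.
    total = 2 * len(criteria["must_have"]) + len(criteria["nice_to_have"])
    return (2 * must_hits + nice_hits) / total if total else 0.0

def shortlist(candidates: list[str], threshold: float = 0.7):
    scored = [(score_candidate(c, ROLE_CRITERIA), c) for c in candidates]
    return sorted((sc for sc in scored if sc[0] >= threshold), reverse=True)

# Channels are worked in order: inbound applicants, then ATS/CRM
# rediscovery, then external sourcing, stopping once a channel delivers.
inbound = ["Python backend engineer at a startup", "Retail associate"]
rediscovered = ["Backend engineer, Python and Go"]
sourced = ["Frontend developer, React"]

for name, channel in [("inbound", inbound),
                      ("rediscovery", rediscovered),
                      ("sourcing", sourced)]:
    matches = shortlist(channel)
    if matches:
        print(f"{name}: {matches}")
        break
```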
Speaker 1:Yeah, that's pretty cool, and the one that resonates the most with me is the inbound applicants, because that's exactly where we're growing. As we've talked about offline, I'm founding a recruiting tech company called June, and essentially what June does is help companies hire faster by screening all inbound applicants, turning high-volume applicant pools into qualified candidate shortlists. That's primarily in high-volume environments: staffing companies, and we think there might be applications for large enterprise companies in retail, consumer goods, and manufacturing. Anywhere you see high-volume inbound applications could probably leverage this solution, but we're really dialing into staffing, and we definitely see that as an area with an opportunity for disruption, given the amount of inbound applicants and the fact that a lot of companies simply can't review all of them. So I like that use case a lot. I've been hosting an AI-for-hiring series on the show over the past several months, speaking with folks who are building different AI use cases, and this one in particular, managing inbound candidate flow and giving every candidate the opportunity to be heard and evaluated in a consistent, unbiased way, is really exciting. Just the sheer number of hours it would take to screen every inbound applicant makes it impossible.
Speaker 1:Or the sheer number of recruiters or sourcers it would take, and the cost associated with that; it just isn't possible. And particularly for high-volume candidate pools, depending on the role, it may not be clear from the resume whether or not someone's a fit. They may not have a resume that outlines everything they have experience with, and then there are other people who stack the resume with keywords.
Speaker 1:Yeah, and that's what June, what we're building, gets to the bottom of. It clarifies that, so hiring teams can really dial into the resumes of folks who are essentially already qualified. But yeah, I think it's a really interesting use case in general, inbound applications and how to leverage AI to make that a much better experience.
Speaker 2:Yeah, totally, and if I can just expand on that: it really sounds like what we've built is a lot more focused on the in-house recruiter pain points for knowledge worker hiring, tech hiring, large enterprise hiring.
Speaker 2:What you're focused on sounds uniquely valuable, both for higher-volume roles and for staffing agencies. I think one of the key distinctions is that with Gem, we're helping identify the folks from the resume who could be the strongest fit, so recruiting teams can go focus on those folks. For knowledge work hiring, for tech hiring, for large enterprises hiring a bunch of knowledge workers, a lot of those resumes are pretty built out, so you have a lot of the information you need, but the actual assessment of whether that person's a good fit, today at least, might be better handled by a human hopping on a call. When you're trying to understand and assess those candidates, recruiting tends to be way more high touch, and understanding the complexity of their background can be a bit more nuanced for some of these knowledge worker and tech roles.
Speaker 2:Now, for higher-volume roles, it does feel like there's a big opportunity to take that a step further and leverage AI in the actual assessment piece, which it sounds like y'all are focused on, and that makes a lot of sense to me.
Speaker 1:Yeah, I think it's applicant pools where you have thousands of people applying and not necessarily the most clarity from resumes. But also, there are industries and reqs out there where literally 75% of applicants are not even being viewed. And I love the use case for staffing specifically because it puts June in a revenue budget bucket. In talent acquisition and HR tech, we're often put into a cost bucket, and we have to fight our way out of the perception that it's just a cost center, an overhead center, by explaining the value of people being your biggest asset and the biggest investment you're making. When you're making, in most cases for our customers, literally eight-figure investments in payroll and people, that's a massive investment, and if you want the best ROI on it, you should have the best recruiters and the best technology. We make those arguments and folks get it. But what's cool about June is, again, when staffing companies make more placements, they make more money, right? So I love that.
Speaker 1:I love that for June, because we've been talking with staffing agencies, and the ones that are really interested operate in high-volume industries, right, where it's just not possible for them to quickly look at all applications, or to look at resumes and truly identify who's worth having a conversation with. Or they're in a space where it's difficult to schedule, there's a lot of rescheduling, and then in the first five minutes of the screening call you ask your knockout questions and realize, okay, now I've got to jump off this call. For those types of use cases, people are really jumping on it.
Speaker 1:It's been pretty easy, honestly, to engage with staffing leaders, explain the value, and get them to move forward, and I think that's our primary go-to-market focus. But I do think there are some in-house applications, again where it's really high volume. That's actually one thing we should talk about from a product strategy perspective: how dialed in to be on staffing. I think that is our focus, but who knows, I could see it also for some in-house teams with really high-volume, entry-level, potentially blue-collar roles, right?
Speaker 2:Totally and I could see it for both.
Speaker 2:But I do love the angle that you're taking with staffing.
Speaker 2:To start, because those folks uniquely care about the bottom line. Like you said, maybe 75% of these folks aren't even getting reviewed, and it's not obvious which of them could be really killer fits, because so much of the time the resumes are sparse, so you don't even have much information to go on as to which folks you should be spending time with, right? So this vision of every single candidate getting an AI assessment or an AI interview, whether over text or AI voice, feels pretty compelling to me in the long term for the high-volume use case, where resumes are sparse, where you don't always know somebody's background until you hop on a call and ask those questions, and where the types of questions you need to ask tend to be more well-defined and scoped. And especially for staffing agencies, if you could just help turn a small percentage of that 75% who aren't being touched into placements, that's a ton of revenue for them.
Speaker 1:Yeah, and what's really exciting is that it's directly tied to revenue growth, and actually healthier margins in general. From a go-to-market perspective, we're going to focus more on top-line growth and value creation. But the reality is that services companies are valued on EBITDA, not just top-line growth. In fact, you'd rather have a company at $5 million that generates $1 million in EBITDA than a company at $10 million that generates $1 million in EBITDA (a 20% margin versus a 10% margin). I guess that's true of any company, but the emphasis with services is really on that margin and the consistency of that margin.
Speaker 1:And so what's cool is that, from a cost perspective, we can make it like 10x cheaper to manage all these screenings. It's so much cheaper than hiring screeners at that entry level for blue-collar or high-volume searches. There's still a need for a recruiting team a little further down the funnel, once folks are qualified, but it's decreasing that cost and increasing revenue, so it's creating a really nice spread for staffing companies, which traditionally have really low margins.
Speaker 1:And again, just because a staffing company has a lot more revenue doesn't necessarily mean it has a higher valuation or that it's actually making more money, because the more you grow a staffing agency, the more overhead you have, significantly more, because people are delivering the solution and people are managing people. There's a lot of management oversight that needs to occur, so it's very heavy in terms of org charts and expanding your leadership and management team. It's so cost-burdensome that staffing companies really have to look at how to limit costs as much as possible. So I love that it feeds into both of those needs, right? Talking about budget and cost is probably more important to staffing decision makers, maybe even more so than in SaaS, right?
Speaker 2:Yeah, totally. James, I'm curious, and I'm sure y'all run into this a lot; this is certainly top of mind for our customers as it relates to AI. Everybody realizes they need it, and they're starting to figure out which of these use cases are real and which ones aren't. The next thing on a buyer's mind is the shifting regulations and compliance. How do you think about that? What do you see happening there?
Speaker 1:Yeah, so we actually had a guest on the show a couple of episodes ago, Wes Windler, the founder and CEO of Woven. Woven is actually interesting because it's a combination of technology and services, and it sounds like it's somewhat services-heavy to deliver their solution. But what I found interesting is that Wes had a take on how to navigate regulation as a company incorporating AI into the hiring process, both for products and for potential customers. Basically, what he was saying is that we have to get clear on how we define evaluation and AI evaluations, because there's nuance in the definition, and we have to be really clear on it. Is it considered evaluation if the hiring team puts together the role requirements and all the AI is doing is matching somebody's background to those requirements? Or is it evaluation only if a hiring team says, I need to hire an engineer,
Speaker 1:and the AI puts together a job description and says, here's what you need. In that case it's defining the role requirements and then doing the match based on requirements it defined itself. So that's the interesting nuance: what are we actually considering AI evaluation, and from a regulatory perspective, how might that nuance impact how products are built? I thought that was an interesting conversation.
Speaker 2:Yeah, and it's a really interesting distinction. I'm just thinking about what you're building in your use case. Companies a lot of the time have very simple knockout questions, especially for higher-volume roles where you just might need a certain certification or a very specific skill to actually do the role, and companies require that. If somebody doesn't have that on their resume, and y'all are helping companies understand whether candidates match those very basic criteria defined by the hiring team, through conversational AI or an AI interviewer, you're not necessarily doing assessment in that world. You're just pulling out whether they match criteria the hiring team's already defined, right? At least to me, intuitively, that feels like it should be okay. It is a little bit of a gray area, but yeah, I can totally see where you're going with that one.
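As a rough illustration of that distinction, here's a minimal sketch of hiring-team-defined knockout checks, where the system only verifies answers against criteria the team supplied. The question list, answer format, and helper names are hypothetical, not June's actual product:

```python
# Hypothetical sketch: knockout questions defined by the hiring team,
# checked mechanically. The AI's only job upstream is collecting the
# answers (over text or voice); the pass/fail rules are the team's.

KNOCKOUTS = {
    "Do you have a current forklift certification?": True,
    "Can you work weekend shifts?": True,
}

def run_knockouts(answers: dict[str, bool]) -> tuple[bool, list[str]]:
    """Return (passed, failed_questions) for one candidate."""
    failed = [q for q, required in KNOCKOUTS.items()
              if answers.get(q) != required]
    return (not failed, failed)

candidate = {
    "Do you have a current forklift certification?": True,
    "Can you work weekend shifts?": False,
}

passed, failed = run_knockouts(candidate)
print("advance to recruiter call" if passed else f"knocked out on: {failed}")
```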
Speaker 1:Yeah, it's a nuance, and it's about being on the same page on a definition, making sure everybody's speaking the same language about what we're considering assessments and what we're considering an AI-driven decision. Is AI making a decision if it's just taking role requirements? In June's case, before we had really dialed in on the staffing high-volume use case, we thought, okay, maybe it would be valuable for somebody to just say, hey, I really need to hire an engineer for my startup, here's the project scope, and June pushes out a job description and comes up with custom questions that could be used for screening applicants.
Speaker 1:And we got away from that because, look, we're working with experts. Our customers know what they want to ask and they know the role requirements. They're not looking for a tool to teach them those things; they already know them with a lot more clarity and expertise than an LLM has, right? So now the shift is: the customers are giving us the role requirements and the questions, the knockout questions.
Speaker 1:Right, the questions they want asked, the ones that determine in five minutes whether you should continue on the screening call. It's eliminating all of those conversations where you get off in five minutes, and that's best for everyone, candidates and recruiters; there's just no point in having those conversations if there isn't a high-level fit, right? And I do think what I'm seeing is that, ultimately, evaluation is going to be when AI is defining what is important. That feels like evaluation, whereas if you're just pulling role requirements from a hiring team, an LLM is essentially doing more sophisticated and fluid matching, versus coming up with what is going to make somebody successful in a role. And you actually had a good point, too, in terms of being able to track how LLMs actually come to conclusions and how that's different from previous generations of AI. So maybe you could talk about that a little bit.
Speaker 2:Yeah, totally. I think one of the cool things about how we've built our matching and ranking leveraging LLMs is, first off, just like you're saying, recruiters and hiring teams have full control over what the criteria are based on their intimate knowledge of the role and what they're looking for. And so they enter their five to 10 pieces of criteria and they can even say, hey, these are must have pieces versus nice to have, so we'll score those more. And then all the LLM is doing in the background is looking for evidence on the resume that they match those criteria and coming up with a score based on how much of the criteria they match. The cool thing about that is all of that is transparent to the end user. So when we find those matches for you, we'll tell you this person is a 0.8 match and here's the breakdown of exactly the criteria they match.
Speaker 2:And we can even point to where and why they match on the resume. So that still gives recruiters full control over the criteria they're inputting, and full visibility into the algorithm and why it's making the decisions it is. Then if, for whatever reason, the recruiter assesses that this person isn't the right match, or that these other people are actually a stronger match, they can use that information to go adjust the criteria and make it better, again giving them full control. Taking that a step further, one of the cool things with LLMs is we can actually give you advice on the criteria and flag certain criteria that might be more biased, which recruiters might not even think about. So we can build in some best practices, run the criteria recruiters are entering through those best practices, and then offer guidance on the side as they're setting up their criteria. So ideally we do even better than what a recruiter would do on their own.
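Here's a sketch of that match-and-explain pattern: ask the LLM for a per-criterion verdict plus quoted resume evidence, then compute the overall score outside the model so the breakdown stays auditable. The prompt shape and the generate() callable are assumptions for illustration, not Gem's API:

```python
# Hypothetical sketch: per-criterion LLM verdicts with evidence, scored
# deterministically in code so the breakdown shown to recruiters is exact.

import json

PROMPT = """For each criterion below, say whether the resume shows evidence
for it and quote the supporting text. Reply as a JSON list of objects:
[{{"criterion": "...", "met": true, "evidence": "..."}}]

Criteria: {criteria}

Resume:
{resume}
"""

def match_report(resume: str, must_have: list[str], nice_to_have: list[str],
                 generate) -> dict:
    """generate() is any LLM completion function returning the JSON text."""
    # Must-haves weighted 2, nice-to-haves weighted 1, per the description.
    weights = {c: 2 for c in must_have} | {c: 1 for c in nice_to_have}
    raw = generate(PROMPT.format(criteria=list(weights), resume=resume))
    verdicts = json.loads(raw)
    earned = sum(weights.get(v["criterion"], 0) for v in verdicts if v["met"])
    score = round(earned / sum(weights.values()), 2)  # e.g. 0.8
    return {"score": score, "breakdown": verdicts}
```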
Speaker 1:That's really cool. That's actually not a use case I've specifically heard of: pulling in the requirements from the recruiting team and aligning them with a list of best practices that, I'm assuming, you're providing to the LLM, saying, hey, make sure to look out for these types of things. That's really cool.
Speaker 2:Yeah, exactly. So, two examples. We can ask LLMs, does this specific criterion risk introducing bias, and give some feedback as to why. Now, obviously, the recruiter still has final control over what they put in the criteria, and I think that's really important, but at least we can nudge them in the right direction if they might be introducing some risk there. We can also do things to help them set up more successful searches. For example, if they enter criteria that are subjective in nature instead of objective based on someone's resume, like "good collaborator" or something like that: how would anybody be able to tell that from a resume? We can guide them: hey, that's probably better asked and dug into in the initial recruiter screen than entered as a criterion for matching and ranking.
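And a sketch of that criteria-linting idea, checking each criterion for subjectivity and bias risk before a search runs. The rule lists and the optional ask_llm hook are illustrative assumptions, not Gem's implementation:

```python
# Hypothetical sketch: lint role criteria before matching runs. Keyword
# rules catch obvious cases; an optional LLM hook can catch subtler ones.

SUBJECTIVE = ("good collaborator", "culture fit", "team player")
BIAS_RISK = ("young", "native speaker", "recent graduate")

def lint_criterion(criterion: str, ask_llm=None) -> list[str]:
    warnings = []
    lowered = criterion.lower()
    if any(term in lowered for term in SUBJECTIVE):
        warnings.append("Subjective: hard to verify from a resume; "
                        "consider probing it in the recruiter screen instead.")
    if any(term in lowered for term in BIAS_RISK):
        warnings.append("Possible bias risk: review this criterion.")
    if ask_llm is not None:
        # e.g. "Could this hiring criterion introduce bias? Explain briefly."
        warnings.extend(ask_llm(criterion))
    return warnings

print(lint_criterion("good collaborator"))  # flagged as subjective
```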
Speaker 1:Yeah, and one other point, and we're running out of time here, but one more example of how use cases can differ significantly based on the type of organization. There's a lot more out there than AI note-takers; "interview intelligence" is what it's being called at this point. You have companies like BrightHire and Pillar that are essentially the AI copilot component. They do a lot more than that, of course, but they're essentially recording interviews on Zoom, putting together summaries, making sure any custom questions in the ATS are being addressed, and flagging when things are not addressed in an interview. It's really about making sure the interview process is thorough, rigorous, and consistent across the entire candidate base. And an example of how the use case can be significantly different: take a SaaS company that's hiring an enterprise account executive or a full-stack engineer.
Speaker 1:In a third-round interview they might have four candidates they're considering. You don't need AI doing an interview evaluation when you have a small candidate pool. Where you need the AI evaluation, or matching, or whatever you want to call it, is at the top of the funnel, right, when you have thousands of applicants. So again, it's what's real and what's not, and then the nuance of what's actually impactful for your business and the problem you're trying to solve.
Speaker 1:So in some cases what companies need is a BrightHire. In other cases they need help with inbound applicants, and sometimes that's more on the resume side, sometimes more on actually conducting screens. And it's just cool to see. In the conversations I've been having over the past month or so, a lot of buyers are becoming more sophisticated in understanding what's going to make sense for their business. There's still a big gap, and a lot of folks maybe aren't there yet, but I'm starting to have more of those conversations where people are really dialing in on what makes sense for their business, which is pretty cool.
Speaker 2:Yeah, totally. And just to round that all out, I do agree that depending on, A, the type of company you are, and, B, where you're looking to address problems in the funnel, you need a different approach. Down-funnel, once you get to the actual onsite interviews, humans interviewing candidates, especially for knowledge worker roles, that's where the BrightHires and Metaviews of the world can really shine. At the top of the funnel, matching and ranking, but also AI-assisted assessments like what you're talking about, especially on the higher-volume side, that feels like where those use cases really shine. So I do agree there's this nuance of what type of customer you are, at what point in the funnel you're looking to address a problem, and, from there, what's the best application of AI.
Speaker 1:As usual, we're right up on time; we go right up to the top of the hour, Steve. I just wanted to say thank you so much for joining me today on the show. It's always a lot of fun.
Speaker 2:Likewise. Thanks for having me.
Speaker 1:Yeah, of course. Everybody tuning in, thank you for joining us, and we'll talk to you real soon. Take care.