The Breakthrough Hiring Show: Recruiting and Talent Acquisition Conversations
Welcome to The Breakthrough Hiring Show! We are on a mission to help leaders make hiring a competitive advantage.
Join our host, James Mackey, and guests as they discuss various topics, with episodes ranging from high-level thought leadership to the tactical implementation of process and technology.
You will learn how to:
- Shift your team’s culture to a talent-first organization.
- Develop a step-by-step guide to hiring and empowering top talent.
- Leverage data, process, and technology to achieve hiring success.
Thank you to our sponsor, SecureVision, for making this show possible!
EP 152: Beyond Automation: Exploring AI's impact on hiring strategies with Babblebots.ai CEO, Roli Gupta
James Mackey, CEO of leading RPO provider SecureVision, and Elijah Elkins, CEO of Avoda, a highly rated global recruiting firm, welcome Babblebots.ai CEO Roli Gupta in our special series on AI for Hiring.
They explore how the Babblebots.ai tool efficiently interviews, screens, and assesses candidates, cutting down time-to-hire and simplifying talent acquisition across industries. They also address the complexities of salary negotiations across different regions, offering insights into how large language models manage these nuances. Additionally, they discuss the importance of structured yet adaptable interview frameworks for ensuring unbiased evaluations.
1:42 AI recruiter agent in action
12:06 Understanding context in salary conversations
16:52 Enhancing candidate interaction with AI
27:00 Optimizing early candidate assessment with AI
32:25 Innovative interview structure and future vision
Our host James Mackey
Follow us:
https://www.linkedin.com/company/82436841/
#1 Rated Embedded Recruitment Firm on G2!
https://www.g2.com/products/securevision/reviews
Thanks for listening!
Speaker 1:So anyways, Roli, welcome to the show. Thanks for joining us today.
Speaker 2:Thank you, guys, and thanks for inviting me. Very excited to be here.
Speaker 1:Yeah, we're really pumped to host you. Can we start by learning a little bit about you and the product? What are you building right now?
Speaker 2:Absolutely. I'm here representing Babblebots.ai. This is a generative AI plus voice product, and we're based out of Mumbai, India, so I'm very happy to be on the show and speaking to people worldwide. What we are building is basically an AI recruiter agent: an agent that allows companies to autonomously interview, screen, and assess their candidates very quickly at the top of the funnel and get them the best candidates really fast. Speed is definitely one of the key things we are going after, but it is really speed to the right talent; time to hire is one of the key metrics we optimize for. That's what we are doing, in a nutshell. We've been around for two and a half years, and we have a ton of customers in India and now some in the US and UK as well.
Speaker 1:Wow, that's awesome. Just for context, for our listeners and for us, because it'll help us ask questions too: could you tell us a little bit about your customers? Are they SMB, mid-market, or enterprise? What industries are they in, and what types of roles does your product help them hire for?
Speaker 2:Yeah, absolutely. Like a lot of startups in the early stages, we are still figuring out exactly which customer segment benefits the most. We have seen the product adopted everywhere from very small startups, like a one or two person company where it was just the founders and they didn't want to spend a lot of time onboarding even interns or junior engineers, up to the largest company we are working with, which is more than 10,000 people. So the spectrum of customers is very wide. We've worked with more than 100 customers so far, across that whole spectrum.
Speaker 2:One of the things I have found most magical, and I believe this even though we built the product, is that the same platform can be used to screen all kinds of roles. Sometimes I still think, wow, this is possible to do with AI now. Because we are using generative AI, and a lot of the foundational intelligence that has been built in the world, you actually don't need to be very narrow in what you use Babblebots for. We have seen it used for technical roles, non-technical roles, field engineers, sales, investor relations, engineering, design. At this point there are probably more than 500 different roles that our customers have used us for, and I'll honestly confess that some of those roles I didn't even know existed. Customers just created those interviews and conducted them, and I'm like, oh my god, I'm learning what is possible when you have powerful tools like these. So yeah, it is a very versatile, universal tool that can assist any recruiter in their top-of-the-funnel madness, honestly.
Speaker 1:Are you seeing interest primarily from the tech industry, or is there a specific industry interest that you see?
Speaker 2:Yeah, sure. Of course the tech industry is very interested to try anything new, but what I've also seen with the tech industry is that they've had assessment tools in the past, right? They've had coding challenges and other kinds of assessment tools, so for them it's an evolution of things they're already using. I think the customers who find it completely unique, the ones it's changing the game for, are people who are not coming from tech backgrounds, right? More traditional industries.
Speaker 2:We have customers from real estate, construction, insurance, finance and financial services, and I'm sure I'm forgetting some. We have customers from telecom, companies who support telecom towers in India, and of course staffing companies now as well. What we realized is that staffing companies also have not traditionally had a lot of tools for the spectrum of roles that they normally fill. Customers outside of pure tech seemed to really find this liberating, in how they can change their workflows and in fact bring some standardization to how they were doing things.
Speaker 1:That's awesome, thank you. Elijah, any thoughts right now? Follow-up questions?
Speaker 3:I was curious about some of the large language models that you're using. You mentioned they're really versatile and you can use them for a lot of different positions. Have you found any limitations where the large language models struggle with, I don't know, a certain industry or a certain type of position? Any little nuances where the LLMs still need to develop that knowledge set in niche areas, anything like that?
Speaker 2:Yeah, that's a really good question. What we have seen is that out of the box, these LLMs are about 80% good, pretty much across the board. But in our industry, 80% good is not good enough, because you're assessing people and it is a real opportunity for a person, right? It's not like a little video or a reel that you're putting out there and that can have errors. The expectation from customers is very high quality output, but out of the box you may only get about 80%.
Speaker 2:Our team includes a lot of annotators and people who are making sure that we are delivering the right sort of quality to the customer. So there is this path you have to traverse to make these outputs completely consumable by the customer, which includes a lot of annotation and making sure that speech-to-text, text-to-speech and all those pieces are doing what they're supposed to do. Having said that, even after all of that, we realized in one particular case that the customer had very hyperlocal requirements. They wanted candidates to know very specifically about a neighborhood, about the sort of shopkeepers in that particular neighborhood.
Speaker 2:So, for example, if you are running a bike showroom in your area, or even, let's say, a doctor's clinic, and you are hiring a medical representative and want to test somebody on local knowledge, LLMs are not very good, because that amount of local knowledge about that little neighborhood has simply not been published. So when it comes to hiring in very hyperlocal pockets, unless you are assessing somebody for very basic skills, you're going to see a limitation that I don't think LLMs will be able to address in the near future, or even the far future, because it's not worth having that level of detail in an LLM at any point of time.
Speaker 2:I'm not foreseeing that path very quickly. But for the same candidates, if you wanted to assess them on something a little more standardized that didn't need that hyperlocal knowledge, then it would be fine. It's that hyperlocal knowledge itself that is not there in LLMs.
Speaker 3:Do you think something like Google's Gemini, using Google listings and Google Maps and things like that, might be the one with the most local knowledge that could be integrated into an LLM and used as training data?
Speaker 2:It's a great idea. I think it is not so much just the information, though, that is missing, right?
Speaker 2:I think what's missing is that a lot of this knowledge about the nuance of a neighborhood is resident in people's minds. For example, in a particular area, a doctor may prefer to give a certain kind of medication to a patient because of the local nuances of that place, or maybe that's where they have a better reputation, while another doctor in the same area may prescribe something else for the same ailment. They're actually both right, because the patients are getting better, so there's no single right answer in that case. Some of this knowledge is very subjective. I wouldn't say it's a lack of judgment; it's that the bridge between the knowledge and the judgment is not written down anywhere in a very clear way. So can I see a pathway to that discovery? It looks a little unlikely.
Speaker 1:Yeah, okay. So one of the follow-up questions I have on that real quick: Elijah, I think it was on Ben's episode, the BrightHire one, where he was talking about 80% out of the box and then the 20% that he's working on.
Speaker 3:Was that him? I think it was Ben. Yeah, okay, nice.
Speaker 1:Yeah. So I'm curious to learn a little bit more about that, and I think our audience would be too. That 20% that just doesn't work immediately from leveraging an LLM like OpenAI's API: what is actually being done in that remaining 20%? How are companies actually training the LLM at that point? Where is the work essentially going?
Speaker 2:Yeah, I'll give you an example of what seems to be difficult to do with the high amount of accuracy that is needed. In India, you can ask what somebody's current salary is and what their expected salary is. This is a very simple question, asked in almost every interview everywhere in the world. They may not ask your current salary, because that is against regulation in the US and a lot of other places, but almost everybody wants to know the rate at which you are willing to work, full time or part time.
Speaker 2:Now, the way this information is shared by candidates is quite contextual, right? In India, people talk in lakhs per year; one lakh is 100,000. Not $100,000, just the number. So say somebody says, hey, I'm looking for something between three and four. In a human context, the recruiter will immediately know this person is talking about a salary between 3 and 4 lakhs per year. If the same person says, hey, I'm expecting a salary between 70 and 80,000, a human will again immediately know that this is a per-month salary they are looking for, because of the role. Now, there is actually nothing wrong with the LLM, because it's picking up exactly what the person is saying. But this is where the human context is just very wide: this recruiter has done hundreds of interviews, and they know what these numbers mean when they're used without any units.
Speaker 2:The LLMs are becoming smarter and picking these things up, but the accuracy level is not high enough. If you make an error on this, and the company shortlists only people within the budget it finds acceptable, then the candidate may miss out on a chance because of your error. Those kinds of errors are absolutely unacceptable. So there are these little models we are creating which have their own rules: okay, if a candidate in India is talking about a mid-level position, then 70 to 80 probably, or rather definitely, means a per-month salary in INR rather than a USD salary. These kinds of fine-tuning steps are how we get more accurate at talking to human beings the way they talk to each other, but they're not there out of the box today.
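To make the ambiguity concrete, a rule layer like the one Roli describes, sitting on top of the LLM's raw number extraction, could be sketched as follows. This is a hypothetical illustration: the function name, thresholds, and rules are invented for the example and are not Babblebots.ai's actual implementation.

```python
def normalize_salary_inr(low, high):
    """Interpret a unit-less salary range the way a local recruiter would.

    Hypothetical heuristics for a mid-level role in India:
    - Small numbers like "between 3 and 4" almost always mean
      lakhs per year (1 lakh = 100,000 INR).
    - Mid-sized numbers like "between 70 and 80 thousand" almost
      always mean INR per month, so annualize by multiplying by 12.
    Returns the inferred (low, high) annual range in INR.
    """
    LAKH = 100_000
    if high < 100:                       # "between 3 and 4" -> lakhs/year
        return (low * LAKH, high * LAKH)
    if 10_000 <= low < 200_000:          # "70 to 80 thousand" -> per month
        return (low * 12, high * 12)
    return (low, high)                   # already plausibly annual INR

print(normalize_salary_inr(3, 4))            # (300000, 400000)
print(normalize_salary_inr(70_000, 80_000))  # (840000, 960000)
```

The point of the sketch is that the disambiguation lives in deterministic rules keyed to country and role level, not in the LLM itself, which is why errors here can be caught and bounded.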
Speaker 1:That's super helpful, and this salary conversation is a great example. I honestly hadn't thought about that; it didn't even occur to me that it would be such a difficult question to ask leveraging an LLM. But you're absolutely right. There are probably a hundred different variations of ways somebody could answer that question. Maybe not that many, but still.
Speaker 2:That's right, and it feels so innocuous; everybody's asking that question in every job interaction. But it is surprising how many flavors it can take and what all people can mean.
Speaker 3:Yeah, and different countries too, right? There are other countries where a local recruiter would know that same context, but the LLM isn't going to pick it up.
Speaker 2:Yeah, I'll give you an example. It happened yesterday with one of our UK customers. We were testing something, and the person said they're looking for 2,000 pounds. The LLM picks it up as the weight: pounds, as in they're looking for 2,000 pounds of something. Oh my God. You just don't want to share that transcript with the customer. It's funny once, but if it happens too often, they won't like you.
Speaker 1:That's funny. Hey, so what's been your experience with language barriers, having the LLM operate in different languages? It sounds like you've worked with a lot of international customers. Is that pretty easy out of the box, or is it difficult? I'm wondering, too, if you're doing any kind of prompt engineering, setting parameters and whatnot, whether that starts to get jacked up in translation. It sounds like you're most likely working in several different languages, right? How is that going? Is that a big part of that 20% you need to refine, or is it actually pretty simple?
Speaker 2:No, actually. We have quite a few languages in deployment, and at this point we can probably do more than 20 languages at any point of time. So I think language per se is not an issue; a lot of the LLMs have solved it. Of course, if the corpus of a language is large, then the accuracy is high, and that's why English is so much more accurate than maybe any other language. But the big languages in India also have fairly large corpora, because we've had a lot of literature, movies, and other media that is all getting consumed in creating these LLMs. So the language itself is not a very big barrier.
Speaker 2:Surprisingly, many of the LLMs are very good at mixed language. They actually get that context quite quickly. For me personally, it was a surprise, because in any country where English is not the first language but people are working in some sort of white-collar environment, the languages are getting mixed now. There's always some smattering of English with your local language, its own flavor, its own chutney of language, appearing in a lot of places. We were not sure how well we would be able to dial this mixed flavor up or down, and it is actually the colloquial language in most places, but LLMs are quite good at it, at least with the popular languages. I don't want to overgeneralize, because we have not looked at every language; I don't know if French and English mix well together, though Spanish and English probably mix better.
Speaker 2:But our experience across Hindi and some of the South Indian languages has been pretty good for these mixed-language interactions. And by the way, I think that is one of the beauties of these AI recruiter agents: traditionally there was a barrier between the language the recruiter was comfortable speaking and the language the candidate could have been comfortable speaking, and this technology removes that barrier. If you are hiring, you can always give candidates the option to speak in a language they are more comfortable in and still represent themselves with as much confidence as English may not have given them otherwise. So this has actually opened up options for candidates to be a bit more authentic in these job interviews.
Speaker 1:That's super helpful. I was just taking some notes on that; I want to make sure to mention it in the description or title of the episode, because that's some really good insight for folks to think about: the language aspect, as well as what you mentioned about training the system on aspects related to salary and so on. There are a lot of companies coming out right now trying to add this type of functionality to their recruiting tech products, and I think it's something people are going to have to think about: how far along are companies in essentially training systems on these types of things? So that's really cool. Elijah, are you good to move on to a slightly different topic? I've got something in mind, but I want to make sure.
Speaker 3:Yeah, yeah, go ahead. I have a question I can ask later.
Speaker 1:Okay, cool. In terms of where to go next: it sounds like you're starting at the top of the funnel, with the initial screening conversation. Are your customers typically leveraging this before candidates speak with recruiters, or what's the workflow there?
Speaker 2:Yeah, I think if companies use it perfectly as designed, AI recruiter agents should be your first conversation with the candidate. Because it's a digital interaction that is almost human, you can actually talk to a lot more candidates now, and because candidates can speak to the agents whenever they are available, which is typically nights and weekends, everybody can speak to them. You are just not bound by time, bandwidth, or scheduling. So I feel the best use of this is when companies engage immediately, as soon as the candidate has shown interest. If you think about it, just like in sales, an inbound queue is always a little nicer than an outbound queue, right? Same thing here: for the candidates who looked for you and applied, because you have these AI agents, you can do an interview immediately, pretty much in real time. Imagine you are a candidate on a company's career page and you apply for a position of, I don't know, supply chain manager. Instead of uploading your resume, waiting for somebody to call you back, setting up a call, and going through that mechanism we've all been used to for the last 20 years, at least in my own memory, you can just talk to an AI recruiter right there and finish your first round of interaction with the company.
Speaker 2:It's also the place where the most drop-off happens. You speak to a hundred candidates but end up liking only 10, so effectively 90 of those conversations were not very productive, for various reasons. It could be something about the company, something about the candidate. It's not that they're always bad candidates; it just wasn't a good match for various reasons.
Speaker 2:So wherever there is the most wasted human effort is a good place to put something like this, because you also want to be fairly standardized at that layer and reduce any screening biases that may come into the picture, because you couldn't reach out to someone, or you just thought, hey, this resume doesn't look as great as I'd like. There are so many things; we could have a whole episode on just the little biases we all live with. People argue about this, but in this use case I think we will see more and more that AI agents actually make things more uniform for everyone.
Speaker 2:For example, we don't even tell you the gender, and I know I'm speaking to a US audience and gender itself has become a big topic, so I don't want to go very far into it, but in general we are not using voice to determine whether a person sounds male or female. We are not processing anybody's name to determine what their background could be. We are not using any location information they have given, like this person is from the Bay Area so maybe they're different from somebody from New York. We are not using any of this demographic data to assess a candidate. So I feel it's more democratic than a lot of other processes.
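A minimal sketch of the demographic-blind approach Roli describes might strip those fields from the candidate record before anything reaches the scoring step. The field names here are hypothetical, not Babblebots.ai's actual schema:

```python
# Fields that carry demographic signal (name, voice-derived gender,
# location) and should never reach the assessment model.
DEMOGRAPHIC_FIELDS = {"name", "gender", "voice_profile", "location"}

def redact_for_scoring(candidate):
    """Return a copy of the candidate record with demographic
    fields removed, so scoring sees only skills-related signal."""
    return {k: v for k, v in candidate.items()
            if k not in DEMOGRAPHIC_FIELDS}

candidate = {
    "name": "A. Candidate",
    "location": "Bay Area",
    "transcript": "interview answers here",
    "skill_scores": {"python": 4, "sql": 3},
}
print(sorted(redact_for_scoring(candidate)))  # ['skill_scores', 'transcript']
```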
Speaker 1:Yeah, that makes a lot of sense, yeah.
Speaker 3:Do you?
Speaker 3:see yeah, I was just curious. Do you think that these voice agents are going to be more readily accepted, because you talked about inbound versus outbound, by like inbound and the applicants versus reaching out to candidates and then the candidate they're a passive candidate, they're open to a conversation, but do you think they'll take conversations with an AI agent versus a recruiter who can answer all their questions, or do you think the AI agent really is going to be able to pitch the company really well? I'm just curious of that AI agents for recruiting being used for that inbound versus outbound on the sourcing side.
Speaker 2:So I think first we'll solve for inbound. Outbound will also happen, because there are AI SDR agents whose full job is to do the selling, so it's not as if AI agents aren't being used for selling. Here we're talking about combining two characteristics: one is selling the opportunity, and the other is assessing somebody's fitment for a particular role against whatever custom criteria the company has given. We are not coming up with the criteria; the companies give us a job description, and that is used with their blessing, in some sense. They are the ones who say, okay, this interview is making sense to me. So I feel it's not very far off, Elijah, that we will be able to do outbound with the same effectiveness.
Speaker 2:But right now, if you're looking for really senior candidates, where you have very few candidates, you have to look for them on LinkedIn or other places, and there are only five that you found, you're probably not gaining much by using an agent.
Speaker 2:So companies also need to be smart about what they are really solving for. But say you have an existing database, and many companies do; we are working with many companies just to harness that existing database. These applicants may have come to you many years ago, they may not even remember, but you could use these agents to reach back out to them and say, hey, it looks like there's a new role and you may be suitable for it, do you want to have a quick conversation about it? Those things are quite possible today, and they will be quite effective even today. So unless you are in a very, very niche area where you have very few candidates, where more of the work goes into scouting them rather than evaluating them, these AI agents will work with different levels of effectiveness, but it won't be very bad anywhere.
Speaker 1:That's actually a really good question, Elijah. So, in a similar vein: there are some talent acquisition leaders who do not like the idea of an AI screening interview of any kind, written, voice, whatever, as the first interaction with a candidate. I've even heard from some leaders that they would be more open to AI managing an interview a little further down the funnel versus at the top. So I'm assuming in some cases you've received pushback on the AI being the first touchpoint. Do you have customers leveraging your product a little differently, where they're not pushing it out as the first touchpoint but using it for a later round? I'm curious if you see people trying to plug in your product at different parts of the workflow.
Speaker 2:Yeah, great questions, guys. I'm loving this conversation; it feels like a deep dive. So, absolutely. In some cases, customers would request their recruiters to still have that first call with the candidate, just to say, hey, I'm calling from XYZ company and you've been shortlisted for this role; are you broadly interested, and if you have any questions, you can ask me now. They have this five-minute conversation, and the person says, hey, this sounds great, what's the next step? The recruiter then says, okay, I'm going to send you a link to a little interview with our AI recruiter; please go through that, and that is how we move forward in the process.
Speaker 2:This is a very good hybrid approach, because it filters out candidates who were really not very interested and may have applied without thinking hard, but it also saves the recruiter a ton of time. They did a five-minute interaction, and I don't think you need anything more than that; the conversation was pretty much as I described, there's not a lot more to it. They don't have to make notes, and they don't have to go through a detailed interview in the first round, because that is what the AI recruiter can do. It goes deeper into the skills and other things that, many times, the recruiters themselves are not trained to evaluate. Right? As a recruiter, you have not written code or done a lot of supply chain work. Things get even wonkier when it's clinical research and the like, because those are regulated industries, and not every recruiter is trained to interview for them. But you are still able to make that first touchpoint with a candidate and then send this out as, I would say, a one-level-down-the-funnel step. So there are companies using it that way as well.
Speaker 2:On our platform we also support second and third rounds of interviews, so if you wanted to set up a second round, you could do it. We talk a lot about the voice agents, but the interaction goes beyond that: we have a fairly large question bank where you can run a small assessment with multiple choice questions as well. You can do an EQ test, a personality test, a basic logic or basic math test, or run a little code test on other technical skills. All of this is part of the platform; it's just that we talk more about the voice agent. And in our case we have seen multiple customers reduce their number of interview rounds, because you are getting so much signal about a candidate very early in the process.
Speaker 1:That's really interesting. These evaluation tests, the logic, math, EQ, maybe behavioral types of assessments: are those basically out of the box from the LLM, or how structured are they? How much are you essentially optimizing the LLM to handle those types of questions and assessments?
Speaker 2:We are very cautious with using an LLM in real time for multiple choice questions. In this particular interaction, being a little cautious is actually helpful, because we want to be very candidate-friendly. Just because you're trying to solve a problem of efficiency, or of reaching candidates early, it shouldn't make it more difficult for them to get selected, right? We've done a bunch of testing, and what we have seen is that it's better to use LLMs to create these multiple choice questions in advance, rather than generating them in real time. So we have used LLMs, along with a lot of other information, to create really large question banks, but we don't create questions while the interview is going on.
Speaker 2:So there are guardrails around how we are using LLMs in the assessment part, like the multiple-choice questions.
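The "generate offline, serve online" pattern Roli describes can be sketched roughly as follows. This is a minimal illustration, not Babblebots' actual code: the generator function stands in for an LLM call made ahead of time, and at interview time the system only samples from the stored bank.

```python
import random

def build_question_bank(roles, generate_mcqs):
    """Run once, offline: generate_mcqs(role) stands in for an LLM call."""
    return {role: generate_mcqs(role) for role in roles}

def sample_assessment(bank, role, n=3, seed=None):
    """At interview time: no LLM call, just sample pre-vetted questions."""
    rng = random.Random(seed)
    questions = bank[role]
    return rng.sample(questions, min(n, len(questions)))

# Toy stand-in for the offline LLM generation step.
def fake_generate(role):
    return [f"{role} question {i}" for i in range(1, 6)]

bank = build_question_bank(["backend", "data"], fake_generate)
quiz = sample_assessment(bank, "backend", n=3, seed=0)
```

The design choice here is the guardrail itself: because the bank is built ahead of time, every question can be reviewed by a human before any candidate ever sees it.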
Speaker 1:That sounds a little bit different than, was it BrightHire, Elijah? Because I remember with Ben I was trying to dive into this. This is a little different, but there's a parallel here, so just hang in there with me.
Speaker 1:When we were talking with Ben, they were basically generating questions based on a job description that the system would also generate. This is just basic out-of-the-box LLM stuff. They would generate custom questions for the interview, then their product would sit in on the interview, do all the note-taking, and then do the evaluation: see what's been asked, what gaps there are, that kind of stuff. One of the things they were saying is, we'll generate our own questions, but the hiring team can also put in any questions they want. So my question was: what if they don't ask something that they should, something obvious like, for instance, salary range, and the LLM just doesn't realize, just doesn't think to ask?
Speaker 1:So I was like, how do you solve for that? And I'm sorry if I'm wrong, Elijah, you can correct me, but it didn't really seem like they had a solution for that, specifically to ensure that the LLM wasn't missing stuff if the hiring team missed it and the LLM missed it out of the box. So that's just something to keep in mind as well. Do you have any thoughts on that? Is that where you're saying these predetermined question banks come in?
Speaker 2:Those predetermined question banks are only for the multiple-choice question part. Most of the interview is a conversation like you and I are having. You are asking me something, I'm responding, and you are picking up on what I have said to ask a probing question. That's how all interviews go, right? You build on what the candidate has said, you go deeper and deeper to some extent, and then come back to some baseline. But exactly as you're saying, you can have a whole interview, and if you end up not asking about location preference, or salary, or a permit to work in a certain country, then that interview was not complete in that sense, right?
Speaker 2:So the way the conversations are designed is: we make the candidate a little bit comfortable first. We ask them a little bit about their background, or what they like to do in their free time, some lightweight question, just so they get used to talking to an AI agent. Then we typically dive a little deeper into their background, maybe their education or the most recent projects they have done. Then it goes into whatever was needed in the job description. And these questions are not actually predefined, because job descriptions keep changing, and even if you have a starting question that is the same, because the job description is the same—
Speaker 2:The follow-on questions, the L2s and L3s, are pretty much dependent on what the candidate is saying, right? If you ask a candidate how they scaled a database, there can be 100 different answers, so the follow-on question will depend on whatever this person did in terms of scaling a database versus what another person did. That's where LLMs actually come in: the probing part of the voice conversation is not preset. It's only later on, if we ask multiple-choice questions, that those are not dynamically generated. Those come from a question bank, and they are sourced for that particular role, but they are not being generated on the fly.
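The skeleton-plus-dynamic-probes idea can be sketched like this. It is a hypothetical illustration, not Babblebots' implementation: the opening question per topic is fixed, but each L2/L3 follow-up is conditioned on the previous answer. `ask_llm` is a stand-in hook where a real model call would go.

```python
def probe(topic, answer, ask_llm, depth=2):
    """Generate up to `depth` follow-up questions, each conditioned on
    what came before. In a real system, `context` would be updated with
    the candidate's next spoken answer between iterations."""
    followups = []
    context = answer
    for level in range(depth):
        question = ask_llm(
            f"Topic: {topic}\n"
            f"Candidate said: {context}\n"
            f"Ask one deeper follow-up question."
        )
        followups.append(question)
        context = question  # placeholder for the candidate's next answer
    return followups

# Deterministic stub standing in for an LLM: echoes back the probe target.
def stub_llm(prompt):
    return "Follow-up on: " + prompt.splitlines()[1]

qs = probe("scaling a database", "We sharded Postgres by tenant.", stub_llm)
```

The point of the structure is that only the probing step is model-driven; the topic list itself stays fixed by the job description.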
Speaker 1:But then, from a prompt-engineering perspective on the backend, there should be something saying, hey, make sure you're collecting this and this. So it's not a prerecorded question per se, but it would seem there's something in place to ensure that the LLM does cover those things.
Speaker 2:Yeah, absolutely, because finally it's a business case we are solving for. It's an interview you are doing, and an interview has some must-haves, so of course we have to make sure those are getting covered.
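One way such must-have coverage could be enforced in product code, rather than left to the model, might look like the sketch below. The topic names and functions are hypothetical, but the pattern is simple: track which required topics the conversation has touched, and before closing, ask about whatever is still missing.

```python
# Required topics every interview must cover, per the discussion above.
MUST_HAVES = {"salary_expectation", "location_preference", "work_authorization"}

def uncovered(covered_topics):
    """Return the must-have topics the conversation has not yet touched."""
    return MUST_HAVES - set(covered_topics)

def close_interview(covered_topics):
    """Produce closing questions for any must-have topic still missing."""
    gaps = uncovered(covered_topics)
    return [
        f"Before we wrap up: could you tell me about your {t.replace('_', ' ')}?"
        for t in sorted(gaps)
    ]

closing = close_interview(["salary_expectation", "projects", "education"])
```

Because the checklist lives outside the model, the guarantee holds even if both the hiring team and the LLM forget to raise a topic mid-conversation.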
Speaker 1:Yeah, I was just wondering how that's done.
Speaker 2:There's a broad skeleton to the conversation, and within that skeleton you can move around a lot.
Speaker 1:Yeah, the one thing that I've been thinking through is: the idea is to help evaluate more fairly, more objectively. But part of being thorough is filling in for the gaps that the hiring team has, and part of being thorough is filling in for the gaps that the LLM has. If the LLM has a gap and the hiring team has a gap, how do we actually help our customers not miss stuff? That's where it gets into training the system; maybe that's more on the 20% side. It's not just about asking, okay, you've got to know how to collect the salary; there are certain things we have to know to fully evaluate a candidate, whether the hiring team realizes it or not, and whether the LLM is going to figure it out or not. That's some of what I've been—
Speaker 2:No, you can't have the LLM figure this out. This is more around product design.
Speaker 2:We were building this product even before we were working with LLMs. Babblebots started with the idea of solving for asynchronous conversations, asynchronous interviews. If you go back a little bit in history, just five or six years, conversational AI has been around since about 2015-16, when people started to claim that chatbots had become more conversational. Google had this tool called Dialogflow, which made it really easy for you to create a tree for a conversation. That's called a decision tree: at every step you are giving two or three decisions that the tree can take, and the conversation goes down those paths. So the paths are very well set, right?
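The Dialogflow-style decision tree Roli describes can be sketched in a few lines. This is a toy illustration, not Dialogflow's actual API: every node fixes the prompt and the handful of paths the conversation is allowed to take next.

```python
# A tiny two-level decision tree: each answer selects the next node.
TREE = {
    "prompt": "Are you open to relocation?",
    "branches": {
        "yes": {"prompt": "Which cities work for you?", "branches": {}},
        "no": {"prompt": "Are you open to remote roles?", "branches": {}},
    },
}

def walk(tree, answers):
    """Follow pre-set branches; anything off the tree ends the conversation."""
    transcript = []
    node = tree
    for answer in answers:
        transcript.append((node["prompt"], answer))
        node = node["branches"].get(answer)
        if node is None:
            break
    return transcript

path = walk(TREE, ["yes", "Berlin"])
```

The rigidity is visible immediately: any answer not anticipated in `branches` dead-ends, which is exactly the limitation the LLM-driven probing described earlier removes.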
Speaker 2:Now you can actually chat with ChatGPT or Claude, Sonnet or any of them, the whole day. You can keep asking questions and it will keep answering. There is no agenda to that conversation, because there is no goal you're trying to achieve together; you are curious about something and it is answering. The way Babblebots is designed is something in between. You have a structure to the conversation because you're trying to achieve a business outcome, which is fully understanding a candidate's fitment, interest, and suitability for a particular role, as much as is possible to evaluate, and then giving that to the company in a very structured way so they can decide whether this person gets shortlisted or rejected. We do not make the decision; the decision is always made by them. But this structure is really important, because otherwise how will they compare candidates? So you can't have a completely free-flowing conversation. It's not that it's not possible; it's just not useful.
Speaker 1:Yeah, that's super helpful. I know we're coming up on time here, so I guess we should probably stop, though I feel like we could definitely keep going for a while. Roli, are there any other final thoughts you wanted to share, or anything at all, before we jump off today?
Speaker 2:I just think we're living in a very magical time. I feel recruiting is going to fundamentally shift. We've been talking about things like time-to-hire and all these metrics for a very long time, but I don't think substantial movement actually happened in those metrics in the last 10-15 years. I just don't see a world where recruitment stays the same. The whole marketplace of the world, where you have people looking for opportunity and opportunities available anywhere in the world, is fundamentally shifting, and I don't think that shift is very far away. All this is happening because now you are not bound by somebody's time, availability, language, or assessment style. It's very democratic, and I think it's going to be a more beautiful world, with basically the right kind of people getting the right kind of opportunities.
Speaker 2:That's what my hope is and, yes, we are very excited to help get us to that future.
Speaker 1:I love it. I love it. This has been great, Elijah. Any other thoughts on your end before we jump off?
Speaker 3:No, it's a beautiful vision. I love that too. Thank you for sharing that, Roli.
Speaker 1:Well, Roli, this has been great. I learned a lot. We were able to get into some nuance that Elijah and I weren't able to get into in the past couple of podcasts with the BrightHire and Pillar founders and CEOs. Thank you for explaining some of this nuance and getting a little more technical with us, which, given your background, was very helpful for our audience. It's truly a great episode. Thank you so much for joining us today.
Speaker 2:Thank you.
Speaker 1:Thank you, this was really good, thank you everyone for tuning in and we'll talk to you next time. Take care.
Speaker 2:Bye.