From the other side, hiring competent people has gotten much harder with AI in the hands of people. Its making them dumb.
A coworker and I were interviewing someone for a technical role over a video meeting that we did NOT get through our network. His answers were strangely generic. We’d ask him a direct question about a technology or a software tool and the answer would come back like a sales brochure. I message my co-worker on the side about this strangeness, and he said “We’re not hiring this guy. Watch his eyes. Ever time you ask a question, he’s reading off the bottom of his screen.” My coworker was right. I saw it immediately after he pointed it out. We were only 4 minutes into the interview and we already knew we weren’t hiring this guy. I learned later about LLMs that you can run while being interviewed that will answer questions your in real time.
Another one happened within 48 hours of that interview. Someone that had been hired was on a team with me. An error came up in a software tool that we are all supposed to be experts on. I had a pretty good idea what the issue was from the error message text. This other team member posted into our chat what ChatGPT had thought of the error. In the first sentence of the ChatGPT message I immediately could tell that it was the wrong path. It referenced different methods our tool doesn’t even use.
To translate it with an analogy, assume we’re baking a cake and it came out too sour. The ChatGPT message said essentially “this happens when you put too much lemon juice in. Bake the cake and use less lemon juice next time” Sure, that would be a reasonably decent answer…except our cake had no lemon juice in it. So obviously any suggestions to fix our situation with altering the amount of lemon juice is completely wrong. This team member, presented this message and said “I think we should follow this instruction”. I was completely confused because he’s supposed to be an expert on our tool like I am, and he didn’t even pause to consider what ChatGPT said before he accepted it as fact. It would be one thing to plug the error message into ChatGPT to see what it said, but to then take that output and recommend following it without any critical thinking was insane to me.
AI can be a useful tool, but it can’t be a complete substitute for thinking on your own as people are using it as today. AI is making people stupid.
This is why I generally hire from inside my network or from referrals of those I know. Its so hard to find a qualified worker among all the other unqualified workers all applying at the same time. I know there are great workers not in my network, I just have no way to find them with the time and resources I have available to me.
I’m gonna have to ask absolutely bullshit questions in interviews now, aren’t I? Do you have any other strategies for how to spot this? I really don’t want to drag in remote exam-taking software to invade the applicant’s system in order to be assured no other tools are in play.
I’m not in a hiring position, but my take would be to throw in unrelated tools as a question. E.g. “how would you use powershell in this html to improve browser performance?” A human would go what the fuck? A llm will confidently make shit up.
I’d probably immediately follow that with a comment to lower the interviewee’s blood pressure like, ‘you wouldn’t believe how many people try to answer that question with a llm’. A solid hire might actually come up with something, but you should be able to tell from their delivery if they are just reading llm output or are inspired by the question.
It’s a fine line to walk, but I see what you’re getting at here. I wouldn’t want to come across as incompetent either, lest it reflect on the company. Your follow-up remark is brilliant. Delivery is everything, I suppose.
That was my body language cue. An ‘umm… 😅’ answer is a pass, as well as any attempt to actually integrate disparate tools that doesn’t sound like it’s being read. The creased eyebrows, hesitation, wtf face, etc is the proof that the interviewee has domain knowledge and knows the question is wrong.
I do think the tools need to be tailored to the position. My example may not have been the best. I’m not a professional front end developer, but that was my theoretical job for the interviewee.
Like: come up with an error condition or a specific scenario that doesn’t/can’t work in real life. Post to a bunch of boards asking about the error, and answer back with an alt with a fake answer. You could even make the answer something obviously off like:
ssh to the affected machine
sudo to the root user: sudo -ks root
Edit HKLM/system/current/32nodestatus, and create a DWORD with value 34057
Make sure to thank yourself with “hey that worked!” with the original account
After a bit, those answers should get digested and probably show up in searches and AI results, but given that they’re bullshit they’re a good flag for cheaters
There’s stuff out there now about how to poison content scrapers that are training AI, so this is absolutely doable on some scale. There are already what I like to call “golden tokens” that produce freaky reliable and stable results every time, and so I think it likely there are counterparts that trigger reliably bad output too. They’re just not documented yet.
In a sane world, commercial AI would have legally required watermarks and other quirks that give content away as artificial, every time. Em-dash is probably the closest we have to this right now for text, and likewise for the occasional impossible backdrop or extra fingers on images. You can’t stop a lone ranger with a home-rolled or Chinese model, but it would be a start.
Don’t have the source on me now, but I read an article that showed it was surprisingly easy. Like 0.01% of content had his magic words, and that was enough to trigger it.
I’ve never used AI for interview stuff, beyond a little thing that gave me sample questions and assessed my recorded verbal response, to use as prep before an interview, but in reading that, I remembered that Nvidia has a thing where a visual effect will make your eyes look like you’re looking straight into the camera all the time (unless they’re totally closed of course), and imagined this type of person using that as further subterfuge during the interview, to conceal the ‘looking down’.
Luckily, the average person leaning completely on AI for an interview is not nearly savvy enough for this sort of thing, in my experience.
knowing absolutely nothing about this topic, i would assume an actual competent person would be able to answer them immediately and confidently, someone reading an LLM prompt is probably sounds like they’re reading from a script even if the answers arent wrong
From the other side, hiring competent people has gotten much harder with AI in the hands of people. Its making them dumb.
A coworker and I were interviewing someone for a technical role over a video meeting that we did NOT get through our network. His answers were strangely generic. We’d ask him a direct question about a technology or a software tool and the answer would come back like a sales brochure. I message my co-worker on the side about this strangeness, and he said “We’re not hiring this guy. Watch his eyes. Ever time you ask a question, he’s reading off the bottom of his screen.” My coworker was right. I saw it immediately after he pointed it out. We were only 4 minutes into the interview and we already knew we weren’t hiring this guy. I learned later about LLMs that you can run while being interviewed that will answer questions your in real time.
Another one happened within 48 hours of that interview. Someone that had been hired was on a team with me. An error came up in a software tool that we are all supposed to be experts on. I had a pretty good idea what the issue was from the error message text. This other team member posted into our chat what ChatGPT had thought of the error. In the first sentence of the ChatGPT message I immediately could tell that it was the wrong path. It referenced different methods our tool doesn’t even use.
To translate it with an analogy, assume we’re baking a cake and it came out too sour. The ChatGPT message said essentially “this happens when you put too much lemon juice in. Bake the cake and use less lemon juice next time” Sure, that would be a reasonably decent answer…except our cake had no lemon juice in it. So obviously any suggestions to fix our situation with altering the amount of lemon juice is completely wrong. This team member, presented this message and said “I think we should follow this instruction”. I was completely confused because he’s supposed to be an expert on our tool like I am, and he didn’t even pause to consider what ChatGPT said before he accepted it as fact. It would be one thing to plug the error message into ChatGPT to see what it said, but to then take that output and recommend following it without any critical thinking was insane to me.
AI can be a useful tool, but it can’t be a complete substitute for thinking on your own as people are using it as today. AI is making people stupid.
This is why I generally hire from inside my network or from referrals of those I know. Its so hard to find a qualified worker among all the other unqualified workers all applying at the same time. I know there are great workers not in my network, I just have no way to find them with the time and resources I have available to me.
Aw fuck.
I’m gonna have to ask absolutely bullshit questions in interviews now, aren’t I? Do you have any other strategies for how to spot this? I really don’t want to drag in remote exam-taking software to invade the applicant’s system in order to be assured no other tools are in play.
I’m not in a hiring position, but my take would be to throw in unrelated tools as a question. E.g. “how would you use powershell in this html to improve browser performance?” A human would go what the fuck? A llm will confidently make shit up.
I’d probably immediately follow that with a comment to lower the interviewee’s blood pressure like, ‘you wouldn’t believe how many people try to answer that question with a llm’. A solid hire might actually come up with something, but you should be able to tell from their delivery if they are just reading llm output or are inspired by the question.
It’s a fine line to walk, but I see what you’re getting at here. I wouldn’t want to come across as incompetent either, lest it reflect on the company. Your follow-up remark is brilliant. Delivery is everything, I suppose.
Be careful tho because if you ask that with enough confidence I would think I am in the wrong.
"Powershell had OOP without me knowing for a few years so maybe it has hidden html usage too. "
That was my body language cue. An ‘umm… 😅’ answer is a pass, as well as any attempt to actually integrate disparate tools that doesn’t sound like it’s being read. The creased eyebrows, hesitation, wtf face, etc is the proof that the interviewee has domain knowledge and knows the question is wrong.
I do think the tools need to be tailored to the position. My example may not have been the best. I’m not a professional front end developer, but that was my theoretical job for the interviewee.
I wonder if AI seeding would work for this.
Like: come up with an error condition or a specific scenario that doesn’t/can’t work in real life. Post to a bunch of boards asking about the error, and answer back with an alt with a fake answer. You could even make the answer something obviously off like:
Make sure to thank yourself with “hey that worked!” with the original account
After a bit, those answers should get digested and probably show up in searches and AI results, but given that they’re bullshit they’re a good flag for cheaters
There’s stuff out there now about how to poison content scrapers that are training AI, so this is absolutely doable on some scale. There are already what I like to call “golden tokens” that produce freaky reliable and stable results every time, and so I think it likely there are counterparts that trigger reliably bad output too. They’re just not documented yet.
In a sane world, commercial AI would have legally required watermarks and other quirks that give content away as artificial, every time. Em-dash is probably the closest we have to this right now for text, and likewise for the occasional impossible backdrop or extra fingers on images. You can’t stop a lone ranger with a home-rolled or Chinese model, but it would be a start.
Don’t have the source on me now, but I read an article that showed it was surprisingly easy. Like 0.01% of content had his magic words, and that was enough to trigger it.
I’ve never used AI for interview stuff, beyond a little thing that gave me sample questions and assessed my recorded verbal response, to use as prep before an interview, but in reading that, I remembered that Nvidia has a thing where a visual effect will make your eyes look like you’re looking straight into the camera all the time (unless they’re totally closed of course), and imagined this type of person using that as further subterfuge during the interview, to conceal the ‘looking down’.
Luckily, the average person leaning completely on AI for an interview is not nearly savvy enough for this sort of thing, in my experience.
Literally include “Can you name four basic SQL commands?” any time I interview someone and it’s a great litmus test.
I appreciate the use of a good old-fashioned shibboleth like this. Thanks.
I’m not following, wouldn’t an LLM be able to easily answer that one?
knowing absolutely nothing about this topic, i would assume an actual competent person would be able to answer them immediately and confidently, someone reading an LLM prompt is probably sounds like they’re reading from a script even if the answers arent wrong
Not in an in person interview