I tripped over this awesome analogy that I feel compelled to share: “[AI/LLMs are] a blurry JPEG of the web.”
This video pointed me to this article (paywalled).
The headline gets the major point across. LLMs are like taking the whole web as an analog image and lossily digitizing it: you can make out the general shape, but details go missing and compression artifacts creep in. Asking an LLM is, in effect, googling your question in more natural language… but instead of getting the source material (or memes) back as a result, you get a lossy reconstruction of those sources. And it’s random by design, so ‘how do I fix this bug?’ could produce ‘rm -rf’ one time, and something that looks like an actual fix the next.
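To make the analogy concrete, here’s a minimal sketch of what lossy compression actually does, using Pillow. The file name is a placeholder and the quality setting is deliberately extreme; nothing here is specific to any LLM, it’s just the JPEG half of the metaphor:

```python
from io import BytesIO
from PIL import Image  # pip install Pillow

# "photo.png" is a placeholder -- any image you have lying around works.
original = Image.open("photo.png").convert("RGB")

# Round-trip it through JPEG at a deliberately terrible quality setting.
buf = BytesIO()
original.save(buf, format="JPEG", quality=5)
buf.seek(0)
lossy = Image.open(buf)

# The overall shape survives; the exact pixel values don't.
print("original:", original.getpixel((10, 10)))
print("lossy:   ", lossy.getpixel((10, 10)))
```

From across the room the low-quality JPEG still looks like the original photo, and that’s the point of the analogy: an LLM’s answer still looks like the web it was trained on, even where the exact values have been smeared.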
Gamers Nexus just did a piece about how YouTube’s ai summaries could be manipulative. I think that’s a real risk (go look at how many times elmo has said he’ll fix grok for real this time), but another big takeaway was how bad LLMs still are at numbers, or any token with data encoded in it: there’s a segment where Steve calls out the inconsistent model names, how the ai would mistake a 9070 for a 970, etc., or just make up its own models.
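That failure mode makes more sense once you see how these models tokenize text: model numbers get chopped into sub-word fragments rather than handled as exact values, which leaves room to recombine them wrong. Here’s a quick sketch using OpenAI’s tiktoken library with the cl100k_base encoding; other models use other tokenizers, and the GPU names are just examples like the ones mixed up in the video, so treat the exact splits as illustrative:

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")

# Example GPU names; the splits printed are whatever this
# particular tokenizer happens to produce for each string.
for name in ["GTX 970", "RX 9070", "RX 9070 XT"]:
    tokens = enc.encode(name)
    pieces = [enc.decode([t]) for t in tokens]
    print(f"{name!r} -> {pieces}")
```

Once a model number is just a bag of fragments, a generator that’s blurry by design has plenty of room to stitch them back together into a 970, a 9070, or a card that doesn’t exist.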
Just like googling a question might give you a troll answer, querying an ai might give you a regurgitated, low-res troll answer. ew.