An LLM also can’t bake a cake, decorate a Christmas tree, or bench-press 100kg.
Just understand what LLMs are good at, use them for that, and don’t throw your hands up and declare it useless because it can’t magically do something it was never designed to do in the first place.
but it’s being sold as if it IS capable of that.
I’ve never seen anyone advertising an LLM as being good at spelling bees. The only time I ever see this spelling thing come up is when people are making fun of it.
they’re presented as general knowledge chatbots at the very least, and i know i’d consider spelling pretty general knowledge.
the way i see it, you can either acknowledge the “strawberry question” as a genuine failing of most every publicly accessible LLM, or you can acknowledge that LLMs are only ever actually correct by pure chance. sometimes it’s a REALLY GOOD chance, but at the end of the day it’s still always a variable that you can’t actually control.
You see a false dichotomy.
I see someone pounding away at a ball of yarn with a hammer and complaining that it’s not as good a knitting implement as they imagined.
You have someone complaining that the people selling AI claim it can do things it can’t. You read that as people complaining that AI can’t do things, when it can do other things.
You need to try to digest what people are actually saying rather than just being contrarian.
in this thread i’ve only seen complaints about the implementation; no one has even implied LLMs are useless.
tbh people call them useless all the time, but they also cherry-pick their weaknesses.
it is a tool, it has utility. kinda like crypto, although grifters always seem to soil promising tech. but in 20 years it will all settle and we will be enslaved anyway.