Profile pic is from Jason Box, depicting a projection of Arctic warming to the year 2100 based on current trends.

  • 0 Posts
  • 1.03K Comments
Joined 2 years ago
Cake day: March 3rd, 2024

  • You’re correct on their limitations. That doesn’t stop corporations from implementing them, sometimes as an extra tool, sometimes as a rash displacement of paid labor, and often without your last step of checking the results they output.

    LLMs are a specialized tool, but CEOs are treating them as a hammer and seeing nails everywhere, and that has displaced some workers. A few have realized the mistake and backtracked, but they didn’t necessarily bring the displaced workers back, as is usual whenever displacement happens.

    And for the record, while LLMs technically fall under the broad AI classification, they are not AI in the sense the term brings to mind (AGI). But they have definitely been marketed as such, because what started as AI research has turned into a money grab that is still going on.




  • Useful, maybe. For what purposes, though… getting labor costs down, pumping out stuff fast on the assumption it’s correct because it’s AI, staying ahead of their competitors. Useful as in productive? Maybe in some cases, when they understand what AI can and can’t do and its limitations. I get the impression from this year’s news stories that a lot of them jumped on it because it was the new thing, following everyone else. A lot got burned, some backtracked where they could, and some are quiet but aren’t pursuing it as much as they advertised.

    OP is right: companies will go in whatever direction they think consumers will buy more from, and if that’s a “No AI” slogan, that’s what they’ll put on the label. There are no regulations on it, so just like with ingredient and other labeling before rules were set, they’ll lie to get you to buy. Hell, from a software point of view there’s a big issue right now with apps being sold as “FOSS” that are not, because there are no rules to govern that either. Caveat emptor.