

The second strike was murder. But so was the first strike. Even if they were running drugs, this is not how you handle the situation. If the only solution is to blow it up, then you’re looking for a reason to shoot at things, and the second attack proves that.
This is no different from the police problem we have domestically, where "shoot first, don't bother asking questions later because they're dead" is far too common.









You’re correct on their limitations. That doesn’t stop corporations from implementing them, sometimes as an extra tool, sometimes as a rash displacement of paid labor, and often without your last step: checking the results they output.
LLMs are a specialized tool, but CEOs are treating them as a hammer and seeing nails everywhere, and they have displaced some workers. A few have realized the mistake and backtracked, but they didn’t necessarily bring those workers back, as per usual anytime there is displacement.
And for the record, while LLMs technically fall under the broad AI classification, they are not AI in the sense the term brings to mind (AGI). But they have definitely been marketed as such, because what started as AI research turned into a money grab that is still going on.