I want to let people know why I'm strictly against using AI in anything I do, without sounding like an 'AI vegan', especially in front of those who are genuinely ready to listen and follow suit.
Any sources I find to cite for my viewpoint are either so mild they could pass for AI-generated themselves or are filled with the author's extremist views. I want to explain the situation in an objective manner that is simple to understand and still alarming enough for people to take action.


You are factually incorrect, willfully ignoring my point, and you don't even appear to know who you're talking to, since you're confusing me with a poster above in this conversation.
Your misattribution of a specific fallacy, as well as your refusal to engage with the actual topic, will endure as a mark of shame against you, and I will add you as yet another example to the list of pro-AI outcomes I have observed. Cheers
What about that exchange makes you think they are pro-AI? They seemed open-minded about learning more on the topic, but for some reason nothing was resolved.
To be honest with you, at the time I literally just got the vibe they were pro-AI based on their defensiveness, as well as their evident inability to participate in basic conversation, which is a hallmark of AI-induced enfeeblement. I went with my gut, in other words.
Ah, and my gut was correct. A quick look through their posting history just from this week shows they use AI and are looking forward to its further inclusion in Firefox. Roughly half their comments defend AI tech giants, including minimizing environmental and privacy concerns.
My 2c on a different topic - open-minded people don't try to discredit you on a technicality while actively shoehorning you into it and ignoring your actual words. I don't detect the faintest hint of willingness to learn, either. Are we talking about the same person?
Looking through the interaction again, perhaps you are right and I was reading into it too much. They were stuck trying to get you to admit brain rot isn't a foregone conclusion and wouldn't accept that you had already answered by noting this was your experience. I do want to add to one of their points: if you start from the premise that AI causes brain rot and you are generally hostile/aggressive in pushing that view, I would imagine it becomes a sort of self-fulfilling prophecy that you will only have negative interactions with brain-rotted individuals.
I think “brain rot” happens because most people are lazy. YouTube/TikTok/TV “causes” brain rot in the same way. If people want to turn off their brain and fill it with mush, it will happen regardless. Counterpoint - I reference videos on YouTube fairly often to help me fix something or learn to play an instrument.
AI use is probably the biggest threat to the people I'm calling “lazy” because it is interactive, “addictive”, and the sycophantic direction it's taking just can't be healthy, but I'm not so sure people will come to depend on it any more than on other technologies. I'm sure you saw the news of AI contributing to suicides, but as a counterpoint, organizing knowledge so I can make decisions is one of the things I use it for. It gets in the way and tries to steer me in the wrong direction sometimes, but overall it is useful in non-sycophantic interactions (e.g., agentic tool use). The honeymoon phase of conversational AI has been over for me for a while. Hopefully I keep my immunity to bullshit like YouTube, social media, and AI (yet to be seen, and I'm sure you'll set me straight :) ) and whatever comes next, and I'll try not to demonize the new thing either.
Signed, A brain-rotted individual
I agree, and in fact I noted similar effects of social media on people's minds in general.
Right, like at the most fundamental level, the main issue lies not in the inherent nature of a tool but in how it's applied. Just as you noted with video content, you can either rot your mind with shorts or prune your algorithm to do amazing things for you, like helping you learn an instrument.
When you view tools merely as input/output systems like this, the nature of the tool itself is not relevant. There would be utterly no difference in this case between performing a standard web query and having an LLM collate links for you.
Given this, the question then becomes, “well, is it actually possible for AI to be used in an equivalently responsible manner?”
My contention is that it is not, and the people using it for these purposes (including yourself) are incorrect about the nature of the output they think they are achieving. For example, it’s been established that AI use worsens worker productivity in general. Their numbers literally get worse, and we can also see the truth of these studies manifest in the sweeping failure of every company everywhere to realize any financial benefit from the adoption of AI tools.
The crazy thing is that these very same people will often incorrectly report that their productivity has in fact improved. Really think about that for a minute. Their numbers are worse as a matter of material fact, but they believe they are working more efficiently than ever before, sometimes by ridiculous margins of 50% or more.
With that in mind, now consider what may be happening to you if you rely on AI for things that can't be measured. Say you rely on it to organize information with the goal of becoming well informed and making good decisions. You claim to know when it's leading you astray and can course-correct, but…are you sure that merely being sure about that is sufficient to protect you? (Since fallacies are on the mind today, check out the toupee fallacy.)
To me, demonization has nothing to do with it. When a new drug comes to market, I am skeptical. As information comes to light, I accept or reject it based on that information. This process is what helps us differentiate between beneficial new drugs (like Ozempic is turning out to be) and complete scams, like a recent workout pill that shall remain unnamed which, despite heavy marketing, ultimately does nothing besides cause liver failure down the road. Of note: despite being proven to do nothing, there are countless anecdotes from people who tried it and reported amazing additional gains in the gym.
Just be careful out there, is all I'm saying. To be honest, you don't appear particularly brain-rotted to me at the moment. Hopefully this admission absolves me somewhat of the aura of self-fulfilling prophecy in that regard. My hostility in general is not directed at a particular “flag” (such as AI use, political affiliation, consumer habits, and so on) but at dishonesty and the absence of integrity when discussing them. If we sacrifice those things, we have no protection whatsoever from those who seek to scam us, as they can trivially exploit us using whatever ground we've conceded.