Enough micro-sabotage by enough people will, over time, corrupt the databases to the point where people stop trusting (and eventually stop using) them. It is already happening, if reports of deliberate disinformation being fed into LLMs can be believed.
I don’t think you need any active sabotage for this. I’m not really worried about the future of LLMs, because we’re already in a feedback cascade: thanks to LLMs, more and more of the content they steal from the internet was AI-generated by them in the first place, which will eventually cause the models to collapse or stagnate. Besides, you wouldn’t be able to sabotage at the scale required. Thankfully, the flood of fake AI-generated websites and content that LLMs have enabled is so massive that it does the job on its own.
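For anyone who wants to see that feedback cascade in miniature, here's a toy sketch (my own hypothetical illustration, not a real training pipeline): fit a Gaussian to some data, sample "synthetic" data from the fit, drop the tails the way generative models tend to, refit on the output, and repeat. The fitted variance collapses within a few generations:

```python
import random
import statistics

random.seed(0)

def fit_and_sample(data, n=1000):
    """'Train' a Gaussian on the data, then generate n synthetic samples."""
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    return [random.gauss(mu, sigma) for _ in range(n)], mu, sigma

# Generation 0: "human" data.
data = [random.gauss(0.0, 1.0) for _ in range(1000)]

for gen in range(1, 11):
    data, mu, sigma = fit_and_sample(data)
    # Generative models under-represent the tails of their training
    # distribution; mimic that by dropping the most extreme 5% of the
    # synthetic outputs before they re-enter the training set.
    data.sort(key=lambda x: abs(x - mu))
    data = data[: int(len(data) * 0.95)]
    print(f"generation {gen:2d}: fitted sigma = {sigma:.3f}")
```

The tail-dropping step is doing the work here: once rare content stops being reproduced, every later generation trains on an ever narrower slice of the original distribution.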
I’m looking forward to that.