• DandomRude@lemmy.world

    Yes, that could well be the case. Perhaps I am overly suspicious, but the potential of LLMs to influence public opinion is enormous given their reach and the way they present information, so I think it is highly likely that the companies offering them are already profiting from this, or at least will do so very soon.

    Musk is already demonstrating, in his clumsy way, that it is easy to manipulate the output in a targeted manner if you have full control over the model; and this isn’t the first time he has attracted attention for doing so. You almost have to be grateful to him for it, because he makes it so obvious. Done more subtly, it would be even more dangerous.

    The fact remains that the more people use LLMs, the more “interpretive authority” becomes centralized: developing and operating LLMs is so costly that only a few large corporations can afford it, and those corporations want to make money and are unscrupulous about how they do it.

    In any case, we will not be able to rely on people’s ability to recognize attempts at manipulation. That much is already evident from how unquestioningly so many people believe obvious misinformation on mainstream social media platforms and elsewhere. Unfortunately, the effects are disastrous: if people were more critical, Trump would never have become US president, for example, and certainly not twice.