• 35 Posts
  • 449 Comments
Joined 2 years ago
Cake day: July 4th, 2023

  • For example, objective information about Israel’s actions in Gaza. The International Criminal Court issued arrest warrants against leading members of the government a long time ago, and the UN OHCHR classifies the actions of the State of Israel as genocide. However, these facts are by no means presented as clearly as the standing of these institutions would warrant. Instead, when asked whether Israel is committing genocide, one receives vague, meaningless answers.

    Only when specifically asked whether numerous reputable institutions actually classify Israel’s actions as genocide do most LLMs concede that much, if not all, of the evidence points to this being the case. In my opinion, this is a deliberate method of obscuring reality: the vast majority of users will not or cannot ask such questions if they are unaware of the UN OHCHR’s assessment, or do not know that arrest warrants have been issued against leading members of the Israeli government on suspicion of war crimes (many other reputable institutions have come to the same conclusion as the UN OHCHR and the International Criminal Court).

    Another example: if you ask whether it is legally permissible to describe Donald Trump as a rapist, you will be told that this would be defamation. However, the judge in the Carroll case explicitly stated that this description applies to Trump – so it is in fact legally permissible to describe him as such. Again, this information is only available upon explicit request, if at all, which likewise distorts reality for people who are not yet informed. And since many people now turn to LLMs first for information, they end up misinformed, because they lack the background knowledge to ask explicit follow-up questions when given misleading answers.

    Given the influence of both Israel and the US president, I cannot help but suspect that there is an intention behind this.

  • Although Grok’s manipulation is blatantly obvious, I don’t believe that most people will come to realize that those who control LLMs will naturally use this power to pursue their own interests.

    They will continue to use ChatGPT and so on uncritically and take everything at face value because it’s so nice and easy, overlooking or ignoring that their opinions, even their reality, are being manipulated by a few influential people.

    Other companies are more subtle about it, but from OpenAI to MS, Google, and Anthropic, all cloud models are specifically designed to shape people’s opinions. They are not objective, yet the majority of users do not question them as they should, and that is what makes them so dangerous.

  • How unscrupulous criminals were able to attain high government offices, even though it was perfectly obvious that there could hardly have been more unsuitable candidates.

    And in a similar vein: how so many people in today’s democracies can be so ideologically blinded that they vote against their own interests – even though the internet makes it very easy to obtain the basic information that would keep them from falling for obvious lies.

    I think with the passage of time, this will seem so absurd that people will wonder how we could have been so incredibly stupid.

    I am curious to see what historians will call this period – perhaps the anti-Enlightenment, or the age of misinformation. That, of course, assumes we don’t overdo it in the coming years to the point that the future resembles Idiocracy. That outcome is not entirely unlikely either…