I genuinely don’t understand the people who are dismissing those sounding the alarm about AGI. That’s like mocking the people who warned against developing nuclear weapons when they were still just a theoretical concept. What are you even saying? “Go ahead with the Manhattan Project - I don’t care, because I in my infinite wisdom know you won’t succeed anyway”?
Speculating about whether we can actually build such a system, or how long it might take, completely misses the point. The argument isn’t about feasibility - it’s that we shouldn’t even be trying. It’s too fucking dangerous. You can’t put that rabbit back in the hat.
Here’s how I see it: we live in an attention economy where every initiative with a slew of celebrities attached to it is competing for eyeballs and buy-in. It adds to information fatigue and analysis paralysis. In a very real sense, if we are debating AGI we are not debating the other stuff. There are only so many hours in a day.
If you take the position that AGI is basically not possible, or at least many decades away (I have a background in NLP/AI/LLMs and I take this view - not that it’s relevant in the broader context of my comment), then it makes sense to tell people to focus on solving more pressing issues, e.g. nascent fascism, climate collapse, late-stage capitalism, etc.
I think this is called the “relative privation” fallacy – it is a false choice. The threat they’re concerned about is human extinction or dystopian lock-in. Even if the probability is low, this is worth discussing.
Relative privation is when someone dismisses or minimizes a problem simply because worse problems exist: “You can’t complain about X when Y exists.”
I’m talking about the practical reality that you must prioritize among legitimate problems. If you’re marooned at sea in a sinking ship, you need to repair the hull before you try to fix the engines in order to get home.
It’s perfectly valid to say “I can’t focus on everything, so I will focus on the things that provide the biggest and most tangible improvement to my situation first”. It’s fallacious to say “Because worse things exist, AGI concerns don’t matter.”
And not only that: in your example of choosing to address the hull first over the engine, the engine problem is at least a real, pressing one. Taking time to debate AGI means debating a hypothetical future problem instead of real, current problems that actually exist and aren’t getting enough attention to be resolved. And if we can’t address those, why do we think we’ll be able to figure out the problems of AGI?
To rephrase it as a short(ish) metaphor: it would be like being marooned at sea in a sinking ship and choosing to address the risk of not having a good place to anchor when you get to the harbour, instead of repairing the hull.
Sam Altman himself compared GPT-5 to the Manhattan Project.
The only difference is it’s clearer to most (but definitely not all) people that he is promoting his product when he does it…