

Technically he wants peace; almost every violent aggressor throughout history has wanted peace… on their terms.


Was never going to happen. The most efficient plane uses way more fuel than even a “gas guzzler”. The common driver is dangerous enough with a land vehicle, between operating mistakes and slack maintenance; imagine if that whole population were flying around.


Might be nice to make that distinction, and have caps.
Millions to move an executive around makes no sense. Even if the route and timing can’t work using commercial, you can still fly a cheaper turboprop for people moving.
A freight company needing millions to move packages, ok, sure.


I would appreciate more vector formatted content. Nice to have that effectively infinite resolution.
I don’t get the transparent png angle though.


And yet it publishes in a bitmap format (webp). I can understand why, but still…


I think an ai could outperform my executives.
One of them sent out an email about how we weren’t making enough money. But don’t worry, he has a strategy that we will execute on and fix it.
The strategy is to raise prices and get more sales at the same time… That is literally it. Not even picking “high value” versus “high volume”, just a declaration that we can do both. If this genius plan doesn’t work, it’s just because the sales people failed to execute against his brilliant strategy well enough.


While it could provide some premium features (I’m imagining more massage-type features), it seems nearly impossible to see 400 thousand USD worth of value. Maybe 40 thousand as a luxury item for rich people could work.
It’s just a limited run publicity stunt that will be forgotten within a few weeks.
They’ll start down that path, spend billions on starting up the program, then a new president comes in and cancels that to return to the smaller ship philosophy and spends billions to start down that path again.


Yeah, generally if a feature is vaguely expensive, they will not mandate it.
When airbags hit the scene, they said those would work, but since they were so expensive, you could do automatic seatbelts instead.
We are talking about a few dollars on a 30,000 dollar purchase…


I appreciate the online update/kill switch/repairability/lock-out concerns, but these systems are surprisingly good for safety.
On an early outing with my kid driving, we were on a freeway next to a long line of cars waiting at an exit. Suddenly someone pulled right in front of us, in a way that even if it had happened to me I think I would have hit them, and the car certainly couldn’t brake in time. My kid swerved instead, a good call, but one I’m sure would have left us running into the ditch at the speed we were going and with no experience with that maneuver. Instead it was like a professional driver: the car dramatically yanked around the sudden slow car and neatly back into the lane.
I was shocked my kid pulled that off with only 10 hours of driving experience; turns out the car had an evasive steering assist. Saved our asses.
Tons of videos about the emergency braking tests that should easily convince anyone of their value to safety.


In my car I haven’t figured out what sets it off; it happens all the time with nothing in the backseat.
I appreciate the intent, but at least in my car the false positive rate is so high I could imagine ignoring it.
It’s pretty much a vibe coding issue. What you describe I can recall being advocated forever: the project manager’s dream that if you model and spec things out enough, and perfectly model the world in your test cases, then you are golden. Except the world has never been so convenient, and you bank on the programming being reasonably workable by people to compensate.
The problem is people who think they can replace understanding with vibe coding. If you can only vibe code, you will end up with problems that neither you nor the LLM can fix. If you can fix the problems, then you are inclined to toss overly long chunks of LLM output, because they generate ugly, hard-to-maintain code that tends to violate all sorts of programming best practices.


Nah, they hated what the guy did still. They may have tossed the VP under the bus to try to mitigate the backlash, but it was still Garza that exposed the mess to the public in the first place, and that’s just way worse for the bottom line than being snobby and racist and bad mouthing your own product.
So they hope attention stays on ousting the VP in the court of public opinion, and they handle Garza in more formal channels, likely winning if Garza recorded without informing the other party, particularly after making that conversation public.


As someone who has occasional one-on-one meetings with executives, I’m not surprised. They tend to ramble and talk about whatever they feel like. Admittedly I haven’t met anyone who would say this sort of thing, whether because they aren’t thinking it or because they are somewhat on guard, but a lot of those conversations go into weird territory, like the executive really wants some friends and treats any one-on-one meeting as catching up with a like-minded friend.


The difference is that the government is largely cutting checks to private industry with very little regulation.
Yes, people get healthcare, but private industry just raises prices as long as the blank checks keep coming.
Same problem in higher education, the more money you inject, the more they slurp up with no regulation on how much they can charge.


This all presumes that OpenAI can get there and further is exclusively in a position to get there.
Most experts I’ve seen don’t see a logical connection between LLMs and AGI, and OpenAI has all their eggs in that basket.
To the extent LLMs are useful, OpenAI arguably isn’t even the best at them. Anthropic tends to make them more useful than OpenAI, and now Google’s is outperforming them on the relatively pointless benchmarks that were OpenAI’s bragging point. They aren’t the best, most useful, or cheapest. They were first, but that first-mover advantage hardly matters once you get passed.
Maybe if they were demonstrating advanced robotics control, but other companies are mostly showing that, while OpenAI remains “just a chatbot”, the more useful usage of their services goes through third parties that tend to be LLM agnostic, and increasingly I see people select non-OpenAI models as their preference.


It is easy for them to have 0 income, and we should fix that. The means by which they can access the ‘value’ of their wealth to pay for stuff without incurring a taxable event need to be closed. Primarily this seems to be borrowing against wealth, which should be a taxable event, with an option to eventually get credit for loan repayment to assuage the ‘but but double taxation!’ crowd.
They were driving with reckless abandon before…
Maybe the one thing I could see is people letting go of the steering wheel to do something, thanks to lane assist, but those same people were thigh-driving before, and I might trust the system more…