• 0 Posts
  • 941 Comments
Joined 2 years ago
Cake day: June 16th, 2023



  • I think an AI could outperform my executives.

    One of them sent out an email about how we weren’t making enough money. But don’t worry, he has a strategy that we will execute on and fix it.

    The strategy is to raise prices and get more sales at the same time… That is literally it. Not even picking “high value” versus “high volume”, just a declaration that we can do both. If this genius plan doesn’t work, it’s just because the sales people failed to execute against his brilliant strategy well enough.





  • I appreciate the concerns about online updates, kill switches, repairability, and lockout, but these systems are surprisingly good for safety.

    On an early outing with my kid driving, we were on a freeway next to a long line of cars waiting at an exit. Suddenly someone pulled right in front of us, in a way that I think I would have hit them even if it had happened to me, and the car certainly couldn’t brake in time. My kid swerved instead, a good call, but at the speed we were going and with no experience with that maneuver, I’m sure it would have left us in the ditch. Instead it was like a professional driver took over, dramatically yanking the car around the suddenly slow car and neatly back into the lane.

    I was shocked my kid pulled that off with only 10 hours of driving experience; turns out the car had an evasive steering assist. Saved our asses.

    There are tons of videos of the emergency braking tests that should easily convince anyone of these systems’ value to safety.



  • It’s pretty much a vibe coding issue. What you describe I can recall being advocated forever: the project manager’s dream that if you model and spec things out enough and perfectly capture the world in your test cases, then you are golden. Except the world has never been that convenient, and you bank on the programming being reasonably workable by people to compensate.

    The problem is people who think they can replace understanding with vibe coding. If you can only vibe code, you will end up with problems that neither you nor the LLM can fix. If you can fix the problems, then you are not inclined to toss in overly long chunks of LLM output, because they tend to be ugly, hard-to-maintain code that violates all sorts of programming best practices.





  • This all presumes that OpenAI can get there, and further that it is exclusively in a position to get there.

    Most experts I’ve seen don’t see a logical connection between LLMs and AGI, yet OpenAI has all their eggs in that basket.

    To the extent LLMs are useful, OpenAI arguably isn’t even the best at it. Anthropic tends to make them more useful than OpenAI does, and now Google’s models are outperforming OpenAI’s on the relatively pointless benchmarks that used to be OpenAI’s bragging point. They aren’t the best, the most useful, or the cheapest. They were first, but that first-mover advantage hardly matters once you get passed.

    Maybe if they were demonstrating advanced robotics control, but other companies are mostly showing that off while OpenAI remains “just a chatbot”, with the more useful usage of their services going through third parties that tend to be LLM agnostic, and increasingly I see people select non-OpenAI models as their preference.