• 0 Posts
  • 914 Comments
Joined 10 months ago
Cake day: February 10th, 2025

  • Until it’s no longer more profitable to make their cars safer, or regulation requires they make their cars safer, or a competitor decides to take market share by making their cars safer.

    “Because they’ve become safer over time, they’ll continue to do so indefinitely” doesn’t work for me.

    That’s fine because that’s not what I said.

    Which of these do you disagree with?:

    • Human driving capability has shown no indication of improving.

    • Autonomous vehicle capabilities are showing indications of improving.

    It doesn’t take a rocket surgeon to recognize that these measures of performance will eventually intersect (unless you think there’s something fundamentally special about human driving that is impossible to replicate).



  • It’s odd that the thing that terrifies you is that nobody is able to be punished. Grandma and her dog are dead in both scenarios. We want whatever will cause that scenario to happen the least.

    I’d rather 1 grandma be run over without a clearly responsible party than 10 grandmothers be killed while 10 drivers are sent to prison.

    A person who’s not paying attention or drunk is always going to exist no matter how many grandmas are flattened. The software bug can be fixed and sensors can be improved.

    Self-driving cars are the worst they will ever be and they will only get better. Human drivers are not going to improve.


  • iPhone notification summaries were made with GPT-3.5, I believe (maybe even the -turbo version).

    It doesn’t use reasoning, so with very short inputs it can produce wild variation: there aren’t many previous tokens to steer the LLM in the right direction in KV-space, so you’re more at the whims of the temperature setting (randomly selecting the next token from the softmax’d distribution the LLM outputs).

    You can take those same messages and plug them into a good model and get much higher quality results. But good models are expensive and Apple is, for some reason, going for the budget option.
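    To make the temperature point concrete, here’s a minimal sketch in plain Python (toy logits, nothing Apple- or GPT-specific) of how a next token gets sampled from a softmax’d distribution, and how temperature controls the randomness:

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Sample one token index from softmax(logits / temperature)."""
    # Lower temperature sharpens the distribution (more deterministic);
    # higher temperature flattens it toward uniform (more random).
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    idx = random.choices(range(len(probs)), weights=probs, k=1)[0]
    return idx, probs
```

    With a long prompt the output distribution is usually sharply peaked, so temperature barely matters; with only a couple of short messages as input, the distribution is flatter and the same sampling step produces much more variation.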


  • Anyone learning a new language massively benefits from being able to speak with native speakers.

    That being said, LLMs are better at languages and translation tasks than pretty much anything else. If you need vocabulary help or have difficulty with grammar, they’re incredibly helpful (vs. Googling and hoping someone had the same issue and posted about it on Reddit).

    I mean, if you can afford a native speaker tutor that is the superior choice. But, for the average person, an LLM is a massive improvement over trying to learn via YouTube or apps.


  • I don’t know the details of the bills, but it isn’t uncommon for people in contested seats to be allowed by the party to vote in a poll-favorable way when their vote won’t change the outcome.

    Simply counting ‘times voted with Trump’ doesn’t say much that is useful and can be misleading, especially in the context of a post about death threats to politicians.

    Social media can have a very us-or-them mentality. If you’re not 100% in lockstep with the group then you’re an enemy to be scorned and attacked. I read that comment as ‘Yeah, they’re getting death threats, but they voted with Trump so they deserved it (<insert Nazi bar comment>)’.



  • Thanks for the recommendation; I’ll look into GLM Air. I haven’t looked into the current state of the art for self-hosting in a while.

    I just use this model to translate natural language into JSON commands for my home automation system. I probably don’t need a reasoning model, but it doesn’t need to be super quick. A typical query uses very few tokens (like 3-4 keys in JSON).

    The next project will be some kind of agent. A ‘go and Google this and summarize the results’ agent at first. I haven’t messed around much with MCP Servers or Agents (other than for coding). The image models I’m using are probably pretty dated too, they’re all variants of SDXL and I stopped messing with ComfyUI before video generation was possible locally, so I gotta grab another few hundred GB of models.

    It’s a lot to keep up with. 😮‍💨
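    For what it’s worth, the natural-language-to-JSON setup looks roughly like this on the receiving end: a small validator that rejects anything the model hallucinates before it reaches the automation system. The action names and schema here are made up for illustration, not my actual config:

```python
import json

# Hypothetical allow-list; a real setup would mirror the
# automation system's actual supported actions.
ALLOWED_ACTIONS = {"turn_on", "turn_off", "set_brightness"}

def parse_command(model_output: str) -> dict:
    """Parse and validate a JSON command produced by the LLM."""
    cmd = json.loads(model_output)  # raises ValueError on malformed JSON
    if cmd.get("action") not in ALLOWED_ACTIONS:
        raise ValueError(f"unknown action: {cmd.get('action')!r}")
    if not isinstance(cmd.get("target"), str):
        raise ValueError("missing or invalid target device")
    return cmd
```

    e.g. `parse_command('{"action": "turn_off", "target": "kitchen_lights"}')` passes, while anything with an unrecognized action or a missing target raises instead of being executed.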