I started my IT career in 2011, and I have enjoyed it. I have gotten to do a lot of interesting stuff and meet interesting people, and I will treasure those memories forever.

But crypto started turning general computing from being:

“Wow, this machine can run so many apps at the same time!” or “Holy shit, those graphics look epic!” or “Amazing, this computer has really sped up that annoying task!”

To being:

“Yo! Look at how many numbers I can generate!”

That brought down my enthusiasm severely, but hey, figuring out solutions to problems was still fun.

Then came AI/LLMs.

And with it, a mountain of slop.

Finding help with an issue has gone from googling and reading help articles written by someone with an actual brain to wading through rephrased manuals that only provide working answers to semi-standard questions.

Add to that a general push to use AI in anything and everything, no matter how little relevance it holds for the task at hand.

I also remember how AI was sold to us at first: we were promised it would do away with the boring paperwork so we could get on with our actual jobs.

What did we get? An AI that takes the fun and creative parts, leaving the paperwork for the workers.

We got an AI that we have to assume is stealing our work and data at every turn, giving us shit work back, while being told that we should applaud it and be grateful for it.

And the worst thing, the worst thing, is that people seem happy with it. I keep getting requests to buy another Copilot license or to add another AI service to our tenant. I am sick of it!

We got an AI that somehow has slithered onto the golden throne and can’t be questioned.


I am not able to leave the tech market at this time, but I will focus on more tangible hobbies going forward.

This year, I have given myself a project: I will try to build a model railway in a suitcase, a tiny Z-scale world.

I have never done anything remotely like it, but I feel like I need something physical to take my mind off tech.

Sorry for the rant, but I just came down from the high of finally putting words to my feelings.

  • SparroHawc@lemmy.zip · 13 days ago

    You say that, but you have to remember that LLMs produce the average output of their training materials. Not the best, but the average. And there’s a lot of code out there that is simple. Only the outliers have the magic combination of conciseness AND quality AND complexity.

    LLMs also have no understanding of context outside the immediate. Satire is completely opaque to them. Sarcasm is lost on them, by and large. And they have no way to differentiate between good and bad output. Or good and bad input, for that matter. Joke pseudocode is just as valid in their training corpus as dire warnings about insecure code.

    I read a comment once that still rings true - “Hallucinations” are a misnomer. Everything an LLM puts out is a hallucination; it’s just that a lot of the time, it happens to be accurate. Eliminating that last percentage of inaccurate hallucinations is going to be nearly impossible.

    • qqq@lemmy.world · 13 days ago

      I’d push back on your point here with a few things:

      The primary one being: the code doesn’t need to be perfect or even above average – average is perfectly fine. The idea here is comparing the AI to a human, not to perfection. I see this constantly with AI and I find it a bit disingenuous.

      I do truly believe what I said above will be possible within my career (I’m in my mid 30s), but it’s not really what I’m worried about right now. I think the current code I see being generated is generally “good enough”. I’m not comparing it to perfect: I’m comparing it to people.

      I read a comment once that still rings true - “Hallucinations” are a misnomer. Everything an LLM puts out is a hallucination; it’s just that a lot of the time, it happens to be accurate. Eliminating that last percentage of inaccurate hallucinations is going to be nearly impossible.

      I don’t see any reason you have to remove all hallucinations to get a good tool for autonomous development: humans aren’t perfect either. We compensate for that with processes and by checking each other’s work, but plenty still falls through the cracks.

      LLMs also have no understanding of context outside the immediate. Satire is completely opaque to them. Sarcasm is lost on them, by and large. And they have no way to differentiate between good and bad output. Or good and bad input, for that matter. Joke pseudocode is just as valid in their training corpus as dire warnings about insecure code.

      Have you seen output in which satirical code is actually included? I’m well aware of things like https://www.anthropic.com/research/small-samples-poison and the potential here. And do you not believe that either (a) these types of trivial issues would be caught by a person whose job was just to audit output or even (b) this type of issue could be caught by specially trained domain limited AIs designed to check output?

      • SparroHawc@lemmy.zip · 12 days ago

        I think the current code I see being generated is generally “good enough”. I’m not comparing it to perfect: I’m comparing it to people.

        If this were true, then open source projects would have much less of an issue with pull requests from sloperators.

        Have you seen output in which satirical code is actually included?

        I wouldn’t expect to see it. Satirical code requires more thought than an LLM is capable of putting into its writing: you need to understand what is expected of whoever you’re satirizing, and then push that expectation a step further into the absurd. Without the context of something specifically being satirized, what you have instead is just incorrect code. And again, the LLM is incapable of valuing proper code over intentionally wrong code, so it’s going to poison the database to some extent.

        And LLMs don’t drop big chunks of copy-pasted code from Stack Exchange like an intern would. They work one token at a time. (Which is why trying to get them to understand that quotations need to be all in one piece is a futile endeavor.)
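        To make “one token at a time” concrete, here is a toy sketch of greedy decoding. The lookup table stands in for the model (a real LLM computes next-token probabilities with a neural network, but the generation loop has the same shape), and all names here are made up for illustration:

```python
# Toy sketch of token-by-token (greedy) decoding.
# The "model" is just a table of next-token probabilities keyed on the
# previous token; a real LLM conditions on the whole context instead.
NEXT_TOKEN_PROBS = {
    "The": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.8, "sat": 0.2},
    "sat": {"<eos>": 1.0},
    "ran": {"<eos>": 1.0},
}

def generate(prompt: str, max_tokens: int = 10) -> str:
    tokens = prompt.split()
    for _ in range(max_tokens):
        probs = NEXT_TOKEN_PROBS.get(tokens[-1])
        if probs is None:
            break
        # Greedy decoding: always pick the single most likely next token.
        next_token = max(probs, key=probs.get)
        if next_token == "<eos>":  # end-of-sequence marker
            break
        tokens.append(next_token)
    return " ".join(tokens)

print(generate("The"))  # -> "The cat sat"
```

        The point is that nothing in the loop ever looks at a finished unit like a whole quotation or a pasted block; output only ever grows one token at a time.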

        Besides, ‘satirical code’ is just one example of the many things that can poison the training. I couldn’t even begin to enumerate all the things that could mess with it, and honestly I’m surprised that LLMs do as well as they do, considering they likely have all sorts of cross-language screwball connections (which may be why they have such a tendency to make up libraries; they don’t necessarily understand that a common PHP library doesn’t exist in Java).

        do you not believe that either (a) these types of trivial issues would be caught by a person whose job was just to audit output or even (b) this type of issue could be caught by specially trained domain limited AIs designed to check output?

        These issues could be caught by someone whose job it is to audit code, sure. The problem is that sloperators often don’t audit their own stuff well enough; they leave it to the open source repo’s admins. When pull requests from overeager noobs were infrequent, it wasn’t a problem: the admins could gently correct them, the repo would stay high-quality, the noob would learn, and everyone was fine. But now sloperators are dumping low-quality pull requests on the repos faster than the admins can sort through them, because it takes less time to produce slop code than it takes to determine whether the slop is worth including. The admins are swamped; they can’t sort the wheat from the chaff fast enough.

        A domain-limited AI designed to check output would be useful - if it could be trusted. Open-source project admins are some of the best coders out there, and they vastly outstrip the capabilities of LLMs. You’re suggesting that we replace THEM with an agent. They are in that position because they’re right far more often than they’re wrong when it comes to understanding the code as it exists, and how incoming code would impact it - or at least they’re right often enough to keep the project alive. LLMs will be worse at that job, I guarantee it. They’d be fast, but they’d be wrong too often. This is the primary issue with LLM agents.

        • qqq@lemmy.world · 10 days ago

          You’re suggesting that we replace THEM with an agent.

          I am not suggesting we replace anyone, least of all the open source community, so let’s not put words in my mouth.

          I think the current code I see being generated is generally “good enough”. I’m not comparing it to perfect: I’m comparing it to people.

          If this were true, then open source projects would have much less of an issue with pull requests from sloperators.

          This doesn’t follow for me. A good tool in the hands of a crappy user doesn’t suddenly make good output. I specifically said that LLMs write good code in a specific setting, and clearly a random person generating thousands of lines at a time for a project they don’t understand isn’t that setting. I see this conflation constantly with AI, and I find it a bit disingenuous.

          You seem to be very focused on crappy code generated by people who don’t know what they’re doing. The technology isn’t good enough for that, so yes, it won’t work in that setting; I agree.