There’s a bit of drama going on with the popular game manager Lutris right now, with users pointing out that the developer is using AI-generated code via Anthropic’s Claude.

Seems like something relevant to talk about, with AI tools being a huge cause of problems in the hardware industry. Like how the Steam Deck is constantly sold out and Valve can’t even give us a price or release date for their upcoming hardware, all because these AI companies are sucking up component manufacturing for their data centres. Every extra person using all these AI tools is only adding to the issue.

A user asked on the official Lutris GitHub two weeks ago “is lutris slop now” and noted an increasing amount of “LLM generated commits”. To which the Lutris creator replied:

It’s only slop if you don’t know what you’re doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in helping me catch up with everything I wasn’t able to do last year because of health issues / depression.

There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn’t have been implemented in a worse way, but it was not AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn’t AI that laid off thousands of employees, it’s deluded executives who don’t understand that this tool is an augmentation, not a replacement for humans.

I’m not a big fan of having to pay a monthly sub to Anthropic, I don’t like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I’m not paying Google, Facebook, OpenAI or some company that cooperates with the US army.

Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.
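For context on the co-authorship he mentions: AI coding tools like Claude Code typically append a trailer line to the git commit message, and “removing the co-authorship” just means that trailer no longer appears. A minimal sketch of what that looks like in a throwaway repo — the trailer text below is the commonly seen format, an assumption on my part, not something confirmed from the Lutris repository:

```shell
# Create a throwaway repo with one commit carrying the kind of trailer
# Claude Code commonly appends (assumed format, for illustration only).
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty \
    -m 'Fix runner detection' \
    -m 'Co-Authored-By: Claude <noreply@anthropic.com>'

# Commits carrying the trailer can be listed like this; once the trailer
# is stripped from the message, the same search finds nothing.
git log --all --grep='Co-Authored-By: Claude' --oneline
```

Since the developer says the trailers were removed a few days ago, a search like this against the current Lutris history would come up empty by design.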

  • andioop@programming.dev

    As a person who vehemently doesn’t like the use of AI in coding projects, you didn’t really do anything worthy of a downvote. Wish people would stop using it as a “disagree” button and only use it to downvote things like:

    • fuck you you stupid idiot can’t you see ai sucks, you personally are everything that is making the world worse and i hope you die, subhuman scum (incivility and personal attacks, this includes if they are being uncivil to someone you disagree with)
    • sign up for my new site at http://scamsite.com/!
    • did you know that birds fly? here’s a picture of a bird (in the open source community, on a topic that has nothing to do with birds)
    • bleep bloop, i’m definitely not a robot pretending to be human, definitely no astroturfing here
    • 9WhiteTeeth@lemmy.today

      I appreciate that, but I use downvotes as a disagree button all the time lol, it’s alright. People are righteously pissed about AI because it is absolutely being abused by hyper-capitalists and bad-faith actors.

      But I wonder how many folks are as vehemently opposed to offline, private models that run locally to assist with simple things like creating nvim .config files & other tedious but helpful chores?

      The conversation has been poisoned but AI isn’t going away so imo we need to continue having open, honest discussions about it.

      • andioop@programming.dev

        I’m mostly just displeased about honest discussion being stifled.

        Also, it probably feels bad (yes, it’s not a big deal in the grand scheme of things, but if I can avoid making people feel bad I think it’s worth it) to type a constructive comment only to receive downvotes because your opinion isn’t popular. Maybe not for you, but for others. I don’t want people to be discouraged from expressing opinions that aren’t immediately harmful. I imagine once upon a time abolition (obvious good) was unpopular too.

        I also think we need an honest discussion on AI. From my own hater position, I’m skeptical of proponents because the well has been poisoned: are you a normal person, or are you an unironic “get on board or get left behind! And if you get left behind, your Luddism means you deserved to get left behind. Can’t wait till these people are suffering for not getting on board” type? But you seem actually genuine and not that type, so I’m happy to have the conversation with you.

        I know it becoming widely adopted is against the interest of me and people I know. It devalues my skills (whether in reality it can replace me, or whether it can’t but out-of-touch CEOs buy the hype and act as if it can, the end result for how my skills are valued is the same), and as a person with a job I don’t want to have to reskill in my precious free time, or become an AI’s babysitter because it can hypothetically do what I do 10x faster with 5x the mistakes, instead of just letting me do the job I enjoy. (I do not know if it could actually do my job faster than me or what the mistake ratio would be.) “Get on board or get left behind” feels really callous and unwilling to address people’s complaints, which aren’t just about self-interest but also about the tech’s reliability; environmental concerns (I saw some debunks, but never actually investigated myself whether those “actually it is not the environmental disaster you think it is” claims were true, so I do not know what to think here); the ways it can be used, and is probably being used right now, to astroturf, push narratives, and surveil people; and “move fast and break things” with no regard for the consequences, treating people who want to be careful as obstacles to be broken through so they can move fast to higher profits.

        Right now I mostly see people using it for bad things, so I end up perceiving it wholly as a bad thing. I might have felt differently if most people approached it cautiously, as a thing that, in its LLM form, is capable of hallucinations and has to be double-checked. If we were genuinely moving towards a world where you do not have to work to survive, instead of “you have to work to survive, but also we want to take you out of a job and will give you no help in transitioning to a new one, just a callous ‘get on board or get left behind.’” If we knew there was an environmentally-responsible approach to it. If there were laws or some societal development helping us out against the deception and astroturfing it can be used for; if deepfakes stayed in the realm of funny things like “US presidents rank Zelda characters on a tier list” instead of “Here’s a picture of you naked so I can paint you as promiscuous in a hiring/social environment that looks down on it. Here’s a realistic video of you throwing a bomb so I can get you arrested. Here’s a politician who didn’t actually throw a bomb throwing a bomb so I can present ‘proof’ they did and influence public sentiment to believe something untrue.” I appreciate the cancer detection though.

        It is a tool, but as far as I know it is mostly being used for bad, and I’m very, very scared of that; I feel the people praising it and wanting it are overlooking those things. Of course, it’s possible they are aware of and against the bad things, and just don’t want to preface every statement on AI with “yeah, I know about the bad stuff,” because that can get pretty tiring! But every pro-AI statement just makes me fear further societal adoption and approval of a technology that I do not trust them to use wisely and constructively without hurting many others around them, and that in my country will not be regulated for safety anytime soon. It feels like giving children cars in a world where driving lessons are very optional and driver’s licenses are unnecessary. In that world I’d probably hate cars. Then again, I guess you could say the same of the internet, and I have no issue with the internet because I grew up on it and am better able to decouple all the bad actors on it from the internet itself.

        I do understand the benefit of moving people to more productive ways of doing things and incentivizing that while disincentivizing less efficient ways of doing things, especially since people are resistant to change. In general, we want better things for cheaper. We want doctors using the vaccines that are 95% effective, not 50%, just because the 50% vaccine is the one they know better and they do not like change. The promises of capitalism. I too would like my 4-hour-a-day work week, robots doing my domestic chores, and a cure for arthritis. So yes, I understand the whole “we think AI makes people more effective, and will financially incentivize using it while disincentivizing those who do not,” I just also don’t think it does make people more effective, or that the cost in the current climate is worth it. Or that anyone who is not a multimillionaire will end up seeing any of the fruits of those productivity gains—they’ll just be made to work the same hours, having to outsource all the parts of their work they find fun or relaxing to an LLM because it’s more efficient to have it do them, so all they get is the sucky part where they play prompt engineer/nanny, for the same wage. I also don’t think we’re set up in an economy that can handle the massive displacement of workers it is promising. I daresay that if I had to put one doctor/biomedical scientist out of a job with AI, knowing that would unlock the arthritis cure, I’d take that deal while also feeling bad. But if I decrease all white-collar fields by like 50%… I want that arthritis cure, but there’d better be some safety net to help with that mass suffering (and drastically reduced consumer spending, bad for the economy and the wellbeing of those in that economy). If I had to suck all the joy out of my job and become a glorified prompt engineer to provide an actual benefit to lots of people, I might take that deal. Doing it to provide the boss 3% more profit by cutting the cost of employees, no thanks.

        I’m also just not comfortable trusting the outputs of inherently nondeterministic technology. It’s way less testable, especially with LLMs, as opposed to something we expect to just spit out a probability or a classification.