I’m a software developer in Germany and work for a small company.

I’ve always liked the job, but recently I’ve been getting annoyed by the ideas of certain people…

My boss (who has some dev experience) uses “vibe coding” (as far as I know: minimal human review, letting an LLM produce huge code changes in a very short time) as a positive term, as in “We could probably vibe-code this feature easily”.

Someone from management (also with some software development experience) runs internal workshops on a self-built open-source tool with “memory”, advanced thinking strategies, planning, and whatever else, connected to many MCP servers and a vector DB, with “skills”, a higher token limit, etc. Surprisingly, the people attending the workshops (many of them developers, but not only) usually come away convinced, saying that it improved their efficiency a lot, that they will keep using it, and that it changed their perspective.

Our internal slack channels contain more and more AI-written posts, which makes me think: Thank you for throwing this wall of text on me and n other people. Now, n people need to extract the relevant information, so you are able to “save time” not writing the text yourself. Nice!!!

I see Microsoft announcing that 30% of their code is written by AI, which in my opinion is advertising and an attempt to pressure companies into subscribing to OpenAI. My company, meanwhile, seems to be targeting not 30% but 100%???

To be clear: I do see some potential for AI in software development. Auto-completion, locating a bug in a code base, writing prototypes, etc. “Copilot” is actually a good word, because it describes the person next to the pilot. I don’t think the technology is ready for what they are attempting (being the pilot). I’ve seen the studies questioning how large the benefit of AI actually is.

Sure, one could say “You’re just a developer afraid of losing your job / losing what you like to do”, and maybe that’s partially true… AI has brought a lot of change. But I also don’t want to deal with a code base that was mainly written by non-humans, in case the non-humans fail to fix the problem…

My current strategy is “I use AI how and when ->I<- think that it’s useful”, but I’m not sure how much longer that will work…

Similar experiences here? What do you suggest? (And no, I’m currently not planning to leave. Not bad enough yet…).

  • Cethin@lemmy.zip · +13 · 7 days ago

    I’m starting to form a conspiracy theory that the “let AI write the email” concept is, in itself, an ad for AI. Not for the people writing the emails (they are easy to convince), but for the people reading them, who now have a bunch of bullshit to deal with. The best tool to undo the LLM bullshit is an LLM summary. They get double the usage per person (well, if a manager gets many subordinates to do this, it’s far more than double), and nothing of value was added.

    • Random Dent@lemmy.ml · +4 · 6 days ago

      I’m reasonably sure that’s largely what’s happening with resumes now. People get AI to write their resumes because it’s a boring task, and then employers use AI to read the resumes they receive and provide summaries. So it’s pretty much just AIs talking to each other about who should get the job.

    • Derpgon@programming.dev · +3 · 7 days ago

      Joke’s on ’em, I don’t read work emails. Partially because I refuse to dedicate any time of the day to using Outlook, especially in a web browser; the oh-so-wise IT department does not allow a different client, and since I can’t use Outlook on Linux, fuck ’em.

      And no, IMAP and POP3 are not available. Trying to log in via Thunderbird just triggers a message to contact the IT dept to allow me to use it. It’s Teams or nothing.

    • Rednax@lemmy.world · +7 · 6 days ago

      To complete this two-sentence horror story:

      Such code bases are what the vibe-coding AI is trained on.

      • Randelung@lemmy.world · +1 · 6 days ago

        At least Claude agrees that an interface for almost every Java class is a little overkill, and that if there’s only one implementation, it’s a code smell. It’s harder to convince the architect who learned the pattern at uni and blindly applies it.
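
        A minimal sketch of the pattern being discussed (all names here are invented for illustration): the single-implementation interface adds a layer of indirection without enabling any polymorphism, while a plain class does the same job. An interface can still be extracted later, mechanically, if a second implementation ever appears.

        ```java
        // The "one interface per class" style: UserServiceImpl is the only
        // implementation UserService will ever have, so the interface buys nothing.
        interface UserService {
            String greet(String name);
        }

        class UserServiceImpl implements UserService {
            public String greet(String name) {
                return "Hello, " + name;
            }
        }

        // The leaner alternative: just a class, no speculative abstraction.
        class GreetingService {
            public String greet(String name) {
                return "Hello, " + name;
            }
        }

        class Demo {
            public static void main(String[] args) {
                UserService a = new UserServiceImpl();
                GreetingService b = new GreetingService();
                System.out.println(a.greet("world")); // prints "Hello, world"
                System.out.println(b.greet("world")); // prints "Hello, world"
            }
        }
        ```

        Both calls behave identically; the only difference is the extra file, the extra name, and the extra indirection a reader has to click through.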

  • Evotech@lemmy.world · +7/−3 · 7 days ago

    You should try it yourself. See what it can and can’t do, instead of just arguing with your boss about something you don’t really know much about.

    And then you can actually bring the facts to your boss.

    • flying_sheep@lemmy.ml · +4 · 6 days ago

      Yeah, you’ll soon see the boundaries of what it’s useful for and will then be able to make informed decisions.

      Vibe-coding UI for a one-off questionnaire? Why not. Vibe-coding something you need to maintain? Oof.

      • Nighed@feddit.uk · +2 · 6 days ago

        It can be maintainable, but only if you are actually reviewing what it spits out and correcting it (either manually or with more AI).

        That removes a lot of the benefit in a lot of cases.

        It is nice to be able to have it scaffold thousands of initial lines before you dive in though!

        • flying_sheep@lemmy.ml · +2 · 6 days ago

          Exactly, but if you review it, it’s no longer vibe-coding. At least as far as I know, vibe-coding means specifically to not look at the code at all.

          • Nighed@feddit.uk · +1 · 6 days ago

            That’s fair. I’m not sure there’s a nice snappy term for using it properly, though.

  • Nighed@feddit.uk · +2 · 6 days ago

    Have you been to one of the workshops? It CAN be very useful when used right; setting up an MCP server pointing at internal docs/best practices made a huge difference, for example.

    Any criticism of AI you give will have a lot more weight if it comes from a base of knowledge. That means learning how it does (and doesn’t) work so you can critique it without coming across as “that anti AI guy”.

    Make sure you go through everyone else’s PRs carefully and pull out all the stupid AI stuff; it’s great fun ripping them to shreds sometimes.

  • zbyte64@awful.systems · +1 · 6 days ago

    My advice is to stick to making specific observations, so as not to sound hyperbolic.

    Yet at the same time, what is needed is a counter-narrative to “AI can do the job”. My observation is that the industry cycles between agile and waterfall development, and the loudest people are using AI to do waterfall development. This is a bad idea for the same reason waterfall development is bad: trying to write a spec that covers all situations is counterproductive.

    The alternative I see is that we borrow from Agile and treat AI as a pair developer that you outsource small (or largely repetitive) tasks to. This is not vibe coding nor is it TDD as you are still actively developing all levels of the code, but your feedback loop (OODA) is kept short.

    • Leon@pawb.social · +60 · 8 days ago

      I try to push on the maintenance aspect. Developing something new is easy, and my company does do that, but the group I’m in is primarily doing maintenance on existing software. Bug fixes, feature additions, etc. If we generate applications entirely using LLMs, none of us will be experts on the applications we push to the customers.

      They push corpo buzzwords like “responsibility”, but who takes responsibility when no one has done the work to begin with? It feels like a liability nightmare, and the idea of sitting there cleaning up slopcode just isn’t very appealing to me.

    • dan1101@lemmy.world · +8 · 8 days ago

      That’s going to be a problem, almost like a money laundering scheme. AI can spit out content that’s 99% derived from copyrighted content but is itself free of copyright.

  • gigachad@piefed.social · +56 · 8 days ago

    I am having similar experiences, but it is not as bad as you are describing yet. We have a new member on the team who is not a developer himself, but he has been given the task of making our way of working more professional (we are mainly scientists and not primarily software engineers, so that’s a good thing).

    His first task was to create programming guidelines and standards. He produced 8 pages of LLM-generated text and nonsense example code. He honestly put a lot of effort into it, but of course a lot of things in it are wrong. The worst thing, though, is the wall of text. You are nailing it: it is now my task to go through this whole thing and extract the relevant information. It sucks. And I am afraid that soon I will need to review more and more low-quality MRs generated by people who have little experience in programming.

    • ch00f@lemmy.world · +27 · 8 days ago

      We had a dev drop a combined total of 8,300 lines of readme files into the code base over a weekend. I want to nuke all of them; my boss suggests reviewing and updating them.

      • 87Six@lemmy.zip · +5 · 7 days ago

        8,300 lines

        rookie numbers

        I think my team is in the tens of thousands of lines of AI-generated “documentation”.

        They claim the AI can use it to code better in the project.

        Bullshit. The AI can’t load in a single one of these files without filling half the context.

        • 87Six@lemmy.zip · +3 · 7 days ago

          I was recently instructed to have a gander at it.

          I warned that it seemed inconsistent with the actual code.

          I was told I was right: “We should update this to reflect reality.”

          Then they brushed it off and we moved on. The misleading doc is still there, waiting for its next victim.

      • namingthingsiseasy@programming.dev · +2 · 8 days ago

        “I don’t have time to read through that much bullshit.”

        Maybe phrase it a little more kindly, but that’s what I’d try at the very least. “I have other priorities at the moment” could work too.

  • bcgm3@lemmy.world · +52 · 8 days ago

    Our internal slack channels contain more and more AI-written posts, which makes me think: Thank you for throwing this wall of text on me and n other people. Now, n people need to extract the relevant information, so you are able to “save time” not writing the text yourself. Nice!!!

    My last software dev employer did this, except with the “voice recording” feature. Instead of composing messages in text in a text chat (because that takes too long), he’d hit the record button and just start talking it out, then send the recording. Easy! Then the team had to download and listen to ~5 minutes of verbal diarrhea, pausing and rewinding for twice that long in an attempt to glean something useful from it. This particular kind of delusion existed before AI.

        • Michal@programming.dev · +6 · 7 days ago

          Everyone. One person is too lazy to write a message, the others can’t be bothered to listen to the whole thing 🤷‍♂️

          The transcription should be attached to the audio recording, so that if the sender cares about it being correct, they can comment or add corrections.

  • LeapSecond@lemmy.zip · +47 · 8 days ago

    I had a manager who pushed AI a lot. When he left, all the pressure to use it seemed to die down. So maybe it’s just a couple of people creating this environment and if you can get away with avoiding them it’s better.

    The problem with AI code we saw is that often no human has actually looked at it. During reviews you won’t check every line and you’ll have to trust much of the code that seems to do obvious things. But that assumes it was written by a human you also trust. When that human hasn’t reviewed the code either, you end up with code no one in the company has seen (and may not even know how it works).

    • Leon@pawb.social · +12 · 8 days ago

      Your entire comment echoes my thoughts. Things aren’t exactly improved by the idea of adding LLMs to the review process either. Gods.

    • 87Six@lemmy.zip · +8 · 7 days ago

      This.

      I’ve already got my manager to tell me to not use AI on a task. I see this as an absolute win and I’m gunning for more.

      He ALWAYS uses AI first when he needs to figure something out. ALWAYS tells us to use AI for the quick start. But when we do it, and it ends up wasting time, somehow it’s our fault, and we didn’t prompt it properly.

      Also, am I mad, or does Cursor (specifically Sonnet) sometimes act dumb on purpose? Sometimes it codes a feature nearly entirely without many issues; other times it seems unable to comprehend that it’s using the wrong property in a class. I feel like it’s made to make us question each other’s ability to use AI tools and cause internal team unrest.

      • Sunsofold@lemmings.world · +3 · 6 days ago

        Never forget that it isn’t thinking, at all. It comprehends nothing. It’s just a very big, expensive autocomplete. It didn’t understand when it was using the right property; it just rolled its d10000 and got something that fit the requirements, and the time it failed, it rolled outside the desired range. No thought, just numbers.

    • Paddzr@lemmy.world · +3/−14 · 7 days ago

      Spoken like an unemployed person…

      Why would you sabotage or stagnate your career?

      • moonshadow@slrpnk.net · +22/−2 · 7 days ago

        Principles or ideals prioritized above comfort and stability; it’s fucked up that you have to ask. Spoken like a hollow bootlicker.

        • Paddzr@lemmy.world · +1/−5 · 7 days ago

          Sounds like I walked into the reddit antiwork crowd! Always black and white with you lot… If you’re in an industry and market that allows that, I’m happy for you.

          • moonshadow@slrpnk.net · +4 · 7 days ago

            You’re the one who crashed in with the judgment, name-calling, and confrontational attitude. You couldn’t be more thoroughly shaped by your “industry and market”, and I’m not sure if it’s more gross or sad. Corporations might be people now, but they’re sure as shit not gonna cry at your funeral. Get a life outside your job.

            • Paddzr@lemmy.world · +1/−2 · 7 days ago

              Some very bold assumptions.

              Reality is, an attitude like that doesn’t get you hired. I don’t hire based on people’s views on AI and LLMs, but I do hire on attitude and on one’s ability to be molded to fit the job I’m hiring for. That hasn’t failed me in 20 years, and this is far from the first fear-mongering in tech.

              • moonshadow@slrpnk.net · +3/−1 · 7 days ago

                Ability to be molded as a primary virtue is completely alien to me man. Maybe it’s a cultural thing. “Reality is,” I see forming your self to fit the needs of this system as a tragic waste of life. There are ways to sustain yourself beyond blind acquiescent compliance, and basic human needs a career can never meet. I would feel like a failure having spent the last twenty years of my life guided by what was best for a business.

                • Paddzr@lemmy.world · +1/−1 · 6 days ago

                  I’m not sure where your belief that work somehow defines me comes from… Maybe it’s something from “your culture”.

                  You make so many assumptions it’s wild, and it’s tiring to even bother refuting them. I’ll leave it here, as I’m not convinced you’re actually engaging in any sort of conversation anyway.

  • jubilationtcornpone@sh.itjust.works · +31/−1 · 8 days ago

    Any idiot can write code. “Vibe coding” is just the new pasting code from stack overflow. For that matter, a lot of LLM generated code probably came from stack overflow.

    Your value as a developer is not in your ability to rapidly pump out code. Your value is in your ability to design and build complex systems using the tools at your disposal.

    As an industry, software engineering has not yet been forced to reckon with the consequences of “vibe coding.” The consequences being A.) the increasing number of breaches that will occur due to poor security practices and B.) the completely unmanageable mountain of technical debt. A lot of us have been here before. Particularly on the tech debt front. If you’ve ever been on a project where the product team continually pushes to release features as fast as possible, everything else be damned, then you know what I mean. Creating new code is easy. Maintaining old code is hard.

    Everything starts out great. The team keeps blowing through milestones. Everyone on the business side is happy. Then, a couple years into the project, strange things start happening. It’s kind of innocuous at first. Seemingly easy tickets take longer to complete than they used to. The PR change logs get longer and longer. Defect rates skyrocket. Eventually, new feature development grinds to a halt and the product team starts frantically asking, “what the hell is going on?”

    A question to which maybe one or two of the more senior devs respond, “Well, uh, we have a lot of technical debt. I mean A LOT. We’re having to spend tons of time refactoring just to make minor changes. And of course, unplanned refactoring tends to introduce bugs.”

    The product team gets an expression on their face like Wile E. Coyote as the shadow of a falling ACME anvil closes in around him. At this moment, they have two choices. Option A: develop a plan to mitigate the existing tech debt and realign the dev team’s objectives to help prevent this situation again by focusing on quality over quantity. Option B: ignore the problem and try to ram feature development back on track by sheer force of will.

    Only one of these options will achieve a meaningful outcome, and it’s not B. Unfortunately, in my experience, B is often the chosen option. The product team does not understand that while Option A impedes feature development, it does so only temporarily. Option B impedes feature development permanently.

    We’re going to see a very similar cycle with vibe coding. It just takes time to materialize. Personally, I think the tech debt of vibe-coded projects will be compounded by the sheer verbosity of LLMs and by the fact that no one actually understands a vibe-coded project well enough to fix it.

    That said, these issues are rooted in hubris and ignorance. Failure to appreciate the “engineering” part of software engineering. This is not something you alone can change.

    The AI hype is going to disappear, probably sooner rather than later, just like every other tech hype cycle before it. But LLMs are probably here to stay, so we have to make the best of it. I don’t usually use LLMs for code generation; there are better tools for that already. I do use them frequently for research. Honestly, using an LLM with search incorporated is often a lot faster than scouring dozens of websites to figure out how to do something. You still have to take the information with a grain of salt, as you would with anything on the Internet, because LLMs have no understanding of the text they spit out and will feed you incorrect information without missing a beat.

    If I were you, I would focus on quality over quantity. Closing tickets faster is pointless if you’re introducing a bunch of new bugs. If your bosses don’t know that already, they will learn it soon enough.

    • pinball_wizard@lemmy.zip · +4 · 7 days ago

      Closing tickets faster is pointless if you’re introducing a bunch of new bugs.

      Objectively true, but if my bonus reflects tickets rather than bugs, I’m gonna close so many tickets anyway, because I don’t own the place.

      Which is also why wise companies grant their employees stock.

  • zkfcfbzr@lemmy.world · +29 · 8 days ago

    Our internal slack channels contain more and more AI-written posts, which makes me think: Thank you for throwing this wall of text on me and n other people. Now, n people need to extract the relevant information, so you are able to “save time” not writing the text yourself. Nice!!!

    I think this is one of your best bets for getting a real policy change. Bring it up; mention that posts like that may take less time to “write”, but that they’re almost always obnoxiously verbose, contain paragraphs that say essentially nothing, and take far longer to read than a hand-typed message would. The argument that one person is saving time at the expense of dozens (?) of people losing time may carry a lot of weight, especially if these bosses are in the same Slack channels and read them.

    Past that I’d just let things go as they are, and take every opportunity to point out when AI made a problem, or made a problem more difficult to solve (while downplaying human-created problems).

  • kunaltyagi@programming.dev · +26 · 8 days ago

    For centuries, we spent less effort consuming content than it took to produce the content.

    Good teammates and content producers understand that their content needs to have intrinsic value and benefit beyond its mere existence.

    If you make your team 10x slower by sending LLM generated 10 page content instead of a one liner, you are actively hindering the team.

    Efficiency is about not just the production of content (slop or not) but the overall system. That’s why corpo speak has always been such a waste: too many words to say nothing in a mandatory all-hands. Now the dial is turned up to 11, with the same time waste everywhere.

  • Etterra@discuss.online · +23 · 7 days ago

    Step 1, update your resume.

    Step 2, follow your boss’ instructions until it all breaks.

    Step 2.5, document everything so he can’t blame you later.

    Step 3, go have a beer; you don’t get paid enough to give a shit.