• TurdBurgler@sh.itjust.works · edited · 23 hours ago

    While it’s possible to see gains in complex problems through brute force, learning more about prompt engineering is a powerful way to save time, money, tokens and frustration.

    I see a lot of people saying, “I tried it and it didn’t work,” but have they read the guides or just jumped right in?

    For example, if you haven’t read the Claude Code guide, you might never have set up MCP servers or taken advantage of slash commands.
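    In Claude Code, a custom slash command is just a markdown prompt file committed to the repo; the file name and prompt below are a hypothetical sketch, not from any real project:

    ```markdown
    <!-- .claude/commands/fix-lint.md (hypothetical example) -->
    Run the project linter and fix every issue it reports.
    Do not change behavior, only style.
    Summarize what you changed at the end.
    ```

    A file like this would typically surface as a /fix-lint command in the session.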

    Your CLAUDE.md might be trash, and maybe you’re using @file references wrong, blowing tokens or biasing your context in ways you don’t intend.
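    As a sketch of what a non-trash CLAUDE.md can look like (the rules and paths below are hypothetical, not from any real project):

    ```markdown
    # CLAUDE.md (hypothetical example)
    - Run `pytest -q` before declaring a task done
    - Format with `ruff format`; never hand-format
    - Application code lives in src/; never edit generated/
    - Prefer small, reviewable diffs over sweeping rewrites
    ```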

    LLM context windows can only scale so far before you start seeing diminishing returns, especially if the model or the tooling is compacting them.

    1. Plan first, using planning modes to help you, and decompose the plan into small steps
    2. Have the model keep track of important context externally (like in markdown files with checkboxes) so the model can recover when the context gets fucked up
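    Point 2 can be as simple as a scratchpad file the model updates as it works and reads back after a compaction (the file name and tasks below are hypothetical):

    ```markdown
    <!-- TASKS.md (hypothetical scratchpad) -->
    ## Refactor auth module
    - [x] Inventory call sites of login()
    - [ ] Extract token logic into token.py
    - [ ] Update tests and run pytest tests/auth/
    Note: session.py imports login lazily; check before renaming.
    ```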

    https://www.promptingguide.ai/

    https://www.anthropic.com/engineering/claude-code-best-practices

    There are community guides that take this even further, but these are some starting references I found very valuable.

      • expr@programming.dev · 22 hours ago

        Yup. It’s insanity that this is not immediately obvious to every software engineer. I think we have some implicit tendency to assume we can make any tool work for us, no matter how bad.

        Sometimes, the tool is simply bad and not worth using.

      • TurdBurgler@sh.itjust.works · edited · 11 hours ago

        Early adopters will be rewarded by having better methodology by the time the tooling catches up.

        You’re too busy trying to dunk on me to understand that you already have some really helpful tools.

    • jacksilver@lemmy.world · 19 hours ago

      While you’re right that it’s a new technology and not everyone is using it right, if it requires all of that setup and infrastructure to work, then are we sure it provides a material benefit? Most projects never get that kind of attention at all; requiring it for AI integration means that, currently, it may be more work than it’s worth.

        • jacksilver@lemmy.world · 8 hours ago

          My point was that the investment in versus the value out may not be worth it for many projects. Beyond that, it may not be maintainable for all projects (at least with how fast things have been changing in this space and the heavy reliance on 3rd-party systems to make it work).

          • TurdBurgler@sh.itjust.works · edited · 5 hours ago

            It’s professional development of an emerging technology. You’d rather bury your head in the sand and say it’s not useful?

            The only reason not to take it seriously is to reinforce a worldview, instead of looking at how experts in the field are leveraging it, or having discourse about the pitfalls you’ve encountered.

            The marketing AI hype cycle did the technology an injustice, but that doesn’t mean the technology isn’t useful for accelerating deterministic processes.