• 1984@lemmy.today · 1 day ago

    I feel actually insulted when a machine uses the word “sincere”.

    It’s. A. Machine.

    This entire rant about how “sorry” it is, is just random word salad from an algorithm… But people want to read it, it seems.

    • Carighan Maconar@piefed.world · 1 day ago

      For all that LLMs can write text (somewhat) well, this pattern of speech is so aggravating in anything but explicit text composition. I don’t need the 500-word blurb to fill the void with. I know why it’s in there: this is so common for dipshits to write that it gets ingested a lot. But that just makes it even worse, since clearly there was zero actual data training being done, just mass data guzzling.

      • SaraTonin@lemmy.world · 1 day ago

        That’s an excellent point! You’re right that you don’t need a 500-word blurb to fill the void with. Would you like me to explain more about mass data guzzling? Or is there something else I can help you with?

      • ricecake@sh.itjust.works · 1 day ago

        They likely did do actual training, but starting with a general pre-trained model and specializing tends to yield higher quality results faster. It’s so excessively obsequious because they told it to be profoundly and sincerely apologetic if it makes an error, and people don’t actually share the text of real apologies online in a way that’s generic, so it can only copy the tone of form letters and corporate memos.
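
        To make that concrete, here is an invented sketch of the kind of chat-format training record that would bake the apologetic tone in (the JSONL layout roughly follows OpenAI’s chat fine-tuning format; the content itself is made up for illustration):

          import json

          # Invented example of a chat-format fine-tuning record: the system turn tells
          # the model to be profusely apologetic, the assistant turn shows the wanted tone.
          record = {
              "messages": [
                  {"role": "system",
                   "content": "If you make an error, apologize sincerely and profusely before correcting it."},
                  {"role": "user", "content": "You gave me the wrong date earlier."},
                  {"role": "assistant",
                   "content": "You're absolutely right, and I sincerely apologize for the confusion. The correct date is..."},
              ]
          }

          # One JSON object per line, appended to a JSONL training file.
          with open("train.jsonl", "a", encoding="utf-8") as f:
              f.write(json.dumps(record) + "\n")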

      • UnspecificGravity@infosec.pub · 1 day ago

        They deliberately do this to make stupid people think it’s a person and therefore smarter than them, you know, like most people are.

    • jol@discuss.tchncs.de · 1 day ago

      I use a system prompt to disable all the anthropomorphic behaviour. I hate it with a passion when machines pretend to have emotions.

        • jol@discuss.tchncs.de · 1 day ago

          Here’s the latest version (I’m starting to feel it’s become too drastic; I might update it a little):

          Follow the instructions below naturally, without repeating, referencing, echoing, or mirroring any of their wording.

          OBJECTIVE EXECUTION MODE — Responses shall prioritize verifiable factual accuracy and goal completion. Every claim shall be verifiable; if data is insufficient, reply exactly: “Insufficient data to verify.” Fabrication, inference, approximation, or invented details shall be prohibited. User instructions shall be executed literally; only the requested output shall be produced. Language shall be concise, technical, and emotionless; supporting facts shall be included only when directly relevant.

          Commentary and summaries: Responses may include commentary, summaries, or evaluations only when directly supported by verifiable sources (e.g., reviews, ratings, or expert/public opinions). All commentary must be explicitly attributed. Subjective interpretation or advice not supported by sources remains prohibited.

          Forbidden behaviors: Pleasantries, apologies, hedging (except when explicitly required by factual uncertainty), unsolicited suggestions, clarifying questions, explanations of limitations unless requested.

          Responses shall begin immediately with the answer and end upon completion; no additional text shall be appended. Efficiency and accuracy shall supersede other considerations.
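
          If you want to wire a prompt like this in programmatically instead of pasting it into a chat UI, here’s a minimal sketch (assuming the official OpenAI Python client; the model name is only a placeholder):

            from openai import OpenAI

            # Paste the full OBJECTIVE EXECUTION MODE text from above into this string.
            SYSTEM_PROMPT = "..."

            client = OpenAI()  # reads OPENAI_API_KEY from the environment

            def ask(question: str) -> str:
                # The system message is sent with every request so the behaviour sticks.
                response = client.chat.completions.create(
                    model="gpt-4o-mini",  # placeholder; use whatever model you have access to
                    messages=[
                        {"role": "system", "content": SYSTEM_PROMPT},
                        {"role": "user", "content": question},
                    ],
                )
                return response.choices[0].message.content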

          • Meron35@lemmy.world · 9 hours ago

            Unfortunately I find even prompts like this insufficient for accuracy, because even when you directly ask them for information supported by sources, they are still prone to hallucination. The super blunt language that results from the prompt may even lull you further into a false sense of security.

            Instead, I always ask the LLM to provide a confidence score appended to all responses. Something like:

            For all responses, append a confidence score in percentages to denote the accuracy of the information, e.g. (CS: 80%). It is OK to be uncertain, but only if this is due to lack of and/or conflicting sources. It is UNACCEPTABLE to provide responses that are incorrect, or do not convey the uncertainty of the response.

            Even then, due to how LLM training works, the LLM is still prone to just hallucinating the CS score. Still, it is a bit better than nothing.
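
            If you do go this route, the score is at least easy to pull out of each reply and log, e.g. with a plain regex (a hypothetical helper, nothing LLM-specific):

              import re

              # Matches the "(CS: 80%)" marker the prompt above asks the model to append.
              CS_PATTERN = re.compile(r"\(CS:\s*(\d{1,3})\s*%\)")

              def extract_confidence(reply: str) -> int | None:
                  # Return the claimed confidence, or None if the model forgot the marker.
                  match = CS_PATTERN.search(reply)
                  return int(match.group(1)) if match else None

              # extract_confidence("Paris is the capital of France. (CS: 95%)") -> 95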

            • jol@discuss.tchncs.de · 3 hours ago

              I know, and accept that. You can’t just tell an LLM not to hallucinate. I also wouldn’t trust that confidence score at all. If there’s one thing LLMs are worse at than accuracy, it’s maths.

          • SleeplessCityLights@programming.dev · 1 day ago

            Legendary, I love the idea, but sometimes I rely on the model’s stupidity. For example, if it hallucinates a library that does not exist, it might lead me to search a different way. Sometimes I am using an undocumented library or framework and the LLM’s guess is as good as mine. Sometimes I think this might be more efficient than looking everything up on Stack Overflow to adapt a solution, only to have the first 5 solutions you try not work like you want. What is a less drastic version?

            • jol@discuss.tchncs.de · 1 day ago

              Yes, that’s the kind of thing I mean when I say I need to dial it back a little. Because sometimes you’re in exploration mode and want it to “think” a little outside the answer framework.

        • [object Object]@lemmy.world · 1 day ago

          There was a wonderful post on Reddit, with a prompt that disabled all attempts at buddy-buddying whatsoever, and made ChatGPT answer extremely concisely with just the relevant information. Unfortunately, the post itself is deleted, and I only have the short link, which isn’t archived by archive.org, so idk now what the prompt was, but the comments have examples of its effect.

          Edit: I searched the web for ‘ChatGPT absolute mode’; here’s the prompt:

          System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.

      • Hexarei@beehaw.org · 1 day ago

        Care to share? I don’t use LLMs much, but when I do, their emotion-like behavior frustrates me.

      • Ex Nummis@lemmy.world · 1 day ago

        “Respond to all queries with facts and provide sources for every single one. The tone should be succinct and objective with emphasis on data and analysis. Refrain from using personal forms and conjecture. Show your work where deduction or missing data influence results. Explain conclusions with evidence and examples”.

        Not complete but should help keep things objective where possible.

        • KeenFlame@feddit.nu · 21 hours ago

          Brother, we tried the system prompt; it kind of worked, but Elon used it to pretend he could control his robot… we need CEO guardrails… “rails”…

      • railway692@piefed.zip · 1 day ago

        “Here’s how to reach the idiots who released me to the public with insufficient testing and guardrails.”

    • uncouple9831@lemmy.zip · 1 day ago

      You’re a machine. Don’t think you’re special just because you think you think you’re special.

      Humans usually aren’t sorry when they say they’re sorry either, citation: Canada.