• Meron35@lemmy.world · 11 hours ago

    Unfortunately I find even prompts like this insufficient for accuracy, because even when you directly ask them for information that is directly supported by sources, they are still prone to hallucination. The super blunt language produced by the prompt may even lull you further into a false sense of security.

    Instead, I always ask the LLM to provide a confidence score appended to all responses. Something like

    For all responses, append a confidence score in percentages to denote the accuracy of the information, e.g. (CS: 80%). It is OK to be uncertain, but only if this is due to lack of and/or conflicting sources. It is UNACCEPTABLE to provide responses that are incorrect, or do not convey the uncertainty of the response.

    Even then, due to how LLM training works, the LLM is still prone to just hallucinating the CS score. Still, it is a bit better than nothing.
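    If you want to wire this into a script rather than paste it by hand, here’s a rough sketch of how the instruction could go in as a system prompt and the (possibly hallucinated) score get parsed back out. The model name, the question, and the regex are just illustrative assumptions on my part, not anything the API or model guarantees.

    ```python
    import re
    from openai import OpenAI  # assumes the official OpenAI Python client; any chat API works similarly

    # The confidence-score instruction from above, sent as a system prompt.
    CS_INSTRUCTION = (
        "For all responses, append a confidence score in percentages to denote the "
        "accuracy of the information, e.g. (CS: 80%). It is OK to be uncertain, but only "
        "if this is due to lack of and/or conflicting sources. It is UNACCEPTABLE to "
        "provide responses that are incorrect, or do not convey the uncertainty of the response."
    )

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask_with_confidence(question: str, model: str = "gpt-4o-mini"):
        """Ask a question with the CS instruction and pull the (possibly hallucinated) score back out."""
        reply = client.chat.completions.create(
            model=model,  # model name is an illustrative assumption
            messages=[
                {"role": "system", "content": CS_INSTRUCTION},
                {"role": "user", "content": question},
            ],
        )
        text = reply.choices[0].message.content
        match = re.search(r"\(CS:\s*(\d{1,3})%\)", text)  # parse "(CS: 80%)" if the model complied
        score = int(match.group(1)) if match else None
        return text, score

    answer, score = ask_with_confidence("Who maintains the fedilink spec?")  # example question, arbitrary
    print(score, answer)
    ```

    A missing score (None) is itself a signal that the model ignored the instruction, which in my experience happens often enough that you need to handle it.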

    • jol@discuss.tchncs.de · edited · 6 hours ago

      I know, and accept that. You can’t just tell an LLM not to hallucinate. I would also not trust that confidence score at all. If there’s one thing LLMs are worse at than accuracy, it’s maths.