The number of paying Copilot subscribers has leaked, and it is a disaster. Microsoft is now even reshaping Satya Nadella’s CEO role around tech leadership rather than delivering commercial results.

  • Aceticon@lemmy.dbzer0.com
    3 hours ago

    Yeah, LLMs kinda-sorta-almost work for nearly anything, but their failures have a uniform distribution in terms of seriousness: an LLM is just as likely to give an answer that will kill people if acted upon as it is to make a minor mistake in an answer.
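    To make the point concrete, here is a toy simulation (all numbers and weightings are purely illustrative assumptions, not measurements of any real model): if mistake severity is uniformly distributed, catastrophic errors occur far more often than under a human-like distribution where severe errors are actively avoided.

    ```python
    import random

    random.seed(0)

    # Toy severity scale for a mistake: 1 (trivial slip) .. 10 (catastrophic).
    SEVERITIES = list(range(1, 11))

    def llm_error_severity():
        # The comment's claim: an LLM's mistakes are equally likely at any
        # severity level, i.e. uniformly distributed.
        return random.choice(SEVERITIES)

    def human_error_severity():
        # Assumed human behaviour: effort is concentrated on avoiding the
        # worst mistakes, so severity is heavily skewed toward minor slips.
        # The inverse-square weighting is an arbitrary illustrative choice.
        weights = [s ** -2 for s in SEVERITIES]
        return random.choices(SEVERITIES, weights=weights)[0]

    N = 100_000
    llm_catastrophes = sum(llm_error_severity() >= 9 for _ in range(N))
    human_catastrophes = sum(human_error_severity() >= 9 for _ in range(N))
    print(llm_catastrophes, human_catastrophes)
    ```

    Under these assumptions the uniform error-maker produces roughly an order of magnitude more catastrophic-severity mistakes than the skewed one, which is the whole argument in miniature: it is not the error *rate* that matters most, but the shape of the severity distribution.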

    Statistical text generators, unlike people, have no logical consistency checks or contextual awareness, and that makes LLMs unsuitable for just about any application where some error modes could be costly or dangerous. Even barely trained people can work in such settings, because some things are obviously dangerous or wrong to even the dumbest of humans, so they simply won’t do them; on top of that, humans put much more effort and attention into avoiding the worst kinds of mistakes than the lighter kind.

    Of course, one has to actually be capable of analyzing things logically to figure out this core inherent weakness in how LLMs work when it comes to using them in most domains, as it’s not directly visible and is instead about process. And that’s not something talkie-talkie grifters are good at, since they’re used to dealing with people, who can be pushed around and subtly manipulated, unlike Mathematics and Logic.