The number of paying subscribers for Copilot has leaked, and it is a disaster. Microsoft is now even reshaping Satya Nadella’s CEO role toward tech leadership rather than delivering commercial results.
Right, should say deep neural networks. Perceptrons hit a brick wall because there are problems they simply cannot handle (anything that isn’t linearly separable, XOR being the textbook case). Multi-layer networks stalled because nobody went ‘what if we just pretend there’s a gradient?’ until twenty-goddamn-twelve.
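(For anyone who wants the brick wall in runnable form, here’s a throwaway numpy sketch - hidden-layer size, learning rate, and iteration counts are made-up numbers, not anyone’s canonical setup - showing the classic perceptron rule flailing at XOR while one hidden layer plus an actual gradient cracks it.)

```python
# Sketch: XOR is not linearly separable, so a single-layer perceptron
# can never get all four cases right, while one hidden layer can.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)  # XOR truth table

# Single-layer perceptron: step(w.x + b), classic perceptron learning rule.
w, b = np.zeros(2), 0.0
for _ in range(1000):
    for xi, yi in zip(X, y):
        pred = float(w @ xi + b > 0)
        w += (yi - pred) * xi
        b += (yi - pred)
# Whatever this prints, it is never [0, 1, 1, 0].
print("perceptron:", [float(w @ xi + b > 0) for xi in X])

# Two-layer net with sigmoid units, trained by plain gradient descent
# on squared error (i.e. an actual gradient, the thing that was missing).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sig = lambda z: 1 / (1 + np.exp(-z))
for _ in range(5000):
    h = sig(X @ W1 + b1)
    out = sig(h @ W2 + b2)
    d_out = (out - y[:, None]) * out * (1 - out)  # squared-error backprop
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)
# Usually lands near [0, 1, 1, 0]; XOR can occasionally stick in a
# local minimum with an unlucky init, which is its own bit of history.
print("two-layer:", out.round(2).ravel())
```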
Broad applications will emerge and succeed. LLMs kinda-sorta-almost work for nearly anything. What the current grifters have proven is that billions of dollars won’t overcome fundamental problems in network design. “What’s the next word?” is simply the wrong question for a combination chatbot / editor / search engine / code generator / puzzle solver / chess engine / air fryer. But it’s obviously possible for one program to do all those things. (Assuming you place your frozen shrimp directly atop the video card.) Developing that program will closely resemble efforts to uplift LLMs. We’re just never gonna get there from LLMs specifically.
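(To make “the wrong question” concrete, here’s a toy sketch of the one objective every LLM is trained on - the logits are random numbers standing in for a real model, the vocab and sentence are made up - just to show what the loss does and doesn’t care about.)

```python
# The entire training signal for an LLM: cross-entropy on the next token.
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat"]
tokens = [0, 1, 2, 3, 0, 4]  # "the cat sat on the mat"
rng = np.random.default_rng(0)
# Stand-in for real model output: one row of scores per position.
logits = rng.normal(size=(len(tokens) - 1, len(vocab)))

# At each position t, score the predicted distribution against token t+1.
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
targets = tokens[1:]
loss = -np.log(probs[np.arange(len(targets)), targets]).mean()
print(f"next-token cross-entropy: {loss:.3f}")
# Nothing in this number knows whether the continuation is true, safe,
# or consistent -- only whether it was statistically likely.
```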
Yeah, LLMs kinda-sorta-almost work for nearly anything, but their failures have a uniform distribution in terms of seriousness - an LLM is just as likely to give an answer that will kill people if acted upon as it is to make a minor mistake in an answer.
Statistical text generators don’t have logical consistency checks or contextual awareness, unlike people, and that makes LLMs unsuitable for just about any application where some error modes are costly or dangerous. Even barely trained people can work in those settings, because some things are obviously dangerous or wrong to even the dumbest of humans, so they simply won’t do them; humans also put far more effort and attention into avoiding the worst kinds of mistakes than the lighter kind.
Of course, one has to actually be capable of logically analyzing things to figure out this core, inherent weakness in how LLMs work when it comes to using them in most domains - it’s not directly visible, it’s about process - and that’s not something talkie-talkie grifters are good at, since they’re used to dealing with people, who can be pushed around and subtly manipulated, unlike Mathematics and Logic.
‘LLMs specifically won’t work.’
‘No, see, LLMs won’t work.’
Okay.
I’m not disagreeing; rather, I’m expanding on your point.