- cross-posted to:
- technology@lemmy.world
Despite the rush to integrate powerful new models, only about 5% of AI pilot programs achieve rapid revenue acceleration; the vast majority stall, delivering little to no measurable impact on P&L.
The research—based on 150 interviews with leaders, a survey of 350 employees, and an analysis of 300 public AI deployments—paints a clear divide between success stories and stalled projects.
Wait, we have AI flying planes now?
It took me a while to realize it is an Otto pilot…
I know you’re joking, but for those who don’t know: the headline means “startups”, and they just wanted to avoid the overused term.
Also, yeah, actually it’s far easier to have an AI fly a plane than a car. No obstacles, no sudden changes, no little kids running out from behind a cloud-bank, no traffic except during takeoff and landing, and even those phases can be automated more and more.
In fact, we don’t need “AI”: we’ve had autopilots that handle almost all aspects of flight for decades now. The F/A-18 Hornet famously has hand-grips by the seat that the pilot is supposed to hold onto during takeoff so they don’t accidentally touch a control.
Conversely, AI running ATC would be a very good thing. To a point.
It’s been technically feasible for a while to handle 99% of what an ATC does automatically. The problem is that you really want a human to step in on those 1% of situations where things get complicated and really dangerous. Except, the human won’t have their skills sharpened through constant use unless they’re handling at least some of the regular traffic.
The trick has been to have the AI do, say, 70% of the job while still having a human step in sometimes. Deciding when the human should step in is the hard problem.
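Just to illustrate the shape of that policy, a toy sketch in Python (all thresholds and numbers here are made up for illustration, not from any real ATC system):

```python
# Hypothetical handover policy: defer to a human on hard cases, but also
# route a slice of routine traffic to humans so their skills stay sharp.
import random

CONFIDENCE_FLOOR = 0.9      # below this, automation must hand over (made up)
HUMAN_PRACTICE_RATE = 0.3   # share of routine traffic humans still work (made up)

def route_to_human(situation_confidence: float) -> bool:
    if situation_confidence < CONFIDENCE_FLOOR:
        return True  # genuinely hard case: human takes over
    # Routine case: occasionally give it to a human anyway, for practice.
    return random.random() < HUMAN_PRACTICE_RATE
```

The practice rate is the knob: too low and controllers rust, too high and you lose the point of automating.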
what do you think an autopilot is?
A finely refined model based on an actual understanding of physics and not a glorified Markov chain.
To be fair, that also falls under the blanket of AI. It’s just not an LLM.
No, it does not.
A deterministic, narrow algorithm that solves exactly one problem is not an AI. Otherwise the Pythagorean theorem would count as AI, or any other mathematical formula for that matter.
Intelligence, even in terms of AI, means being able to solve new problems. An autopilot can’t do anything other than pilot a specific aircraft - and that’s a good thing.
Not sure why you’re getting downvoted. Well, I guess I do. AI marketing has ruined the meaning of the word to the extent that an if statement is “AI”.
Because they are wrong. An airplane autopilot is not “one model”; it’s a complex set of systems that take actions based on a trained model. The training of that model used standard ML practices. Sure, it’s a basic algorithm, but it follows the same principles. That’s textbook AI.
No one would have debated this pre-LLM. That being said, if I was in the industry, I’d be calling it an algorithm instead of AI, because those out of the know, well, won’t get it.
Intelligence, even in terms of AI, means being able to solve new problems.
I’d argue that an artificial intelligence is a (usually computational) system that can mimic a specific behavior that we consider intelligent, deterministic or not, like playing chess, writing text, piloting an aircraft, etc.
And you’d argue wrong here, that is simply not the definition of intelligence.
Extend your logic a bit. Playing an instrument requires intelligence. Is a drum computer intelligent? A mechanical music box?
Yes, the definition of intelligence is vague, but that doesn’t mean you can extend it indefinitely.
I wanna point out three things:
- How can you tell someone is wrong when you have no idea?
- I think you missed the point, I said artificial intelligence, not intelligence as a whole.
- Yes, playing an instrument in a way that makes sense requires a certain degree of intelligence; the music box itself is not intelligent, but intelligence was required to build it.
I don’t know where you’re getting your definitions but you are wrong.
Artificial intelligence (AI) is the capability of computational systems to perform tasks typically associated with human intelligence, such as learning, reasoning, problem-solving, perception, and decision-making.
For example, the humble A* Pathfinding Algorithm falls under the domain of AI, despite it being a relatively simple and common process. Even fixing small problems is still considered problem solving.
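For anyone who hasn’t seen it, the whole algorithm fits in a few lines. A minimal Python sketch on a toy 5×5 grid (the grid and Manhattan heuristic are just for illustration):

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    # neighbors(n) yields (next_node, step_cost); the heuristic must not
    # overestimate the remaining cost, or the returned path may be suboptimal.
    frontier = [(heuristic(start, goal), 0, start, [start])]
    best_cost = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in best_cost and best_cost[node] <= g:
            continue  # already reached this node more cheaply
        best_cost[node] = g
        for nxt, cost in neighbors(node):
            g2 = g + cost
            heapq.heappush(frontier, (g2 + heuristic(nxt, goal), g2, nxt, path + [nxt]))
    return None  # no route exists

# Toy 5x5 grid, 4-way movement, unit step costs.
def grid_neighbors(p):
    x, y = p
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < 5 and 0 <= ny < 5:
            yield (nx, ny), 1

manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
print(a_star((0, 0), (4, 4), grid_neighbors, manhattan))
```

Deterministic, narrow, and still textbook AI by the classic definition.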
Can text generators solve new problems though?
To a certain extent, yes.
ChatGPT was never explicitly trained to produce code or translate text, but it can do it. Not super good, but it manages some reasonable output most of the time.
Mild altitude and heading corrections.
Someone will be around to say “not real AI”, and I think that’s the wrong way to look at it.
It’s more “real AI” than the LLM slop companies are desperately trying to make the future.
That’s terrifying, but I don’t see why my regional train can’t run on AI in the middle of the night.
It’s a bubble. This article is by someone realizing that who has yet to move their investments.
Yeah, 95% of AI companies either have no functional product or just a ChatGPT token account and a prompt.
Most of them could be replaced by a high school student and an N8N instance.
Have you talked to the average high school student these days? Not that the typical AI LLM response is much better, but I honestly feel sorry for the kids.
Probably highly dependent on the school, state, and family. I’m around a lot of kids in GT and Engineering classes.
You mention N8N. Last week I had a sales VP mention it as well. Could you elaborate on your perspective? I’ve been building databases in BigQuery for the past month and will start utilizing ML for a business need so I probably missed some write up about it.
I’m a program manager, have some small coding experience.
N8n is like Legos for API access: you can build tons of integrations that would have otherwise been impossible, with just a few hours of work. We have an issue where people don’t complete their Slack profiles. Using n8n I made an integration between our HR software and Slack so that it automatically populates most fields without having to bug people.
And after that, it runs a check for what manual thing they are missing and sends them a message.
You put an HTTP block behind a filter block, behind a Slack block, and it handles everything for you.
Would recommend you give it a try. I have it running on the work instance, but I also have a local one running on my Raspberry Pi that I plan to use to fool around.
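For the curious, here’s roughly what that flow boils down to if you wrote it by hand. The HR endpoint and field names are made up; the Slack calls are real slack_sdk methods:

```python
# Rough by-hand equivalent of the n8n flow: pull HR records, fill in
# missing Slack profile fields, then nudge people about the rest.
import os
import requests
from slack_sdk import WebClient

slack = WebClient(token=os.environ["SLACK_BOT_TOKEN"])
# Hypothetical HR API; n8n's HTTP block would make this request.
people = requests.get("https://hr.example.com/api/employees", timeout=30).json()

for person in people:
    user = slack.users_lookupByEmail(email=person["email"])["user"]
    profile = user.get("profile", {})
    missing = [f for f in ("title", "phone") if not profile.get(f)]
    if missing:
        # Fill in what the HR record knows (setting another user's
        # profile needs an admin-scoped token)...
        slack.users_profile_set(
            user=user["id"],
            profile={f: person.get(f, "") for f in missing},
        )
        # ...then DM them about anything that was blank.
        slack.chat_postMessage(
            channel=user["id"],
            text=f"Heads up: your Slack profile was missing {', '.join(missing)}.",
        )
```

n8n just gives you each of those steps as a drag-and-drop block with the auth handled for you.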
N8N is like IFTTT (if this then that)
It’s a mostly codeless solution for wiring things together, meaning you can use semi-non-skilled labor to do somewhat difficult things.
This guy can be a little hard to stomach for some, but he goes into great depth on setting up some n8n use cases, and he doesn’t waste a lot of time doing it. https://www.youtube.com/watch?v=ONgECvZNI3o
Right now, we use it so that if IT puts a certain emoji on a Slack message, it makes a Jira ticket, letting us know that work has been triaged and created, but if a regular user does it, it fails.
You could have N8N read a slack channel, or load an RSS feed, or take input from a website, send that data through an LLM prompt to transform the data and then have it or an agent do some work or respond to the input, with minimal need to write code. Really the limits are what services it supports (or your ability to add that API) and your imagination.
In Chuck’s example, he had N8N load several RSS feeds, make thumbnails from them, read the description, and use an LLM to shorten the text without losing meaning and provide a clean list of media to a Discord channel.
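Stripped of the n8n UI, that pipeline is roughly this (feed URL and webhook are placeholders, and I’m skipping the thumbnails):

```python
# Sketch: pull RSS entries, have an LLM shorten each description, and
# post the result to a Discord channel via webhook.
import feedparser
import requests
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
WEBHOOK = "https://discord.com/api/webhooks/..."  # your channel's webhook URL

for entry in feedparser.parse("https://example.com/feed.xml").entries[:5]:
    short = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"Shorten this without losing meaning: {entry.summary}",
        }],
    ).choices[0].message.content
    requests.post(WEBHOOK, json={"content": f"**{entry.title}**\n{short}\n{entry.link}"})
```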
https://n8n.io/integrations/google-bigquery/and/openai/
You could define a trigger, say a chatbot or a Slack channel, have it hit your BigQuery, send the data to GPT to make it human-readable, and respond to requests in the channel, with some futzing around with logins, flowcharting, and JavaScript variable names…
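Roughly that chain in plain Python, if you skipped n8n entirely (the dataset and table names are invented; the client libraries and calls are real):

```python
# Sketch: query BigQuery, then hand the rows to an LLM to make them readable.
from google.cloud import bigquery
from openai import OpenAI

bq = bigquery.Client()   # uses your Google Cloud credentials
llm = OpenAI()

rows = bq.query(
    "SELECT region, SUM(revenue) AS total "
    "FROM `my_project.sales.orders` GROUP BY region"  # hypothetical table
).result()
table = "\n".join(f"{row.region}: {row.total}" for row in rows)

answer = llm.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": f"Summarize these figures for a Slack channel:\n{table}"}],
).choices[0].message.content
print(answer)  # an n8n Slack block would post this to the channel instead
```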
Most of them could be replaced by a high school student and an N8N instance.
Not really sure about that, if the high school students have cheated their way through with ChatGPT.
At the moment, they’re probably doing pretty well. Kind of like using calculators: when we got out of school, we all had a calculator.
Where the rubber is going to meet the road is when the AI bubble bursts and there are no more overly generous valuations and free venture capital, and we actually need to pay a sustainable fee for the tokens.
They’re going to need some really expensive calculators
Well, I didn’t think about it like that!
Hopefully there are still students who use it as a mere tool rather than as a way to pass by without actually learning.
Hopefully anyone majoring in a subject is there because they want to learn the subject, and we won’t lose the capability. The people who aren’t in it for the education won’t fare as well when LLMs become the new rent :)
The real damage is the companies paying for the LLMs for their subpar, cheap employees, though. Those English majors are having a hard enough time finding work.
Yeah every single day the top 5 new products on ProductHunt are AI trash. It’s wild what the bubble has become
Today:
Oh shit, I see ElevenLabs on that list. They do tend to stir stuff up.
They used to have paid voice actors training imitations of real celebrities. You could do stuff like search for “ship captain” and get somebody knocking off Picard.
Looks like they released a music model trained on (paid) licensed material. Even their best sample stuff is kind of marginal, but it is real.
This sounds about right. Figure 50% are just screaming at their employees to use AI and at managers to lower headcount and make it up with AI and such. Then like 25% more buy some company’s AI solution and expect sorta the same from there. Then like 15% actually try to identify where AI could be helpful but don’t really listen to feedback and just doggedly move forward. Eventually you get to the ones that identify where it might help, offer options to employees to use it much like any other software where they can request a license, and let it grow and help organically, looking more to just improve results or productivity.
Feels very much like the push in the 90’s for every company to have a website before companies understood what websites were for.
How’d that end up? Totally fine, right?
Completely agree.
I’ve got clients where I can see immediate benefits right now, and I’ve got clients where I don’t think it’s a good idea yet. For most of those that could benefit, it’s small tweaks to workflow processes to save a few FTEs here and there, not these massive-scale rollouts we’re seeing.
Unfortunately Microsoft, along with other companies, is selling full-scale sexy to executives when full-scale sexy isn’t actually ready yet. What’s available does work for some things, but it’s hard to get an executive team to sign off on a project for testing to save only 10 employees’ worth of work in a 2000-person company when they’re simultaneously a) worried about it going horribly wrong, and b) worried about falling behind other companies by not going fast enough.
Figure 50% are just screaming at their employees to use AI and at managers to lower headcount and make it up with AI and such.
Immediately imagined it being screamed in this voice:
“Use AI and make it lame!”
Shocked that LLM wrapper slop that isn’t deterministic only has limited use cases. Sam Altman is the biggest con artist of our time
He’s the second coming of Joseph Smith.
JS was a charismatic grifter by nature and upbringing who sold folks on the existence of a magic gold book that had extra-special info about American Jesus. He told them he found it after G*d told him where to dig.
This was just a few years after he had been hauled into court to face charges of running a ‘treasure hunting’ scheme on local farmers.
Now that I think about it more, the parallels are many.
In conclusion, shysters gonna shyst.
A few years ago we had these stupid mandatory AI classes all about how AI could help you do your job better. It was supposed to be multiple parts, but we never got past the first one. I think they realized it wouldn’t help most of the company, but they did leave our bespoke chatbot up for our customers/salespeople. It is pretty good at helping with our products, but I assume a lot of tuning has been done. I assume if we fed a local AI our data we could make it helpful, but none of them have more than a basic knowledge of anything I do on a day-to-day basis.
Usually for those chatbots you take a trained model and use RAG: essentially you turn the question into a traditional search and ask the LLM to summarize the contents of the results. So it’s frequently a convenient front end to a search engine, which is how it avoids having to be trained to produce relevant responses. It’s generally just prohibitively difficult, in various ways, to fine-tune an LLM through training and get the desired behavior. So it can act like it “knows” about the stuff you do despite zero training, if other methods are stuffing the prompts with the right answers.
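Stripped to the bone, the pattern looks something like this. A toy keyword match stands in for the real retrieval step (production systems use embeddings and a vector index), and the docs are invented:

```python
# Minimal RAG sketch: retrieve the most relevant snippet, stuff it into
# the prompt, and let the LLM summarize what it was handed.
from openai import OpenAI

docs = [
    "The X-200 toaster supports four browning levels.",
    "Returns are accepted within 30 days with a receipt.",
    "Our support line is open 9-5 Eastern on weekdays.",
]

def retrieve(question: str) -> str:
    # Toy retrieval: pick the doc sharing the most words with the question.
    words = set(question.lower().split())
    return max(docs, key=lambda d: len(words & set(d.lower().split())))

def answer(question: str) -> str:
    context = retrieve(question)
    reply = OpenAI().chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Using only this context:\n{context}\n\n"
                              f"Answer the question: {question}"}],
    )
    return reply.choices[0].message.content

print(answer("How many browning levels does the toaster have?"))
```

The model never “learned” your products; the prompt-stuffing does all the knowing.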
Good. How do we fix the surviving 5%?
“Sir, how is that going to help me do my job faster?” “Just ask it ‘how do I put the fries in the bag faster’ and then do what it says.”
And the other 5% are bullshitting.
5% to me sounds excellent. Companies fail all the time with well established technologies and nobody should expect great results with something still so new. It’s a bet with high risks and high rewards: most people will simply fail.
If you care about AI development, you should care a lot about the entire industry getting wrecked and set back decades because of a bursting bubble and a lack of independent funding.
This isn’t just about AI either, when an industry valued nearly half a trillion dollars crashes, it takes with it the ENTIRE FUCKING ECONOMY. I have lived through these bubbles before, this one is bigger and worse than any of them.
You won’t get your AI waifus if you have no job and nobody is hiring developers for AI waifus.
Like any technology before it, we are now past the hype, in the phase where lots of clueless people expect miracles and complain about the number of Rs in strawberry.
In one year or two it will be a regular tool like any other.
Okay but we’re talking about economics here, not the “tool” specifically. I think some people are so hung up on knee-jerk defensiveness of AI that they lose sight of everything but promoting it.
About 90% of tech startups fail. It happens all the time because it’s in the nature of innovation.
Here anything about AI is received negatively, so a 5% success rate is the “demonstration” that it’s a bubble. I’m sorry if you hope so, but it’s not. A 95% failure rate is not far from the normal failure rate of new companies, and here we are talking about early adopters who buy a lottery ticket trying to be the first to make it work.
Feel free to believe the contrary. I don’t need to convince an anonymous guy on the internet.
I’m sorry if you hope so,
Arguing that there’s an economic scheme threatening AI development and you translate it as “hope” that there is going to be an economy-destroying bubble burst, tells me I won’t get far in this conversation. Maybe figure out if there’s a less emotional/defensive path for looking at all this.
Maybe figure out if there’s a less emotional/defensive path for looking at all this
If you start with
I think some people are so hung up on knee-jerk defensiveness of AI that they lose sight of everything but promoting it.
you should expect to be seen as one of those with that irrational hate for a technology.
Back on topic: can this be a bubble like the dot-com one? Of course. Is it? Probably not.
A 95% failure rate should be a warning for all those fools who expect something this technology cannot do. Nothing more than that.
So you’re one of the “some people” got it.
I have an extremely small company where I am the only employee, and AI has let me build and do stuff that I would have needed a small team for. The jump in quality from where I started to what I’m able to do now is really great, and it’s thanks to AI.
I have no formal training or work experience in coding, but I taught myself Python years ago. Additionally, I don’t work in IT, so I think using AI to code has been extremely beneficial.
So you’re saying you have no professional coding experience, yet you know that a team of professionals couldn’t produce code at the quality you want?
Also, saying “extremely small company” when you mean self employed is weird. It’s fine to have your own company for a business/contracting.
I just hope you actually understand the code that has been produced and no customer or company data is at risk.
Yup. The absolute only useful task I’ve found it to handle is code documentation, which is as fast as it’s allowed to travel in my sphere.
Financially, I earn a really low amount. I’ve been freelance for a while, but am trying to grow the business, so it’s extremely small.
All the stuff I’m using AI for is just for presentation of internal materials. Nothing critical.
I feel similar.
The AI is great for low-value tasks that eat time but aren’t difficult, don’t require high skill, and aren’t risky. That’s the stuff that’s traditionally really difficult to automate.
When I’m actually doing the core parts of my job AI is so awful it’s clear humans are not going anywhere.
But those annoying side tasks need to get done.
I’ve set up a bunch of read-only AI tools and that’s enough to speed up huge amounts of work.
That’s great, but you’re not who this article is about. There are tens of thousands of companies popping up left and right, with far less ambition to succeed, who just want to launch the next “AI-powered toaster” and are hoping to make a fast buck and get bought out by a larger company like Google or OpenAI or Meta.
Combine that with growing public skepticism of AI and a general attitude that it’s being overused (the same attitude that makes you knee-jerk defensive about your business). That attitude is growing, and people are losing interest in AI as a feature because it’s being overplayed, over-hyped, and not delivering on its promises. This makes for a growing bubble with nothing inside, one that becomes more fragile every day. Not everyone is a successful vibe-coder, nor can they be.
I think you have blatant security holes that threaten your bottom line and your customers.
Decent article with a BS agenda.
It’s aimed at age. Younger is better, according to the article. So instead of focusing on what the issues with fucking AI are, they get to bring in ageism.
As soon as they start that shit, you know it’s to distract from the real issues.
If that’s what you actually intended to type, you might have a stroke.
And here’s another bigot.
Why’s it the most intolerant who are biased against age?
MAGA has nothing on you guys when it comes to ageism.
You’re both wrong.
I don’t think you read what the commenter above you wrote. He was commenting on your disjointed thought process, not the content of your comment. You’re typing like a crazy person. Take a few minutes or days away from the computer and calm down a bit.