I remember this happening with Uber too. All that VC money dried up, their prices skyrocketed, people stopped using them, and they went bankrupt. A tale as old as time.
A lot of those things have a business model that relies on putting the competition out of business so you can jack up the price.
Uber broke taxis in a lot of places. It completely broke that industry by simply ignoring the laws. Uber had a thing that it could actually sell that people would buy.
It took years before it started making money, in an industry that already made money.
LLMs don’t even have a path to profitability unless they can either functionally replace a human job or at least reliably perform a useful task without human intervention.
They’ve burned all these billions and they still don’t even have something that can function as well as the search engines that preceded them, no matter how much they want to force you to use it.
And yet a great many people are willingly, voluntarily using them as replacements for search engines and more. If they were worse, why are they doing that?
These kinds of questions are strange to me.
A great many people are using them voluntarily, but a lot of people are using them because they don’t know how to avoid them and feel that they have no alternative.
But the implication of the question seems to be that people wouldn’t choose to use something that is worse.
In order to make that assumption, you first have to assume that they know qualitatively what is better and what is worse, that they have the skills or opportunity necessary to opt in or opt out, and that they are choosing their tools based on which one is better or worse.
I don’t think you can make any of those assumptions. In fact I think you can assume the opposite.
The average person doesn’t know how to evaluate the quality of research information they receive on topics outside of their expertise.
The average person does not have the technical skills necessary to engage with non-AI-augmented systems, presuming they even want to.
The average person does not choose their tools based on which one is most effective at arriving at the truth, but instead on which one is the most usable, user-friendly, convenient, generally accepted, and relatively inexpensive.
Isn’t that what you yourself are doing, right now?
Yes, because people have more than a single criterion for determining whether a tool is “better.”
If there were a machine that would always give me a thorough, well-researched answer to any question I put to it, but it did so by tattooing the answer onto my face with a rusty nail, I think I would not use that machine. I would prefer a different machine even if its answers were not as well-researched.
But I wasn’t trying to present an argument for which is “better” in the first place, I should note. I’m just pointing out that AI isn’t going to “go away.” A huge number of people want to use AI. You may not personally want to, and that’s fine, but other people do and that’s also fine.
A lot of people want a good tool that works.
This is not a good tool and it does not work.
Most of them don’t understand that yet.
I am optimistic enough to think that they will have the opportunity to find that out in time to avoid being walked off a cliff.
I’m optimistically predicting that when people find out how much it actually costs and how shit it is, they will redirect their energies to alternatives, if there are still any alternatives left.
A better tool may come along, but it’s not this stuff. Sometimes the future of a solution doesn’t just look like more of the previous solution.
For you, perhaps. But there are an awful lot of people who seem to be finding it a good tool and are getting it to work for them.
I suspect it’s because search results require manually parsing through them for what you are looking for, with the added headwind of the widespread, and in many ways intentional, degradation of conventional search.
Searching with an LLM is thought-terminating and therefore effortless. You ask it a question and it authoritatively states a verbose answer. People like it better because it is easier, but they have no ability to evaluate whether it is actually any better in that context.
So it has advantages, then.
BTW, all the modern LLMs I’ve tried that do web searching provide citations for the summaries they generate. You can indeed evaluate the validity of their responses.
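For what it’s worth, here’s a minimal sketch of what that spot-checking can look like: take claims from an LLM’s summary paired with the URLs it cited, fetch each page, and check that the claimed text actually appears there. The `cited_claims` data, the example URL, and the exact-match test are all illustrative assumptions on my part; real products expose citations in different formats and a real verifier would need to handle paraphrase.

```python
# Sketch: spot-check an LLM answer's citations by fetching each cited URL
# and checking whether the claimed snippet appears on the page.
import requests

# Hypothetical example data: (claim from the summary, URL the model cited).
cited_claims = [
    ("Uber reported its first annual operating profit in 2023",
     "https://example.com/uber-earnings"),
]

for snippet, url in cited_claims:
    try:
        page = requests.get(url, timeout=10).text
    except requests.RequestException as exc:
        print(f"UNREACHABLE {url}: {exc}")
        continue
    # Crude containment check; a real verifier would strip HTML, normalize
    # whitespace, and allow paraphrase rather than demand verbatim text.
    verdict = "supported" if snippet.lower() in page.lower() else "not found verbatim"
    print(f"{url}: {verdict}")
```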
That was the intended path.