It only takes a couple of times getting a made-up bullshit answer from ChatGPT to learn your lesson and just skip asking ChatGPT anything altogether.
I stopped using it when I asked it who I was and it said I was a prolific author, then proceeded to name various books I absolutely did not write.
I just read “The Autobiography of QueenHawlSera”!
Have I been duped?
Why the fuck would it know who you are?
If you have an account, you can tell it things about yourself. I used my boss’s account for a project at work (felt gross). I made the mistake of saying “good morning” to it one day, and it proceeded to ask me if I was going to do activities related to my boss’s personal life (and the details were accurate). I was thinking, “why does he tell it so much about himself?”
So it’s working as intended.
And I’m apparently a famous TikToker and YouTuber.
But chatgpt always gives such great answers on topics I know nothing at all about!
Oh yeah, AI can easily replace all the jobs I don’t understand too!
Gell-Mann amnesia. Might have to invent a special name for the AI flavour of it.
My girlfriend gave me a mini heart attack when she told me that my favorite band broke up. Turns out it was ChatGPT making shit up; it came up with a random name for the final album too.
I was using it to blow through an online math course I’d ultimately decided I didn’t need but didn’t want to drop. One step of a problem I had it solve involved finding the square root of something; it spat out a number that was kind of close, but functionally unusable. I told it three times that it had made a mistake, and it gave a different number each time. When I finally gave it the right answer and asked, “are you running a calculation or just making up a number,” it said that if I logged in, it would use real-time calculations. I logged in on a different device and asked the same question; it again made up a number, but when I pointed it out, it corrected itself on the first try. Very janky.
ChatGPT doesn’t actually do calculations. It can generate code that will calculate the answer, or provide a formula, but ChatGPT itself cannot do math.
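To make that concrete: the failure mode above goes away if you have the model emit code and run it yourself, because the code computes deterministically instead of guessing tokens. A minimal sketch (the input number is made up for illustration):

```python
import math

# The kind of snippet an LLM can reliably write: deterministic code that
# computes the answer, instead of a plausible-looking guessed number.
x = 7396.0
root = math.sqrt(x)           # an actual calculation
print(f"sqrt({x}) = {root}")  # prints: sqrt(7396.0) = 86.0
```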
It’s just like me fr fr
So it forced you to ask it many times? Now imagine that you paid for it each time. For the creator then, mission fucking accomplished.
You need multi-shot prompting when it comes to math. Either the motherfucker gets it right, or you will not be able to course-correct it in a lot of cases. When a token is in the context, it’s in the context and you’re fucked.
Alternatively, you could edit the context, correct the parameters, and then run it again.
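In case it’s not obvious what “edit the context” means in practice: you rewrite the conversation history so the bad answer never happened, rather than appending another correction on top of it. A rough sketch using the common role/content chat format (the completion call is a placeholder, not any specific API):

```python
# Hypothetical conversation history in the common role/content chat format.
messages = [
    {"role": "user", "content": "What is the square root of 7396?"},
    {"role": "assistant", "content": "It's roughly 85.3."},  # wrong, and now in the context
]

# Appending "no, try again" keeps the wrong token in the context.
# Editing means cutting back to the clean prefix and re-running:
clean_history = messages[:-1]      # drop the bad assistant turn
# reply = complete(clean_history)  # `complete` is a placeholder, not a real API
```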
On the other side of the shit aisle
Shoutout to my man Mistral Small 24B who is so insecure, it will talk itself out of correct answers. It’s so much like me in not having any self worth or confidence.
I feel like a lot of people in this community underestimate the average person’s willingness to trust an AI. Over the past few months, every time I’ve seen a coworker ask something and search it up, I have never seen them click on a website to view the answer. They’ll always take what the AI summary tells them at face value.
Which is very scary
I’ve only really found it useful when you provide the source information/data in your prompt, e.g. when you want to convert one data format to another, like table data into JSON.
It works very consistently in those types of use cases. Otherwise it’s a dice roll.
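For example, something like this works well, because every value the model needs is already sitting in the prompt and the task is pure restructuring (the table contents here are made up):

```python
prompt = """Convert this table to a JSON array of objects. Output JSON only.

| name | qty | price |
|------|-----|-------|
| bolt | 40  | 0.15  |
| nut  | 90  | 0.05  |
"""
# A model typically gets this right, since no value has to be invented:
# [{"name": "bolt", "qty": 40, "price": 0.15},
#  {"name": "nut", "qty": 90, "price": 0.05}]
```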
That’s what people get when they ask me questions too but they still bother me all the time so clearly that’s not going to work.
Or you could learn to use the tool better and ask better questions. It’s pretty decent at some things, absolutely terrible for others.
Asking it to explain something like shorting a stock is one of the better uses, since there are tons of relevant posts explaining exactly that.
Oooooooor, I could use something mostly reliable instead of something that’s sketch.
Nono you see… why be unsure when you could be wrong? 🤔
Why not both? Use the LLM to refine what you’re looking for, and use better sources for the details. It’s like skimming summaries in search results before picking a web page, or asking friends/family before looking up actual sources.
LLMs are great for interactive information retrieval and for figuring out what information you actually need. They don’t do everything, but they do a lot more than detractors claim and a lot less than proponents claim. Find that happy middle ground and it’ll be a great tool.
Why though? If I can’t trust them half the time and I have to figure out when that is, nah.
So asking family and friends about things you don’t know isn’t worth it? Reading personal blogs isn’t worth it?
As long as you go in knowing what it offers, it can be a great tool, like those little summaries on search results. I use them to find information when the search engine isn’t going to be a great option, or at my day job to generate code for a poorly documented library. I tend to use it a handful of times per week, if that, and only for things that I know how to verify (i.e. manually test, confirm w/ official sources, etc).
As I said in another comment, searching for what’s right or wrong takes a lot of time. Using a tool you can’t trust isn’t great. For simple shit, maybe? But then simple shit is covered by most other places.
Then you’re using it wrong.
LLMs shouldn’t be used to search for what’s right or wrong; they should be used to quickly get a wide breadth of information on a given topic, or to provide a starting point on code or text projects that you can refine later.
For example, I wanted to use a new library in a project, and the documentation was a bit cryptic and examples weren’t quite right either. So I asked an LLM tuned for coding tasks to generate some example code for our use case (described the features I wanted to use), and the code worked. I needed to tweak some stuff (like I would w/ any example), but it worked. I used the LLM because I knew there would be a bunch of public code projects using this library with a variety of use cases, and that the LLM would probably do a reasonable job of picking out a decent one to build from, and I was right.
On another topic, I needed to do research on an unfamiliar topic, so I asked the LLM to provide a few examples in that domain w/ brief descriptions. That generated a ton of useful keywords that I used to find more reputable sources (keywords that would’ve taken hours of searching to generate), so I was able to quickly find reliable information starting from a pretty vague notion of what I wanted to learn.
LLMs have a lot of limitations, but if they’re used to accomplish common tasks quickly, they can be incredibly useful. I don’t think they’ll replace anyone’s job (unless your job is already pointless), traditional search engines (as much as Google wants it to), or anything like that, but they are useful tools that can make some of the more annoying parts of my job more efficient.
LLMs have flat out made up functions that don’t exist when I’ve used them for coding help. Was not useful, did not point me in a good direction, and wasted my time.
Not everyone has a bank of experts waiting for whatever questions they have.
If you have better options, fine. But if you are spending two hours googling instead of just asking ChatGPT to spend five minutes finding you mostly the same links, you are just wasting time. It’s easy to have it pull sources that you can quickly verify and then base your actual documentation on. FYI, I personally do both (I search while it does).
Except these “experts” are wrong a lot, so you can’t trust them. It’s the confidently wrong that’s problematic.
Yes, you are still expected to participate and verify what is said. I also don’t copy-paste stuff from websites without verification, since god knows the internet in general isn’t always right either.
It’s a productivity tool meant to help you, not do the job for you.
How does it help productivity if you have to redo all of its work?
Except I’m constantly trying to figure out what’s right or wrong. At least Wikipedia has people fighting about the truth; LLMs just state incorrect shit as truth and then stare at you.
LLMs are great for tech bros and CEOs who want maximum profit with minimum effort all while stealing work that isn’t theirs and poisoning the planet at the same time.
They’re also great for non-tech bros who just want to get stuff done, and they don’t have to poison the planet at all. We run a few in our office on a Mac Mini, which sips power.
Those tech bros and CEOs are mostly fleecing investors, so I guess I’m not very concerned about them.
It’s a good tool so long as there are already better ways to get your answer
It’s good if the answers exist but you don’t know how to find them. They’re like search engines that can generate related terms, or regurgitate common answers.
I find LLMs help me use existing search engines a lot better, because I can usually get them to spit out domain-specific terms for a more general query.
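The pattern is just: describe the thing vaguely, ask for the jargon, then take the jargon to a real search engine. A made-up example of the kind of query I mean:

```python
prompt = (
    "I want to learn why some metals bend without snapping while others shatter. "
    "Give me ten domain-specific search terms a materials scientist would use."
)
# Typical useful output: "ductility", "brittleness", "dislocation motion",
# "slip systems", "work hardening" -- terms that make a normal search
# engine query land on reputable sources.
```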
Forget trying to say anything positive about LLMs or AI. The Lemmyverse downvotes any positive comment related to AI.
I’m well aware, but I think clearing misconceptions is valuable, and since I’m getting a fair amount of votes in both directions and discussion, hopefully that means people have read and considered my point.
I’m not going to recommend people use LLMs for everything or even claim that they’re perfect for everyone (in fact, I don’t like using them), just that they do have valid uses and, if it comes up, can be used efficiently (i.e. not burn down the planet). I use them a handful of times in a given week, often less, and mostly to get more keywords to search on a traditional search engine.
So yeah, it’s whatever. I very much dislike both extremes here, and am trying to drive home the point that there is a happy middle ground.
I’m dying laughing at the “NOOOOO AI BAAAAAAD” folks downvoting you for being absolutely correct on how to use the tool properly XD
Eh, it comes w/ the territory. Lemmy is generally anti-LLM, and this is a post that would specifically trigger people who hate LLMs.
I just hope a few people stop and think about whether their distaste for LLMs is reasonable or just bandwagoning.
Yeah, my big gripe with Lemmy is the hivemind that decides the “ideologically correct” way to post. One can hope to reach an open mind at some point, but such is social media :/
That’s true of all social media. It turns out, collecting information into groups tends to attract people w/ strong opinions about that type of information. If you have two groups, one very positive about something and one very negative, they’ll form separate groups because people prefer validation to conflict. It’s the natural consequence of social media; people like to form groups w/ like-minded people.
I didn’t come to Lemmy because I disliked the Reddit hive-mind issue; I came because I disliked how they treated third party developers and volunteer moderators. I self-corrected for Reddit’s hive-mind by joining a bunch of subreddits that attracted different perspectives (i.e. some for leftists, some for conservatives, some for anarchists, etc.) so I’d hopefully get a decent mix, and I do the same here on Lemmy (though it seems Lemmy is a bit more leftist than Reddit, so there’s a bit less diversity in politics at least). I do the same for news sources and in my use of LLMs (ask it to find issues w/ a previous answer it gave).
So I sometimes post alternative viewpoints in threads like these to hopefully give someone a chance to reconsider their opinions. Sometimes those comments get traction, sometimes they don’t, but hopefully someone down the line will see them and appreciate it.
Nah, Lemmy in particular is a worse dump than Miyazaki’s poison swamps. The level of zeal on Lemmy is staggering (I mean, it’s already resulted in one terrorist attack).
I feel like this is because it’s much smaller than alternatives. It starts to feel like you’re circlejerking the same dicks every day.
Source?
And I don’t think it’s necessarily worse, but it really depends on the community and instance. Hexbear, ml, and lemmygrad are absolute dumpster fires, but the other instances are a lot more chill. But each has its own form of group think.
The anti-natalist dude who attacked a fertility clinic was a Lemmy radical.
Want a source? Look at the SS Headquarters (also known as lemmy.world)