Clinicallydepressedpoochie@lemmy.world to Showerthoughts@lemmy.world · edited · 5 days ago

If AI was going to advance exponentially, I'd have expected it to take off by now.
justOnePersistentKbinPlease@fedia.io · 5 days ago

And the single biggest bottleneck is that none of the current AIs "think". They. Are. Statistical. Engines.
themurphy@lemmy.ml · 5 days ago

And it's pretty great at it. AI's greatest use case is not LLMs; people treat it that way because it's the only form of AI we can relate to. AI is much better at many other tasks.
Caveman@lemmy.world · 5 days ago

How closely do you need to model a thought before it becomes the real thing?
justOnePersistentKbinPlease@fedia.io · 4 days ago

Need it to not degrade exponentially when AI content is fed back in. Need creativity to be more than random deviations from the statistically average result in a mostly stolen dataset taken from actual humans.
moonking@lemy.lol · 5 days ago

Humans don't actually think either; we're just electricity jumping across neural connections that formed through repeated association. Add to that that there's no free will, and you start to see how "think" is an immeasurable metric.
YesButActuallyMaybe@lemmy.ca · 5 days ago

Markov chains with extra steps.
Xaphanos@lemmy.world · 5 days ago

You're not going to get an argument from me.
daniskarma@lemmy.dbzer0.com · 4 days ago

Maybe we are statistical engines too. When I hear people talk, they are mostly repeating the most common sentences they've heard elsewhere anyway.
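The "Markov chains with extra steps" quip in the thread can be made concrete with a toy example. The sketch below is a minimal bigram Markov chain that generates text purely from word-follower frequencies; it is an illustration of the "statistical engine" idea in its simplest form, not a description of how modern LLMs actually work (those use learned neural next-token distributions over far larger contexts).

```python
import random
from collections import defaultdict

def train(text):
    """Build a bigram model: for each word, record every word that follows it."""
    model = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=10):
    """Generate text by repeatedly sampling a likely next word.
    No understanding involved -- just observed follower frequencies."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: the last word was never followed by anything
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train(corpus)
print(generate(model, "the", 8))
```

Every word the generator emits is statistically plausible given the one word before it, which is what makes the output locally fluent yet globally meaningless; the comments above are arguing about whether scaling that idea up ever crosses into "thinking".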