Formerly u/CanadaPlus101 on Reddit.

  • 4 Posts
  • 2.14K Comments
Joined 3 years ago
Cake day: June 12th, 2023






  • Ah, but if there’s no random element to human cognition, it should produce the exact same output time and time again. What is not random is deterministic.

    Biologically, there’s an element of randomness to neurons firing. If they fire too randomly, that’s a seizure; if they never fire spontaneously, you’re in a coma. How neurons produce ideas is nowhere close to being understood, but an ordered pattern of firing has to emerge spontaneously somewhere in between. You can even see a bit of that with imaging.

    > Anyways, however we eventually create an artificial mind, it will not be with a large language model; by now, that much is certain.

    It does seem to be dead-ending as a technology, although the definition of “mind” is, as ever, very slippery.

    The big AI/AGI research trend is “neuro-symbolic reasoning”, which is a fancy way of saying embedding a neural net deep in a normal algorithm that can be usefully controlled.
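    To make that concrete, here’s a hypothetical toy sketch of the pattern, under the assumption that the “normal algorithm” is something like a best-first search: the outer loop is ordinary, inspectable code, and only the state-scoring heuristic would be a neural net (a hand-written stub stands in for it here).

```python
# Toy sketch of the neuro-symbolic pattern: a symbolic best-first search
# whose only "learned" component is the scoring heuristic.
# neural_score() is a hand-written stub standing in for a trained net.

def neural_score(state):
    """Stand-in for a neural net: estimates how promising a state is."""
    return -abs(10 - state)  # prefer states near the (made-up) goal, 10

def symbolic_search(start, moves, max_expansions=12):
    """Plain best-first search. The loop stays inspectable and
    controllable; only the scoring step is delegated to the 'net'."""
    frontier = [(neural_score(start), start, [start])]
    for _ in range(max_expansions):
        frontier.sort(reverse=True)        # most promising state first
        score, state, path = frontier.pop(0)
        if state == 10:                    # hard, symbolic goal test
            return path
        for move in moves:
            nxt = move(state)
            frontier.append((neural_score(nxt), nxt, path + [nxt]))
    return None                            # gave up within the budget

# Find a path from 0 to 10 using the moves "add 1" and "double".
path = symbolic_search(0, [lambda s: s + 1, lambda s: s * 2])
```

    Because the shell is ordinary code, you can cap the search budget, log every expansion, or veto moves outright — which is the “usefully controlled” part.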


  • On actual mental illness specifically, as opposed to just “weirdness” in general, I have no hard data. If it’s caused at the physiological level, it makes sense that it wouldn’t follow the same pattern. You can of course name a bunch of mentally ill but prominent thinkers and artists from the past, but there’s almost certainly a lot of base-rate neglect going on there.

    It’s worth noting that production LLMs choose randomly from a significant range of tokens they deem fairly likely, rather than always choosing the single most likely one. If they were too conservative about it, they too would fall on the near side of that curve.
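    That sampling step can be sketched in a few lines. This is the generic temperature/top-k scheme, not any particular model’s implementation, and the logits are made up for illustration:

```python
import math
import random

def sample_token(logits, temperature=1.0, top_k=None):
    """Pick a token index from raw model scores (logits).
    temperature=0 reproduces greedy decoding (always the most likely
    token); higher temperatures spread probability over more of the
    'fairly likely' tokens, and top_k caps how many are eligible."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    if top_k is not None:                       # keep only the k best scores
        cutoff = sorted(scaled, reverse=True)[top_k - 1]
        scaled = [s if s >= cutoff else float("-inf") for s in scaled]
    m = max(scaled)                             # subtract max for stability
    weights = [math.exp(s - m) for s in scaled]
    return random.choices(range(len(logits)), weights=weights)[0]

logits = [2.0, 1.8, 0.5, -1.0]  # made-up scores for a 4-token vocabulary
greedy = sample_token(logits, temperature=0)    # deterministic: index 0
```

    Dialing the temperature toward zero is the “too conservative” regime: the model collapses onto its single most likely continuation every time.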




  • > By that same logic LLMs themselves (by now some AI bro had to vibe code something there)

    I’m guessing LLMs are still really really bad at that kind of programming. The packaging of the LLM, sure.

    > & their trained datapoints

    For legal purposes, it seems like the weights would be generated by the human-made training algorithm. I have no idea if that’s copyrightable under US law. The standard approach seems to be to keep them a trade secret and pretend there’s no espionage, though.


  • A link to the paper itself, if, like me, you have a math background and are wondering WTF that means, how you measure creativity mathematically, or for that matter what amateur-tier creativity is. Unfortunately, it’s probably too new to pirate if you don’t have a subscription to the Journal of Creative Behavior.

    At least according to the article, he argues that novelty and correctness work against each other in an LLM, which tracks. The nice round numbers used to describe that feel like bullshit, though. If your metric boils down to a few bits, don’t try to pad it out by converting to reals.

    That’s not even the real kicker, though: the two are anticorrelated in humans as well. Generations of people have remarked on how the most creative people tend to be odd or straight-up mentally ill, and contemporary psychology has captured that connection statistically in the form of “impulsive unconventionality”. If it’s asserted without evidence that this doesn’t hold for “professional” creative humans, then that amounts to just making stuff up.