• solomonschuler@lemmy.zip · 16 hours ago

The idea that AI (as in "generative AI") helps with learning only holds if you give it the right prompt. There is evidence that when a user asks an AI to implement code, they (the user) won't touch the result because they are unfamiliar with the code it generated. The AI effectively creates a psychological black box that no programmer wants to touch, even for a (relatively speaking) small snippet of a larger program, whether that program was written by another programmer or by themselves.

To generalize further, I fully believe AI doesn't improve the learning process; it just makes a field more accessible and easier to digest for people who are less literate in it. I can explain Taylor expansions and power series simplistically to my brother, who is less literate and familiar with math, but I would be shocked if, after a brief general overview, he could now approximate any function or differential equation.
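(For anyone who hasn't seen it, the "general expansion" I keep referring to is the standard Taylor series of a function f about a point a:)

```latex
f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}\,(x - a)^n
     = f(a) + f'(a)(x - a) + \frac{f''(a)}{2!}(x - a)^2 + \cdots
```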

The same applies to ChatGPT: you can ask it to explain Taylor and power series solutions simplistically, or better yet, to approximate a differential equation for you, but that doesn't change the fact that you still can't replicate it yourself. I know I'm describing an extreme case where the person trying to learn Taylor expansions has no prior experience with math, but it won't even work for someone who does…
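To make "approximate a differential equation" concrete, here is the simplest worked case (my own example, not anything ChatGPT produced): solve y' = y with y(0) = 1 by assuming a power series and matching coefficients:

```latex
y = \sum_{n=0}^{\infty} c_n x^n
\;\Rightarrow\;
y' = \sum_{n=0}^{\infty} (n+1)\,c_{n+1}\,x^n
\;\overset{y' = y}{\Longrightarrow}\;
(n+1)\,c_{n+1} = c_n
\;\Rightarrow\;
c_n = \frac{c_0}{n!}
\;\Rightarrow\;
y = \sum_{n=0}^{\infty} \frac{x^n}{n!} = e^x
```

Reading that derivation and being able to produce it cold are two very different things, which is exactly my point.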

I want to pose a simple thought experiment based on my own experience using AI on, say, Taylor expansions. Assume I want to learn Taylor expansions, I've already done differential calculus (the main prerequisite), and I ask ChatGPT "how do I do Taylor expansions?", meaning: what is the proof of the general series expansion, and can it show an example of applying one to a function? What happens when I then try a problem myself is that I hit a level of uncertainty about my ability to actually perform it, and that's when I ask ChatGPT whether I did it correctly or not. You see what I'm saying: it's a downward spiral where you lose certainty, sanity, and time the longer you keep using it.
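The kind of "example" I'd ask for, and then second-guess myself on, is nothing exotic; it's something like expanding sin x about 0, where the derivatives at 0 just cycle through 0, 1, 0, −1:

```latex
\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots
       = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)!}\,x^{2n+1}
```

Checking the signs and factorials here against ChatGPT instead of against your own derivation is exactly where the spiral starts.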

That is what the programmers are experiencing. It's not that they don't want to touch the code because they're unfamiliar with what the AI generated; it's that they're uncertain of their own ability to fix an issue, since they might fuck it up even more. People are terrified of failure and of fucking shit up, and by using AI they "solve" that issue of theirs, even though the probability of the AI hallucinating is higher than if someone had spent the time figuring out any conflicts themselves.