I have no idea how people can use LLM-generated code. In my experience it’s absolutely terrible. However, it can be good for giving some insights from time to time.
I only use it when I know exactly the code I’m trying to produce, and I’m just saving time by having it write it for me. Somewhere I saw this described as ‘toil’ vs. ‘domain knowledge’, and it definitely reduces toil even if I have to correct it. Anywhere that I wouldn’t know how to correct it, I don’t trust it.
It’s definitely improving. I thought the same as you, but I looked through my recent ChatGPT prompts and it’s actually decent now, at least at simple/throwaway tasks. It doesn’t stand a chance in the niche domains of my actual job.
I’ve only used LLM-generated code once. I tested it and made modifications to check that it was what I wanted. It worked out.
Probably because they’re not good enough to know any better.