Researchers have found that large language models (LLMs) tend to parrot buggy code when tasked with completing flawed snippets.
That is to say, when shown a snippet of shoddy code and asked to fill in the blanks, AI models are just as likely to repeat the mistake as to fix it.
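To make the setup concrete, here is a hypothetical sketch of what such a completion task looks like: the prompt is a function containing a deliberate off-by-one bug, and the model is asked to finish it. The function name and the bug are invented for illustration; the actual benchmark snippets from the research are not shown here.

```python
# Hypothetical prompt: a flawed snippet the model is asked to complete.
def sum_first_n(nums, n):
    """Return the sum of the first n elements of nums."""
    total = 0
    for i in range(n - 1):  # bug: iterates n - 1 times, silently dropping the nth element
        total += nums[i]
    # --- completion starts here ---
    return total  # a "parroting" completion finishes the function without touching the bug
```

A corrective completion would instead notice the loop bound and rewrite it as `range(n)`; the reported finding is that models finishing snippets like this often do the former.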
Thank you! I’ll try this out. I’ve mostly been using it while playing around with new things rather than to expand scaffolding on existing stuff.
What I find frustrating, though, is that it so confidently gives you garbage sometimes. I was trying to configure something in Docker that needed a very extensive YAML config. It gave me flags and keys that looked logical, fit the style of the rest of the file, and seemed to accomplish exactly what I wanted, but simply did not exist.
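The failure mode looks something like this hypothetical Compose-style fragment, where one key is real and the other is the kind of plausible invention the commenter describes. `restart: unless-stopped` is a genuine Docker Compose option; `auto_heal` is fabricated here purely to show how convincingly a made-up key can blend in with its neighbours.

```yaml
services:
  web:
    image: nginx:latest
    restart: unless-stopped   # real Compose key: container restart policy
    auto_heal: true           # fabricated for illustration: looks plausible, does not exist
```

Running `docker compose config` against a file like this is one quick way to catch such inventions, since recent versions of Compose validate against the schema and reject unknown keys.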