

Well when Roosevelt was elected 4 times, it was actually legal back then. And he’s the reason why the 2 term limit amendment exists. But of course, that requires actually following the law, so…
Because of the porn or AI? 🙃
This is probably one of the best actual uses for something like generative AI. With enough data, they should be able to vectorize and translate dolphin language, assuming there is one.
1 scenario tested is better than 0 tested.
This guy would fit in well at my previous job where the founder discouraged writing unit tests because “there are too many scenarios to test.”
Like, wtf…
That was entirely the point unfortunately.
A lot of the answers here are short or quippy, so here’s a more detailed take.

LLMs don’t “know” how good a source is. They are word-association machines, and they are very good at that. When you use something like Perplexity, an external search API feeds the results of your query into the LLM, which then summarizes that text in (hopefully) a coherent way.

There are ways to reduce the hallucination rate and check the factual accuracy of sources, e.g. by comparing the generated text against authoritative information. But how much of that Perplexity et al. actually employ, I have no idea.
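To make the “external API feeds information into the LLM” part concrete, here’s a minimal sketch of that kind of search-then-summarize pipeline. The function names (`search_web`, `call_llm`), the prompt format, and the placeholder data are all made up for illustration; this is not Perplexity’s actual implementation, just the general shape of a retrieval-augmented answer.

```python
# Minimal sketch of a search-augmented ("RAG"-style) answer pipeline.
# Both helpers below are hypothetical stand-ins: a real implementation
# would call an actual search API and an actual LLM API instead.

from dataclasses import dataclass


@dataclass
class SearchResult:
    url: str
    snippet: str


def search_web(query: str) -> list[SearchResult]:
    # Hypothetical: returns canned results in place of a real search API call.
    return [
        SearchResult("https://example.org/a", "Snippet from source A about the query."),
        SearchResult("https://example.org/b", "Snippet from source B about the query."),
    ]


def call_llm(prompt: str) -> str:
    # Hypothetical: returns a canned string in place of a real LLM API call.
    return "A summary grounded in the provided snippets, citing [1] and [2]."


def answer(query: str) -> str:
    results = search_web(query)
    # The retrieved text is pasted into the prompt; the LLM only summarizes
    # what it is given and does not judge source quality on its own.
    context = "\n".join(
        f"[{i + 1}] {r.url}\n{r.snippet}" for i, r in enumerate(results)
    )
    prompt = (
        "Answer the question using ONLY the sources below and cite them by number.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)


if __name__ == "__main__":
    print(answer("How do search-augmented LLMs pick their sources?"))
```

The point of the sketch is that source quality has to be handled outside the model, in the search/ranking step or in a post-hoc check against authoritative text; the LLM itself just rewrites whatever lands in its context window.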