It stands to reason that if you have access to an LLM’s training data, you can influence what’s coming out the other end of the inscrutable AI’s network. The obvious guess is that…
That’s not how evidence works. If the original person has evidence that the software doesn’t work, then we need to look at both sets of evidence and adjust our view accordingly.
It could very well be that the software works 90% of the time while still failing in some outlying cases. And if they have examples of those failures, I want to know about them.
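To make “adjust our view accordingly” concrete, here is a minimal Bayesian sketch. The success and failure counts are purely hypothetical; the point is that both sets of evidence, the runs where the software worked and the other person’s counterexamples, feed into one updated estimate of the success rate rather than one side’s evidence simply overriding the other’s.

```python
# Beta-Binomial update: Beta(alpha, beta) is the conjugate prior
# for a Bernoulli success rate, so updating on observed successes
# and failures is just counting.
alpha, beta = 1.0, 1.0   # uniform prior: no strong opinion either way

successes = 90           # hypothetical: runs where the software worked
failures = 10            # hypothetical: the other person's counterexamples

# Fold BOTH sets of evidence into the posterior.
alpha += successes
beta += failures

posterior_mean = alpha / (alpha + beta)
print(f"Posterior mean success rate: {posterior_mean:.2f}")  # ~0.89
```

Note that a handful of reported failures nudges the estimate down without demolishing it, which is exactly why the outlying examples are worth collecting: they sharpen the estimate instead of settling the argument by assertion.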