

sure, maybe it was tested with data already known to the researchers, but that's not a real-world test, that's still a fully controlled environment. and the researchers, being human, aren't perfect; the data about what the AI was meant to predict could've slipped into the training data. using historic data to predict slightly less historic data is a good first step, and it's of course exciting! but we're not done here
nobody can read an AI's code after it's been trained, so until all possibility of human error can be fully dispelled by continuously testing it in real time and having the AI actually predict events that come to be, it's a could, not a can.
o.o holy shit- i mean, that's a valid move. using AI for a handwritten piece sounds like a pain in the ass, but so does just writing 10 pages by hand, AI or not!
i'm glad i got through my higher education just before the AI boom hit (i graduated 3 years ago). i only had Turnitin yell "PLAGIARISM???" at me when i used a common phrase that another student had used at some point somewhere (think "The research suggests…", or sometimes even the page numbers). good times, good times