Both are concerning, but to me, as a former academic, neither is as insidious as the harm that LLMs are already doing to training data. A lot of corpora depend on collecting public online data to construct data sets for research, and the assumption is that it's largely human-generated. This balance is about to shift, and it's going to cause significant damage to future research. Even if everyone agreed to make a change right now, the well is already poisoned. We're talking the equivalent of the burning of the Library of Alexandria for linguistics research.
It reminds me of the situation with steel: anything produced after the first atomic weapons tests is tainted by radionuclides, so it can't be used for sensitive scientific tools or equipment. You have to find and reuse steel manufactured before the atomic bombs. https://en.wikipedia.org/wiki/Low-background_steel
There is going to be “low background data” in the future.
That's pretty reasonable. I'm sure there are a ton of orphaned accounts just lingering out there, including accounts with names that other people might like to have.
All of these companies are tightening their belts. Rising interest rates are certainly making companies reassess their business models.