This is just one action in a coming conflict. It will be interesting to see how this shakes out. Does the record industry win and digital likenesses become outlawed, even taboo? Or do voice, appearance, etc. just become another set of rights that musicians have to negotiate during a record deal?
I wonder if these battles will shake loose the circuit split on de minimis exceptions to music samples (see https://lawreview.richmond.edu/2022/06/10/a-music-industry-circuit-split-the-de-minimis-exception-in-digital-sampling/).
Currently, it is absolutely not “cut and dried” whether the use of any given sample should be permitted. Most musicians are erring on the side of “clear everything,” but does an AI-generated “simulacrum” qualify as “sampling”?
What’s on trial here is basically “what characteristic(s) of an artist’s work do they own?” If you write a song, you can “own” whatever is written down (melody, lyrics, etc.). If you perform a song, you can own the performance (recordings thereof, etc.). Things start to get pretty vague once we’re talking about “I own the sound of my voice.”
I think it’s accepted that it’s legal for an impersonator to make a living doing TikToks pretending to be Tom Cruise. Tom Cruise can’t really sue them saying “he sounds like me.” But is it different if a computer does it? It may very well be.
It’s going to be a pretty rough few years in copyright litigation. Buckle up.
What’s more, if they over-litigate, the economy of the country that over-litigates will fall behind as other countries overtake the USA. There are no ifs or buts in this scenario. For instance, poor people in third-world countries would absolutely leverage these technologies to boost their ability to make an income.
A lot of the AI stuff is a Pandora’s box situation. The box is already open, and there’s no closing it again. AI art, AI music, and AI movies will become increasingly high quality and widespread.
The biggest thing we still have a chance to influence is whether it’s something individuals have access to, or whether it becomes another field dominated by the same tech giants that already own everything. An example is people being against Stable Diffusion because it’s trained by individuals on internet images, but then being OK with a company like Adobe doing it because they snuck a line into their ToS saying they can train AI on anything uploaded to Creative Cloud.
Saying that Stable Diffusion was trained by “individuals” is a bit of a stretch; it cost over half a million dollars’ worth of compute to train, and Stability AI is still a company at the end of the day. If that counts as “trained by individuals,” then so do Midjourney and DALL-E.
The original Stable Diffusion wasn’t trained by individuals, but the current progression of the software is clearly community-driven: all sorts of new tech and add-ons, huge volumes of community-trained checkpoints and LoRAs, and of course the interfaces themselves, like AUTOMATIC1111 and vladmandic.
And it’s something you can run yourself offline with a halfway decent graphics card.
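For anyone who hasn’t tried it, running it locally really is that accessible. Here’s a minimal sketch using the Hugging Face diffusers library; the model id, LoRA path, and prompt are illustrative placeholders, and it assumes a CUDA-capable GPU with a few GB of VRAM:

```python
# Minimal local Stable Diffusion sketch (assumes `torch` and `diffusers` are
# installed and a CUDA GPU is available; names below are placeholders).
import torch
from diffusers import StableDiffusionPipeline

# Load a base Stable Diffusion checkpoint; any community checkpoint id or
# local path can be substituted here.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Optionally layer a community-trained LoRA on top of the base weights.
# pipe.load_lora_weights("path/to/some_community_lora.safetensors")

# Generate an image entirely on local hardware; nothing leaves your machine.
image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
image.save("output.png")
```

Swap in whichever community checkpoint or LoRA you like and it all runs offline on your own card.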