With all the fuss about AI image generators “stealing” illustrators’ jobs, I am curious about how much photography changed the art world in the 19th century.
There was a time when getting a portrait done was a relatively big deal: it required several days of work from a painter, and you had to stand still for long stretches so the painter could capture your likeness. Then photography arrived, and all you had to do was stand still for a few minutes to get a picture of yourself printed on paper the next day.
How did it impact the average painter who got paid to paint people who might commission a portrait once in their lifetime?
If you have a basic understanding of how AI works, then this argument doesn’t hold much water.
Let’s take the human approach: I’m going to look at all the works of popular painters to learn their styles. Then I grab my painting tools and create similar works.
No credit given there; I still used all those other works as input and created my own based on them.
With AI it’s the same, just in a much bigger capacity. If you ask AI to redraw the Mona Lisa you won’t get a 1:1 copy out, because the original doesn’t exist in the trained model, it’s just statistics.
Same as if you tell a human to recreate the painting, no matter how good they are, they’ll never be able to perfectly reproduce the original work.
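To make the “it’s just statistics” point concrete, here is a rough back-of-the-envelope sketch. The figures are ballpark assumptions on my part (roughly the scale of Stable Diffusion’s weights and the LAION training set), not exact numbers:

```python
# Back-of-the-envelope: can a diffusion model "store" its training images?
# All figures below are rough, assumed magnitudes, not exact values.
model_params = 1e9        # ~1 billion parameters (Stable Diffusion scale)
bytes_per_param = 2       # fp16 weights: 2 bytes per parameter
training_images = 2e9     # order of magnitude of the training set size

model_bytes = model_params * bytes_per_param
bytes_per_image = model_bytes / training_images

print(f"Model size: ~{model_bytes / 1e9:.0f} GB")
print(f"Capacity per training image: ~{bytes_per_image:.0f} byte(s)")
# Roughly one byte of model capacity per training image: nowhere near
# enough to store the originals, which is why the model ends up with
# statistical patterns rather than copies.
```

(As later comments point out, memorization of individual images can still happen for heavily duplicated training examples; this sketch only shows that wholesale storage is impossible.)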
> With AI it’s the same, just in a much bigger capacity.
And this bigger capacity makes a huge difference.
Let me give you an easy example:
When a company wants to fell a tree, it is no big deal. When a company wants to fell 100,000 trees, you might start to wonder whether they should be allowed to do that. Environment and all. When a company wants to fell all the trees in the whole world, you would (hopefully) say no to that plan without much thinking.
So, you see, scale makes a difference in nearly all decisions. Legal and other.
This AI thing is already at the size of “all in the world”. It is a big deal. We need to think very carefully.
I’m just adding on to this.
Scale here comes in multiple layers: the capacity and speed at which these technologies can be deployed, the breadth of domains, fields, and applications they touch, and the unintended consequences that only appear at scale. Not only are they much faster and cheaper, they arrive almost all at once and have the potential to affect so many fields. Heck, these are general-purpose technologies almost by definition. And the effects of scale can be extremely unpredictable, which we should not underestimate: disinformation campaigns now come much cheaper and easier, and trust erodes even further.
I don’t know much about history, so please correct me, but photography “replacing” painting may be quite specific, in that painters could probably adapt or switch to other professions. I think one commenter stated that that transition was “smoother.” These generative techs, by contrast, affect the livelihoods of far more people (possibly both in absolute and per-capita numbers), who will need to grapple with what they’re going to do with their lives, and have to do it fast.
One branch of the arguments I’ve been seeing is about capability comparison, sometimes even anthropomorphizing tech/companies. While I find that interesting and valuable intellectually, I personally think the conversations need to be more about the labor aspects.
Learning takes time and people need to eat. In the name of progress, society sometimes forgets or brushes over the “casualties” it leaves behind. I think many would benefit from this tech, but let’s hope they have a meal on their table doing so.
We don’t want a dystopia where these techs are used to generate the illusion of enjoying a feast over a big hotpot, while in reality it’s just a can of tomato soup for a family of 5.
The thing is, with certain models you actually could get a near-perfect 1:1 copy (when a training image is memorized), but that’s not really the point. I have a degree that includes machine learning, and I believe it’s imperative that we have legislation that protects content owners and puts restrictions on what data you’re allowed to use to train your models. Not because I don’t understand the technology, but because I do.
Haphazardly introducing this technology into society at large scale will come with serious consequences, not to mention the consequences for privacy if we don’t curtail what data companies are allowed to scrape from the internet just because they throw the buzzword “AI” in somewhere.
This is fundamentally not about being pro-technology or anti-technology, it’s about how we value private citizens versus corporations.
It’s called Ctrl+c & Ctrl+v
An independent artist learning new styles and gaining inspiration to create their own work is not at all the same as a profit-driven software corporation stealing other artists’ works on a massive scale to develop their own commercial products. That’s on top of most artists, like myself, prohibiting the use of our work for private commercial gain unless we are properly compensated or credited.
It looks to me like you’re talking about something else compared to the person you’re replying to.
To my eyes, he’s arguing in favor of the technology as a concept, while you’re arguing against specific products (let’s say midjourney, for the sake of the discussion). If midjourney proved beyond any doubt that their model was trained on a data set that they had rights to (by buying them from artists, or the images already being copyright free, doesn’t matter), would you still be against it?
Similarly, you said you’re against your work being used for commercial purposes, but would you be OK with me training a model on your work and then using AI to generate images in your style that I use as, for example, character art for the DnD games I play with friends? (Making an assumption here; I don’t know what kind of artist you are.)
Let’s explore this further. When we look at the work of a human we can often see their influences (and they can often acknowledge them or even cite specific works). In a way, they are able to credit those they were inspired by.
Would an “AI” be able to do the same? I’m guessing it probably could, but more as a statistical similarity to other works. I don’t know whether it can cite its sources.
A human can say that they were influenced by XYZ, but they might not be crediting all of the instructors they had, or all the art books they read, all the stepping stones that got them to the point of being able to produce a work that has an identifiable influence. Then consider the people who influenced the person they’re citing as an influence, and so on and so on. I don’t know that the AI can tell you where every flourish comes from, but the person using it as a tool certainly could tell you what tags they used, which often include “in the style of [artist name]”.
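The “statistical similarity to other works” idea mentioned above can actually be sketched in code. One common approximate approach to attribution is to embed images as vectors and retrieve the nearest training examples by cosine similarity. A minimal illustration with made-up toy vectors follows (a real system would use a learned image embedding such as CLIP; the vectors and labels here are purely hypothetical):

```python
import numpy as np

def nearest_influences(query_vec, training_vecs, labels, k=2):
    """Return the k training works most similar to the query by cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    t = training_vecs / np.linalg.norm(training_vecs, axis=1, keepdims=True)
    sims = t @ q                       # cosine similarity to each training work
    top = np.argsort(sims)[::-1][:k]   # indices of the k most similar works
    return [(labels[i], float(sims[i])) for i in top]

# Toy embeddings standing in for real image features (purely illustrative).
training = np.array([[1.0, 0.1, 0.0],
                     [0.0, 1.0, 0.2],
                     [0.9, 0.2, 0.1]])
labels = ["artist A, woodcut", "artist B, watercolor", "artist A, etching"]

generated = np.array([0.95, 0.15, 0.05])  # embedding of a generated image
print(nearest_influences(generated, training, labels))
```

This only surfaces the training works that are statistically closest to the output; as the comments below argue, that is weaker than a human acknowledging a genuine influence.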
Instructors and art books literally give permission to use them as a “stepping stone” by definition. The entire point of them is to offer input to other artists.
Also, the main difference is that a human has a human mind and is making creative decisions unique to that human. The problem is that a narrow AI algorithm cannot be anything BUT derivative. They don’t think; they don’t have a mind that filters the data through a unique perspective; they just process the data like a series of conveyor belts. If you never give a human any input from other artists, they can still make art; that’s why we have cave paintings. But a narrow AI algorithm needs specific input via specific pieces of art, or else it can’t create anything. With that in mind, permission and consent are much more important to the artists whose specific pieces are being fed into the algorithm. It’s generally considered good form to credit inspiration in derivative work, but we understand that human nature means humans may not always remember or realize what or who inspired them. AI doesn’t have that excuse. We are perfectly capable of only feeding images we are given permission to use into AI, and we are perfectly capable of having the AI log and report what works it used data from.
But it does hold water. The original image might not be contained within the model, but the fact that it was trained on stolen data makes it problematic. Even if humans do the same, an AI model is not a human but a product, and therefore needs to adhere to different rules.
Also: scale. If you’re a painter inspired by other painters, your output will still be limited. AI is a different story in this regard.
AI pulling from a database to recreate art styles is much more destructive than human inspiration.
Imagine you’re a children’s book illustrator. Your work is out there and accessible. Now someone has the idea of getting into the field using AI. They really like your art style and give the AI your name. Now the AI spits out illustrations in your style, and many people might not even be able to tell the difference.
This random person uploads them, or maybe even contacts your publisher. Worst case, the publisher buys their work without caring about the quality of the stories. Now you’re actually replaced.
Now is this not copyright infringement?
Having the AI cite sources is not a solution to this, as people will simply detach them. Having signatures on your works is not a solution either; it actually makes things worse, because the AI copies the signature and now it looks like signed work from you.
When I first saw people using AI to make great images, I thought the same: it’s just a non-human inspiration cycle. But human inspiration is so, so different. You don’t just look at existing images. EVERYTHING you’ve ever seen is an inspiration, and everything you’ve ever read, heard, or done too.
Human inspiration is one thing. Creation takes skill, practice, and time. AI creation doesn’t. The program might have required skill to write, but that’s not an excuse for it to threaten entire industries.
>The program might have required skill to write, but that’s not an excuse for it to threaten entire industries.
We don’t live in a world where industries exist just because it would be nice for them to and people need work.
An industry is a productive environment that creates products for others to buy. If the people buying from the current art industry care about human inspiration and the uniqueness it adds to art, they will continue to buy from humans. If they do not, why should the state use its monopoly on violence to cripple any other source of product?
Are artists some special class of people above every other group of workers who’ve lost their jobs to automation?
If a painter looks at another artist’s painting, then decided to paint something similar, is that stealing?
@RightHandOfIkaros If they are just painting for themselves to learn new techniques or styles, no. If they are purposely trying to copy it to sell, or to pass it off as the original artist’s, yes. A for-profit corporation taking works that have not been authorized for commercial use in order to develop their for-profit software is indeed stealing.
Like elephants?
It quickly stops being a technical discussion and becomes a philosophical one. I’m not sure humans can accurately cite their sources either; yes, they can be interviewed and claim X or Y as a big influence on their artistic work, but how do they know that? Do they know it any better than an AI asked the same question?
This is true, but the way AI differs in a problematic way is usually described in confused and incorrect terms like “stealing” or “training without permission”.
It is, to some degree legally and to some degree culturally, not allowed to copy someone else without their permission. For human artists this problem is contained.
If I am inspired by your work and create a painting a biiit too close to yours, intentionally or not, you have the option to talk to me and we can work something out. Or, worst case, take me to court.
If an AI does the same, unintentionally of course, it’s not one painting after a few weeks of work; it’s thousands per day. You have no capacity to find and initiate conversations about each of them. And worse, your conversations will not be with someone who recognizes that they were inspired by your work. They will usually be with someone who lacks the eye to see the similarities, who will shrug and say “I don’t see it, sorry,” leaving you to take the fight to the AI supplier’s legal team, who will also shrug and hide behind terms like “algorithm.”
The difference is that we recognise humans and their history, imperfections and many many influences to be part of what makes both the human and expression unique.
A lot of the discussion doesn’t grant machine learning models the same inherent worth that humans get, and so they are viewed as tools trained to replicate others’ work rather than as creative agents.
This means that where a student painter is expected to have a desire to express something, and puts in hard work through practice and paid tutoring, replacing them with a machine that has no desires or stories to express, built by taking artwork with neither credit nor compensation, in order to then replace the very people who were exploited in creating the tool, seems unfair.
I take it you’re not an artist? That’s not how or why you do studies.