(Banner art is my own photography; the vehicles are not mine.)
Preface
I've seen quite a lot of very heated discussion here on Minds recently over the ethics of what's popularly referred to as "AI art" - algorithmically generated images. Since I can't see any particular reason to be emotionally invested in this, the strong feelings on the issue regularly take me by surprise, and I often wonder if there's something I'm missing. I would like to humbly offer this as an opportunity for people to convince me (a tepidly pro-algorithm non-artist) of the rightness of their anti-algorithm position.
I'll start out this post by stating my position as clearly as I can: I don't think algorithmic image generation is a big deal in the long run. The term "AI Art" is deceptive. There's no inherent moral harm in using algorithmic image generators (though they can be designed immorally or used in immoral ways). Human artists aren't going to be replaced unless they actively, affirmatively choose to be replaced, and the whole concept would not be threatening to artists if the art culture of the West were healthy in any sense of the word.
Laying out why I think these things will take a while, but I'm going to do it anyway, because it gives you, the person who disagrees with me, all the information you need to formulate an effective argument that would persuade me. Again, I am not in the least emotionally invested in this issue; this is where I've ended up after thinking about it, applying both philosophy and professional expertise and considering it in the abstract. I admit the possibility that I've gone wrong somewhere, and laying out where I'm coming from gives you the best chance of pointing out my errors.
Firstly, we need to break down why I think the phrase "AI Art" is two lies. We might be here a while, but this is important, as it's why I've come to prefer the more accurate term "algorithmic images" and influences several of the tenets of my position which I laid out in the preface.
Artificial Intelligence Isn't
When most people think of AI, or artificial intelligence, they think of thinking computers: Skynet from "Terminator", HAL 9000 from "2001: A Space Odyssey", Cortana from "Halo", and so on. Obviously, that's not what we're talking about when we say "AI" in terms of modern technology.
Software engineers like myself tend to think of "artificial intelligence" as a marketing buzzword or even a contradiction in terms. The term really doesn't have any particular technical definition that pertains to its common use. Most of the systems whose creators crow about "artificial intelligence" actually use what are more properly called machine learning algorithms, to the point where some people consider "artificial intelligence" to be a synonym for machine learning.
You might say that a machine that can learn sounds a lot like what you'd expect out of artificial intelligence, but the reality isn't quite that interesting. The "machine" isn't "learning" in any proper sense.
What, then, actually is machine learning? Well, to answer that, we have to ask what the "machine" is. What we call a computer is an instruction-following automaton; it can only ever proceed from one pre-programmed instruction to the next pre-programmed instruction. True, earlier steps can alter the program later down the line, but a way of generating those alterations must itself be provided. Past experience cannot change how an instruction is executed (though data from past execution can be used as inputs into that instruction, if so instructed). This fundamental limitation is also what makes computers so useful - their reliability. They are fast idiots which never make mistakes executing instructions, but which can't spot or correct mistakes in the instructions given to them.
Machine learning is the term for sets of instructions which first examine a set of input data and develop a model, then use that model to generate a new set of instructions that can be applied to arbitrary data. The core of this process is the mathematical model: a set of equations describing the structure of the input data, which may or may not be useful when applied to data outside that initial set. These algorithms are useful because they allow a computer to brute-force its way to a probable process for completing a complex task the programmer might not be able to conceptualize directly, but they are not autonomous: a human must describe the problem space itself by other means up front, and must test the output of the resulting process against the desired results.
In other words, what we call "machine learning" is just computing answers to highly abstract problems, then using those answers to generate new instructions that themselves hopefully solve less abstract problems. The program "learns" its own solution given a set of constraints, then applies it.
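To make the point concrete, here's a deliberately trivial sketch of my own (not how any real image generator works): the "learning" is nothing but arithmetic that derives a formula from example data, and the "intelligence" is reusing that formula on new inputs. The function names and data are invented for illustration.

```python
# Toy "machine learning": fit a straight line to example points by
# ordinary least squares, then apply the fitted formula to new data.
# Every step is deterministic arithmetic - no thinking involved.

def fit_line(points):
    """Compute the slope and intercept that minimize squared error."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

# "Training": examine the input data and derive a model from it.
slope, intercept = fit_line([(0, 1), (1, 3), (2, 5), (3, 7)])

# "Inference": apply the derived model to data outside the input set.
def predict(x):
    return slope * x + intercept
```

Scale the arithmetic up by a few billion parameters and you have, in broad strokes, the shape of the systems being marketed as "AI": a model computed from data, then applied.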
From this description, it should be apparent why I don't consider machine learning to be artificial intelligence in any popular sense of the word, even if marketing people love to use the terms interchangeably. Calling the images that machine learning algorithms spit out "AI art" thus misleads many. People - indeed, I suspect, many of the strongest opponents of AI art - are thinking of Skynet, of a thinking computer, when they say that "AI art" is computers learning to replace humans.
The algorithm isn't learning, though it may be designed to refine its mathematical model. It isn't thinking; it's following the programmer's instructions, however complex. So is it making art?
Machines Don't Create
Where then is the second lie? Well, it lies in the implications of the phrase "AI Art" - the concept that artificial intelligence algorithms as the term is commonly used can actually create art, or even create generally.
This one took some thinking about for me, and some brief inspection of how generative algorithms work. Essentially, these algorithms take a bunch of images pulled off the internet as their input problem space, generate a mathematical model of what art matching the input text looks like, and then plug a random seed into that model to produce a new image. The new image is not art. It was not created; it was calculated. Given the same input problem space, the same search results, and the same random seed, the result is exactly the same every time the program is run.
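The determinism claim is easy to demonstrate with a toy sketch of my own (the "generator" below is invented for illustration and bears no resemblance to a real diffusion model): seed a pseudo-random number generator, derive an "image" from it, and the same seed yields the identical output on every run.

```python
import random

def generate(seed, n=8):
    """A fake 'image generator': a fixed seed drives a fixed
    instruction stream, producing fake 'pixel' values."""
    rng = random.Random(seed)
    return [rng.randrange(256) for _ in range(n)]

a = generate(42)
b = generate(42)   # identical inputs...
assert a == b      # ...identical output, every single run
```

A real generator's model is astronomically bigger, but the principle is the same: fix the inputs and the seed, and the "creative" output is a foregone conclusion.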
Creation, philosophically speaking, is the process of producing something that is more than the sum of its inputs. This means a will is required to be a creator, making creation the preserve of thinking beings. Indeed, the ability to truly create art would be a far better proof of true AI, that is, conscious computers, than the vaunted but hilariously broken and pointless Turing Test (the self-evident brokenness of Turing's formulation is a subject for another time and place).
As detailed in the previous section, a modern computer uses a deterministic architecture: it follows instructions. An instruction with the same inputs can only ever have one result. A computer in this sense can never create, because its inputs define its outputs precisely; nothing is included that is not specified, even if the specification process is highly complex (as in the case of algorithmic image generation). There is no will involved, only a pseudo-random seed injected to mix things up a bit.
Perhaps this might change in the future with different ways of building computers, but since we don't have those machines and cannot examine the precepts of their design, that hypothetical is entirely un-analyzable here and now, and I cannot form an opinion based on a what-if of this degree. You might as well ask what if computers were powered by tiny, fast-working demons like those employed in the devices of Terry Pratchett's Discworld.
Can algorithmically generated images be beautiful? That's a totally different matter, but the answer seems to be self-evidently yes. Beauty exists in many places in the world around us where it was not created by the hands of man, and can even appear natively in places where the hands of man have done their best to stamp it out. Philosophically speaking, I find it no surprise that an algorithmically generated image can possess beauty: the algorithm is trying, in part, to determine the structure of aesthetically pleasing images, and math does play a part in what we consider beautiful, so it is fitting that the algorithm sometimes hits on beauty.
Indeed, and this goes to something we'll get to shortly, I would expect the algorithm to put out a higher percentage of superficially beautiful images than the average post-modern human artist, because the post-modern artist does not think about the beauty of their creations very much, if at all.
The Return of Objective Standards
Once you define "AI art" more precisely as algorithmically generated images, it seems incredible that anyone would get highly worked up over the existence of such. Surely the artists have realized by now that creation and art are impossible for computers? Surely this would come as a comforting realization: art is never going to be the realm of autonomous machines.
And yet, despite the self-evident impossibility of "AI art," it seems the algorithms are making many artists very concerned. Their argument seems to go that the algorithms might not be able to think or to create, but someday they might be able to pretend so well that their output will be as good as human-created art, making human artists obsolete.
What does it mean for a generated image to be as good as a piece of art, though? Obviously, the value to the artist who creates it, in terms of satisfactory artistic expression, is not comparable. The generated image can only ever, even theoretically, be "as good" as the art piece to an independent customer.
When people pay for commissioned art on the internet, the request usually comes with extensive text descriptions, because the desired art solves a problem for the person paying for it. That problem might be one of illustrating something imagined to others (character art for a D&D game or other RP), or one of quickly conveying mood and feel to the customer's own prospective customers (cover/marketing art for books, games, small business marketing copy, and so forth).
In the case of such commissions, an algorithmic image is as good as one commissioned from an artist when it does the job expected of it at least as well as the human artist, for roughly the same cost. Nobody seems willing to pay much, or anything at all, for algorithmic art-substitutes at their current level of quality. By the simple economics of the situation, then, a $400 art commission cannot be said to be threatened by a product almost nobody would pay even $1 for, as long as the customer for the commission got $400 of art for their expenditure of $400.
What it means to get $400 of art, though, is hard to quantify. Or at least it was, until algorithms started generating images, and this, I think, is what most people are really upset about. It might still be tough to tell the difference between the value of a $300 commission and a $400 commission, but customers are going to start expecting that their $400 commission is better for their purposes than an algorithmically munged image. This seems to make a lot of artists unhappy because for their entire lives (the previous standards having been murdered by the horrors of postmodernism in the mid-to-late 20th century) there hasn't been a lower bound of acceptable quality on their output. Now, courtesy of the algorithm, there is. The artist has to do better than the "AI" will do for free if they're going to keep getting paid.
In other words, the very existence of the algorithmic image generator establishes the lower bound of acceptable output quality for artists. The existence of this lower bound - an objective standard that exists outside any artist's personal control - seems to be the problem for most of the people who are highly invested in this issue, at least as much as, and in many cases more than, the theoretical future possibility of raising that standard beyond any human's ability to compete.
For the record, for reasons I think are apparent from previous sections, I don't think algorithmic image generation will ever raise the bar so high that humans can't sell art. Sure, the algorithms will adapt to fix faces and hands, and to figure out how to make things symmetric when they're not viewed straight on, so people will be able to get some basic D&D character art without a commission, but in the end, the algorithm will always follow and mimic human art imperfectly. It may accidentally inspire new human art (for example, art created to capture some interesting beauty the algorithm spat out one time - I know there are artists already doing this), but it will never create new lines of creative style, or new things generally, because creation is beyond computers, period.
Why does the existence of a lower bound of quality offend artists the way it does? To many, this might seem a bit strange. In most lines of work, including my own (software engineering, as mentioned previously), the minimum quality of acceptable work is always going up. Companies never want less-clean, less-readable code; they always want cleaner, more readable, more efficient, and better-documented code, every year.
Only in art is this not the case; only in art are there no standards, and only in art do people look aghast at you when you try to apply objective standards to the output product someone is expected to pay for.
Before postmodernism wrecked art culture and everything in it became lazy, all forms of art had objective standards. That was before my time, of course, and probably before yours, unless you're twice my age or older. We've had a few generations without them, and nobody can pretend that art culture is better for it. Artists, even those who style themselves avant-garde outsiders, even those who otherwise rebel against the squishy, lukewarm prison of postmodern expression, take umbrage at the idea of being expected to practice self-improvement, even the same sort of gradual self-improvement required of professionals of every other stripe.
If it weren't for this culture of stagnation, visual artists wouldn't and couldn't be threatened by algorithmic mimicry. Only truly, genuinely bad art - art that meets no minimum standard of acceptable quality - could ever be replaced by algorithmic non-art. For my whole life, artists have told western culture that it needs to be content with bad, ugly art, because the alternative was no visual art at all.
Now, through the power of technology, there's an alternative, however flawed. Sure, it's not really art, and it's crap, but it's free and plentiful crap, and sometimes it's not ugly. It's no surprise that it compares favorably to the lazily hand-made crap modern art culture vomits out and expects payment for.
The need for humans to make good art remains unchanged. Good art cannot be replaced by the machine. The bad, ugly, postmodern art might still be art, but it's so objectively bad and ugly that people would happily replace it with tepid non-art that's at least not ugly. I consider this to be a hopeful sign for the future of artistic expression, because it is a driver of change for a stagnant and sclerotic system.
Faced with a new paradigm where there is a minimum standard of quality below which the customer can say "screw it, I'll have better luck with the algorithm," it seems some artists are trying to wield the rusty old weapons of postmodern thought against this new threat. I don't think shame, emotional scaremongering, and tying up thought in the redefinition of terms are going to save the old order, though. Postmodernism is an anomaly in the history of art; its standards-free environment was always artificial and unstable.
The Ethics of Algorithmic Images
By this point, hopefully you understand me when I say I don't consider algorithmic image generation to be some Skynet-style danger, or to be art in any sense of the word. Perhaps you also understand my rationale for not being concerned about the algorithm replacing art. This is a good point for me to observe that not all use of the algorithm is morally in the clear.
In a chat about this with some other Minds creators recently, I likened the algorithm to a very cheap handgun - it puts power in the hands of people who couldn't afford it before, but that power can absolutely be used both for good and for evil.
Here are some examples (I'm sure there are more I haven't thought of) of usage which I would regard as immoral, and which are entirely worth condemning.
Just as I would condemn the use of a handgun in a murder without condemning the availability of the handgun or surrendering my own right to own one, however, I see no reason to oppose more morally conscious use of algorithmic image generators just because of the possibility of malicious use.
Why I don't take the "Stealing" argument seriously
Perhaps the most common catechism (and I call it this quite deliberately) of the anti-algorithm crowd is the tenet that image generating algorithms "steal" from artists, so there cannot be any possible ethical use of these tools. Nobody has ever provided any groundwork for this assertion that I can find.
This section is being added as an afterthought to this already-too-long post because, frankly, there's not much to say about this, except that this is an article of faith which I do not hold, but which people seem to assume that I must hold. This is why I call it a catechism and not an argument: nobody seems to argue why this is. You either believe this assertion with zealous certainty, or you think it's all nonsense. In absence of good arguments, I must conclude for the moment that it's nonsense, but that doesn't mean this is going to be my position on the matter forever.
I call this argument intellectually lazy because it is an unsubstantiated (and, I fear, in many cases unexamined by its proponents) assertion upon which the entire position stands. I have considered it, and I cannot find any consistent moral standard under which it could be true in the minds of its proponents, given my observations about their behavior and preferences. It bears, in other words, all the hallmarks of a post-hoc rationalization for a conclusion that was emotionally generated.
If you have an intellectual argument for why algorithmically generated images are always theft, I'm happy to listen to it. Because of the complexity of this technology, I'm happy to admit the possibility that my twenty-thousand-foot analysis missed something technological or ethical. Any argument that starts from the assumption that this is the case before it's been established, however, is going to fail with me, and with everyone else who doesn't already think the generation algorithms are some sort of digital Antichrist.
I will say one further thing on the matter, however, because it is the line of logic that led me to conclude that the "stealing" assertion is probably a post-hoc rationalization, and thus it will be useful to you in attempting to change my mind.
If you are deeply opposed to seeing algorithmically generated images on Minds, but aren't equally fervent in your opposition to the many Minds accounts (some of which are probably algorithmically operated themselves) which simply post other people's art without attribution for upvotes, you need to examine your assumptions and work out that apparent contradiction before you try to change anyone else's mind. Once you have, if you come up with a good reason why the algorithmic images are "stealing" and the other content isn't at least equally stealing, and at least equally worth constant badgering, I'd be very interested to hear it.
Now Change my Mind
If you've read this far, thank you for sticking with me this long. I am serious about being open to corrections to any of my thoughts on this matter. If you think I've gone wrong somewhere, do point it out in the comments. I'm happy to have a conversation about it, as long as that conversation remains civil.
If you think that it is inherently morally wrong to use algorithmic image generators, I'd especially love to see your rationale laid out in this manner. As stated above, I really don't have a strong emotional attachment to this issue; I wouldn't have thought about it much at all if I hadn't seen strong feelings flying around on this site about it. I appreciate seeing both sides shared without any sort of censorship or promotion; it made me think and evaluate the matter, hopefully thoroughly.
Do note that emotionally coded language has precisely zero persuasive influence on me. You can thank the nihilist death-cult of Woke for teaching me to automatically distrust any position urged on me with emotional appeals, and this issue is certainly not one I'm going to change that policy for. I promise to do my best to interpret rebuttals in the most charitable way possible, once needless emotional appeals have been stripped out, but I do ask you don't make this any harder to do than it needs to be.