AI Exists to Make Your Life Worse

Jun 11, 2023

Imagine for a second that you are an arsehole. Not just any arsehole, but a tremendous arsehole, a gargantuan arsehole. A void at the centre of a crevice from which emerges a torrent of effluent that will - in time - drown the entire planet. Try to feel the sensation of power, of magnificence, as the river of humans and excrement flows beneath you. Imagine every one of those people reaching up, up, desperate for help; then imagine reaching down, ignoring their desperate hands, diving deep into their pockets and fishing out whatever pocket change they might have inside.

That image in your head? That mix of ego and evil? That's how it feels to be a techbro.

I have to put my cards on the table here: I am not a computer programmer. I am not a data scientist. I am a nerd who likes science and history and spends too much time on the internet, so I've read about the guy who used machine learning to make Magic: The Gathering cards; I've read books about the birth of capitalism; I listen to Behind the Bastards and the Ezra Klein Show, to Trashfuture and Cool People Who Did Cool Stuff. I grew up loving science. Hell, I recently got back into studying maths for fun.



I loved Ghostface Killah growing up, so my favourite superhero was Iron Man, and even now, in the early years of middle age, with the weight of years on my shoulders and the shadow of time clouding my optimism, I still hope that one day I can go back to school, study science, and spend the rest of my life in a university somewhere, with colleagues and grad students at my side, trying to peer into the mysteries of the universe. Take this as the words of a jilted lover if you want, or consider me an interested if inexpert amateur, but I come to you to tell you this:

AI is a scam.

AI companies are either lying or mistaken about who their customers are.

AI will ruin your children's education.

AI will make your job harder.

AI will make you poorer.

AI will kidnap opportunity and murder the human soul.

Thesis Please

Part One: The Scam

On the 16th of May the US Congress held a hearing on the risks of AI. OpenAI's CEO Sam Altman - a man solely interested in preventing AI from being misused and not at all interested in leveraging the power of the US Federal Government to entrench his company's market dominance - said that his company was "created... to ensure that artificial general intelligence (AGI) benefits all of humanity" and that they "take the risks of the technology very seriously". The problem is that they lie about the risks that their software poses.

All of these AI companies are selling different flavours of large language model (LLM). A basic explanation is this: the LLM is given a pile of data, then uses that data to predict its response. So if you go to the LLM and say, "Write a 400-word essay about cabbages", it will use its dataset to generate an essay based on those parameters. The more data, the wider the range of responses and the more statistical correlations the LLM can find. The LLM is not thinking; it does not know anything; nor is it finding a solution. It is programmed to search its dataset, check that dataset against its parameters and say, "80% of the time the word before cabbage is green". It is not intelligence; it's a more complicated version of your phone's autocomplete.
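The "fancier autocomplete" idea can be sketched in a few lines of Python. This toy bigram model is an illustration of the principle only - real LLMs are neural networks predicting tokens over billions of parameters, not word-pair counts - but the core job is the same: predict the next word from statistical patterns in past data, with no notion of whether the prediction is true.

```python
from collections import Counter, defaultdict

def train(text):
    """Count which word follows which in the training text."""
    words = text.lower().split()
    successors = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        successors[prev][nxt] += 1
    return successors

def predict(successors, word):
    """Return the statistically most common next word, or None if unseen."""
    counts = successors.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

model = train("green cabbage is tasty and green cabbage is cheap")
print(predict(model, "green"))    # "cabbage" - it followed "green" every time
print(predict(model, "cabbage"))  # "is"
print(predict(model, "quantum"))  # None - it has no data, so it has nothing
```

Note what the model cannot do: asked about a word outside its data, it has nothing to say, and asked about a word inside its data, it simply parrots the most frequent continuation, true or not.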

There are really cool things that these programs can achieve. They're great at predicting the structures of proteins and how they fold, something which could revolutionise the development of new medicines.

But that's not where the easy money is. The dirty secret at the heart of capitalism is that actually doing things is hard. Think about the protein folding again: say your program finds that protein X will fold in such a way that it kills cancerous cells whilst leaving normal cells alone. That doesn't immediately make you a billionaire. You've got to figure out how to mass-produce the protein, you've got to check for side effects, you've got to run trials and make sure your product passes medical regulations. All of these things take time and money, and if something goes wrong at any point you may not turn a profit. On top of this, medicine is an evidence-based business: if your product doesn't work, your customer base will realise very quickly. This makes it harder (though not impossible) to lie about your product in order to drive up sales.

Furthermore, think about what these "AI" programs do. They check their data and see if a thousand people have said x in response to prompt y. The program has no way of verifying whether the input data is correct, nor can it confirm that its output is correct. It cannot analyse its product; all it can do is keep producing.

In On Bullshit (2005), Harry G. Frankfurt defined bullshit as an utterance made without any regard for the truthfulness of that statement. In opposition to lying, which at least attempts to subvert the truth, bullshit - and more importantly, bullshitters - "[do] not reject the authority of the truth as the liar does and oppose [themselves] to it; [they pay] no attention to it at all." He went on to say: "Bullshit is unavoidable whenever circumstances require someone to talk without knowing what he is talking about." None of the AI machines know what they are talking about. They are specifically crafted to generate plausible-seeming responses to prompts. They are, in other words, machines designed to create bullshit as quickly as possible.

AI researchers Arvind Narayanan and Sayash Kapoor of AI Snake Oil recognised this in their essay "ChatGPT is a bullshit generator. But it can still be amazingly useful." They suggested that there are three kinds of task where AI would be useful:

  1. Tasks where it’s easy to check if the AI’s answer is true.

  2. Tasks where truth is irrelevant, such as writing fiction.

  3. Tasks where there exists a subset of training data that is true, such as language translation.

As we have shown earlier, AI companies are interested in making easy money. There is no easy money in creating an AI to solve tasks where it's easy to check if the AI's answer is true; nobody makes billions of dollars and launches their own rockets into space by making calculators. The easy money is in task types 2 and 3, because that is where the scam starts.

Consider what Narayanan and Kapoor said in their essay (paraphrasing): 'truth is irrelevant when writing fiction.' Now this isn't true at all, but what's interesting is why they think it is. People believe that truth is irrelevant to fiction because, after all, the author is making things up. But remember what a work of fiction is: anybody who purchases a novel, or a short story, does so knowing that the text does not show factual truth; they purchase what they hope will be a well-told piece of fiction. They aren't purchasing the truth; they are purchasing a lie.

I am focusing on writing fiction here because it is the form of art that I am most familiar with, but the same principle is applicable to all art. Art, fundamentally, is an act of communication from creator to consumer. The artist, through knowledge of their medium and of humanity, attempts to communicate through their work something that will inspire people to feel a series of emotions that are true to the audience, even though they know that the work does not portray mere fact. Think about Schindler's List: does it matter if Oskar Schindler did or did not say the exact words "I could have got more out"? Of course not. What matters is the truth the film was trying to communicate with that sentence - that Oskar Schindler was a man in an impossible situation who did impossible things, and that doing so fucking broke him. Writers, photographers, fucking spray painters, are intimately concerned with the truth because truth is what makes their art function; art is a lie, it isn't bullshit.

Because art isn’t bullshit, AI programs cannot produce it. AI programs cannot tell a love story worth a damn because AI programs do not know what love is. They don’t know what a story is. They don’t even know what the word ‘is’ is!

The way technology is supposed to work is that technological advances lead to less time spent on labour and more time spent on leisure. The difference between the cost of the labour saved and the cost to make and sell the technology is the profit of the technology company. But AI writing cannot save on the labour costs of writers: first, because the act of writing a story basically only costs time, and AI isn't a time machine; and second, because the time it takes to turn the bullshit generated by the AI into a story is equivalent to or greater than the time needed to just write the damned thing in the first place. How do I know this? Because of a thing in Hollywood called page-one rewrites. Sometimes a script doesn't work, so the studio has to hire a new writer to fix it, and if you speak to Hollywood screenwriters they will tell you that the amount of time it takes to rewrite a script from page one is basically the same as the amount of time it takes to write a script from page one.

And what's worse is that AI writing companies add costs to the writer; they don't take them away. A company like Sudowrite charges writers for its product whilst doing nothing to assist in the creation of their work. In other words, AI writing is a technological innovation that increases the time it takes for labour to be completed whilst increasing the cost to the labourer.

This is the scam. The companies are pretending that AI can be used to help artists create their work. It cannot. They are pretending that it will reduce the cost of entry to creating art. It won't. They are pretending that it will make art more accessible, when instead it will drown art in a torrent of bullshit and make it impossible for audiences to get access to the art they want to enjoy unless they spend minutes, hours, days teasing something functional out of the infinite bullshit machine.

Clearly this is a horrible deal for writers and consumers, so how do these companies expect to make money? Because they aren't trying to make it easier for writers to write, and by extension for artists to make art. They're trying to make it easier for corporations to reduce the amount of money that they pay writers. That's the second part of the scam: AI companies lying to their customers.

I’m going to take a break here but next week I’ll be posting part 2: AI Companies are Lying to their Customers. If you want to read more please sign up to the feed, and if you really like it please go ahead and Buy Me A Coffee.
