Why Isn’t Artificial Intelligence Fun?
AI is here. And yet, it is still incredibly unsatisfying.
Computer Love is a monthly column that investigates new technology with the purpose of making it feel less dense and more fun.
We find ourselves at the center of a moment that has haunted science fiction for nearly 70 years: the emergence of artificial intelligence. The response to this revelation perfectly reflects the fractured reality of our time: from cracked-out YouTubers extolling AI’s ability to make you wildly rich (by either contributing to the content-creator pyramid scheme or dropshipping) to doomsayers playing out the extinction of humanity. Absent from this conversation is an understanding of why we adopt new technologies. The creators of current AI models haven’t yet found a way to make it fun.
When the wizards of artificial intelligence are pressed to rationalize its raison d’être, their arguments feel somewhat schizophrenic. In an interview with Lex Fridman and in the recent Senate hearing on AI, the CEO of OpenAI, Sam Altman, stated that the company’s strategy was to roll out AI systems while they are weak and imperfect so people can figure out how to integrate them into their lives as the technology develops. Which kind of leapfrogs over the reason to create it in the first place. When large language models started rolling out, Bill Gates presented a vision of the AI future in this blog entry. Gates sees AI enabling dramatic advancements in medicine, education, and climate change research that are only mildly overshadowed by a renaissance of personal assistants. This confusion between optimization and progress contributes to an atmospheric malaise that leaves our culture feeling starved and desperate.
It is precisely the artists and stewards of culture who are most immediately threatened by advancements in artificially generated imagery, which are being presented as equivalent to art. Illustrators and other artists like Steven Zapata and Karla Ortiz are beginning to organize and build legal cases against the scraping of their art from online platforms for use in “new” work made by generative image tools like Midjourney and DALL-E. In his article “TEXT-TO-IMAGE,” Dean Kissick refers to this practice as “spawning” and believes it will allow appreciators to experience new dimensions of existing artwork by expanding past the perimeters of existing paintings and creating new works in old styles. In a video about an artificially generated Jay-Z verse, Marques Brownlee distinguishes between two tiers of success in AI: AI that fools you when you are not paying close enough attention (like the photo of Pope Francis in a Balenciaga puffer), and AI that fools you even when you know it is AI. As the tools rapidly improve, they increasingly fall into the second category.
I sympathize with artists who are already undervalued in their industries and now see their way of life threatened by AI tools. Consumers of creative work are also impacted, because AI tools are incapable of conceptualizing and respecting an audience. When I see articles or tweets sharing interactions with ChatGPT, my eyes glaze over, because I know there is no effort on the part of the prompt-writer or the large language model itself. It is “bullshit” as defined by Harry Frankfurt in “On Bullshit.” To summarize the 56-page treatise: bullshit is the deliberate misrepresentation of one’s thoughts, feelings, and attitudes through nonsense, falling just short of outright lying.
While ChatGPT and other LLMs are capable of outright lying, the deeper issue is that they are indifferent to truth. In these salad days before artificial intelligence gains sentience, there is no intent behind what they generate. Like dreams, which are full of hidden meaning and emotional depth to the dreamer but excruciating to the person hearing about them, this artificially generated content is only interesting to the person making it.
In my experiments with ChatGPT, I’ve noticed my own brain being rewired. After a concentrated session of grilling the chatbot about cultural differences between China and the United States, I returned to work and found I had developed an expectation of instant informational gratification. My brain had to shift back into a lower gear to work out complex problems without an immediate result. In another experiment, with Midjourney, I tried to generate illustrations for a death metal t-shirt design, and they fell just short of looking authentic. When I tried to draw my own design using the generated one as a starting point, I was let down both by my own artistic inadequacy and by Midjourney’s. The shrinking gap between artistic intent and technical limitation made the failure more painful: the result came so close, yet remained unsatisfying.
Despite calls to halt the development of AI, the “move fast and break things” ethos of the tech world barrels onward. Since the genie is already out of the bottle, we should consider how these tools can be integrated in a more sensitive way. As the human players in the partnership with artificial intelligence, it is our role to bring intent to the projects we build together. So many of the digital tools we have built have contributed to an age of isolation. Instead of perpetuating this pattern, we should develop ways of bringing AI into environments with multiple human players.
Artificial intelligence may be the first tool in human history that needs to be socialized. Creation of any sort is ultimately fulfilling because when you smash things together, the outcome can magically become greater than the sum of its parts. That alchemy occurs between stages of unbound expression and critical refinement, both of which are missing from artificial creation. If artificial intelligence is going to be fun, it needs to contribute to an authentic spark of discovery.