Taking Refuge in AI
There is no Artificial Intelligence. Let's just get that out of the way.
What people call AI is based on an oligarchical scheme to inflate company stock values. I worked at a small software company in the 1990s and saw its stock reach the stratosphere on revenues of less than $20 million because it was a small, Internet-based start-up. This was the dot-com era, when tiny companies could turn into tech giants by unleashing technology in new ways. We've all seen the picture of Jeff Bezos' home where he started Amazon. The greed, the mania of the time taught me a lot about human delusion. Amazon et al. certainly made life more convenient in the short term, but the future trajectory of this period's innovations could spell dystopia.
The stock became so valuable that the owners wisely decided to take refuge in profit and shut the company down. It was delisted from the NASDAQ. Two hundred or so employees were let go, and an initial investment of several million dollars was returned to investors twentyfold. How often do we read about companies closing up because all the profit had been realized and there was no point in continuing?
My heart is filled with gratitude for having experienced this period in history. There has been a lot of nostalgia for the '90s of late. There was a general feeling of optimism about the future. Work was fun, exciting, stressful and personally rewarding.
Back to HAL...
The AI revolution has some of the feel of the '90s Internet euphoria, but as ChatGPT has shown, it is interesting on a whole new level, far beyond buying books online from a fledgling Amazon or getting driving directions from MapQuest. All the same inclinations are operating in the human heart, though. There are reports this week that the hype surrounding AI is already starting to fade as people realize that it's a flawed tool, not a sentient being.
The AI behind ChatGPT is really Machine Learning (ML); computing power has reached a scale where it can ingest a vast share of the information ever produced and synthesize it in response to queries on everything from party planning to linear algebra. There are other areas of AI research apart from Large Language Models (LLMs), but LLM work draws the lion's share of attention because it is being distributed to the masses. AI has been around for years in back-office systems.
Let's explore the problem with using the term Artificial Intelligence.
Intelligence is the Self willing to use knowledge and reasoning ability for a volitionally conditioned purpose. Intelligence includes awareness of a conventional person – a self-awareness – who is different from other objects even if they appear to share a likeness. Animals show this quality in varying degrees.
Volition comes from the Latin noun voluntas, voluntatis and means "will, desire, choice, intent" with the implied context of knowledge and self-identity. "John voluntarily surrendered his life to Allah." There it all is: knowledge, self-awareness (ego) and choice.
ChatGPT is ML (a subset of it, strictly speaking), so it does not and cannot choose anything. It is trained to detect patterns organized as models. It "learns" by recognizing symbols, making mistakes along the way. It is "corrected" so that the accuracy of its symbol detection and correlation to a model approaches 100% of what the trainers feel is "fact." The Intelligence involved in this training process has been human and will remain human as ML moves to self-training. ML and, eventually, true AI will always bear the traces of a human cause.
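To make "correction" concrete, here is a minimal sketch of the kind of loop that sits underneath all of this: a toy model being nudged toward toy data. The numbers and names are hypothetical illustrations, not anything from an actual LLM:

```python
# A minimal sketch of supervised "learning": the model is just numbers
# nudged toward whatever the training data labels as correct.
# All data and values here are hypothetical illustrations.

# Toy "facts" chosen by the trainer: pairs of (input, expected output).
training_data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # i.e., y = 2x

weight = 0.0            # the entire "model"
learning_rate = 0.05

for epoch in range(200):
    for x, target in training_data:
        prediction = weight * x              # the model's "answer"
        error = prediction - target          # how wrong it is
        weight -= learning_rate * error * x  # the "correction"

print(f"learned weight: {weight:.3f}")       # approaches 2.0
```

Nothing in that loop wants, chooses or understands anything. The "fact" that the right answer is y = 2x lives entirely in the training pairs a human supplied; scale this up to billions of weights and the situation is the same.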
This is not to take anything away from ML. It's amazing. I've been using it daily for professional and personal tasks, and I've been happy with its contributions to my productivity. It's still often wrong in the answers it gives, and because it lacks cognition, it tends to spit back slightly different reformulations of the same wrong answer. It isn't aware in the least that it is serving up the same error with different explanations or justifications. More simply, it just isn't aware of anything.
It cannot be corrected outside of the training process, so patiently explaining why an answer is wrong (say, a point of programming syntax, where code either works or it doesn't) does nothing to help it learn.
Intelligence involves acquiring, understanding and interpreting information, especially in dynamic situations where other actors are involved. This is something no LLM can do. It's not Intelligence, artificial or otherwise. A small child learns new things at a rapid rate through classroom interaction with a teacher and peers. ML does not. LLMs do not remember past conversations you've had with them. LLMs cannot anticipate the answers to questions you may have related to a topic, although they may appear to at times.
One thing intelligent beings have is a sense of Time; AI does not. Time is fundamental, and not knowing or understanding it directly, even if it remains mysterious, shows non-intelligence.
As indicated, Intelligence also goes hand in hand with volition. Will as understood in classical Thomistic theology associates intellect, or reason, with its power to inform about the worthiness of objects of intent. No man craves the evil, but only the good, even if his reason errs about what is truly good. AI doesn't will anything, however, and its power to produce organized, truth-y material is an effect of machine computation done for the purpose of synthesizing data.
ChatGPT suggested a code fragment one day which included a file path like "C:\James\work\templates\readme.md." This is an instance where the source training material showed up in the wild; it had absolutely nothing to do with me or my file system. If I had spent a few minutes searching GitHub, I likely would've found the fragment there. The agent could not intelligently determine, from the context of me, my PC's file system and itself, that the actual file path should have been something else entirely. The LLM could not reason out that I'm not James, or that I'm not using a file structure like his at all. It could not abstract a better suggestion out of the training information it has.
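For contrast, here is what a context-aware suggestion would have looked like. This is a minimal sketch in Python; the directory names are illustrative, not from the original fragment:

```python
from pathlib import Path

# What the model actually suggested: a path copied verbatim from
# someone else's machine (illustrative of the original fragment).
readme = Path(r"C:\James\work\templates\readme.md")

# What reasoning about context would produce instead: a path resolved
# against the current user's home directory...
readme = Path.home() / "work" / "templates" / "readme.md"

# ...or against wherever the script itself actually lives.
readme = Path(__file__).resolve().parent / "templates" / "readme.md"
```

The abstraction is trivial for a person ("your path, not James's"), but the model has no "you" or "James" to reason about, only the strings it was trained on.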
Google search has sucked really hard for the last couple of years, and now I use ChatGPT (GPT-4) to answer my questions when I can. Questions that would once have sent me automatically to the browser search bar now send me more and more to Poe, an app that hosts interfaces to a variety of specialized LLMs. It's far more effective than wasting time looking at gamed or censored results from a bloated capitalist monopoly that was engineered to reprogram society, turning its members into dumb sheep who are sheared for personal data. Google was never about providing information to help create a more knowledgeable body politic. That has happened accidentally sometimes along the way. I think this is one of the reasons it has lagged in the AI arena. When was the last time Google did anything innovative?
ChatGPT made rapid inroads into corporate America and personal computers in short order. Microsoft, a key backer of OpenAI, the company responsible for ChatGPT, made Copilot a part of Office 365 and of Edge, its web browser. Integrations were also added to Microsoft's development IDEs.
Stories came out in 2023 of AI's potential to alter medical diagnostics forever. In one trial, ChatGPT was able to correctly diagnose a 1 in 100,000 disease that most human doctors would've missed.
This has ignited the imagination of techno-futurists, who see a bright world ahead very, very soon. AI will open the door to fusion energy, disease cures, immortality, full automation of essential work, space exploration and on and on. We might even see a day in which corrupt, bumbling and bloodthirsty politicians are replaced with an AI that works on behalf of society instead of private interests. Democracy can be put to rest at last. Well, that last part is my dream, but it is likely shared by a great many people across all strata of society. We're still in the awkward phase where we publicly pretend that democracy works.
I call this going to the AI for refuge. I saw Buddhist commentators on YouTube enthusing over ChatGPT and AI more generally for a bit during the spring of last year.
Within a short span of time the bubbliness of any New Thing dissipates and you are confronted with the Three Marks of Existence: Impermanence, Discontent and Not-Self. For the Western secular Buddhist, the AI hype is very easy to get sucked into because the devas play a mythological role, one easily forgotten in our world of science and machines. If one remembers them, and their long lives that nonetheless end, then AI becomes not such a big deal. Earth is still a garbage heap compared to the higher realms. Whether you believe angels exist or not, their story is a reminder that there is no future where the Marks vanish, either individually or as a whole.
Angels and gods fall all the time in the Buddhist cosmology. The heavens are beyond our imagining. Maybe a few arahants achieve recall of lives led on color-drenched ethereal worlds, where endless dark seas roll gently under an infinite array of stars suspended in the sweetest, coolest air.
The point is that while AI may someday alleviate suffering, if experience has taught us anything, it's that technology's capacity for destruction increases in proportion to its promises of salvation. Splitting the atom heralded the start of cheap, limitless energy. And nuclear war and environmental pollution.
There is no salvation that involves keeping these aggregates of form, feeling and consciousness perfectly safe forever.