Artificial Intelligence and the Philosophy of Mind
Introduction
I’ve always been interested in games. When I was three or four years old, I was fascinated by paper-and-pencil games like Dots and Boxes, Battleship, and Tic-Tac-Toe. My father only liked chess and cards, and children’s games frustrated him. He wasn’t good at humoring a child.
I was born in the early 80s. My father bought an electronic chess set at Radio Shack called a Chess Challenger. One day, I drew the Tic-Tac-Toe grid on a piece of paper and approached him as he sat in his recliner doing a crossword puzzle, the chess computer on a table next to him.
He looked cross and said, “Son, let me show you something.” From there, he ruined Tic-Tac-Toe by showing me the solution. If you go first, go in the middle. If you go second, go in the corner. He demonstrated that the game was a forced draw with perfect play. He took the Chess Challenger off his table and handed it to me with a book of chess openings, and said, “You learned how to solve Tic-Tac-Toe. Solve this and get back to me.”
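That claim, that Tic-Tac-Toe is a forced draw, can in fact be checked by machine. Below is a minimal minimax sketch in Python (all names are my own, for illustration) that searches every line of play and confirms that perfect play from both sides ends in a draw:

```python
from functools import lru_cache

# The eight winning lines on a 3x3 board, indexed 0..8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for i, j, k in LINES:
        if board[i] != ' ' and board[i] == board[j] == board[k]:
            return board[i]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Value of the position with `player` to move:
    +1 means X wins, -1 means O wins, 0 means draw (all with perfect play)."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if ' ' not in board:
        return 0  # board full, no winner: draw
    nxt = 'O' if player == 'X' else 'X'
    scores = [minimax(board[:i] + player + board[i + 1:], nxt)
              for i in range(9) if board[i] == ' ']
    return max(scores) if player == 'X' else min(scores)

print(minimax(' ' * 9, 'X'))  # 0: with perfect play, Tic-Tac-Toe is a draw
```

The whole game tree fits comfortably in memory, which is exactly what it means for a game to be "solved."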
This experience ruined a fun pastime I had as a child, but it taught me an interesting concept. Games have solutions. Being a very straightforward thinker and good with patterns, I took my dad’s advice literally. I studied chess openings and played the computer repeatedly, looking for patterns to solve the game. I looked for the solution in vain for years. If chess had a solution, even the computer didn’t know it, because I grew to a point where I could beat it regularly (it wasn’t that good).
Besides games, computers became another lifelong interest of mine. My dad loved teaching through object lessons. He sold banking equipment, and one day he brought home an antique adding machine. It was 100 years old or more. He told me it was a type of computer and challenged me to take it apart and put it back together. I got a lot of it apart, but soon realized he had given me another impossible task, like solving chess. This thing was complicated! I asked, “Dad, how is this a computer?” He said, “Son, it does math quickly. That’s all a computer does.”
Narrow
In 1996, the world’s best chess player, Garry Kasparov, faced IBM’s Deep Blue in a chess match. Deep Blue was an AI. Dad told me it couldn’t beat Kasparov. “A computer can’t think. It has no mind. It has no creativity.” He was right. Kasparov beat it, much as eight-year-old me had beaten the Chess Challenger. That made sense. A computer can’t beat a human being, especially not the greatest chess mind in the world. You might as well put an adding machine up against him. It’s the same thing.
In 1997, Kasparov went down to Deep Blue like John Henry to the steam-powered drill. In the media, it was machine over mind; humanity had reached its peak, and it was all downhill from there. In reality, Deep Blue won by brute force, evaluating roughly 200 million positions per second, something no human can do. It played at around 2800 Elo, comparable to Kasparov. Today, you can run a chess engine called Stockfish on your phone that would crush the best chess players alive. Deep Blue is obsolete, and humans don’t stand a chance.
Stockfish is an ANI, or artificial narrow intelligence. It can play chess with superhuman capacity, but that’s all it can do. Another example of ANI is Google Maps. It can give you directions, but that’s all it can do. That’s why it’s “narrow”: it can do one thing. Playing chess well is a novelty, and getting you to your destination is helpful; these AIs have been around for many years without much hype or anxiety. Recently, another form of AI, the LLM (large language model), has come to market. LLMs can mimic human conversation quite well, but they are still ANI.
History
Modern LLMs use neural networks, mathematical models invented in the 1940s. As math concepts, they are great, but neural networks have faced many practical setbacks. The main issues were their inefficiency and the high cost of implementation. They became useful once computer hardware and power grids were powerful enough to run them (though they remain extremely costly).
The term “neural” trips people up. It conjures up biology, but in my opinion, it’s a misnomer. Neurons are cells; neural networks are a collection of equations.
The neurons inside a tapeworm are vastly more complex than neurons in computer code, and human neurons are vastly more complex than those of a tapeworm. In fact, we still don’t even really understand how neurons in the human brain work.
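To make the contrast concrete, here is everything a “neuron” in a neural network amounts to, sketched in Python (the names are mine, for illustration only): a weighted sum fed through a squashing function.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neuron': multiply each input by a weight, add them
    up with a bias, and squash the result into (0, 1) with the logistic
    sigmoid. That is the whole 'cell': arithmetic, no biology."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

print(neuron([0.5, 0.3], [0.8, -0.2], 0.1))
```

A neural network is millions or billions of these equations chained together; the “learning” is just adjusting the weight numbers.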
How does a mathematical function, less sophisticated than a tapeworm neuron, speak to us? When it comes down to it, a trained language model is a massive list of numbers. It has no thought, but it’s pretty good at predicting what to say, because human language follows a predictable pattern. Zipf’s Law describes this pattern: in any large body of text, a word’s frequency is roughly inversely proportional to its rank. In other words, language is predictable, not random. Why that should be so is a bit of a mystery.
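As an illustration of the kind of regularity Zipf’s Law describes, the toy Python sketch below (all names are my own) ranks words by frequency and compares each count to the Zipf prediction that the word of rank r occurs about 1/r as often as the most common word. On a toy text the fit is rough; on a large corpus it is strikingly good.

```python
from collections import Counter

def zipf_table(text, top_n=5):
    """Rank words by frequency and compare each observed count to
    Zipf's prediction: count at rank r is about (count at rank 1) / r."""
    counts = Counter(text.lower().split())
    ranked = counts.most_common(top_n)
    top = ranked[0][1]  # count of the most frequent word
    return [(rank, word, count, round(top / rank, 1))
            for rank, (word, count) in enumerate(ranked, start=1)]

sample = "the the the the the the of of of and and to"
for row in zipf_table(sample, 4):
    print(row)  # (rank, word, observed count, Zipf-predicted count)
```

This statistical regularity is what a language model exploits: it never understands a sentence, but it can rank what is likely to come next.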
Deep Blue defeated a grandmaster, even though prior computers had failed, because humans developed chess theory and put it into a machine. Humans mathematized chess. They put it in the only language a computer speaks. The same principle applies to language, although humans merely discovered the mathematical pattern in language. Without a predictable pattern governing our languages, LLMs would still likely be prohibitively expensive or slow.
Intelligence
We allow tech billionaires to use words in strange ways. A neural network piggybacks off the term neuron. Likewise, AI is artificial, but is it intelligent? Is AI in the same category as human intelligence, or is it in a category entirely its own?
The word intelligence comes from the Latin intelligentia, derived from intelligere, which means literally “to understand.” What is the chief concern of human understanding? Can we say that our intelligence is mainly concerned with giving directions, doing math, playing games, writing, or creating art and music?
Indeed, these are capacities of our understanding or creativity, but they are not the sum of them. Romans 1 answers:
18) For the wrath of God is revealed from heaven against all ungodliness and unrighteousness of men, who suppress the truth in unrighteousness,
19) because what may be known of God is manifest in them, for God has shown it to them.
20) For since the creation of the world His invisible attributes are clearly seen, being understood by the things that are made, even His eternal power and Godhead, so that they are without excuse,
21) because, although they knew God, they did not glorify Him as God, nor were thankful, but became futile in their thoughts, and their foolish hearts were darkened.
-Romans 1:18-21 (NKJV)
The chief concerns of human intellect are discerning reality, grappling with the truth, understanding and creating love, and, above all, knowing God. It’s not a mathematical model. It’s not based on statistics. God built our intelligence on our conscious, subjective experience, otherwise known as qualia. It’s the spark that makes each person unique, with their own conscience, will, intuition, and thoughts.
Humans don’t require language to understand. Our understanding goes much deeper than that.
Hype
I hope this is all interesting information, but what is its relevance? Personally, when I think of the term artificial intelligence, I don’t think of getting directions online, playing chess, generating a report, or having a computer create a (poorly written and generic) story for me. I think of HAL from 2001: A Space Odyssey or Commander Data from Star Trek. I believe that’s what most people have in mind. We think of something with broad general knowledge, morality, and decision-making. These qualities fall within the realm of AGI or ASI, also known as artificial general intelligence or artificial superintelligence.
AGI is an AI equal to a human mind, and ASI is an AI with intelligence beyond us. This type of AI does not exist, and we are no closer to creating it than we were in the 1940s. Nevertheless, headlines in major outlets have given the impression that an AGI breakthrough is just around the corner (or at least that ChatGPT has smart people scratching their heads about whether it’s already AGI). Examples include:
Time Magazine: The Race for Artificial General Intelligence Poses New Risks to an Unstable World
WIRED: Some Glimpse AGI in ChatGPT. Others Call It a Mirage
Yahoo: ‘Feel the AGI’ – OpenAI launches ChatGPT Agent
We understand that news outlets need clicks and tech companies need investors, but many really believe these headlines. Elon Musk is one of the most well-known public figures in the world, and he has been one of AGI’s biggest hype men.
Leaving aside the fact that many of his predictions have been quite wrong, is there something else behind all this? It doesn’t feel like everyone is simply lying for money or views. Elon Musk is not stupid, and other genuinely smart people believe in AGI. Has there really been some fundamental change?
Emergentism
There has been a fundamental breakthrough, but it’s not the one that the general public thinks it is. I’m a computer programmer, and the software that runs LLMs isn’t unfathomably complex like a mind. The mathematical model is impressive and valuable, but it’s way less impressive than Einstein’s General Relativity. The breakthrough that enabled commercial LLM products was GPU technology. It was a hardware breakthrough that finally allowed us to run LLM software at scale with real-time performance. Before that, you could ask a computer a question, but it might take 10 or 20 minutes to answer it.
Think about this. Is human intelligence in the “software” (DNA sequences, encoded memories, electrical activity) or the “hardware” of our physical brain structure? The truth is, the brain does not have separable hardware and software layers like a computer, because our brain is not a computer at all, and it’s impossible to separate its structure from the DNA that describes it or the processes that encode memories. The brain is unfathomably complex. It’s by far the most complicated object we have ever seen, and we don’t know how it works.
Can humans invent a machine that thinks? Do you need to understand thought to create it? It’s an interesting philosophical question, and the answer depends on what you believe about the philosophy of mind. The universe has fundamental forces: electromagnetism, gravity, and the strong and weak nuclear forces. Thought may be a force of this kind. If you pass an electrical current through a wire, it’s commonly thought that the wire contains the field. That’s not true. The current creates an electromagnetic field around the wire; the wire merely channels it. Our brain may be like a wire channeling a basic force of consciousness.
Christians believe God created all mind and thought, and He is the source of mind. God breathed His life into us and made us “a living soul.” Biblically, the brain does not generate the soul or consciousness in and of itself. The spark from God generates it.
Modern tech investors and entrepreneurs like Elon Musk are generally not religious, and you can throw most media figures in with them. For this group, the prevailing philosophy of mind is emergentism: life and thought emerge through purely material means. The brain generates consciousness because it is complex. Once a system reaches sufficient complexity, it “levels up”: it can generate its own qualia and become capable of genuine subjective experience.
Believe it or not, that is the basic idea driving AGI hype. As these machines become more complex, many believe they will cross the threshold required to generate intelligence or thought; thought will then emerge from the machine without human beings having to design it directly. I believe this is a ridiculous mystical notion, but I will leave you to ponder its plausibility.
Conclusion
It stands to reason that inventing consciousness requires fathoming what it is and deliberately designing it. The Bible teaches that God did just that. We are nowhere close to that level of understanding and never will be. Despite all the scientific and technological breakthroughs over the past 100 years, we are no closer to explaining how our own minds work.
On the part of Christians, many fear that humanity is close to generating its own fallen or demonic form of consciousness, which seems ludicrous. Generating consciousness, whatever it is, is the sole domain of the Creator, and there is no evidence that He has delegated this ability in any form to mankind, angels, or anyone else. Not even the most powerful angels or demons possess the super-intellect of God or His creative abilities.
AI is a tool that will improve incrementally. Still, it will never experience a qualitative change from math equation to thought, no matter how complex its hardware or software systems become. Even if you buy into emergentism as a philosophy of mind, we are no closer to bridging the gap in complexity between a system we can explain (a server with a GPU running an LLM) and a system we can’t explain (the human brain). It fails on its own terms.
What is apparent is that AI wishful thinking is a step toward transhumanism and the singularity: a hope for a secular form of eternal life in which we can digitize our thoughts and upload them into a new body to live forever. Eternal life is a feature of Christianity that’s lacking in secular humanism, and many naturalists look to technological advancement and AI to replicate it. As with many predictive frameworks of the past 50 years, we will watch milestone after milestone go by without a hint of progress toward our so-called “inevitable” destination. In that scenario, it is crucial for Christians to avoid falling for the hype and to stick to the Word instead.
John Fairfull is a singer-songwriter and former lead guitarist and singer for Five Minute Plan, a Central Florida indie rock band. John also serves as the Music and Media Director at Jordan Baptist Church in Sanford, Florida, and is owner of Simplicity.Online, a software company in Orange City, Florida. He is also the author of The Christmas Mushroom.
Image Credit: Head of a Bishop by Gaetano Gandolfi (Italian, ca. 1770). The Metropolitan Museum of Art. 2010.117.