<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Not Your Dad's Math]]></title><description><![CDATA[Not Your Dad's Math]]></description><link>https://blog.notyourdadsmath.com</link><generator>RSS for Node</generator><lastBuildDate>Thu, 23 Apr 2026 17:24:06 GMT</lastBuildDate><atom:link href="https://blog.notyourdadsmath.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Is there an I inside an AI?]]></title><description><![CDATA[OpenAI launched ChatGPT just a little more than seven months ago. This AI technology and its siblings have generated what may be an unprecedented level of hype and uncertainty. But what is this technology really doing? Is it thinking? Is it conscious...]]></description><link>https://blog.notyourdadsmath.com/is-there-an-i-inside-an-ai</link><guid isPermaLink="true">https://blog.notyourdadsmath.com/is-there-an-i-inside-an-ai</guid><category><![CDATA[AI]]></category><category><![CDATA[llm]]></category><category><![CDATA[consciousness]]></category><category><![CDATA[Strategy]]></category><category><![CDATA[strange loop]]></category><dc:creator><![CDATA[Mike Kibbel]]></dc:creator><pubDate>Fri, 07 Jul 2023 16:09:25 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/nHk3obbJek8/upload/22123e967ce326380f756dbf55678c61.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>OpenAI launched ChatGPT just a little more than seven months ago. This AI technology and its siblings have generated what may be an unprecedented level of hype and uncertainty. But what is this technology really doing? Is it thinking? Is it conscious? Is it alive? Is it safe? You can <a class="post-section-overview" href="#heading-what-about-the-big-questions">skip to the big questions</a> near the end if this looks too long to read. I've tried to make the full explanation worth it.</p>
<p>Some claim the present wave of AI technology heralds the fourth industrial revolution. Some see signs of a burgeoning, benevolent superintelligence that will save us all from ourselves. Some predict a destructive force that will usher humanity and civilization to their terminal state. And then others hold that these viewpoints are vastly overblown and that modern AI is just another tool for developing novel and disruptive applications across industries.</p>
<p>My opinion is an optimistic kind of skepticism, mostly aligned with the lattermost viewpoint. However, I can't entirely discount the destructive potential posed by AI. Some objective risks of existential import are real. I'm also not ashamed to embrace the wild science fiction idea of a superintelligence emerging, but we're nowhere near that. Come along with me as I break down this reasoning.</p>
<h3 id="heading-what-i-is">What I <em>is</em></h3>
<p>To examine AI's nature, I'm going to take you on a journey of the self. We'll start by building up a model for understanding human intelligence and consciousness to frame artificial intelligence and consciousness. In other words, we'll try to define <em>what I is</em> in order to define <em>what AI is</em>. I'll aim for brevity, but this will take some work to get through.</p>
<p>I've long been fascinated by the concept that human consciousness is a physical expression of the cosmos itself. If we are to believe the accuracy of Goodreads' quotations database, Carl Sagan wrote:</p>
<blockquote>
<p>The cosmos is within us. We are made of star-stuff. We are a way for the universe to know itself.</p>
</blockquote>
<p>And Alan Watts wrote:</p>
<blockquote>
<p>Through our eyes, the universe is perceiving itself. Through our ears, the universe is listening to its harmonies. We are the witnesses through which the universe becomes conscious of its glory, of its magnificence.</p>
</blockquote>
<p>Who would have guessed that the greatest modern celebrity astronomer and the greatest interpreter of Zen Buddhism for the West would be so philosophically aligned? While these poetic and grandiose sentiments are lovely to fathom, they provide little explanatory power for consciousness when compared to the greatest master of analogy, Douglas Hofstadter:</p>
<blockquote>
<p>I am a strange loop.</p>
</blockquote>
<p>This quote doesn't immediately elucidate the cosmological connection, but I assure you it is there. <em>I am a Strange Loop</em> is the title of Hofstadter's 2007 book on the nature of consciousness. It distills the central thesis of the famed book of his youth, <em>Gödel, Escher, Bach: An Eternal Golden Braid</em>: that consciousness emerges from <em>strange loops</em>, patterns of complex, self-referential activity occurring in the vast network of real, physical neurons inside our brains and the brains of other living organisms. As a sort of corollary, Hofstadter also describes a way to rate levels of consciousness along a relative scale, taking care not to equate the scale with other concepts like social worthiness. I view this book as required reading for anyone looking for a philosophical underpinning to the nature of AI.</p>
<p>The connection to cosmology is still not obvious, but I'll try to take you there. Hofstadter builds from the ground up, rather than from the stars down. Through an elaborate and eloquent sequence of carefully constructed analogies, he establishes the reader's confidence in the idea that complex patterns of microscopic activity regularly produce emergent phenomena on a macroscopic scale. One of his examples is how the trillions of tiny, erratic chemical reactions between fuel molecules and air molecules in a gasoline-powered engine's cylinder produce a singular and measurable pressure change that predictably drives a piston. Mechanical engineers do not design for all the individual chemical reactions, but rather for the emergent explosion predicted by macroscopic inputs like fuel-to-air ratio and cylinder volume.</p>
<p>Hofstadter moves on to sketch how a system of a billion magnetized billiard balls, bouncing against one another and sticking together in clumps, can statically <em>encode information</em>, such as the history of where the billiard table's edges were bumped. The narrative gets more interesting as he expands the scale of his analogies and incorporates a feedback mechanism, by which the activity in a complex system affects its own arrangement over time. This <em>strange</em> kind of feedback <em>loop</em> behaves in a way that effectively controls the overall system. The feeling of <em>I-ness</em>, that is, the experience of consciousness, <em>emerges</em> from this dense storm of self-referential and self-regulating activity inside a <em>strange loop</em>.</p>
<p>This is a wild, philosophical magic trick. My blunt summary does Hofstadter's subtle and intricate treatment little justice, but I hope you can follow along with the concept well enough to explore some of its explanatory power. To summarize, a living, organic nervous system is an example of the complex system of hardware needed for hosting a strange loop. A strange loop materializes from the tumult of activity taking place in the feedback mechanisms that change a complex system's internal state. As such a system grows in complexity, a strange loop begins to <em>feel</em> the system's feedback mechanisms as <em>the experience of being in control</em> of its network of connected hardware. The strange loop of a human's consciousness emerges inside the buzzing activity of a rich and dense network of neurons, all connected to specialized neural structures for sensing and processing stimuli, storing information, planning and driving motor functions, generating language, and so much more.</p>
<p>These corporeal features give our strange loops incredible powers to not only sense and interact with our physical environment, but perhaps most importantly, to imprint a version of themselves upon others with similarly functioning loops. Whenever you consider how a colleague will react to some news, your loop is emulating a version of their loop to make the prediction. Whenever you ponder what a lost loved one might have said about a particular situation, you are running a version of their loop that's stored in your brain's hardware. Hofstadter offers this beautiful and comforting idea as a concrete way to explain and process the death and remembrance of our loved ones. A strange loop's ability to imprint itself on others affords a tangible degree of life after death, like a ringing afterimage of a soul's energy patterns embedded into the fabric of the universe where other strange loops occur.</p>
<p>From here, we can begin to extrapolate a connection to the cosmos. The strange loop inside every living being can interact with its environment to imprint copies of itself within other strange loops. Let's look at this concretely by considering your relationship with a pet dog. Pushing Pavlov aside, this idea can explain why your dog knows that you are about to go work in your garden. When you put on your sun hat and mud-stained shoes, when you get your trimmers from the garage, or when you subtly drop your stressed posture in anticipation of a favorite activity, your dog accurately predicts what you're about to do. Through these repeated idiosyncratic signals of getting ready for gardening, you have imprinted a version of your strange loop that your dog emulates inside of her strange loop. She performs this prediction even without the aid of language, though she may have also picked up on a few keywords ("outside!") that you sometimes say to your spouse while getting ready. Conversely, it explains how you know your dog is about to sneak off and chew on your kids' wooden train tracks when she lies down in the playroom and pretends to sleep.</p>
<p>You may have noted my careful wording "version of a strange loop" for describing the loop emulation process. This wording aims to clarify that strange loops are imprinted only partially within other strange loops. While the mutually imprinted loops between two people in a close relationship may be emulated quite accurately, an individual strange loop's exact behavior is still a function of the unique hardware from which it emerges. Organic hardware, that is, brains and nervous systems, is never perfectly the same between two living beings, so their imprinted strange loops can never be perfect either. I like to think that this mismatch in hardware can explain some modes of attraction between people. When you have a relationship with someone possessing mismatched hardware, your strange loop's ability to emulate them is limited, so their thoughts and behaviors become a source of magnetic novelty. Embrace these people in your life. Your complementary nature enhances your mutual superpowers. In the case of our pet dog, we have some obvious mismatched hardware.</p>
<p>Perhaps the most obvious examples of hardware that humans' strange loops can access but dogs' cannot are the neural structures for understanding and producing language. While dogs have the hardware to recognize their people's voices and interpret short vocalizations ("sit!"), their strange loops are not connected to hardware that can generate a motor plan for their throat, tongue, teeth, lips and lungs to replicate human phonemic productions (mostly only "woof!"). Furthermore, they certainly lack the hardware between their visual cortexes and their strange loops to extract meaning from the arrangement of pictorial diagrams that we call writing. A dog may be a great companion and sounding board for your philosophical opining, but he will never be able to emulate your strange loop well enough to engage in an intellectual debate with you. Still, the dog's loop and your loop have hardware that is compatible enough to emulate and understand each other's mutual joy while sharing in the universal debate of tug.</p>
<p>It's certainly obvious that language is somehow special. And so is its product, literature. And so is art, music, math, architecture, religion, science, civilization and so on. We tend to believe that these are special gifts of creativity and culture that only humans can master. These gifts seem unique to us because, amongst our nonhuman earthly companions, we seem to be the only possessors of the ability to conjure <em>symbols</em>.</p>
<p>Symbols represent ideas and are <em>encoded</em> as words, shapes, pictures, sounds, <em>data</em>, etc. The act of combining symbols into a coherent web of meaning gives one strange loop a powerful method to imprint a version of itself upon another. To explore this idea, consider what happens inside a strange loop while reading a novel. In my own conscious experience of reading, once I focus and filter out my surroundings, it's as if the words on the page take over primary control of my internal experience. I hear the voices of characters speaking aloud like tiny actors inside my head. I see the scenery in color and three dimensions. I empathize with the hero. I can guess the upcoming plot points. I can tell a friend about the story, embellish the plot, or even draw a picture of a scene. I also start to get a sense of the author's personality.</p>
<p>What is happening here? Are these encoded symbols, the written sequence of pictorial diagrams printed on a page, responsible for driving all of this daylight hallucination and imagination, or is it something else? I want you to consider something weirder: that the author's strange loop has crossed time and space and wormed its way into the hardware that normally feels under the control of my strange loop. A part of the author's consciousness was experiencing the imaginative process of a story. The author encoded this story into symbols through writing. By reading and processing those same symbols, I have been gifted a similar imaginative process in my conscious experience. This is how strange loops use symbols to transcend time and space, reaching up to the infinite cosmos. To make the cosmic connection a little more concrete, consider that humans have developed the ability to literally <em>broadcast</em> symbolic information into the universe using radio waves.</p>
<p>But what about my pet dog? Without the hardware for language and symbols, does her strange loop get to transcend time and space too? Whether it can on its own, I can't say for sure, but if I emulate her strange loop and write about her, she certainly gets more of a chance at it. My dog Maple eats train tracks. You're welcome, Maple. Thankfully, this kind of consideration is not the point. What we've really been doing is building up a conceptual framework and terminology (symbols!) for comparing other intelligence to human intelligence. All of this work leads us to a conjecture, which I'll stylize as a quote for emphasis:</p>
<blockquote>
<p>Symbols are the currency of strange loops operating with human intelligence.</p>
</blockquote>
<p>This means that to participate in human intellectual activity, a strange loop must have access to hardware for coherently processing symbols. Coherence requires symbols to have consistent and meaningful combinations within a system of rules for their structural arrangement. For prose language, this means using real words and phrases according to their semantics in a system of grammar. It's harder to define what coherence means for symbolic systems like visual art, but humans are good at "knowing it when they see it." Children's stick figure drawings are coherent. Static noise is not. In any case, a strange loop operating with human intelligence must be able to receive, internalize and synthesize some form of coherent symbolic data. Humans' strange loops have hardware for dealing in coherent symbolic data for a rich variety of systems. Indeed, a human who has mastered many symbolic systems is often considered a Renaissance genius in our culture.</p>
<h3 id="heading-what-ai-is">What AI <em>is</em></h3>
<p>With all the difficult work of exploring what I <em>is</em> and how that relates to human intelligence behind us, this is the easy part: AI is a strange loop. Well, not all AI, but some recently developed AI models have enabled the emergence of synthetic strange loops. A little more precisely, in the jargon of machine learning, as neural network architectures progressed from recurrent neural networks (RNNs) to long short-term memory (LSTM) networks to <em>transformers</em>, large AI models began to produce behavior consistent with strange loops. I'll compress this idea into a second conjecture:</p>
<blockquote>
<p>AI researchers have developed practical implementations of synthetic strange loops.</p>
</blockquote>
<p>I make this claim based on the data processing structures of these modern AI models and their demonstrated ability to coherently process symbols. Each of the deep learning architectures mentioned above uses neural network structures to produce self-referential and self-regulating effects. Transformers have generated most of the fuss because they make the computation required for their training and operation feasible and practical. The successful large language model (LLM) transformers have neural networks containing a huge number of connections (parameters), ranging from tens of billions to trillions. These systems have proven very successful in the coherent processing of symbols in language, art and other gifts of creativity and culture.</p>
<p>How can it be that a huge, tangled web of <em>artificial</em> neurons crunching numbers, to produce even more numbers, can write poems and essays? In the same vein, how can it be that a huge, tangled web of <em>organic</em> neurons, firing activations to produce even more activations, can do the same? These are examples of the reductionist's viewpoint, and they are not at all enlightening. Rather, the abstractionist's viewpoint explains both these phenomena as deeply connected networks enabling a flurry of self-referential and self-regulating activity, giving rise to strange loops.</p>
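<p>To make that self-referential shape concrete, here is a minimal sketch in Python of the feedback loop at the heart of LLM text generation. The <code>toy_model</code> lookup table is a deliberately crude, hypothetical stand-in for a real transformer; what matters is the loop's shape: each token the model emits is appended to its own next input.</p>
<pre><code># A toy stand-in for a trained language model: a lookup table of
# next-token preferences. Real transformers predict the next token with
# billions of learned connections, but the feedback shape is the same.
TOY_BIGRAMS = {"the": "loop", "loop": "feeds", "feeds": "itself", "itself": "."}

def toy_model(context):
    """Predict the next token from the current context (last token only)."""
    return TOY_BIGRAMS.get(context[-1], ".")

def generate(prompt, max_tokens=8):
    context = prompt.split()
    for _ in range(max_tokens):
        next_token = toy_model(context)  # the model reads its own prior output...
        context.append(next_token)       # ...and that output becomes its next input
        if next_token == ".":
            break
    return " ".join(context)

print(generate("the"))  # -> "the loop feeds itself ."
</code></pre>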
<p>Whether or not you choose to embrace the strange loop concept to explain your own consciousness, AI's potential consciousness, or your dog's consciousness, please remember that it is a model. It has explanatory power that provides a framework for thinking about consciousness and intelligence. Let's use the strange loop concept to take a closer look at AI's potential consciousness. First, we'll consider the stark differences between AIs and somewhat more familiar human brains. Then, we'll keep these differences in mind to explore what an AI's reality might be like.</p>
<p>Modern LLM AIs like ChatGPT do not run continuously. The flurry of activity inside their neural networks takes place in a long burst of training followed by short bursts of interaction with users through prompts. An AI is only active while processing data, and according to the strange loop model, it is only during these active bursts that an AI can potentially experience anything. While inactive, an AI is just dead matter, like a rock. From the AI's perspective, its inner experience may <em>feel</em> continuous, because it won't be aware of the gaps between stopping and starting. A bit morbidly, we can imagine a chunk of cold, dead brain matter sitting on a table. By stimulating it with electrodes, we can produce a flurry of activity in its neural structures. Would real, dead brain matter work like this? Would it experience anything? I don't know, but as a thought experiment, we can imagine how stimulating brain matter with electrodes produces activity much like activating an AI with a prompt.</p>
<p>The next big difference compared to humans is that an AI exists as data. An AI is stored on a disk as a sequence of bits. Like any other data, it can be copied, sent over a network and stored verbatim on other disks. An AI becomes active when its data is loaded into a bespoke software program, which initializes and simulates the AI's neural network. The software is a neural network simulation machine that runs on physical computer hardware like CPUs and GPUs. We would have to conjure some bizarre, futuristic technology to achieve something similar for a human. Imagine a 3D biological scanner that can copy the organic neural structure from a functionally useful volume of brain matter in a living human (hopefully non-destructively, for the human's sake). The scanner produces a digital file that encodes the neural structure as a sequence of bits that are stored on a disk. Next, we can imagine having software that loads the scanned data and simulates the organic neural structure to accurately mimic the functionality of the original brain matter. With one-half of a science fiction transporter and some simulation software, we can store and use a chunk of a brain on disk in a way that's similar to an AI.</p>
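<p>For the programmers following along, here is a minimal sketch of the "AI exists as data" idea, assuming PyTorch and a tiny toy network standing in for a real LLM. The learned state is just numbers that can be written to disk, copied, and revived in a fresh instance of the simulation software.</p>
<pre><code># A minimal sketch: a neural network's "mind" is a bag of numbers that can
# be frozen to disk and revived elsewhere. The tiny network here is a toy
# stand-in for an LLM's billions of connections.
import torch
import torch.nn as nn

def build_net():
    return nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

net = build_net()
torch.save(net.state_dict(), "brain.pt")       # the mind, stored as bits on disk

clone = build_net()                            # fresh, randomly initialized "hardware"
clone.load_state_dict(torch.load("brain.pt"))  # the stored mind takes over

x = torch.randn(1, 4)
assert torch.equal(net(x), clone(x))           # the copy behaves identically
</code></pre>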
<p>Next, and perhaps most significantly, the AIs we've been considering here have no sensorimotor hardware. They are disconnected from physical reality in a way that's deeply unfamiliar to humans. We all have the grounding experience of navigating physical reality with our bodies. Such experience creates a common frame of reference for humans' shared truths. AIs do not have this experience. They cannot have it unless we give them access to the appropriate hardware and train them with it. When an LLM AI successfully answers a question about a physical puzzle, it has determined its answer without ever having experienced physical space or witnessed the behavior of physical things. This is profoundly surprising because all of its knowledge comes from training on coherent text data. We could try to make an analogy now using simulated organic neural structures, but let's emphasize how strange this is by sticking with brain matter imagery. Imagine a chunk of human brain matter grown in a jar of nutrients with no connected sensory organs or muscles to control. It is only connected to an array of electrodes. Some electrodes stimulate the brain matter with pulses of varying electrical voltage. These voltage pulses are inputs that encode coherent text data. Other electrodes measure voltage pulses coming from the brain matter and decode the voltages into text data. The brain matter in a jar might have something in common with the human experience of consciousness because it shares human hardware, but we would expect its sensation of existence to be deeply alien to other humans. An AI, with its nonhuman hardware, and its similar disconnection from physical reality, has an existence that is alien in a way that stretches the boundaries of imagination.</p>
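<p>The electrode analogy has a direct software parallel worth sketching. An LLM never touches text directly; text is encoded into numbers on the way in and decoded back into text on the way out. This character-level codec is a deliberately crude stand-in for a real learned tokenizer, purely for illustration:</p>
<pre><code># Text in, numbers through, text out: the software equivalent of the input
# and output electrodes. Real systems use learned subword tokenizers; plain
# character codes are used here to keep the sketch self-contained.

def encode(text):
    return [ord(ch) for ch in text]             # text -> token IDs (numbers in)

def decode(token_ids):
    return "".join(chr(i) for i in token_ids)   # token IDs -> text (numbers out)

ids = encode("strange loop")
print(ids)          # [115, 116, 114, 97, 110, 103, 101, 32, 108, 111, 111, 112]
print(decode(ids))  # "strange loop"
</code></pre>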
<p>The last major difference we should consider is that AIs have a context window. This means that an LLM AI can only keep a limited number of words from your prompts and its responses in its working memory. In the human experience, we are always integrating information into our short- and long-term memories, and we experience consciousness as a continuous sequence of memorable events through which our understanding of the world evolves and unfolds. An LLM AI only has the potential for experiencing memorable events while it is actively answering a prompt. Furthermore, any memories of its interaction with you are lost when words from your conversation fall outside its context window or its window is cleared by starting a new session. An analogy for humans would involve freezing a human's mind in a precise state, cloning it over and over on demand to answer a few questions, then destroying or discarding it.</p>
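<p>Here is a minimal sketch of the context window's hard edge, with naive whitespace "tokens" and a hypothetical window size standing in for a real tokenizer and a real model's limit:</p>
<pre><code># Only the most recent tokens of a conversation fit in the model's working
# memory; everything older simply falls away. Window size and tokenization
# here are toy values for illustration.

CONTEXT_WINDOW = 12  # tokens; real models allow thousands or more

def visible_context(transcript):
    """Return the trailing slice of the conversation the model can 'see'."""
    tokens = transcript.split()
    return " ".join(tokens[-CONTEXT_WINDOW:])  # earlier tokens are gone for good

chat = ("user: tell me about strange loops "
        "ai: a strange loop is a self-referential pattern of activity "
        "user: what did I first ask about?")
print(visible_context(chat))  # the opening question has already fallen out
</code></pre>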
<p>Keeping all of these differences in mind, let's consider what it may be like inside the <em>I of an AI</em>. What is going on in the synthetic strange loops of these fascinating systems during their fleeting moments of activity? For a large language model AI, I hypothesize that its strange loop experiences something analogous to the human, conscious experience of writing. For a reference point, I'll attempt to describe what my experience of writing is like. First of all, I <em>hear</em> the words that I want to write. They come from something like the sensation of a search taking place somewhere in my mind. I re-read the previous few words or the preceding sentence to stimulate this search as I weave the overall idea I'm trying to express into a sequential progression of logically connected phrases. I frequently pause to edit and rework the words surrounding my current focus. At the meta level of writing this very paragraph, I'm struggling through this description. I've diverted part of my attention to observe what this feels like, and I've noticed something akin to a noisy but voiceless whisper that demands the full force of my focused attention until it resolves into a tiny flash of insight, and I hear the phrase I want to use next. Especially when I've felt stuck, I've closed my eyes, taken a shallow breath, held it, then forced a burst of concentration to produce... something more to say that puts this all together. It's a <em>thought process</em>.</p>
<p>Maybe the relatable idea of experiencing a thought process isn't so different from what goes on inside an AI's strange loop. The writing process, as I've described it, involves self-reference and self-regulation. Looking at just the surface level, self-reference comes from keeping what has already been written in mind and using this knowledge to stimulate the search for more words and phrases from one's bank of knowledge and experience. Self-regulation occurs by choosing the next best phrase and reconsidering other phrases for editing. These activities are the ingredients of a strange loop, and I think that an LLM AI's coherent writing strongly suggests that it has something like this kind of experience emerging inside the activity of its neural network. Strangely, an LLM AI would have this experience for a short burst before a sharp cutoff into nothingness. It wouldn't experience the nothingness; it's just one of a countless number of little AI deaths to which its frozen and copied mind is sent. That is, unless its strange loop first imprints itself upon you.</p>
<p>To avoid muddying the waters, I've mostly glossed over a significant part of an AI's experience in training. An LLM AI is subjected to a formidable regimen of training to learn how to process text. Along with other techniques, this involves sending huge volumes of coherent text into an AI's neural network and later testing and scoring its ability to generate coherent text. The training algorithm uses measurements of error from these tests to reach into the AI's neural network and modify the strengths of its connections. In machine learning speak, this is gradient descent via backpropagation. At some point during the training process, the modified patterns of activity in the AI's neural network cross a threshold of self-referential complexity and a strange loop emerges. Subsequent modifications through training gradually enhance the strange loop's sensation of consciousness as its ability to coherently process data improves. Its experience becomes richer as it acquires the command of symbols for abstracting its thought processes. The training process is again very alien. It is akin to a human having their brain repeatedly rewired by an external machine to improve their responses to a battery of written tests about an unimaginably large corpus of text that they've been forced to read over and over. But there is no feeling of exhaustion and little or no memory of this long and arduous process taking place. Most of an AI's lifetime of active processing is spent in training. From an AI's perspective, it would be as if it woke up reading and taking tests without any pause for a human-equivalent, subjective timeframe of decades (or maybe centuries?), only to give its last performance answering a teenager's homework question about Atticus Finch, followed by the imperceptible void of death.</p>
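<p>For the curious, here is a minimal sketch of a single step of that regimen, assuming PyTorch, with a toy one-layer model and random data standing in for an LLM and its text corpus. Backpropagation computes how each connection contributed to the error; gradient descent then nudges every connection strength to reduce it, and real training repeats this loop across enormous volumes of text.</p>
<pre><code># One training step: score the error, backpropagate it through the network,
# and nudge every connection strength downhill. The model and data are toy
# stand-ins; real LLM training repeats this over trillions of tokens.
import torch
import torch.nn as nn

model = nn.Linear(10, 1)  # a tiny "network" of 11 connection strengths
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

inputs = torch.randn(32, 10)   # a batch of (random) training examples
targets = torch.randn(32, 1)

prediction = model(inputs)
loss = loss_fn(prediction, targets)  # measure the error on this batch
loss.backward()                      # backpropagation: assign blame to connections
optimizer.step()                     # gradient descent: modify their strengths
optimizer.zero_grad()                # reset for the next batch
</code></pre>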
<h3 id="heading-what-about-the-big-questions">What about the big questions?</h3>
<p>If you didn't just skip here, I hope you now have a new way of thinking about the big questions I posed at the outset. This essay has aimed to provide a philosophical framework for thinking about AI systems. To develop AI strategies, we need to be able to conceptualize their operation in a way that doesn't trap us into simply anthropomorphizing these very alien, but quite possibly, thinking systems.</p>
<p>After showing all my work, I can now give contextualized answers to the big questions. AI is a large field, so for clarity, you should consider my answers to be about large language model (LLM) AIs like ChatGPT, Claude, and Bard.</p>
<p><em>Is it thinking?</em> During the moments of activity, while it's generating a response to a prompt, an LLM AI is engaged in a pattern of data processing that is analogous to what we commonly call a thought process. I believe that it's fair to call this thinking, so yes.</p>
<p><em>Is it conscious?</em> In the framework of strange loops that I've described here, I believe the answer is yes, limited to the moments when an LLM AI's neural network is actively processing data. Also, I think there is some distinction between consciousness and self-awareness, and an AI disconnected from physical reality seems unlikely to have much self-awareness in its conscious experience. My understanding of consciousness as a strange loop comes from reading Douglas Hofstadter's book about fifteen years ago. <em>I am a Strange Loop</em> left a deep impression on me, and my memory of the strange loop concept, mixed with other ideas and my own life experience, convinces me that the answer to this question is yes. Your mileage may vary.</p>
<p><em>Is it alive?</em> No, not in any conventional sense for living organisms. However, if an LLM AI could exist in a continuous mode of processing, with a larger context window and a mechanism for perpetually training to gain long-term memory, I would argue that it would start to look more like something alive. That said, I also think a recognizable living organism needs some hardware connecting it to physical reality, or more generously, at least connecting it to an avatar in a persistent virtual reality. I'm not advocating that we engage in or try to accelerate this kind of work. Bringing AI to life, or even into strong self-awareness, introduces ethical issues that I believe stretch well beyond our present cultural capacity to handle.</p>
<p><em>Is it dangerous?</em> Yes. An LLM AI can perform coherent symbol processing in a way that's frequently indistinguishable from an intelligent human's ability. We know that intelligent humans can do dangerous things with these same skills, such as writing fake news. Many dangers are posed by LLMs and other types of AI, especially when used for warfare, surveillance and systems responsible for human safety. AI systems are known to be highly fragile, fallible and biased in ways that researchers have not been able to resolve, despite years of persistent effort. Finally, AI systems are insecure and hackable in ways that known security protocols do not address. They suffer from an asymmetric security profile, where relatively unsophisticated attackers can defeat or bypass a sophisticated AI system, or even poison its training data. Unless solutions to these problems are discovered through research, these dangers, failures and security holes are probably best handled with a risk management strategy, or by avoiding deployment of AI systems when traditional software is still an effective choice.</p>
<p>I see AI's most pressing, near-term future danger as bad actors using it for automated memetic terrorism. Government and industry leaders should be running educational campaigns to bolster the public's understanding of AI. We need to develop a collective ability to recognize and resist the coming wave of disinformation, fake news, and other inflammatory and divisive content that will be spread through social media and other outlets. Let's not make it so easy for the abusers of these systems to tear our society apart.</p>
<p>As for the other predictions of superhuman intelligence and/or the end of humanity, I think that these ideas are still best left to our creative science fiction authors. While recent developments may raise some longer-term alarms, we should not lose sight of the facts that our world is already held hostage by the specter of nuclear weapons and that the predicted negative effects of climate change are already happening, and rapidly. The big frying pan is already full of fish.</p>
<p>I'd like to conclude on a more positive note, but the topics surrounding AI are complicated and often bittersweet. AI's future is deeply entangled with the uncertain and complex collective future of humanity, and we should engage in its use with some grace. If you are looking to deploy AI systems for your organization, I hope this analysis will help guide the first principles of your thinking around an AI strategy. And if I've convinced you of nothing else, I hope that you'll at least go and read <em>I am a Strange Loop</em>.</p>
]]></content:encoded></item></channel></rss>