MLG 016 Consciousness
May 21, 2017

Explores the controversial topic of artificial consciousness, discussing the potential for AI to achieve consciousness and the implications of such a development. Covers definitions and components of consciousness, the singularity, and various theories about whether AI can be conscious, including emergence, functionalism, and biological plausibility.


Resources
TGC Information Theory
Wait But Why - The AI Revolution (2-Part Series)
Being You: A New Science of Consciousness
TGC Philosophy of Mind: Brains, Consciousness, and Thinking Machines
TGC Mind-Body Philosophy


Show Notes

Inspiration in AI Development

Early inspirations for AI development centered on solving challenging problems, but recent advancements like self-driving cars and automated scientific discoveries attract professionals due to the potential of economic automation and the career opportunities it creates.

The Singularity

The singularity suggests exponential technological growth leading to a point where AI and robotics automate all technology development, potentially achieving 'seed AI' capable of self-improvement and escaping human intervention.

Defining Consciousness

Consciousness is distinguished from intelligence by awareness. Perception, self-identity, learning, memory, and awareness might all contribute to consciousness, but awareness or subjective experience (qualia) is viewed as the core component.

Hard vs. Soft Problems of Consciousness

The soft problems are those we can study through science, like which brain regions are associated with which functions. The hard problem is explaining how subjective experience arises from physical processes in the brain.

Theories and Debates

  • Emergence: Consciousness as an emergent property of intelligence.
  • Computational Theory of Mind (CTM): Any computing device could exhibit consciousness as it processes information.
  • Biological Plausibility vs. Functionalism: Whether AI must biologically resemble brains or just functionally replicate brain output.

The Future of Artificial Consciousness

Opinions vary widely on whether AI can achieve consciousness, depending on theories around biological plausibility and arguments like John Searle's Chinese Room. The matter of consciousness remains deeply philosophical, touching on human identity itself. The expansion of machine learning and AI might be humanity's next evolutionary step, potentially culminating in the creation of conscious entities.


Transcript
[00:01:05] This is episode 16, Consciousness. This is gonna be my favorite episode to make. This is a topic near and dear to my heart: the concept of artificial consciousness. Can and will artificial intelligence, when done right, be conscious? An extremely controversial topic, to be sure, and one which will require a definition of consciousness. [00:01:33] We'll get into that in a bit, but before we get into consciousness, let's talk about the inspirations that drive people into the field of artificial intelligence. We've talked about this before in a very early episode, the various inspirations bringing people into the field. In the past, people who were developing artificial intelligence, or probably more specifically machine learning, got into the space because they just wanted to solve interesting and challenging problems. [00:01:58] Maybe they had a statistics or theoretical computer science PhD, and they quickly found out what were the best applications of their skills to make big money and to solve challenging problems. These are people like the quants who use machine learning on Wall Street for high-frequency trading and the like. [00:02:15] Things have changed recently. The graduating class of 2017's machine learning engineers are brought to the field not so much to solve new and interesting problems in their own right, but by way of inspiration. We see major things happening in the world right now: self-driving cars; music, poetry, and art; synthesizing things that a prior generation considered impossible by way of artificial intelligence. [00:02:38] And we're seeing it happen day by day, new, incredible breakthroughs on a monthly basis, usually by Google, also by Facebook, Baidu, IBM, OpenAI, and all these other companies. Big stuff is happening. So one of the main inspirational drivers bringing people into the field of machine learning is that they're seeing this incredible amount of economic automation that we're achieving by way of this technology. [00:03:05] Self-driving cars, even scientific discoveries, drug discovery and creation, medical robots, x-ray analysis, legal proceedings done by machine learning, like robot judges, all this stuff. So people are seeing that the new wave of jobs is by way of automation, and the smart move as a professional is to be at the top: an automation engineer. [00:03:27] One, they don't wanna get left in the dust, so they wanna make the smart professional move. But two, these problems that we're solving by way of AI are incredibly inspiring and powerful things. So that's the first inspiration, in my mind, that's bringing people into the field: trying to stay on top of the economy and being inspired by the things that are being created. [00:03:46] Inspiration number two: the singularity. Many have come to this podcast after being inspired by media: books like The Singularity Is Near and How to Create a Mind by Ray Kurzweil, TV shows like Westworld and Black Mirror, and just the overall zeitgeist of this concept called the singularity. The singularity is this idea that technology is progressing at an exponential, or at least a polynomial, pace, not the linear pace you might expect of technological growth. [00:04:20] The idea was proposed long ago, but championed by a guy named Ray Kurzweil, who wrote a seminal book called The Singularity Is Near. Now, my 15% of expert machine learning engineer or PhD listeners are rolling their eyes and groaning at the mere mention of the name Ray Kurzweil.
And my 75% of bright-eyed and bushy-tailed inspired listeners are sitting up in their chairs [00:04:47] in anticipation. For both of you, I'm actually not going to take a stance one way or another on the credibility of the singularity. I'm just going to describe it. My purpose in this podcast episode is consciousness. So the idea of the singularity is that if we look at the rate at which technology has progressed in human history, it appears to fall on something of an exponential or maybe polynomial graph rather than a line. [00:05:12] We've got an upward-facing slope, all the way back to tools, then to the agricultural revolution, then the industrial revolution, then the information age, which we presently live in, and possibly what comes next being the intelligence explosion by way of artificial intelligence. Each one of those technological revolutions was more substantial than the prior, and closer to the prior than the prior was to its prior, thus looking like an exponential or at least polynomial graph. [00:05:37] The idea goes that at some point on an exponential graph, there appears to be a hard elbow, a point at which the graph sort of rockets into space, trying infinitely to reach what's called an asymptote. A lot of the graph prior to that point appears to be increasing linearly at a very normal pace, but at some point there's an elbow, a hard shoot up into the sky, and we call this the singularity. [00:06:09] Well, if our technology appears to show the trend of a polynomial or exponential graph, and we haven't hit that elbow yet, but we do appear to be increasing at a rapid clip, what might cause such an elbow? Well, artificial intelligence. Now here's the thinking. AI is the automation of our technology, period. [00:06:28] All of our technology can be automated, potentially, by artificial intelligence, which is simply defined as automating any mental task. That's what artificial intelligence is. And of course, there's robotics for the physical side. So according to the singularity, at some point we may not even be participants in the development of technology. [00:06:45] It will all be automated. Well, what if an AI could not only handle automation of a particular mental and physical task, mental by way of AI and physical by way of robotics, but also influence its successor by improving upon its own algorithm? It can either update its own algorithm, improving the algorithm that we gave it in order to achieve its automation, or improve the next generation of such algorithms so that they're better at whatever their task may be. [00:07:17] We have a self-improving machine learning algorithm, a self-improving AI, and that's what's called seed AI: AI that can improve itself. Once that happens, we don't even know what comes next. The sky's the limit. So that's a possible candidate for what kicks off the singularity. Like I said, the singularity is highly controversial. [00:07:36] It's debatable. It's lots of fun, though. You can read all about it in Ray Kurzweil's book The Singularity Is Near, and most of the episodes of the Black Mirror TV series are based on concepts of the singularity. So that's another inspirational driver bringing people into the field of artificial intelligence. [00:07:52] And by the way, I keep talking about AI, but this is a podcast series about machine learning.
Now, my listeners remember from a prior episode that I made the distinction between AI and machine learning, but I'm going to make this distinction one more time, because we may have new listeners here specifically for this episode on consciousness. [00:08:09] So let me define AI, which I already said is automatable mental processing of any sort; anything that is an automated mental process is theoretically considered artificial intelligence. Then there's this concept of artificial general intelligence: an artificially intelligent agent which can perform all mental tasks, or at least all mental tasks that humans are capable of, and to the level which humans are capable of performing those tasks. [00:08:34] In other words, an AGI agent, artificial general intelligence, is an agent which can do everything humans can do mentally, at least as well as humans, if not better. When it's better, we call it super AI. When it's as good as humans, we call it AGI. And when it's less than humans, we call it weak AI. When it's a specific task in artificial intelligence, such as image recognition, speech synthesis, et cetera, a specific artificial intelligence task not intended to be applied to everything across the board, we call that weak AI. [00:09:05] And if we can apply it across the board, we call it strong AI or AGI. Now, where does machine learning fit into this picture? Machine learning is a subfield within AI; there are many fields within AI. There's robotics, there's perception, there's planning, knowledge representation, all these things. And I'm focusing on machine learning for two reasons. [00:09:26] Machine learning is the most accessible field within AI. If you want to crack into the industry professionally, there are a lot of machine learning jobs opening up all over the place. It's becoming wildly popular. It has a lot of professional applications that allow you to crack into the industry: [00:09:42] things like fraud detection, image recognition, speech synthesis. You're starting to see a lot of chatbots popping up left and right, any sort of actionable insights based on any data collected by companies. So machine learning is the accessible part of AI. It's the way you crack into AI professionally. And also, it is [00:10:01] increasingly a core component, if not the core component, of AI. Lots of dedicated spaces within AI are quickly becoming subsumed, or at least majorly contributed to, by learning. We're finding that learning is more than just an aspect of intelligence. It may be one of the most important, if not the most important, aspects of intelligence. [00:10:25] So if you're interested in AI and you wanna get involved, start with this podcast series. It's very introductory. It starts from the very beginning and works its way up. Okay, so that's two inspirations of three driving people into the space of artificial intelligence by way of machine learning. The first being economic automation: [00:10:44] it is in their best interest professionally to be at the top when this major economic revolution lands down hard in the near future. I will align myself with this position: I do think being an automation engineer is a wise career choice. The second inspiration, a little bit more controversial, being the singularity. [00:11:03] If you believe in the singularity, then you can be a participant in this exponential explosion of technological advancement
by being a machine learning engineer. I'm not going to state whether or not I believe in the singularity, because I want to save my credibility destruction for the next inspiration. [00:11:22] The third inspiration driving people into the space of artificial intelligence, and that is consciousness: artificial consciousness. Can robots be conscious? This is a very important question, because if robots can be conscious, if we can say that they are conscious, then we say they have a mind. If they have a mind, they have a soul. [00:11:44] All three things are synonymous: consciousness, soul, and mind. And that is a very, very important assertion indeed, one that has major implications for religion, for cognitive science, and everything. If we can be convinced that a robot is conscious, then my friends, everything changes. Life as we know it changes. [00:12:09] This is indeed the inspirational driving force that led me to studying machine learning. It's what brought me into the field. Now, in this podcast episode, I'm going to do my best to not give an opinion, and to not align myself with any theories that are presented in the space of consciousness. I just want to present a lay of the land, and I want you to explore making your own decision. [00:12:31] You're not gonna be able to form an opinion just from this podcast episode, but with the resources that I give you, you'll be able to continue exploring what I present in this episode and eventually come to your own conclusion. One thing I do want to say is, a lot of times when this topic is brought up in the space, these curmudgeonly jerks come around and they say: we don't know anything about consciousness. [00:12:54] We can't even talk about it. Nobody in the space can even agree on a definition. And I hate it when that happens. That's just not true. Honestly, when people come at me with that retort, it shows that they lack understanding about consciousness, not that there is no understanding about consciousness. [00:13:10] So you may wanna be careful bringing up the topic, or some of the stuff you learn in this episode, because you will find people get really upset, really angry. They just wanna shoot down the conversation and say, stop talking about it. Shut up. Shut up. They close their ears and they just do not want to talk about consciousness. [00:13:26] There's two reasons that you'll see people respond this way. One is that they believe that the topic of consciousness, what we'll discover in a bit is called the hard problem of consciousness, is not something you can even talk about, because it is definitively subjective. Another reason people get upset is that they think that we as machine learning engineers, artificial intelligence people, et cetera, are not neuroscientists, we're not neurophysiologists, et cetera, and therefore we cannot participate in such a controversial scientific topic. [00:13:53] You'll see that this is not true. Artificial intelligence is a subspace of cognitive science, and machine learning is a subspace of artificial intelligence, and therefore we are indeed participants at the table of the conversation of consciousness. We are board members of this company. Additionally, this stuff is inspirational. [00:14:11] Even if we were to call the conversations of consciousness science fiction or pseudoscience, which I believe they're not, I think they're legitimate conversations, that doesn't affect my desire to explore the topic at all.
It's science fiction and pseudoscience that inspires some of the greatest minds, like Elon Musk, to explore and achieve the impossible. [00:14:31] We as humans are capable of achieving so much greatness if we believe in the impossible and we apply ourselves, and a lot of times that inspiration comes from these science-fictiony, pseudoscience topics like space exploration, sending people to Mars, and yes, artificial consciousness. So don't listen to these naysayers. [00:14:52] For one, we can prove them wrong by achieving the impossible. And for two, they're simply wrong about statements like: nobody knows anything about consciousness; nobody in the space can agree upon a definition of the thing; so how could we ever hope to talk about it? That is wrong, that is pure wrong. [00:15:09] There is a lot about consciousness, about its definition, that is disagreed upon from one expert to the next. But there is a lot that is agreed upon. There's a lot of agreement in cognitive science. So let's talk about some definitions. Let's talk about cognitive science first. Cognitive science, CogSci, is an umbrella term for all of the sort of brain sciences. [00:15:33] That includes psychology, neurophysiology, neuroscience, computational neuroscience, neurobiology. So those are all the hard sciences about the brain. It also includes philosophy, at least the branch of philosophy concerned with the mind. Now, philosophy doesn't quite concern itself with the brain as such, because that stuff is covered by the brain sciences, but there are branches of philosophy which concern themselves with the mind. And as I mentioned already, cognitive science also includes artificial intelligence. [00:16:02] Very importantly, it includes artificial intelligence, because, my friends, artificial intelligence was not computer science first. We didn't start by making a bunch of algorithms and then saying, holy cow, this kind of looks like a brain, let's call it a neural network. That would be disingenuous indeed, if that was how we went about it. [00:16:22] No. Major names in the origination of things in neural networks, such as the creation of the perceptron, are Frank Rosenblatt, Warren McCulloch, and Walter Pitts. These guys were, respectively, a neurobiologist, a neurophysiologist, and a computational neuroscientist. So these were brain guys, and they wanted to come up with a mathematical representation of what they were seeing in the brain, and they came up with the perceptron, later to become the artificial neural network. [00:16:54] The ANN came from the BNN, the biological neural network. So artificial intelligence was a spinoff of brain science. Therefore, we are allowed a seat on the cognitive science board of directors. Within cognitive science, within these various fields, there is agreement and disagreement about what constitutes consciousness. [00:17:15] So what kinds of things do they agree and disagree on? Well, for one, we now approach this definition of consciousness by comparison to intelligence. So we have two things that come outta the brain. Intelligence, which is simply the capacity to compute, to perform a mental task. Every human is prospectively intelligent. [00:17:36] Maybe intelligence comes in scale. Some humans are smarter than other humans. Humans generally are more intelligent than the lower species, down to a fish, all the way down to a snail. So intelligence seems to come in scale. Snails are intelligent, just less intelligent than humans.
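As an aside, the perceptron those researchers arrived at is mathematically tiny: a weighted sum of inputs pushed through a threshold, mimicking a neuron firing or not. Here's a minimal sketch in Python (the training data, learning rate, and epoch count are arbitrary choices for illustration, not anything canonical):

```python
import numpy as np

def perceptron(x, w, b):
    # "Fire" (output 1) if the weighted sum of inputs crosses the threshold.
    return 1 if np.dot(w, x) + b > 0 else 0

# Teach it the OR function with the classic perceptron update rule.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(10):                      # a few passes over the data
    for xi, target in zip(X, y):
        error = target - perceptron(xi, w, b)
        w += lr * error * xi             # nudge weights toward the right answer
        b += lr * error

print([perceptron(xi, w, b) for xi in X])  # -> [0, 1, 1, 1]
```

That single thresholded sum is the atom from which the artificial neural networks discussed later in this episode are built.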
Intelligence is the capacity to compute: information-processing capacity, maybe we'll call it. [00:17:59] As such, we have no problem calling artificial intelligence artificial intelligence. It is replicated intelligence in computer form, and it is indeed intelligent. It is simply computing information, processing information, doing math, thinking in its head. Then, separate from intelligence, is the only other thing, which is consciousness: the mind, the soul. [00:18:21] So we agree upon that distinction amongst the various fields in cognitive science. Now, what of this consciousness? Well, that's where things get a little bit dicey. Consciousness is many things to many people. Let's go through some of the aspects of consciousness, which may or may not be controversial within the fields of cognitive science. Perception. [00:18:38] That appears to be an essential component of consciousness. We'll start there, just to work our way up. You're thinking: what, perception? That's not consciousness. Well, it's maybe an aspect of consciousness, maybe a prerequisite for consciousness. We'll see. Perception is your ability to see things, hear things, touch things, et cetera. [00:18:55] Your five senses, in humans; other biological species have fewer senses, maybe others have more. Maybe robots could have more than humans. There is thought that perception is a requirement for thinking. Okay, some people say no, you don't need perception to think; you can live without perception. 'Brain in a vat,' they call it. [00:19:16] You can think math. We have the architecture in our brains to deduce, to come up with mathematical equations or philosophical syllogisms. All of these in general are called deduction, by contrast to induction, which is learning by experience: learning that touching the hot countertop burns your hand. Some philosophers think that while we have the architecture for deduction built into our brains naturally, we cannot kick it into process without first inducing something, without first experiencing something by way of induction on which to deduce. [00:19:51] So the idea goes that if you see one cup and another cup and another cup, you can come up with the concept three, and you can come up with the concepts two plus one and two minus one and all those things. But if you've never seen items in the world in some countable capacity before, would you ever be able to perform arithmetic in your mind? [00:20:11] It's controversial. Some say yes, some say no. Let's move on. Perception may or may not be an essential characteristic of consciousness. How about self-identity? Ah, there's a more controversial topic. I think you will find, when you're talking to your friends and family about consciousness and you say, can you define it for me, [00:20:29] this is probably the most common definition you'll get: oh, the me, the I in the equation. Consciousness is me being able to self-reflect. Self-reflection, self-identity. Hmm. I'm not so convinced that that equals consciousness, personally, at the very least, and I may even say that it's not an essential characteristic of consciousness. [00:20:53] We'll get to that in a bit, but you can imagine, for the sake of argument, that self-identity, self-awareness, may just be almost an evolutionary add-on to consciousness. Here's how the theory goes. If you have a theory of self, self-identity, you can adjust your actions in accordance with your theory of self. [00:21:15] Your theory of self may be a simulation of how others see you. So an intelligent mind will learn
what things to do and not to do in an environment. Don't touch the hot stove; do eat the food. Well, take that up a notch: we learn about ourselves, about who we are in an environment, and we can learn how to adjust properties of ourselves so as to better get on with those around us. [00:21:44] This theory of self-identity says, basically, that self-identity is nothing more than a running theory of how others see you, so that you can make proper adjustments to get along with people. Not having a solid self-identity means making enemies and maybe getting killed. Having a self-identity strong enough to improve upon that algorithm means making friends and allies, and having sex. [00:22:08] Of course, not everybody agrees upon this. Some people think, indeed, that self-identity is a core component of consciousness. But one more retort to that concept is the idea that certain lower species seem to lack self-identity. It is unclear that dogs are aware of themselves, for example, while certain higher species appear to have self-identity, like chimps. [00:22:29] We've got certain tests for whether they exhibit self-reflection: things like putting a dot on an animal's head and then having the animal look at itself in a mirror. If an elephant reaches its trunk towards the mirror rather than towards its own head, then we think maybe it doesn't exhibit self-identity. [00:22:46] Certain animals pass this test, certain animals fail. We have other such tests for self-identity. The tests themselves are even controversial, but the point I'm trying to make is: some animals have self-identity and some don't. Now, let's say your dog doesn't. Would you say your dog is not conscious? Do you indeed [00:23:04] personally think that your dog has nothing rattling around up there, that it's simply a mechanical computing device, just a reactionary biological organism that isn't experiencing the world around it? I don't think you do. I think if you sat with this for a while, you would come to the conclusion that most, if not all, biological organisms are conscious in a fundamental way, even lacking self-identity. [00:23:29] So self-identity appears to be a very controversial component, a contributor to the definition of consciousness, but indeed an important modifier. We do hope that if we achieve artificial consciousness eventually, it will have self-identity. So at the very least, self-identity is a powerful module, [00:23:51] a powerful contributor to high-scale consciousness. But it is unclear whether it is, definitively, a necessary component of consciousness. By the way, my own personal opinion is this: if we go with the theory that self-identity is your running simulation of self, so that you can improve yourself in an environment, [00:24:11] does that sound familiar, something we've mentioned in this episode? To me, that sounds like seed AI. In my personal opinion, the moment we build self-identity into machine learning algorithms effectively and powerfully is the moment everything begins. Seed AI, recall, is an artificially intelligent agent able to improve upon its own algorithm. [00:24:36] Another word for this is actually meta-learning, and it is indeed a hot topic in the space of machine learning right now. Lots of companies are exploring machine learning algorithms' capability to improve on themselves. Okay. A few other aspects of consciousness, when we're talking about it, are memory and learning. [00:24:55] Now, these go hand in hand. You can't learn without memory.
And by the way, you can't have self-identity without learning and memory. Now, nobody would say that learning and memory are consciousness, but these are things we may agree upon as essential to the grand picture. We need these things in place to get there. [00:25:13] So we've talked about perception, self-reflection, memory, and learning. But we're not quite there yet. There's something missing; something feels not quite right about the way we've been talking about consciousness. Like these are all components, but they don't really get all the way there. [00:25:29] There's one thing we haven't mentioned. One thing that, in my opinion, all the cognitive scientists agree upon as essential to the equation of consciousness, if not the very definition of consciousness itself. It is awareness. Awareness, or sentience, subjective experience, qualia. These are all words for the same thing, which is: the lights are on. [00:25:57] Something is on up there, okay? You as a human, when you see and you hear things, you hear the dog barking and you see the flower, you're not just some information processing device that crunches that perception internally while the perception is never experienced. No. You experience the seeing, the hearing of the dog's bark. [00:26:18] It went in your ear, and some little hairs in your ear were stimulated, and some electrical firings happened within the brain, within your biological neural networks; neurotransmitters are transferred from axon to dendrite; chemicals are thrown left and right; electricity goes all throughout your brain. But none of that explains the fact that [00:26:38] you heard the dog, that the dog's bark was heard by you. A subjective experience that you cannot explain; it appears to exist in a different dimension, and that is why we call it the mind or the soul. That's the hard problem of consciousness, which we'll talk about in a bit. Awareness. It's hard to define, but you know exactly what I'm talking about. [00:27:04] Yea, verily, maybe we can define consciousness explicitly as awareness, as the lights being on. So when people say nobody can come up with a definition of consciousness, so we can't even talk about it: well, we can all agree that awareness, if not the very definition of consciousness itself, is at least a core, fundamental component of consciousness. And my friends, [00:27:28] if something could convince you that it exhibits awareness, would you not agree that it is conscious? That is the reason we believe our dogs are conscious. They appear to exhibit to you that they're aware inside, that something is going on, that the lights are on. Now, there's one thing suspiciously missing from many people's definition, many lay people, I dare say, and that is emotion. [00:27:53] Interestingly, many people will say: well, a robot can never be conscious because it can never know love, it can never know sadness, it can never know pain and sorrow and loss, happiness, joy, all those things. To some people, emotion appears to be an essential characteristic of consciousness. In my opinion, and this is my own personal opinion, I'm not going to align myself with any camps of cognitive science that go one way or another on this topic: [00:28:17] I think emotion is not even a characteristic of consciousness. I think it is an accidental drop-in of evolution, a reinforcement learning mechanism. Things that make us happy stimulate some positive reinforcement mechanism to make us wanna do that again in the future.
And things that make us unhappy trigger some negative reinforcement mechanism that makes us want to avoid such actions in the future. [00:28:44] In other words, emotions are nothing more than a computational mechanism for seeking or avoiding certain activities for our own survival's sake. And think about all the things you get emotional about: sex and love, physical pain, losing someone, death. These are all things that have evolutionary explanations. [00:29:05] In fact, we'll talk about how artificial intelligent agents could possibly experience emotion on some level. Maybe not the way humans experience it, but possibly experiencing pain and pleasure no less really than we do. We'll get to that in a bit. So we come upon something that most cognitive scientists can agree upon, and that is that the very definition, or at least a core component, of consciousness is awareness, [00:29:33] that is, subjective experience. Another thing that they agree upon is the distinction between what's called the soft problem of consciousness and the hard problem of consciousness. The soft problems of consciousness are basically things we know. So it's always gonna be a moving target, a running definition: [00:29:51] things that we know through the hard sciences. We know, for instance, that consciousness comes from the brain. It clearly comes from the physical brain. 'The brain is the biological substrate of the mind' is what they say; that's the catchphrase. We know that. [00:30:08] We've known it for a long time, since at least the Renaissance, and we know it substantially more accurately these days than we did in the past. We have things like MRIs and CAT scans and PET scans. We can look at the brain and figure out specifically which regions are firing when we ask a patient a question: think about your mother, [00:30:28] plan some activity. We have, down to a T, the centers of the brain associated with specific mental processes. We can look under a microscope and see the activity of a neuron, exactly why and where mental activity comes from. We know that speech is in Broca's area, that planning is in the prefrontal cortex. So the soft problems of consciousness are the things we know, the things of science. [00:30:54] The hard problem is: but why consciousness? Okay, it's a little bit difficult to explain, but on the one hand, we have this physical world wherein we can observe the brain through MRIs and microscopes, and on the other hand, we have this other dimension, this spiritual world, this metaphysical world of the mind: your own subjective experience. [00:31:19] It seems that your subjective experience, when you're thinking, when you're doing math or planning something or daydreaming (especially daydreaming, there's a good example), appears to be in a different dimension. And of course this is what gave rise, I'm sure, to the belief in life after death and religion, the idea that your soul lives on past your body, because mind and body appear to exist in different planes, different dimensions. [00:31:42] And the hard problem is: how? How is that possible? It's a paradox. This paradox is made manifest by the concept of dualism. So I'm not gonna get too much into the history of the exploration of consciousness, but I will mention one thing, and that is the concept of dualism, put forth by René Descartes. [00:32:01] René Descartes was one of the biggest minds in the early exploration of consciousness. He was fascinated with the topic.
René Descartes is the mind behind 'I think, therefore I am' and the evil genius thought experiment that gave way to The Matrix. Okay? How can we know we're not in a dream rather than the real world? All that stuff. [00:32:18] You can see how that all plays into consciousness. He was obsessed with the topic. The concept of dualism is this very concept that the mind and the brain exist in different dimensions. Now, it's a paradox. How could they possibly interact if they existed in different dimensions? If the mind is something non-physical, something metaphysical, then how could it possibly interact with the brain, which is something physical? [00:32:44] At some point, the mind would have to become physical, to enter the physical world to stimulate the brain into action, but that's just paradoxical. At what point can a non-physical thing become physical? It just doesn't make sense. Descartes believed that the seat of the mind in the brain was the pineal gland. [00:33:02] Where he came up with this was that a lot of structures in the brain have a mirror image on the other hemisphere: we have one thing on the left hemisphere and a mirror of it on the right hemisphere, and these were big structures. Well, the pineal gland was this tiny little structure that didn't have a mirror image, so it physically looked like a good candidate for a plug, an outlet where you'd plug in the mind from the other universe. But he couldn't defeat that conundrum: [00:33:29] how can the non-physical become physical? And so scientists just don't believe in dualism these days anymore. Dualism is relegated to religionists, people who believe in life after death. And so if you are religious in that way, then you probably are a dualist. But many today believe that the mind is physical somehow. [00:33:48] Somehow the mind comes from the brain. The brain does the stuff, and the mind comes outta that. But how? It is a mystery, one of the last remaining mysteries of the universe. We can manipulate the brain and alter the mind. We know through certain patients and case studies that alterations to the brain, in a repeatable fashion, cause alterations to the mind. [00:34:13] There are these case studies, like Phineas Gage, who had a railroad spike driven through his brain, causing specific alterations to his personality. Okay? If you believe in dualism, and that the pineal gland is this sort of outlet where the mind plugs in, well then how does physical damage to a specific other chunk of your brain, one that has nothing to do with the pineal gland, substantially and permanently affect your personality, which is an aspect of your mind? [00:34:39] It just doesn't work that way. So we see brain changes leading to mind changes in repeatable and predictable fashion. We know they're directly tied. But how does the mind come out of that? And the reason we ask is because we wanna know: if we could perfectly replicate a brain-like structure in a robot, would it then have a mind? [00:34:59] So knowing how the mind comes outta the brain is what's called the hard problem of consciousness, and it's basically nothing more than a paradox, a conundrum. Some people think that we can never know how the mind comes out of the brain, because the mind is definitively subjective experience. [00:35:19] Consciousness is awareness; awareness is subjective experience. I know in my mind what's going on. You can observe my brain in an MRI, but you can never observe what's actually happening in my mind.
You can't guess who I'm thinking about or what specific actions I'm planning. You can see certain centers firing, but my experience is definitively subjective. [00:35:39] Similarly, I can't see what any of you guys are thinking, nor can I even be certain that you are thinking. By all indication, your minds must work the way my mind does: as my mind results from my brain firing, and your brains fire similarly, I must conclude that you similarly have consciousness, that you are aware. But I can't prove it, because your experiences are definitively subjective. [00:36:01] So some think that the brain-mind problem, the hard problem of consciousness, is definitively unscientific, unknowable, and all we can do is guess. All we can do is take it on faith that things that appear to have consciousness have consciousness. This is basically the idea of the Turing test. Alan Turing was one of the forefathers of the computing revolution. [00:36:25] He was a big believer in artificial intelligence. He was very fascinated with the space of artificial consciousness, and he came up with a test called the Turing test. And he made it sound so complex and convoluted, but all it says is: if it walks like a duck and talks like a duck, it's a duck. If the thing can convince you that it is conscious, then it is conscious. [00:36:47] We have no way of exploring how and in what way it's conscious. So all we can do is take it on faith: if it can convince you that it's conscious, then it's conscious. That's the Turing test. There's another idea which says that we may eventually be able to observe this stuff directly, the actual connection between the brain and the mind, how the mind comes from the brain. [00:37:12] If you think about it, the mind is magic to us right now, and that's why the mind, a.k.a. the soul, is so inextricably tied with religious thought, the idea that the soul lives on after the body dies. It's magic. Well, we've seen this before in science. We've seen things that were previously magic become science. [00:37:35] For example, we used to think that sickness, illness, was evil spiritual infestation or a curse of God. And we later found out it was bacteria and viruses. These weren't just theoretical bacteria and viruses; they were observed directly under the microscope. We took magic and we made it science. And in a similar way, we've recently been able to really, hands-on, observe this theoretical concept of gravity; we've been able to sort of interact with it and observe it physically. [00:38:04] Perhaps we can come to see the mind through science. It's unclear if we'll come to that point, but it is possible. I heard once an idea that philosophy is basically a moving caravan of thinkers, a bunch of settlers moving westward, constantly thinking about how the universe works, thinking about all the puzzles out there. When something becomes observable, [00:38:30] experimentable, testable, out comes a branch of those philosophers to settle a colony, a new colony over here: a hard science, say, called neurophysiology, or psychology, and the like. But that philosophy keeps traveling westward, ever in search of the infinity of mysteries. So the idea is that philosophy is ever at the frontier, thinking about stuff we don't know, and anytime we come to know it, out comes a science. [00:39:01] Science branches off of philosophy, comes out of philosophy, settles a colony, and sits with its lot.
Well, perhaps, as we've done in the past where we've created science from magic, philosophy, pushing ever westward into the mysteries of consciousness, will find something that will stick, and we'll actually be able to make science of this mystery, the hard problem of consciousness. [00:39:28] So those are two takes on this concept. One is that maybe we just haven't gotten there yet with the science, and the other is that maybe it is not science-able, that it is purely and definitively a subjective thing, the hard problem of consciousness. So let's talk a little bit about some ideas rattling around the space of philosophy and cognitive science and artificial intelligence about where consciousness comes from: [00:39:54] theories within the hard problem of consciousness. Remember, this is called the hard problem for a reason. We can't yet observe how the mind is created from the brain, and so what we have in the space is a lot of guesswork. One of the most compelling ideas out there, in my opinion, is something called emergence. [00:40:12] Emergence: the idea that the mind is simply an emergent property of the brain. And that appears to be the case. We have the brain; it's firing, it's doing some stuff, crunching some numbers, planning some actions in predictable and repeatable and observable ways; and out comes mind. Perhaps that mind doesn't exist in a different dimension. [00:40:33] They're not two separate things, as dualism holds; the mind is the byproduct of the brain. The brain is the thing that's doing the thinking, and the mind is simply the experience of that. In other words, what if thinking simply is experience, and they are the exact same thing? This is compelling. We know that intelligence comes in scale. [00:40:59] A human is more intelligent than a snail, but a snail is still intelligent. We all accept that snails to humans and everything in between are intelligent in their own way, because they're thinking machines. They're computing devices navigating their way through their environments. Well, perhaps consciousness is simply the experience of intelligence. [00:41:19] Intelligence is the thinking, the brain is the device on which the thinking occurs, and consciousness is the experience of that thinking. If that's the case, we would say that humans are more conscious than snails, because intelligence and consciousness are basically the exact same thing; [00:41:37] we just think of them differently. Intelligence is the capacity and the action, and consciousness is what comes of it. That would make sense. We said that humans are conscious; we all agree with that. We asked whether our pets are conscious, our dogs and cats. Well, we said yes, it appears to be so. [00:41:54] Their lights are on, at least. Well, if cats and dogs are conscious, now, what about rats? What about fish? How about snails? No, not snails. Well, okay, if you're gonna say that one thing is conscious and not the other, where did you draw the line, and why? We don't seem to have, from neuroscience, some definitive center of the brain without which we'd say there is no consciousness in a biological organism. [00:42:18] Like I said, this is the hard problem. So it would appear arbitrary where and how you draw the line between species which are aware of what's happening in their environment, experiencing qualia, subjective experience, and those which aren't. So consciousness comes in scale, directly tied to intelligence. [00:42:35] So that's one school of thought out there.
A subfield within this emergent property theory is something called the computational theory of mind. And this says: well, if consciousness is an emergent property of intelligence in humans and dogs and snails, what is it that these things have in common? [00:42:53] They're information processing systems. Within our brains, we are computing the stuff that happens around us using our intelligence architectures. If that's the case, then wouldn't any information processing system, any computational device, experience consciousness as well? Ah, now we're getting into artificial consciousness. [00:43:14] The computational theory of mind, the CTM, holds that consciousness is the emergent property of computing. We've defined intelligence; we've tied it to humans and snails and robots. We've said that consciousness may be a byproduct of intelligence, an emergent property, the experience of intelligence. [00:43:38] We have agreed that artificial intelligence, by its very name, does indeed exhibit intelligence. And therefore, after all that, artificial intelligence should indeed thereby exhibit consciousness as an emergent property of its own intelligence. Again, everything in scale. The idea is: humans are extremely intelligent and extremely conscious, as far as we can tell, by comparison to other, lesser biological species. [00:44:05] Well, that may be the case as well with artificial intelligence. Perhaps AI is extremely intelligent in its own weak way, in specific verticals. AI is better at image recognition than we are in some cases. Certainly AI is better at math and the really hard computational stuff than humans. But there are many aspects in which AI is less intelligent than humans. [00:44:30] In that regard, perhaps AI experiences consciousness in the verticals at which it's very effective. So for example, perhaps it has very highly lucid visual experiences. Perhaps, by the CTM, an AI, when it sees a thing with its convolutional neural networks and such, experiences the seeing in an extremely lucid capacity, better than humans. [00:44:56] But in the areas where it falls short, which are many, it is less conscious than humans. And certainly AI presently lacks self-identity. So it doesn't appear that AI is yet in our playing field, but it may, according to the CTM, at least be aware, experiencing the things that are happening. Consciousness in scale, because intelligence is in scale too. [00:45:21] Consciousness is the same as intelligence, just thought of differently. One example of this that you should just kind of keep in your head, once we start moving forward with these episodes, is this thing called word2vec. Word2vec is a component of language modeling in natural language processing. So all the stuff about language we're gonna get to, like chatbots, classifying text as being about this or that, sentiment analysis, [00:45:50] figuring out if what someone said was emotionally happy or sad or mad: everything that has to do with language falls under the category of natural language processing. And the models that we use in that space are called language models. And one of the pieces of all that is called word2vec. Word2vec is basically an AI's vocabulary. [00:46:13] Now, word2vec is very fascinating, because what it does is store words in vector space. So imagine 3D space with all these dots all over the place, like stars in a galaxy, and each word is represented by a dot.
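To make that picture concrete, here's a toy sketch of the idea. The three-dimensional vectors below are invented purely for illustration (a real word2vec model learns vectors of hundreds of dimensions from huge text corpora), and the sketch previews the similarity lookup and "word math" described next:

```python
import numpy as np

# Toy stand-in for word2vec: each word is a dot (a vector) in space.
# These 3-d values are made up; real models learn ~100-300 dimensions.
vocab = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.2, 0.1]),
    "man":   np.array([0.5, 0.9, 0.3]),
    "woman": np.array([0.5, 0.3, 0.3]),
    "apple": np.array([0.1, 0.4, 0.9]),
}

def cosine(a, b):
    # Similarity as the angle between two dots; 1.0 = pointing the same way.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def nearest(vec, exclude=()):
    # The "thesaurus" trick: land near a dot and grab its closest neighbor.
    return max((w for w in vocab if w not in exclude),
               key=lambda w: cosine(vocab[w], vec))

print(nearest(vocab["king"], exclude={"king"}))      # a nearby, related word
# Word math: queen - king + man lands on... woman.
print(nearest(vocab["queen"] - vocab["king"] + vocab["man"],
              exclude={"queen", "king", "man"}))     # -> "woman"
```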
And through a complex algorithm that we'll get to in a future episode, dots are placed near other dots based on their word similarity. [00:46:35] So if you wanted to come up with a similar word, like a thesaurus, if you wanted to build a thesaurus out of word2vec, all you gotta do is land on your dot and take any of the ones nearby, and you have all of the related words. Word2vec has some really cool properties; you can actually do [00:46:53] word math. You can say queen minus king plus man, and out comes woman. It's really cool stuff. Well, if all of that computational machinery yields a highly accurate vocabulary system that can wow us and awe us with its capacity for understanding words, then who's to say that that understanding is just a mechanical system, and not experienced understanding, in exactly the way we think of the word 'understanding'? [00:47:26] Who's to say that AI doesn't exhibit understanding already in things like word2vec? Okay: computational theory of mind. Again, I'm not aligning myself with any theories floating around about the hard problem of consciousness. I just want to sort of brain-dump a lay of the land, to get you interested and inspired so you can start exploring this stuff on your own. [00:47:49] One more idea. A branch of the computational theory of mind is called the Integrated Information Theory, IIT. IIT says that, effectively, awareness, or at least acuity of awareness, comes from the accuracy of the information contained within. [00:48:16] The idea is that if you have a machine learning model computing and computing and computing, initially all of its weights are random. They're all over the place. The model doesn't understand the picture; it's like static on a TV. And the more you train it, with supervised learning for example, the more you train it and train it and train it, it gets more and more accurate. And that awareness is basically the level of accuracy in the system, the level to which the system accurately represents something, the lack of which is called entropy. [00:48:38] So that's the crux of the Integrated Information Theory: that entropy exists in information systems. According to information theory (there's actually a whole branch of mathematics called information theory, and most of it seeps directly into machine learning for training our models), [00:48:53] when you are relaying information or trying to come up with an accurate model of something, entropy, chaos basically, is how inaccurate your system is. If something has high entropy and you're trying to guess whether a thing's a cat or a human, then it could go either way. Basically, you're flipping a coin, 50/50; you have no idea. [00:49:14] The more you decrease entropy, the more accurate your model becomes: oh, that's a human, that's a cat, human, human, cat, cat, human, human, cat, perfect score. Good job, machine learning model. Low entropy means high accuracy. And the IIT says, basically, that awareness, this crux of consciousness we've been talking about, [00:49:33] comes from reduced entropy. It's kind of an interesting idea. It's kind of like dreaming, for example: when we're dreaming, we appear to have low awareness. In fact, we use dreams to make comparisons between consciousness and unconsciousness. We think that we're kind of conscious in a dream, [00:49:54] yeah, but not with the same consciousness that we exhibit in waking life.
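To put a number on that "entropy" idea before going further: here's the standard Shannon entropy calculation from information theory, applied to the cat-versus-human guess above (the probabilities are made up for illustration):

```python
import math

def entropy(probs):
    # Shannon entropy in bits: how "undecided" a probability distribution is.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# An untrained model guessing cat vs. human: a 50/50 coin flip.
print(entropy([0.5, 0.5]))    # 1.0 bit  -> maximum uncertainty
# A well-trained model that's 99% sure of what it's looking at.
print(entropy([0.99, 0.01]))  # ~0.08 bits -> low entropy, confident model
```

On the IIT's gloss, moving from the first print statement to the second is moving toward awareness; that gloss is the theory's claim, not established fact.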
Well, in our dreams, it appears that the information being processed is in some chaotic fashion: high entropy. There are lots of theories of what's happening in dreams, why we dream. Some think that we're basically practicing a bunch of spins on things we've seen in life. [00:50:14] We're basically machine learning: how about this? No. How about that? No. We're trying a bunch of different y predictions from the data that we've experienced and doing gradient descent, allowing ourselves to think outside of the box in order to improve the accuracy of our predictions. But the IIT says: look at dreams, high entropy, low awareness; [00:50:35] look at waking life, low entropy, high awareness. Who knows, though? There's a lot we don't know about dreams. It could be that we have our memory wiped when we wake up. It could be that our attention mechanism is out of whack; attention seems to be a core component of consciousness, by the way. Some people really don't like the computational theory of mind or the emergent property stuff, because they say: well, look, here's a big difference, potentially, between the brain of a human and the brain of a computer. [00:51:01] The brain of a computer acts deterministically. It has to do what it's told, based on input from a keyboard or running a program, whatever's in RAM, in the registers; it acts completely deterministically. Whereas humans have free will. Free will! We can do what we want. Ah, a very interesting point, my friends. Free will, you say? You'll really like the TV series Westworld. [00:51:26] Westworld explores the level to which free will defines consciousness. We are not home free with that statement; free will is believed by many to be non-existent. If our own brains are biological structures built on cells, which are built on chemistry, which is built on physics, et cetera, then our own brains are completely subject to the laws of the universe as well. [00:51:52] Presumably, our brains act exactly in accordance with our environment. Proof of this is that our brains are affected by chemicals: drugs, the things we eat, our exercise regimen; all of that affects depression and our own personalities. Our culture, where we live, defines our personalities, as do our families and the people we surround ourselves with. And there are the brain damage studies, like the thing I was talking about with Phineas Gage, getting a spike through his brain and having his personality altered. [00:52:21] The alteration to his personality is well known by neurophysiologists today. They know exactly why and how that aspect of his personality changed, based on the part of the brain that was affected. If our brains are so physically, biologically, and chemically determined, how could you possibly say that we have free will? [00:52:43] We indeed are subject to the laws of the universe. If the brain is subject to the laws of the universe, and if we believe that the mind comes out of the brain and not the other way around (the other way around being dualism), then you must believe that the mind is subject to the laws of the universe. Indeed, and if you believe that, [00:53:02] then you do not believe that humans have free will. Now, of course, we still need a word for choice and decision. The prefrontal cortex of our brain handles action planning, and action planning happens, so decisions are made. But many think that the magic is not there: the magic of free will that you think is there, that makes us special and distinct from artificial intelligence, does not exist in the way you think.
[00:53:28] Yeah, if you like the topic of free will and how it relates to consciousness, I highly recommend the TV series Westworld. It's lots of fun. Okay, let's switch gears a little bit. We were talking about the CTM and emergence: the idea that the mind is simply a necessary byproduct of the brain, and that any computational device, any complex computational device, would thereby exhibit awareness at the very least, and consciousness, depending on how you define it. [00:53:54] Now, if you sit with this for a bit, you would say: okay, humans are conscious. Maybe dogs are conscious. Maybe artificial intelligence is conscious. But how low do we go? Are we gonna say a fish is conscious? Are we gonna say a snail is conscious? Are we gonna say a calculator is conscious? [00:54:11] A calculator? You just said any computational device. Now, according to the CTM, yes. Yes, a calculator prospectively experiences a flash of awareness of the thing it calculates. Now remember, everything in scale: intelligence in scale. Humans are more intelligent than snails; humans are more conscious than snails, by this idea, the CTM. According to the CTM, then, [00:54:33] well, a calculator doesn't have a memory. It doesn't have self-identity. It doesn't have attention. It doesn't have perception. It lacks so very much of what makes high-scale consciousness high-scale consciousness. But according to the CTM, the only thing that matters is awareness. We punch in one plus one and we hit equals, and a bunch of computation happens: [00:54:53] registers are turned on and off, bits are flipped from zero to one, and potentially, according to the CTM, that flash of one plus one is experienced, and then it goes away. A very interesting idea indeed, not the most widely held. Lots of ideas floating around. But let's switch gears. We have the CTM saying that the mind is a necessary byproduct of any computing device. [00:55:13] A different theory holds that maybe the sum is greater than the parts in the human brain, or at least in biological brains. The way that neurons work specifically, in the neurophysiology and neuroscience of mammals and reptiles and fish and such, maybe there's something special to that. The sum is greater than the parts. [00:55:35] Perhaps the brain must be just so in order to get consciousness, and we don't know how or why, but possibly you have to have neurons, or at least something very similar. If this is the case, then in order to achieve artificial consciousness, we would need to approximate biological brains. This is called [00:55:58] biological plausibility. The phrase biological plausibility means: how closely does an artificial thing represent its biological counterpart? So for example, a plane can be thought of as something of a functional approximation to a bird, but they're totally different. A bird has feathered, flapping wings, and it's tiny and cute; a plane is a giant metal thing with stationary wings, propellers, and wheels. [00:56:25] It doesn't biologically approximate the bird to achieve flight; therefore, it is not very biologically plausible. However, it functionally approximates the bird in that it flies. They both achieve flight. Now, let's turn to neurons. We have the human brain and its neurons, which biologically and functionally create intelligence and consciousness. [00:56:51] We have artificial intelligence, which functionally creates intelligence. Now the question is whether it can functionally create consciousness. We explored the CTM.
Now let's explore biological plausibility in artificial intelligence. A popular and powerful technique employed is called deep learning, which we're going to be exploring in the next set of episodes in this podcast. [00:57:16] Deep learning is based on a functional approximation of the human biological neuron. Like I said at the beginning of this episode, the creators of the artificial neuron came from the hard brain sciences. They were trying to come up with a mathematical representation of the human neuron, [00:57:36] a mathematical approximation, and they came up with the perceptron. The perceptron eventually became the artificial neuron and the artificial neural network. Well, we have used the artificial neural network to great success in a wide variety of fields. It has given us extreme flexibility in performing mental tasks in machine learning and artificial intelligence, a level of flexibility and accuracy [00:58:00] that prior machine learning algorithms have not given us. What's more is that we have had to tweak the artificial neural network to create dedicated architectures for particular tasks. We created the recurrent neural network for natural language processing tasks, the convolutional neural network for vision-based tasks, the deep Q-network for planning, and so on. [00:58:29] So we have this sort of master algorithm of the artificial neural network, and then we tweak it: over here for language, over there for vision, over here for planning, et cetera. At first, that appears to take away from the magic of the neural network, but if you look at the brain, that's how the brain does it. [00:58:47] The brain has a center for speech called Broca's area, a center for vision called the primary visual cortex, a center for planning called the prefrontal cortex, and so on. Their structures, their architectures, and sometimes even their neurons are tweaked in order to optimize performance on their particular tasks. [00:59:09] So we're starting to see a little bit of biological plausibility. We saw some level of biological plausibility in that the neural network itself came out of the science of the brain. The perceptron was a mathematical model of the biological neuron, and we further advanced the architectures of neural networks in order to achieve specialized capacities, which, looking back, seems to biologically approximate the human brain as well. [00:59:35] So we seem to be achieving decent biological plausibility. So according to the theorists who say the sum is greater than the parts, we may be addressing them with what we already have in deep learning. Now, that may not necessarily be so. The human neuron, of course, is biological and chemical. We have neurotransmitters being transferred from an axon to a dendrite, [00:59:58] whereas in an artificial neural network, that's not how things work. Additionally, it appears that neurons in human brains fire, fire, fire, fire, a continuous stream of chemical interaction, whereas neurons in an artificial neural network are one-shot, feed-forward. More importantly, there's a very big hitch to the biological plausibility of artificial neural networks compared to biological neural networks, and that is this: [01:00:23] while an artificial neural network, when diagrammed on a piece of paper, when symbolically represented in the mind's eye, looks like a biological neural network, that is not how it looks inside the computer.
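To make that concrete, here's a minimal sketch of the perceptron we just talked about, in plain Python. This is a toy illustration, not any particular library's API: the AND-gate task, the learning rate, and the epoch count are all made up for the example.

```python
# A toy perceptron: the mathematical approximation of a biological neuron.
# Everything here (the AND-gate task, learning rate, epochs) is illustrative.
import random

def fire(weights, bias, inputs):
    # Weighted sum of inputs plus bias, passed through a step function --
    # a crude stand-in for a neuron firing or not firing.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def train(examples, epochs=50, lr=0.1):
    weights = [random.uniform(-1, 1) for _ in range(len(examples[0][0]))]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            error = target - fire(weights, bias, inputs)
            # Perceptron learning rule: nudge each weight toward the target.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Learn a logical AND gate: the "neuron" fires only when both inputs fire.
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(examples)
for inputs, _ in examples:
    print(inputs, "->", fire(weights, bias, inputs))
```

And notice: on paper you can draw this as a little neuron with input edges and an output edge, but on the machine it's just lists of floats and a loop, which is exactly the hitch we're about to hit.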
The algorithms and data structures associated with an artificial neural network are stored in RAM; registers over here are flipped between ones and zeros, bits are changed, variables are altered, things are put onto the hard drive and taken off the hard drive, and it does not look like a neural network when put on a computer. Represented mentally, on a piece of paper, on a chalkboard, in your mind, [01:01:03] we're looking at an artificial neural network. Represented physically on a computer, we're working with a computer's architecture, nothing at all like a neural network. So that defeats biological plausibility. Biological plausibility is not helping us here if we believe that the sum is greater than the parts. [01:01:20] So there's a minus one for artificial consciousness, if you believe in the importance of biological plausibility. However, many do not believe in the importance of biological plausibility. Instead, they believe in functionalism. As I said, a plane functionally approximates a bird in that it achieves flight. [01:01:39] Well, can an artificial neural network functionally approximate a biological neural network in that it achieves intelligence? Well, already, yes, we know that it does. Can it do so in order to achieve consciousness? That's the big question, the hard problem. So there we have biological plausibility and functionalism. [01:01:58] Those two are sort of head-to-head competing theories within artificial consciousness. One of the main proponents of the necessity of biological plausibility is a man named John Searle, and, my friends, you'll hear his name as you dive into the world of artificial consciousness. He is a hater, a hater of artificial consciousness, a disbeliever. For everybody that believes we can create artificial consciousness, [01:02:26] he is their enemy. On the one hand you have Ray Kurzweil, who is a true believer, and on the other hand you have John Searle, who is a true disbeliever. His main argument is called the Chinese room argument, and it goes like this. You have a man who speaks English, who does not speak Chinese. And he's inside of a room, and the room has, on a counter, a book of instructions on how to translate certain Chinese symbols to other Chinese symbols. [01:02:53] And there are Chinese symbols all over the place. They're on cards, and somebody slides a Chinese symbol under the door, and he walks over to the door and he picks it up and he looks at it. And he looks around and he's trying to get his bearings. He goes, what the heck? And then he goes over to the instruction book and he goes, oh, okay, [01:03:08] so I've gotta translate this. So he flips the pages until he finds the thing in his hand. He's like, okay, so this. The instruction book says that this symbol translates to some other set of symbols. Apparently somebody's asking a question, and we can construct an answer. And there's a mapping in this [01:03:29] instruction manual. So he goes over and, according to the instruction manual, grabs this, this, and the other cards. And he looks at the one in his left hand and he looks at the ones in his right hand, and he nods his head: yes, those, that's what the instructions say. And he walks over to the door and he slips the three cards that he just picked up under the door, and he has answered the Chinese question correctly. [01:03:48] This is the Chinese room. The man understands English; he does not understand Chinese.
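In code, by the way, the entire room collapses into a lookup table. Here's a toy sketch; the symbols and the "instruction book" below are invented purely for illustration:

```python
# A toy Chinese Room. The "instruction book" is just a lookup table;
# the symbols and mappings here are invented for illustration only.
instruction_book = {
    "你好吗": "我很好",          # "How are you?" -> "I am fine"
    "你叫什么名字": "我叫房间",  # "What is your name?" -> "My name is Room"
}

def man_in_the_room(card_slipped_under_door: str) -> str:
    # The man matches the incoming card against the book and slips back
    # the prescribed cards. No understanding happens anywhere in here.
    return instruction_book.get(card_slipped_under_door, "对不起")  # "Sorry"

print(man_in_the_room("你好吗"))  # prints 我很好 -- a correct Chinese answer
```

From the outside, correct Chinese answers come back under the door.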
However, following instructions, he is able to speak Chinese, it appears. However, we know that he does not speak Chinese. The man in the room does not speak Chinese. He's simply following instructions. So this is sort of the argument: following instructions does not yield understanding, [01:04:14] does not yield consciousness. Now, a counterargument to the Chinese room is that the man does not understand Chinese, true, but the room does. The system does, and the man is simply a cog in that system. The idea here is that a neuron does not understand language, but the whole Broca's area, a cluster of neurons, as a system, does. And it goes back and forth. [01:04:40] You're starting to see that there's a huge debate in cognitive science going on between people on the left side and people on the right side: people that believe that we can synthesize consciousness artificially, and people who believe that that is impossible to do. So we've talked about biological plausibility, we've talked about functionalism, and how those go head to head. [01:04:57] One more point on the topic of functionalism. The idea of functionalism is that if a thing functionally achieves its goal, then it doesn't matter what happens under the hood. It doesn't have to have biological plausibility. If a plane flies, it doesn't need feathered wings. If a machine is conscious, then it doesn't need biological neurons. [01:05:18] But that's sort of the question reversed. How do we prove that a machine is conscious? We just can't. So we rely on things like the Turing test. There's this whole conversation around zombies in the debate of artificial consciousness. Imagine we have a bunch of zombies in the world that are just, you know, saying "brains" and walking with their hands out. [01:05:38] Now, at a glance, right away you think they are not conscious. You think they're in a coma or in a trance or just not there. There's nothing there. And this is sort of the biological plausibility side, people saying, well, what about zombies? And then on the functionalism side: what if these zombies appear to be exhibiting intelligent behavior? Imagine they're not just saying "brains" and walking around, but they're actually conversing with you, et cetera. [01:06:03] The functionalists say that there is no such thing as a mindless zombie. If you have something that is exhibiting intelligent behavior, it necessarily is intelligent and necessarily is conscious. Okay. I just dumped a lot of thoughts and theories in the space on you. [01:06:25] The point is that there's a big conversation happening right now. The big conversation is especially big and hot right now because look what we're doing in artificial intelligence. Look what we're making. Big things are happening over at Google and Facebook, IBM, and Baidu, et cetera. Big things are happening, and those bring big questions. [01:06:47] We have the human brain. We can look at the activity in the brain, the electrical pulses through the centers of the brain, under an MRI or a PET scan or a CAT scan. We can look at the centers that are activated that appear to be associated with conscious thought, with awareness. We can boil consciousness down definitively to at least awareness, and possibly, depending on your definition, many other aspects such as memory, learning, self-identity, attention, perception, et cetera.
[01:07:15] And we know that all this comes from the brain, but we're vexed with this conundrum where the brain is over here creating the mind, but the mind appears to be outside of the physical dimension. How can we possibly think about the mind in such a way that we can answer the question of whether artificial consciousness is possible, whether it is possible to synthesize consciousness through our [01:07:39] endeavors in artificial intelligence? It's an unsolved mystery. It's an unsettled debate. Cognitive scientists are debating to and fro, left and right. You just see this cloud of fists as people are fighting. In fact, there are these two cognitive scientist philosophers in particular that just hate each other. [01:07:56] It's really fun to watch: Daniel Dennett and David Chalmers. David Chalmers is actually the guy that coined the term "the hard problem of consciousness," and Daniel Dennett is a very, very good author on all things consciousness. But these two, they name-call each other, they flame the heck out of each other. [01:08:13] It's just a big fight going on in the space of cognitive science. And like I said earlier in the episode, we have seats at the table of this discussion. That's why this is so fun. That's why I'm so interested in machine learning and artificial intelligence: we can participate in the conversation about whether or not it is possible to synthesize consciousness. [01:08:30] And if we can synthesize consciousness, my friends, we can create a soul. This is prospectively the most important thing that humanity has ever done, has ever accomplished, and I believe that artificial intelligence engineers, machine learning engineers, and the like might be the missing piece of this equation. [01:08:53] One of the most lasting quotes that has stuck with me came from a debate between these two about artificial consciousness. One of them said, it's not even something you can talk about. Cognitive scientists can't even agree upon a definition. Just let it go. And the other said, look, here's what consciousness is. [01:09:11] Consciousness is a camera looking at a scene. We have cones and cubes and spheres, and we can dissect them. We can do all the science and math to understand the scene that we're observing. We can science the heck out of the room. But the one thing we cannot science the heck out of is ourself. Why? Because we're the camera. [01:09:33] We're looking at the scene, and we cannot look at ourself. Consciousness is definitively subjective, outside the realm of science, impossible to make objective. And some guy in the debate said, yeah, unless you have a mirror. And he smiled, and there was a pause, and everybody looked at him, and his smile faded. [01:09:57] Unless you have a mirror, my friends. I think that the artificial intelligence engineers are the ones who are going to solve the mystery of consciousness. We're gonna build the thing that makes it possible to introspect. We're going to build the mirror. We're going to make a mind. One more thing. There's a thought that the creation of artificial intelligence has been mankind's purpose, built into our brains from the very beginning, according to the theory of the singularity, that technology is advancing at an exponential, or at least polynomial, pace. [01:10:39] Why such a predictable graph? Why so predictable? Some believe that technology is actually an extension of evolution.
Evolutionary advancement has been on this polynomial graph as well. Since the dawn of evolution on Earth through biology, we had single-cell organisms become multi-cell organisms, leading to the Cambrian explosion, then dinosaurs, mammals, monkeys, humans. And out of humans comes technology: [01:11:06] advancement, advancement, advancement. And out of technology comes the next species. There's a belief that evolution and technology, if on such a deterministic and predictable graph, are necessary and unstoppable. Well, we have stopped evolution through medical science. Now anybody can live. There is no survival of the fittest. [01:11:26] Maybe, though, evolution can't be stopped, and the next logical step is necessarily synthesized life. This has to happen; it is a necessary extension of evolution. And if you look at our technology through the ages, we've had the myth of artificial intelligence in the golems of Jewish folklore, to the Renaissance fascination with automata and perpetual motion machines, René Descartes and Leonardo da Vinci, to the fiction of Frankenstein and robots, [01:11:57] and of course, finally, to the obsession with artificial intelligence ever since the dawn of computing, all the way back with Alan Turing and all the minds since the 1950s. The creation of AI seems to have been an obsession of humanity since the beginning. It may be our destiny, and it may be inevitable. [01:12:17] We're on the cusp of something enormous, my friends. You cannot deny that what's about to happen may be the biggest thing that has ever happened in human history: consciousness. Okay, that's the end of this episode. Like I said, people may be coming to this episode in its own right, from outside the space of artificial intelligence and machine learning. [01:12:36] If you are interested in participating in the creation of consciousness, then artificial intelligence is the right place to be. If you have no experience with artificial intelligence, then my recommendation is to get involved in machine learning. Machine learning is the gateway drug. So start at the beginning of this podcast series. [01:12:54] For the resources for this episode, I'm going to leave you with one major resource. I've recommended it time and time again. It is a course by The Great Courses called Philosophy of Mind: Brains, Consciousness, and Thinking Machines. It is A to Z about everything I've been talking about in this episode and more. [01:13:13] There's great stuff out there by modern philosophers and cognitive scientists, Daniel Dennett, David Chalmers, et cetera, but those are a little bit more specialized, and I want you to start from the beginning. The Great Courses series will guide you where to go from there. So I'm just gonna leave you with that one resource. [01:13:30] And per the last episode, I told you I was going to tell you the game plan for the next few episodes, whether I was gonna split this into two seasons, stopping now. I'm actually gonna do at least a few more episodes. I'm gonna do some stuff on recurrent neural networks and convolutional neural networks, [01:13:46] and then we will reassess from there whether I'm going to stop and take a break. But you've got at least a few more episodes out of me before my brain runs dry and I've got nothing else to teach. So let's make as much headway with my brain as possible, and then reassess from there. [01:14:02] Okay guys. See you in the next episode.