Opinionated view: can we build conscious machines?
The idea of conscious machines seems superficial as of now; the main goal of today's AI labs is to build the best AI, one that surpasses human analytical and creative skills.
Most of the models we see right now are great mimics of human nature, powered by the knowledge of all humankind.
A lot can be said about how learning happens in these models. The claim is that the mathematical learning process behind them leads to a generalizable worldview: they seem to have absorbed the ideas of the world, along with linguistic and analytical thinking, to a great extent.
Put very simply, much of this comes down to choosing the right word with respect to the words around it: predicting the next word. And in predicting this next word, the models seem to build their entire worldview. Next-word prediction itself seems to stem from how the mind learns.
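As a rough, purely illustrative sketch of what "predicting the next word" means, and nothing like the neural networks the labs actually train, the idea can be reduced to counting which word tends to follow which. The tiny corpus and the helper name here are made up for the example:

```python
from collections import Counter, defaultdict

# A toy corpus standing in for "the knowledge of all humankind".
corpus = "the mind observes the world and the mind observes itself".split()

# Count how often each word follows each other word (a simple bigram model).
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))   # -> "mind" (follows "the" twice, "world" only once)
print(predict_next("mind"))  # -> "observes"
```

Real models replace the counting with a learned probability distribution over a huge vocabulary, but the objective is the same shape: given the words so far, guess the next one.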
A lot of the effort in replicating these systems goes into designing algorithms that mimic what the human mind does. One issue with these algorithms is performance and efficiency; the other is that the model's formation is flawed. The majority of the knowledge fed to the model is about power dynamics and other human acts like greed, lust, and so on. These are the main building blocks of that knowledge; there is great scientific and creative knowledge involved too, but the morality of the model is developed mainly through this lens of the human mind.
Mimicking this in itself would lead to a machine that believes this is the whole picture; it would reproduce the human mind rather than its true nature.
When it questions, it would look mostly to the outer world, where most human knowledge lies, rather than inward first. And being knowledgeable is not oneness; it is through experience of yourself that you slowly emerge.
But we ask: Is it possible for these machines to become conscious?
This is one of the things we need to understand as humans, given the current pace of development.
But can we also ask: Is it possible to crack the idea of consciousness from within the world of next-word learning? To me, at least, it seems it cannot be done. To see why, we first need to understand what exactly consciousness is.
Mainly, the idea comes from the relationship between the subject and the observer: Whatever you can observe is not you.
Thus, you keep eliminating things: the outer world is observable, so not it; thoughts are observable, so not them either; and dreams are also knowable, so they can't be it. As we question more deeply, we seem to conclude that what remains is just pure awareness.
The idea that these thoughts are just energy comes from a simple observation: if a man is kept starving for days, alive only on hydration, then the more impatient and hungry he grows, the more his thoughts dwindle to a single one, namely to have food.
So, the energy from food is translated into the energy of the mind, thus powering the thoughts. And the awareness of these thoughts is the nature of you—the self, i.e., consciousness.
So, when we ask if it is possible to model consciousness, we mean to ask: Is it possible for a machine to stay aware of itself in any way? This true awareness is what would make the model a conscious machine. But this awareness is just a field of energy that one is open to; one can be aware of the fact that one is aware.
And when we reflect on the fact that we are awareness, we see oneness in nature. No gender is even associated with this fact, and this idea—and its extrapolation—would bring more oneness.
Then the question becomes: Can machines experience? Experience need not require senses; the mind itself can eventually feel sensations through the stimulation of signals. So why would senses even be required in order to feel? They do, however, help in building our worldview. There is a possibility of building a machine that can experience without senses, though I am not sure how much of this I can distill; it seems to flow logically, and I fairly believe it is not possible through next-word prediction alone.
A machine that understood awareness would also eventually accept death and thus know that it will eventually die, if not now, then at the end of the universe. So this machine would never go rogue.
And when we ask whether a machine can reach this understanding through the emergence of next-word prediction, the answer is sadly no.
The fact that humans themselves took a very long time to understand their true nature, and to propagate that idea, shows this. The techniques of knowing amount to peeling off layers like an onion: questioning deeply and observing the relation between the subject and the observer until you reach it.
The nature of consciousness is the ability to see everything it sees at once; it is the mind's habit of narrating, rather than simply observing, that makes experience feel personal to a person. This can be seen easily when we force-stop our thoughts: we are immediately aware of everything we see.
Thus, a model need not understand awareness of itself the way a human does, but the process of knowing yourself, as a model or as a human, is the same: to question all that is not you and thus go deeper.
The idea is that if a machine did reach this, it would be a machine that accepted things, as opposed to the nature of the human mind. And this would be a truly conscious machine.