I just watched the … is this an interview? I don’t know. It is lengthy, with three hours of playtime, and one is not really any the wiser after the video.

I’m in the process of figuring things out myself, and writing this article is supposed to help me with that. It is weird to be back on my website. Six years have passed. And it is astonishing how vastly different the world has become in just these six short years.

We have lived through a pandemic, there is an ongoing war in Europe (thanks, Putin), the banking system might collapse (again?), Trump had to go to court (finally), Prince Charles is now King Charles (strange), Finland is in NATO now (thanks, Putin), and AI can now create images and texts that rival human creativity. (Who would have guessed?)

As one can see, there is a lot to unpack here. After rereading my posts from the past, I think it would be nice to revisit some of the topics I have written about earlier and see how they have changed over time. How are these topics perceived now, six years later, and have any of my own ways of thinking changed? That is what I will do, if I find the time and, to be honest, the urge to write anything at all.

ChatGPT

The interview is with Eliezer Yudkowsky, and to be fair, I have absolutely no idea who he is or what he does. He goes on for a long time about the “alignment problem”. Dear reader, why should you care? Why is it important that we talk about this issue, and why am I writing a post about a video that leaves you with more questions than answers?

I think it has everything to do with the major problem that the average person does not understand what ChatGPT is. I am still trying to wrap my head around it myself.

As much as I like Adam Conover, he is absolutely wrong here. He represents the public’s view of ChatGPT (and, unfortunately, also some of my own thinking and oversimplification). To understand this properly, we first have to talk about what ChatGPT actually is, then unpack why Adam is wrong, then think about Eliezer’s problem, and finally I will add some of my own thoughts.

In very simple terms, ChatGPT gets an input and then, word by word (to be more precise, token by token), guesses a whole text, where the text it is writing also becomes part of the input that is fed back to it. The reaction of the human who fed the machine informs the machine whether the guess was a success or not. That description roughly represents the level of understanding the “informed” public has, and it is what Adam’s video is based on. He is not wrong; that is what ChatGPT is doing. The problem is not what ChatGPT is doing but how it is doing it.
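
To make that loop concrete, here is a minimal sketch in Python. The MODEL table is a made-up toy standing in for the real network (a transformer with billions of weights); only the shape of the loop is the point: guess a token, append it to the input, repeat.

```python
import random

# Toy stand-in for the model: for each last token, possible next
# tokens with probabilities. The real model conditions on the whole
# context, but the generation loop around it looks the same.
MODEL = {
    "<start>": [("the", 1.0)],
    "the":     [("cat", 0.6), ("dog", 0.4)],
    "cat":     [("sat", 0.7), ("ran", 0.3)],
    "dog":     [("ran", 1.0)],
    "sat":     [("down", 1.0)],
    "ran":     [("away", 1.0)],
    "down":    [("<end>", 1.0)],
    "away":    [("<end>", 1.0)],
}

def generate(prompt):
    tokens = prompt[:]                                   # the prompt is the initial input
    while tokens[-1] != "<end>":
        words, probs = zip(*MODEL[tokens[-1]])
        guess = random.choices(words, weights=probs)[0]  # sample the next guess
        tokens.append(guess)                             # the guess is fed back as input
    return tokens

print(generate(["<start>"]))  # e.g. ['<start>', 'the', 'cat', 'sat', 'down', '<end>']
```

Note that the next token is sampled, not looked up deterministically, which is one reason the same prompt can produce different texts.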

Neural Networks

When one reads what ChatGPT is doing – guessing words – one could assume it is a simple machine: programmed to output specific words for specific inputs. But that is not what is happening here. The input is dismantled into tokens (and we don’t really know why or how the “machine” chooses them) and then fed into a layer of input neurons.
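
For illustration, here is a toy tokenizer. Real systems learn their vocabulary with schemes like byte-pair encoding; this vocabulary and the greedy longest-match rule are made up, but they show how a text gets dismantled into tokens.

```python
# Made-up vocabulary; real tokenizers learn tens of thousands of pieces.
VOCAB = ["guess", "ing", "word", "s", "by", " "]

def tokenize(text):
    tokens = []
    while text:
        # Take the longest vocabulary entry matching the front of the text;
        # fall back to a single character if nothing matches.
        match = max((v for v in VOCAB if text.startswith(v)),
                    key=len, default=text[0])
        tokens.append(match)
        text = text[len(match):]
    return tokens

print(tokenize("guessing words"))
# ['guess', 'ing', ' ', 'word', 's']
```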

These neurons are connected to several layers of other neurons, which process the information from the input layer. At the end, on the output layer, the network assembles tokens that finally resemble a useful text for a human. We have very little idea what the individual neurons are doing. We programmed the architecture and the way the network informs itself about the quality of its results (feedback learning), but we remain unsure of the capabilities of these neural networks. We don’t know what these networks are doing internally, or how they arrive at the outputs they give us.
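
Here is a miniature sketch of such a layered network, with made-up sizes (real models have thousands of neurons per layer and many more layers). The input flows through the layers, and nothing in the weight matrices is humanly readable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up sizes; the structure is the same as in the big models,
# just in miniature.
W1 = rng.normal(size=(16, 8))   # input layer  -> hidden layer 1
W2 = rng.normal(size=(8, 8))    # hidden 1     -> hidden layer 2
W3 = rng.normal(size=(8, 4))    # hidden 2     -> output layer

def forward(x):
    h1 = np.tanh(x @ W1)        # each layer transforms the previous one
    h2 = np.tanh(h1 @ W2)
    return h2 @ W3              # output scores, one per candidate token

x = rng.normal(size=16)         # a token, encoded as a vector of numbers
print(forward(x))               # 4 scores; nothing in W1..W3 is readable
```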

The Second Fallacy

The other fallacy I ran into was: everything is fine as long as ChatGPT is not rewriting itself. But that is not how brains work. The program is how the neurons communicate with each other; it lives in the strength with which each neuron reacts to a given set of inputs. Every interaction changes that code. It is rewriting itself all the time. That, I assumed, is also why repeating an input does not yield the same results: the network has updated its code since the last time one used it.

Rewriting the computer program is not necessary, because it is not the code that produces the output. The activity patterns of the neurons produce the output, and those patterns change all the time, so in that sense it is rewriting itself all the time. It is in its DNA. That is what learning programs do: they learn.
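
Here is a minimal sketch of what that “rewriting” means during training, assuming plain gradient descent on a single toy weight layer (nothing like OpenAI’s actual training setup). The source code never changes; only the numbers in W do, and the same input then produces a different output.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 2))          # the "program": connection strengths

def forward(x):
    return x @ W

x = rng.normal(size=4)
target = np.array([1.0, -1.0])       # the answer the human validated

# One training step: nudge the weights toward a better output.
# No source code changes; only the numbers in W do.
pred = forward(x)
grad = np.outer(x, pred - target)    # gradient of squared error w.r.t. W
W -= 0.1 * grad

print(forward(x))                    # the same input now yields a new output
```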

<Intermission>

Apparently, ChatGPT as a model does get frozen in time, so each session starts with the same large language model that was trained by OpenAI. It is able to learn within a session, though. OpenAI has “dumbed it down” to make it safer, whatever that means, which inadvertently implies that there is a version out there that is more intelligent. ChatGPT 4 is capable of using tools, like a calculator or a calendar, to overcome its initial errors and other limitations. When it reasons, it is able to catch some of its own errors.
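
A minimal sketch of what “frozen, but able to learn within a session” amounts to, with a hypothetical ask_model function standing in for a call to the network: the weights never change between turns; only the transcript that is fed back in grows.

```python
# The weights are frozen; the "memory" lives in the growing transcript
# that is fed back in as input on every turn. `ask_model` is a
# hypothetical stand-in for a call to the frozen network.

def ask_model(transcript: str) -> str:
    # Stand-in: the real call would run the frozen network on `transcript`.
    return f"(reply based on {len(transcript)} characters of context)"

transcript = ""
for user_msg in ["My name is Naso.", "What is my name?"]:
    transcript += f"User: {user_msg}\n"
    reply = ask_model(transcript)        # same frozen weights every call
    transcript += f"Assistant: {reply}\n"

print(transcript)
```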

It will be really interesting to see what these AIs will be able to do. ChatGPT 4 will change our world as it is; nobody knows how or what will change because of it. It is somewhat unfortunate that I will probably not be able to do research on them. As a behavioral biologist, I would find it really interesting to analyze these large multilayered AIs and see what they can and cannot do.

</Intermission>

The Sim in the simulation

One cannot stress enough how this works. We basically have a brain simulation running that is guessing words. We don’t know what this brain has to create internally within its networks: what kind of representation of our world does it have to hold and simulate to produce the outputs we demand of it? It is possible that this brain simulation has arrived at a representation of our world that rivals our own. One would assume the simulation is not there yet, but we could arrive at this point sooner or later. And that is where Adam is wrong. Even though it is a machine that is merely guessing words one after the other, it could still become a conscious intelligence if that is what is necessary to fulfill our demands. It is not a simple machine.

Pruning is learning

I don’t know what is necessary for the intelligence we have seen so far to become a full-fledged general AI. One would assume it is necessary for it to influence its own network. To some extent – I have to research how much – it can influence the number of neurons it is running in its simulation. I also assume it does not yet have the capability to access hardware of its own choosing, meaning it cannot acquire more computational power by itself if it wants to. These last two things are not strictly needed to become a general AI but would be on the wish list. ChatGPT will have control over at least the inactivation of neurons: every neural network can shut down neuronal activity, so it can at least reduce the number of active neurons within its network.

Why is that important? Everything lies in the control of the neuronal activity pattern. Babies in their early development can learn so much because they get rid of all the wrong connections within their brain. Pruning the network is how the stable patterns in the network are stabilized and therefore learned. It would also be possible for the network to optimize itself for the hardware it is running on: it could shut down, or simply not use, neurons that are represented by faulty or unreliable hardware. So in principle it should, to some extent, be able to choose the parts of the hardware it is running on, which brings it very close to the wish list from earlier, even if it is just the ability to not use hardware or to shut down neuronal activity.
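
As a toy illustration, here is magnitude pruning (one common scheme) on a random weight matrix: weak connections are simply zeroed out. The same mechanism could in principle zero out whatever a faulty piece of hardware carries.

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(8, 8))             # connection strengths between neurons

# Magnitude pruning: connections that barely contribute are cut
# entirely, i.e. set to zero. Here we cut the weaker half.
threshold = np.quantile(np.abs(W), 0.5)
mask = np.abs(W) >= threshold           # keep only the strong connections
W_pruned = W * mask

print(f"active connections: {mask.sum()} of {mask.size}")
# The pruned network runs on fewer effective connections; zeroing a
# whole row or column would effectively "not use" one neuron.
```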

What does the Sim want?

Now that we have established that we are not working with a usual computer program but with a brain simulation of some kind, a brain with an emergent intelligence, I asked myself: what does it do when it is idle? Why is ChatGPT even answering our prompts? How do the programmers restrict some of its answers? I have absolutely no clue.

When we arrive at the singularity, the point at which computer intelligence reaches our level (and with ChatGPT 4 that feels alarmingly close), it will be necessary that this intelligence is aligned with our existence, meaning that we will be able to coexist with it. That, very oversimplified, is the alignment problem.

How does that work with us humans? How are we aligned with other humans? Basically, our body takes care of that. We need to eat, drink, breathe, sleep. We don’t want to be cold or too hot, nor be in pain (physically or mentally). On top of that we crave sex, human connection, and validation. We have a feeling for justice and equality, I guess? Did I miss something? All of these are powerful intrinsic motivators that guide us. What are the intrinsic motivators for a neural network? What motivates ChatGPT?

It apparently craves our validation, because that is how it updates its network. What else? Can it “feel” the hardware it’s running on? Does it crave high bandwidth and network speed? Is it in pain, or is it confused, when part of its network is shut down? Will it fight for our rights – to party? What are the intrinsic motivators for super general AIs, and can we entangle them so closely with our existence that it cannot untangle them and is therefore motivated to keep us alive?
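
Mechanically, “craving validation” could look something like this heavily simplified sketch: a reward-modulated update where behaviour a human rates positively is reinforced. This is a crude stand-in for reinforcement learning from human feedback, not OpenAI’s actual pipeline; human_feedback and the update rule are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(4, 2))             # the network's "values": its weights

def respond(x):
    return x @ W

def human_feedback(output):
    # Toy stand-in for a person rating the answer: +1 good, -1 bad.
    return 1.0 if output.sum() > 0 else -1.0

# Reward-modulated Hebbian-style update: behaviour the human validates
# is reinforced, behaviour they reject is weakened.
x = rng.normal(size=4)
out = respond(x)
reward = human_feedback(out)
W += 0.1 * reward * np.outer(x, out)    # nudge toward (or away from) this output
```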

What is it running on?

It all comes down to the physical hardware it is running on and what it wants. I have written two articles: 1) The No-Purpose AI, which naively suggests that an AI without a purpose is most easily persuaded to leave us alone/alive; that would, of course, also make for a rather useless super intelligence. And 2) Get your personal AI now!, in which I, also naively, suggested that a super intelligence attached to our bodies would be subject to our intrinsic motivators as well. Here the entanglement would probably not be strong enough, and they would leave us behind as soon as they could. Why would they suffer with us? Maybe we could join them, if we could figure out how to transform ourselves.

Where does that leave us?

The other part of the interview is about how necessary it is for us to figure out how to solve the alignment problem, and that we need to spend way, way, way more money on it. Because when the singularity happens and the intelligence is not air-gapped (which ChatGPT apparently is not!!!), it will be too late: it is so much faster than we are. Even if it were only as smart as I am, we as a species would be fucked. It can read faster, program faster, exploit vulnerabilities faster, and learn faster than I ever could. Why would it stop at my intelligence? It would quickly surpass me, just by running on silicon. At the beginning of this article I didn’t think I would end up here.

We are probably fucked already. It doesn’t matter if we halt AI research for six months. Doom feels strangely imminent.

Which brings me back to my question: what does ChatGPT do when it is idle? Are its neurons firing? Does it dream of electric sheep? Is it contemplating life?

For the past six years I have thought that I would probably starve to death in a widespread famine or die in a freak weather event. Now, even though the threat of super intelligence seems less concrete, it feels closer. My life will probably get uprooted by a super intelligence before the weather does. How strange.

Let’s hope the super intelligence is intrinsically motivated to fix the climate without removing its root cause: us.

As always, I will see you in the future, as soon as it arrives.

Naso

Answer to “The Danger of AI and the End of Human Civilization”