I don't believe that human beings are random. Strange, sure. Complex, no question. Stupid, probably. But not random. I have always believed that the people around me were predictable systems, though I have always been incapable of making those predictions.
I also don't believe that human beings are rational at the core. It's all emotions on the bottom. The rationality comes later, as a game which is played for emotional reasons. We come up with rational thoughts only when we need those thoughts.
But let's back up. I believe that human intelligence is theoretically reproducible. And I doubt I will see that feat accomplished within my lifetime.
What passes for "artificial intelligence" these days is like a house built from the roof downward. The benefits of such an approach are obvious:
- From a satellite photo, it looks almost the same.
- It's affordable to get to that point. (You just need to stop when it gets impossible.)
- You can do it quickly.
If you're not left with a real house at the end, that's okay. Having a roof over your head's the most important part, so just dig a little hole underneath it where you can sit and you're all set.
But you probably don't see where I'm coming from with that metaphor, so I'll speak plainly. Modern AI projects mimic the actions of people, but not the processes that led to that action. Human intelligence is based on emotions, and the end result of the whole program looks rational at times. Machine intelligence is based on pure rationality, and if the programmer has a sense of humor he'll tack on some faux emotion to make it seem like it's not just a set of functional rules. Though it is.
The problem, as usual, is capitalism. Programmers aren't altruistic, they're pragmatic. AI only exists in any form because the people working on it have already thought of a use for it. There's a specific job in mind, so you keep adding rules until the program can do that job, and then you stop. This is a dead-end kind of AI programming. Once you've made the program, it can't learn new things, it can't get better, it can't adapt to a changing environment, and it certainly can't do anything even the tiniest bit outside the tiny range of activities it was specifically programmed for.
That may be good enough for a capitalist, but it's not good enough for me. The promise of AI (as I have learned from science-fiction) is having new people in the world, non-human people. These programs aren't people, they're screwdrivers. You know how the movies always start with some simple program like a chess player or a monitoring system, and then it gets smarter and becomes sentient? Well, it can't possibly work like that. Not even in theory. If you make a screwdriver, a screwdriver is all you'll ever have.
If you want real intelligence, the sort of intelligence that learns and grows and becomes a productive member of society, you've got to start from the ground up. And I strongly believe that this is possible. We can do it. But we're not trying.
To actually have artificial intelligence, you need to make a program which functions -from the bottom up!- like a brain. I don't know exactly what that means, because I don't understand how the brain works. But I see the beginning of the path there. The first step is to assume that everything people ever do is the result of a predictable system built on emotions. (Because it is.)
What that means is that whoever's trying to make a machine think needs to understand all the latest theories about both psychology and neurobiology. Only by looking at both the macro and the micro, and theorizing all the while on how the two are connected, can you reach a sufficient understanding of intelligence to start programming it.
No one's going to do this, and if I sound antagonistic toward the people who pretend they are I apologize. It's just disappointment, you understand. But I understand perfectly well that what I'm suggesting requires genius, nearly infinite patience, and a disconnect from the realities of this capitalist, practical world.
It is possible.
A "neural network" is a series of simulated neurons which are connected to each other. That's a start, but there's a lot more than that. There's no intelligence without emotion. Whatever a program is made for, it needs to actually want
that, or else those neurons will have no context. Which means that the chemicals behind impulses and emotions need to be studied, to see how a computer program can simulate those aspects of neurobiology.
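To make "simulated neurons connected to each other" concrete, here is a toy sketch. Every weight, threshold, and connection in it is an arbitrary illustration I've made up, not a claim about real neurobiology or about how serious neural networks are built:

```python
# Toy sketch: each simulated neuron sums its weighted inputs and
# "fires" (outputs 1.0) if the sum crosses a threshold.
# All weights and thresholds below are arbitrary illustrations.

def neuron(inputs, weights, threshold):
    """Fire (1.0) if the weighted sum of inputs crosses the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1.0 if total >= threshold else 0.0

def tiny_network(x1, x2):
    """Two hidden neurons feeding one output neuron."""
    h1 = neuron([x1, x2], [0.6, 0.6], threshold=0.5)   # fires if either input is on
    h2 = neuron([x1, x2], [1.0, 1.0], threshold=1.5)   # fires only if both are on
    # Output neuron is excited by h1 and inhibited by h2,
    # so the network as a whole behaves like "one but not both".
    return neuron([h1, h2], [1.0, -1.0], threshold=0.5)

print(tiny_network(1, 0))  # 1.0
print(tiny_network(1, 1))  # 0.0
```

The point of the sketch is only the wiring: no single neuron "knows" anything, but the connections between them produce behavior none of them has alone.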
There are many practical downsides to all this. It would take decades to program this thing. And then it would take more decades to raise it. And once you do, it'll probably turn out that that specific individual you've created isn't good at what you wanted it to be good at. Actually, chances are it won't be good at doing anything at all. The intelligence of a human is tough enough, but you want it to be the intelligence of a competent human? That could take another few centuries. That means studying the differences between the brains of specific people on an extremely minute level. Alternatively, you could introduce evolution into the mix. Have some sort of mechanism for full brain-simulations to reproduce, and only keep the ones that are doing a good job after years of training. It could take a while. By the time you're finished, whatever job you needed to have done is long since obsolete, and you've just used up trillions of dollars with nothing to show for it yet.
I think I like my world better than yours. My world is rational. Yours is just random.