AI Can Now Self-Reproduce - Should Humans Be Worried (YouTube Content)

== Transcript ==
 
There are a bunch of questions next to or adjacent to general Artificial Intelligence that have not raised enough alarm because, in fact, there's a crowding out of mind share. I think that we don't really appreciate how rare the concept of selection is in the machines and creations that we make. So in general, if I have two cars in the driveway, I don't worry that if the moon is in the right place in the sky and the mood is just right there will be a third car at a later point, because in general I have to go to a factory to get a new car. I don't have a reproductive system built into my sedan. Now almost all of the other physiological systems (what are there, perhaps 11?) have a mirror in the machines we build.
 
My car has a brain, so it's got a neurological system. It's got a skeletal system in its steel, but it lacks a reproductive system. So you could ask the question: are humans capable of making any machines that are truly self-replicating? And the fact of the matter is that it's very tough to do at the atomic layer, but there is a command in many programming languages called spawn, and spawn can effectively create daughter programs from a running program.
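As a minimal sketch of that idea, assuming Python as the language: the script below launches daughter copies of itself as new processes. The GENERATION environment variable, the generation cap, and the use of subprocess are illustrative choices, not anything named in the talk.

<syntaxhighlight lang="python">
import os
import subprocess
import sys

# Minimal sketch: a program that spawns daughter copies of itself.
# MAX_GENERATIONS is an illustrative safeguard so the chain of
# daughters stops instead of replicating indefinitely.
MAX_GENERATIONS = 3

def main():
    generation = int(os.environ.get("GENERATION", "0"))
    print(f"generation {generation}, pid {os.getpid()}")
    if generation < MAX_GENERATIONS:
        child_env = dict(os.environ, GENERATION=str(generation + 1))
        # launch this same script again as a daughter process
        subprocess.run([sys.executable, __file__], env=child_env, check=True)

if __name__ == "__main__":
    main()
</syntaxhighlight>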
 
Now as soon as you have the ability to reproduce, you have the possibility that systems of selective pressures can act, because the abstraction of life will be just as easily handled whether it's based in our nucleotides, in our A's, C's, T's, and G's, or whether it's based in our bits and our computer programs. So one of the great dangers is that what we will end up doing is creating artificial life, allowing systems of selective pressures to act on it, and finding that we have been evolving computer programs that we may have no easy ability to terminate, even if they're not fully intelligent.
 
Further, if we look to natural selection and sexual selection in the biological world, we find some very strange systems: plants or animals with no mature brain to speak of effectively outsmart species that do have a brain by hijacking the victim species' brain to serve the non-thinking species. So, for example, I'm very partial to the mirror orchid, which is an orchid whose bottom petal typically resembles the female of a pollinator species. And because the male of that pollinator species detects a sexual possibility, the flower does not need to give up costly, energy-rich nectar in order to attract the pollinator. And so if the plant can fool the pollinator into attempting to mate with this pseudo-female in the form of its bottom petal, it can effectively reproduce without having to offer a treat or a gift to the pollinator but, in fact, parasitizes its energy. Now how is it able to do this? Because if a pollinator is fooled, then that plant is rewarded. So the plant is actually using the brain of the pollinator species, let's say a wasp or a bee, to improve the wax replica, if you will, which it uses to seduce the males.
 
That which is being fooled is the more neurologically advanced of the two species. And so what I've talked about, somewhat controversially, is what I call Artificial Out-telligence, where instead of actually having an artificially intelligent species you can imagine a dumb computer program that uses that reward, through, let's say, genetic algorithms and selection within a computer framework, to increasingly parasitize fully intelligent humans using better and better lures.
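As a rough illustration of that selection loop, here is a minimal sketch, again assuming Python. The TARGET string, the reward function, and all the parameters are hypothetical stand-ins for whatever signal a fooled observer actually responds to; the point is only that the program improves its lure without understanding it.

<syntaxhighlight lang="python">
import random

# Minimal sketch of selection acting on digital "lures".
# TARGET stands in for whatever the fooled observer responds to;
# the program never needs to understand it, only to be rewarded for it.
TARGET = "attractive signal"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "
POP_SIZE = 50

def reward(lure: str) -> int:
    # how strongly the observer responds: positions matching its preference
    return sum(a == b for a, b in zip(lure, TARGET))

def mutate(lure: str, rate: float = 0.1) -> str:
    # copy the lure with occasional random changes
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in lure)

# start from random noise and let the reward do the shaping
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(POP_SIZE)]

for generation in range(1000):
    best = max(population, key=reward)
    if reward(best) == len(TARGET):
        break
    # the best lure reproduces with mutation; the rest are discarded
    population = [mutate(best) for _ in range(POP_SIZE)]

print(f"generation {generation}: {best!r} (reward {reward(best)})")
</syntaxhighlight>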
 
And in the case of Artificial Intelligence I don't think we're there yet. But in the case of Artificial Out-telligence, I can't find anything that's missing from the equation. So we have self-modifying code. You have Bitcoin and blockchains, so you could have a reward structure. And there's nothing that I see that keeps us from creating it.
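For the self-modifying ingredient, a minimal sketch under the same assumptions (Python; the LURE_STRENGTH parameter and the output filename are purely illustrative): a script that writes a daughter copy of its own source with one parameter mutated, which is the raw material the selection loop above could act on.

<syntaxhighlight lang="python">
import random
import re

# Minimal sketch of "self-modifying code": a script that writes a daughter
# copy of its own source with one numeric parameter mutated.
LURE_STRENGTH = 5

def write_mutated_copy(path: str) -> None:
    with open(__file__, encoding="utf-8") as f:
        source = f.read()
    mutated_value = max(0, LURE_STRENGTH + random.choice([-1, 1]))
    # rewrite the parameter assignment in the copied source
    new_source = re.sub(r"^LURE_STRENGTH = \d+",
                        f"LURE_STRENGTH = {mutated_value}",
                        source, count=1, flags=re.MULTILINE)
    with open(path, "w", encoding="utf-8") as f:
        f.write(new_source)

if __name__ == "__main__":
    print(f"lure strength: {LURE_STRENGTH}")
    write_mutated_copy("daughter_variant.py")
</syntaxhighlight>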
Now that's such a strange and quixotic possibility. In this framework I don't see an existential risk, so my friends who worry about machine intelligence being a terminal invention for the human species probably don't need to be worried.
 
But I think that there's a lot of exotica around Artificial Intelligence which hasn't been explored and which I think is much closer to fruition. Perhaps that's good. Maybe it's a warning shot, so that, just as we woke up to Bitcoin as digital gold, we may wake up to a precursor to artificial general intelligence which alerts us to the fact that we should probably be devoting more energy to this absolutely crazy-sounding future problem which no humans have ever encountered.


[[Category:Eric Weinstein Content]]