AI Can Now Self-Reproduce - Should Humans Be Worried (YouTube Content)

That which is being fooled is the more neurologically advanced of the two species. And so what I've talked about, somewhat controversially, is what I call Artificial Out-telligence: where instead of actually having an artificially intelligent species, you can imagine a dumb computer program that uses a reward, through, let's say, genetic algorithms and selection within a computer framework, to increasingly parasitize fully intelligent humans using better and better lures.


And in the case of Artificial Intelligence, I don't think we're there yet. But in the case of [[Artificial Outelligence]], I can't find anything that's missing from the equation. So we have self-modifying code. You have Bitcoin, so you could have a reward structure, and blockchains. And there's nothing that I see that keeps us from creating it.
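
The selection loop being described can be made concrete with a toy sketch. The following is a minimal, illustrative genetic-algorithm skeleton and not anything from the talk: candidate "lures" are mutated and kept or discarded purely according to an external reward signal (standing in here for human engagement or a Bitcoin payment), so the program needs no intelligence of its own. All function names and the placeholder reward are assumptions made for illustration.

<syntaxhighlight lang="python">
import random
import string

# Minimal sketch of the selection loop described above (all names are
# illustrative assumptions, not an existing system): a population of "lures"
# is mutated and selected purely on an external reward signal, with no
# intelligence inside the program itself.

POPULATION_SIZE = 50
MUTATION_RATE = 0.1
GENERATIONS = 100
ALPHABET = string.ascii_lowercase + " "


def random_lure(length=20):
    """Start from meaningless strings; any structure must be selected for."""
    return "".join(random.choice(ALPHABET) for _ in range(length))


def mutate(lure):
    """Randomly perturb characters, analogous to mutation in a genetic algorithm."""
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
        for c in lure
    )


def reward(lure):
    """Stand-in for the external reward (e.g. engagement or a Bitcoin payment).

    Here it is a toy placeholder: how closely the lure matches a fixed phrase.
    In the scenario described, this signal would come from human responses,
    not from anything the program understands.
    """
    target = "click here for money"
    return sum(a == b for a, b in zip(lure, target))


def evolve():
    population = [random_lure() for _ in range(POPULATION_SIZE)]
    for generation in range(GENERATIONS):
        # Rank lures purely by the reward they obtained.
        population.sort(key=reward, reverse=True)
        survivors = population[: POPULATION_SIZE // 2]
        # Refill the population with mutated copies of the best performers.
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(POPULATION_SIZE - len(survivors))
        ]
    return max(population, key=reward)


if __name__ == "__main__":
    print(evolve())
</syntaxhighlight>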
 
Now that’s such a strange and quixotic possibility. Now in this framework I don't see an existential risk, so my friends who worry about machine intelligence being a terminal invention for the human species probably don't need to be worried.

