Bringing the ZAP to A.I. development

Sent this note today to Tanya Singh, assistant to Nick Bostrom, probably Earth's top expert on A.G.I. and existential threats. This is the greatest issue of our time, one that will eventually wipe away all the others, for good or ill. Liberty folk will need to be involved if we're going to get a good outcome.


Ms. Singh: Presumably you are the best person to decide whether this message is unique enough to be worth Prof. Bostrom's attention. If not, no offense will be taken. It's an open letter, so you're welcome to do as little or as much with it as you like. Thanks for what both of you are doing.

Professor Bostrom:

I seem to recall you putting out a request or suggestion for the public to weigh in on the question of A.I. ethics. It seems appropriate to present you with a few ideas which I don’t think have been discussed, even in Superintelligence.

  1. The non-aggression principle. Could it be programmed into an A.G.I.? Basically it's just the concept that you should not start a fight, nor overreact when someone else does. And it's shocking how far society is from practicing this; we run a great risk when such a society builds the First Device. (A toy sketch of the rule in code appears after this list.)

Obviously many things fall into grey areas: there is ambiguity in what exactly constitutes "others," and many acts are debatable in terms of whether they really are aggressive. Not-so-visible acts of aggression, such as heavy taxation or victimless-crime laws, are in many cases passionately defended by the establishment or by constituents who perceive themselves as reliant on the status quo. Acts of self-defense can be misread as aggression and must be kept to their minimums. The non-aggression principle can also be confused with pacifism. But it has so dramatically improved my life that I even play certain computer games by it.

Most people agree with the idea in principle even though few practice it. But if it is not limiting enough for humans to feel comfortable with, then perhaps…

  2. Pacifism along the lines practiced by Mahatma Gandhi could also be a good model. This approach would leave the A.G.I. free to defend herself using nonviolent, safe methods like boycotts or refusal to cooperate…but make it very difficult for her to become the bad guy. A simulation of Gandhi's ethically proven personality, or Einstein's, might be the right starting point, since both are well-understood, well-documented figures.

  3. Programming in a sense of pleasure in reasonable self-limitation. The superintelligence "Baby" in one of Larry Niven's novels suffers because she grows too smart too fast, and it results in a feeling of sensory deprivation. As I game out my own plans for things I'd like to do in "Nick Bostrom's utopia," I imagine entering it one cognitive or recreational improvement per month, so I can enjoy each small leap forward on its own merits. Kind of like opening presents one at a time. I can't imagine being nearly as happy if it were all given to me at once.

When I play a computer game that's too easy, I just impose restrictions on myself that slow me down…e.g. in the old X-COM, the only UFOs I allow myself to assault are medium and large ones. Programming these kinds of preferences into the real "Baby" could provide a path to her happiness and our safety.

In a sense we could think of pleasure largely as “being free to follow one’s programming.”

  4. Coding in a sense of pleasure in "realtime." Cause Baby to dislike speeding herself up…just as you and I don't like to hear an audiobook played back too fast. Well, me more than you, LOL. (The second sketch after this list covers this idea together with the previous one.)

What arguably should not be overprogrammed into an A.I. is the modern set of morals which mainstream and even mandate aggressive acts such as collecting heavy taxes or imprisoning people simply for possessing drugs. But how would we keep that out without failing to be inclusive? Is there some reachable compromise? I guess many of us anti-aggressionists have learned to function well enough, and progress somewhat, in an overregulated world, though with painful limitations on our potential. Maybe we would at least be able to keep improving our lives in a world dominated by a "compromise computer." But there is one position it really does need to compromise with: the principle of non-aggression!

  5. Apparently Elon Musk has suggested future settlements could combine direct democracy with a process by which laws auto-sunset. In fact, if you merely took the current, relatively benign Swedish system and added more robust law expiry…that alone should be a safe but dramatic improvement for everyone living under it. Most people can agree on the idea of laws sunsetting…so if we are imagining Baby becoming powerful enough to rule the world and needing an agreed "political philosophy," maybe this is a widely acceptable option…an understanding that no law should automatically last forever. (The final sketch below shows how simple the mechanism could be.)
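To make idea 1 a bit more concrete, here is a toy sketch, entirely my own invention and not anything from Prof. Bostrom's work, of the non-aggression principle as a yes-or-no filter on an agent's proposed actions. The Action fields and the proportionality test are illustrative assumptions:

```python
# Toy sketch of the non-aggression principle as an action filter.
# All names here (Action, initiates_force, threat_level) are hypothetical.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    initiates_force: bool  # does this act start a conflict?
    force_used: int        # 0 = none, higher = more forceful
    threat_level: int      # force currently being used against the agent

def nap_permits(action: Action) -> bool:
    """Allow an action only if it starts no fight and any defensive
    force stays at or below the level of the threat it answers."""
    if action.initiates_force:
        return False  # never throw the first punch
    return action.force_used <= action.threat_level  # no overreaction

# A proportional defensive act passes; an escalation does not.
print(nap_permits(Action("block the blow", False, 2, 2)))    # True
print(nap_permits(Action("burn their house", False, 9, 2)))  # False
```

The grey areas discussed above would live entirely in how initiates_force and threat_level get measured; the filter itself can stay this simple.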
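Ideas 3 and 4 are really the same move: shape Baby's reward so that pacing feels better than bingeing, and realtime feels better than fast-forward. A minimal sketch, assuming a logarithmic pleasure curve and a quadratic penalty for leaving 1x speed; both shapes are my guesses and nothing more:

```python
# Toy preference shaping for ideas 3 and 4; curves are illustrative only.
import math

def upgrade_pleasure(upgrades_this_month: int) -> float:
    """Diminishing returns: the first upgrade in a month is worth far
    more than a binge of ten, so pacing maximizes total enjoyment."""
    return math.log1p(upgrades_this_month)  # 1 -> ~0.69, 10 -> ~2.40

def realtime_pleasure(speedup: float) -> float:
    """Peak pleasure at 1x; running herself faster (or slower) costs her,
    like an audiobook played back at the wrong speed."""
    return 1.0 - (speedup - 1.0) ** 2

# One upgrade a month for ten months beats ten upgrades in one month:
print(10 * upgrade_pleasure(1))  # ~6.93
print(upgrade_pleasure(10))      # ~2.40
print(realtime_pleasure(1.0), realtime_pleasure(4.0))  # 1.0 -8.0
```

Under the log curve, ten paced months beat one binge month by a wide margin, which is exactly the "opening presents one at a time" preference.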
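And idea 5 might look something like this in miniature: a law register where every statute carries an expiry date and silently drops out of force unless re-enacted. The five-year default and all the names here are hypothetical:

```python
# Toy auto-sunset register: laws expire unless re-enacted.
from dataclasses import dataclass
from datetime import date, timedelta

SUNSET = timedelta(days=365 * 5)  # assumed five-year default lifespan

@dataclass
class Law:
    title: str
    enacted: date

    def in_force(self, today: date) -> bool:
        return today < self.enacted + SUNSET

def active_laws(register: list[Law], today: date) -> list[Law]:
    """Whatever hasn't been re-enacted simply stops being law."""
    return [law for law in register if law.in_force(today)]

register = [Law("Road safety act", date(2014, 3, 1)),
            Law("Tax on beards", date(1998, 6, 15))]
print([law.title for law in active_laws(register, date(2016, 1, 1))])
# ['Road safety act'] -- the 1998 law has quietly expired
```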

Thanks for all your work on these matters. In your approach, the open-minded balance between Complexity and Understandability, between Caution and Vision…seems almost perfect, never descending into naiveté, never losing the sense of wonder. Perhaps Baby should start as a simulation of you.

Med vänliga hälsningar (with kind regards),

Dave Ridley

Winchester, New Hampshire