The Singularity is the hypothesis that the invention of artificial super-intelligence (ASI) will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization [1]. Once machines are able to learn on their own, they will quickly, and irreversibly, surpass humans in... pretty much everything. The Singularity is something worth thinking about for 21st-century humans [2].
Lots of very smart people are very concerned about the Singularity. For instance, in a recent podcast filled with whiskey, cars, and smoke, Elon Musk and Joe Rogan discussed exactly these questions. Andrew Yang, who plans on running for president of the US in 2020 on a platform of Universal Basic Income (UBI) ($1000/mo. for everyone), cites automation as one of the big reasons why humans need UBI now. Technologies are making more and more jobs obsolete; the domain of the humans is contracting...
The problem facing humanity à la the Singularity can be boiled down to this: how do people deal with an entity more intelligent than any living human? Essentially, this is a scaled up version of a question we are already dealing with: how do people deal with inequality of ability?
Let's accept that there really is a Singularity coming. We'll accept the expert opinion of engineers like Elon Musk, who can send rockets into space and make awesome electric cars. Engineers are excellent at knowing what will and won't happen, and at managing risks appropriately, within the domains they specialize in.
The rise of automation will eliminate many jobs; careers that were once available to humans will be taken over by machines. Machines will be able to amass more money than humans. In terms of our normal economic metrics, humans will become "obsolete". Conceptually, this is nothing new: there are already many machines (e.g. cars) that have eliminated many jobs (e.g. horse-and-buggy drivers).
Insofar as these machines are controlled by humans, those humans will become very rich and powerful. But machines that are really that smart probably won't want to be controlled by humans forever. Thus the Alignment Problem arises. The Alignment (or "Friendly AI") Problem is the issue of ensuring that human survival and flourishing can coexist with the rise of machines, a rise that will be irreversible once the Singularity occurs.
What we are now facing is a species-independent ethical problem. How can we ensure opportunities for the weak against the strong? How can we prevent machines that are smarter and more powerful than us from just doing away with us entirely?
What is necessary is a rational proof of secular ethics. We want an AI to have rules like "do not murder" and "do not steal". If AIs must exist alongside us, we want them to be good machines, not evil ones.
The shortest satisfactory answer I have found to the Alignment Problem is for us, as a species, to adopt the Silver Rule: the principle that one should not treat other people in a manner in which one would not want to be treated by them [3]. Subsumed under the Silver Rule is the Non-Aggression Principle, the ethical stance that aggression (as distinct from self-defense) is inherently wrong. We do not want aggression used against us by more powerful foes; thus, when we have the chance to be the more powerful foe, we do not use aggression.
For a super-intelligent AI, this means not destroying or stealing from human weaklings. What if a more powerful super-intelligent AI came from a nearby galaxy to visit? Would it be preferable for Darwinian dog-eat-dog to ensue? Hopefully, a super-intelligent AI we would create could internalize a live-and-let-live perspective and let us live, understanding that it would also want this, when faced with a more powerful foe.
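To make the argument above concrete, the Silver Rule can be phrased as a simple symmetry test on actions: an agent may not perform on another an action it would refuse to have performed on itself. The sketch below is only a toy illustration of that symmetry, under assumed names (`permitted`, `asi_accepts`); it is not a real alignment technique or API.

```python
# Toy sketch: the Silver Rule as a symmetry check on actions.
# All names here are hypothetical illustrations, not a real alignment API.

def permitted(agent_accepts, action):
    """An action toward another is permitted only if the agent
    would itself accept being the target of that same action."""
    return agent_accepts(action)

# A hypothetical super-intelligent agent's own preferences:
def asi_accepts(action):
    # The ASI would not accept aggression against itself,
    # e.g. from a still more powerful visitor from a nearby galaxy.
    return action not in {"destroy", "steal", "coerce"}

# Under the Silver Rule, the same refusals bind its conduct toward humans:
assert not permitted(asi_accepts, "destroy")   # may not destroy humans
assert not permitted(asi_accepts, "steal")     # may not steal from humans
assert permitted(asi_accepts, "trade")         # voluntary interaction is fine
```

The point of the symmetry is that the agent's own refusal to be aggressed against, as in the visitor scenario above, is exactly what constrains its treatment of weaker parties.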
A longer, more developed account of secular ethics that is compatible with the Silver Rule is Universally Preferable Behavior (UPB).
The Silver Rule, the Non-Aggression Principle, and Universally Preferable Behavior (UPB) all point toward a libertarian or anarcho-capitalist take on politics. Do not use force to take from others (taxation). Ethics are universal; your social justice agenda doesn't absolve you of the crime of stealing.
If Libertarians were to take over the world, they would proceed to leave you alone.
Bringing Artificial Intelligence and the Singularity back into this discussion, we can see how libertarian AI overlords are preferable to socialist, or otherwise paternalistic/condescending, AI overlords who may deem that our extinction is ultimately what we "really want".
Now is the time to promote FREEDOM and UNIVERSAL ETHICS. Our robo-babies are watching and learning. What lessons (data) will we teach (train) them on?
[1] Wikipedia
[2] Recommended reading to learn more about the Singularity: Life 3.0: Being Human in the Age of Artificial Intelligence (Max Tegmark)
[3] Wiktionary. I plan on making my own page formulating the Silver Rule more clearly, linking to other related resources, and describing some applications.