Thoughts on Privacy Rights, Philosophy, and Robot Love

If total transparency of communication and metadata were handled by magically benevolent angels of justice, it would be the best thing for society.

But when that formula is poisoned in any way, it breeds danger, fear, and possibly repression, and becomes malevolent to the innocently observed.

This will be an incredibly fine line for government to walk. In fact, given the lobbied interests represented today, I don’t see how this can function in a truthful manner. I don’t see how any member of national leadership could honestly disagree with this statement.

Surrendering privacy is not viable at this point in history for that very reason.

Here lies the problem with the effort to detect malicious communication: its engineering will take place regardless of benevolence. Ingenuity and superiority in the field of global communication are as unstoppable as the search for immortality, and the two are directly intertwined. It will be pursued by evil as quickly as by good.

The benevolent must interject themselves into the process, and that takes exactly the counterintuitive measure we see our governments employing today, one that is unfortunately conflicted by interests. Good intentions are corrupted by other interests of varying quality in terms of peaceful unity.

How do you deploy dogs on your oppressors without creating dogs that kill with a wide range of indiscriminance*? It’s a difficult topic.

I don’t know how to solve this problem, unless we can come up with a way to unify all the benevolent people without interests in conquest or vengeance.

How could a government become this?
How could a business become this?
How could the average human become truly benevolent?

We must, by law, and with temperance, empower the good people without subjecting them to fear of their privacy. This is what we stand for in the U.S.

Good people need to pay attention to how this evolves and provide a guiding voice on the matter. Some of the greatest minds of our time have wrestled with it. Isaac Asimov dealt with it in terms of programming the laws of benevolent robots in the Foundation series. (Spoiler: in the series you find out that robots in space have been gently watching over humans for a very long time.)

The Laws of Robotics, as originally stated, read:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
If you substitute the concept of a sentient robot (forgive me while I get weird and nerdy here) for government, these laws might metaphorically represent the intent of our Constitution.

Asimov later introduced the concept of the Zeroth Law, which changed things a bit. Sort of like how the Homeland Security Act changed our constitutional rights.

A robot may not harm a human being, unless he finds a way to prove that ultimately the harm done would benefit humanity in general!
– The Caves of Steel, Asimov, 1953

The problem lies in accurately determining what benefits humanity in general in the long term.

Trevize frowned. “How do you decide what is injurious, or not injurious, to humanity as a whole?”

“Precisely, sir,” said Daneel. “In theory, the Zeroth Law was the answer to our problems. In practice, we could never decide. A human being is a concrete object. Injury to a person can be estimated and judged. Humanity is an abstraction.”

— Foundation and Earth, Asimov (via Wikipedia)

*yeah I know it’s not a word. Tough noogies. It is now.