Ethicode™ Artificial Conscience: “AC-1”
The Ethicode Artificial Conscience program is a system whereby humans worldwide can collectively program, via the internet, the world’s first artificial intelligence (AI) system for use in making ethical decisions: an “artificial conscience” (AC). The Ethicode AC—“AC-1”—enables machines to make ethical decisions in a manner that reflects the accumulated, collective wisdom of millennia of human learning.
Portions of the Ethicode project, designed in January 1999, were incorporated into the Universal Electronic Transaction system in 2003–2004 and successfully demonstrated through the Jatalla search engine prototype in 2006. Additional features will be published when the need for a viable AC gains greater recognition and acceptance.
The Ethicode Artificial Conscience works by querying users to make relative evaluations—for instance, which is more valuable, a living animal or an inanimate object?—and uncomfortable moral decisions, uncomfortable in that neither choice is “good.” Thus Ethicode forces users to choose between killing a person and killing an animal; killing an old person and killing a young person; killing one person and killing two people; killing a person and hurting their feelings; lying to someone and hurting their feelings; and so on.
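The binary-dilemma mechanism described above can be sketched as a simple vote tally. This is a minimal, hypothetical illustration—the dilemma IDs, function names, and data layout are assumptions, not the actual Ethicode design:

```python
from collections import Counter

# Illustrative sketch: each dilemma pairs two undesirable outcomes,
# and each user vote picks the lesser evil. Names are hypothetical.
DILEMMAS = {
    "d1": ("kill a person", "kill an animal"),
    "d2": ("lie to someone", "hurt their feelings"),
}

votes = {did: Counter() for did in DILEMMAS}

def record_vote(dilemma_id, choice):
    """Record one user's choice (0 or 1) for a dilemma."""
    votes[dilemma_id][choice] += 1

def consensus(dilemma_id):
    """Return the majority option and its share of the total vote."""
    tally = votes[dilemma_id]
    option, count = tally.most_common(1)[0]
    return DILEMMAS[dilemma_id][option], count / sum(tally.values())

record_vote("d2", 1)
record_vote("d2", 1)
record_vote("d2", 0)
print(consensus("d2"))  # ('hurt their feelings', 0.666...)
```

The share returned by `consensus` is what would distinguish a bare majority from a super-majority or a 5-to-1 majority, as discussed below.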
The Ethicode also tracks “high performers”: people who show high levels of emotional intelligence and whose ethical decisions are virtually always in accord with those of the majority, super-majority, or even 5-to-1 majority. These high performers are then included in queries that are not binary—queries involving more subtlety, multiple levels of degree, layers of consequences, and so on.
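The high-performer tracking could work roughly as follows—a hypothetical sketch in which a user qualifies when their answers agree with the eventual majority at least some threshold fraction of the time (a 5-to-1 majority corresponds to a ~83% threshold); all names and data are illustrative:

```python
def agreement_rate(user_answers, majority_answers):
    """Fraction of dilemmas on which the user matched the majority."""
    shared = set(user_answers) & set(majority_answers)
    if not shared:
        return 0.0
    matches = sum(user_answers[d] == majority_answers[d] for d in shared)
    return matches / len(shared)

def high_performers(all_answers, majority_answers, threshold=5/6):
    """Users whose agreement rate meets a 5-to-1 (~83%) threshold."""
    return [user for user, answers in all_answers.items()
            if agreement_rate(answers, majority_answers) >= threshold]

majority = {"d1": 0, "d2": 1, "d3": 0, "d4": 1, "d5": 0, "d6": 1}
users = {
    "alice": {"d1": 0, "d2": 1, "d3": 0, "d4": 1, "d5": 0, "d6": 1},  # 6/6
    "bob":   {"d1": 1, "d2": 0, "d3": 0, "d4": 1, "d5": 0, "d6": 1},  # 4/6
}
print(high_performers(users, majority))  # ['alice']
```

In this sketch, only users who clear the threshold would be routed the non-binary, multi-level queries.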
The Ethicode questions themselves are also drawn from user-generated data via two modules: the Propositions module, which includes user-submitted statements and evaluations of the authority of those statements, and the Taxonomy module, a global taxonomy of all human-known or human-assigned sets—for example, “dog” and “cat” are both subsets of “mammal” but do not intersect each other.
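A taxonomy like the one just described can be represented as a parent hierarchy supporting subset and intersection queries. This is a minimal sketch under assumed names and structure, not the actual Taxonomy module:

```python
# Illustrative parent hierarchy: each term maps to its immediate superset.
parents = {"dog": "mammal", "cat": "mammal", "mammal": "animal"}

def ancestors(term):
    """All supersets of a term, following the parent chain upward."""
    chain = []
    while term in parents:
        term = parents[term]
        chain.append(term)
    return chain

def is_subset(a, b):
    """True if every member of set a is a member of set b."""
    return b in ancestors(a)

def intersects(a, b):
    """In a strict tree taxonomy, terms intersect only if one contains the other."""
    return a == b or is_subset(a, b) or is_subset(b, a)

print(is_subset("dog", "mammal"))  # True
print(intersects("dog", "cat"))    # False
```

The disjointness of sibling categories (“dog” vs. “cat”) falls out of the tree structure, since neither contains the other.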
Update: Currently—while less so than in 1999—this project still comes off as far too “futuristic” for most people. But the success of Wikipedia during the interim has effectively eliminated one of the original concerns, namely, whether people would be willing to collectively generate, edit, and maintain a globally accessible, comprehensive reference resource—and to do so for free. That point has now been proven in general.
What remains to be seen is whether—or, rather, when—the need for enabling machines to make ethical decisions will become apparent. Let’s hope that time comes soon...
Update 10/28/18: Since the above comments were made, the “self-driving car” craze has generated wide-scale recognition that ethical decisions will indeed have to be made by machines. MIT has apparently conducted a poll that serves as a sort of proof-of-concept for the Ethicode, as their poll takes a very similar approach. MIT claims to have had more than 2 million participants, which demonstrates that people are willing not only to populate a database generally but also to do so specifically in the context of ethical decisions.