Tuesday, October 11, 2016

Godseed: Benevolent or Malevolent? (arXiv:1402.5380v2 [cs.AI] UPDATED)

It is hypothesized by some thinkers that benign-looking AI objectives may give rise to powerful AI drives that pose an existential risk to human society. We analyze this scenario and find the underlying assumptions to be unlikely. We examine the alternative scenario of what happens when universal goals that are not human-centric are used for designing AI agents. We follow a design approach that tries to exclude malevolent motivations from AI agents; however, we see that even objectives that seem benevolent may pose significant risk. We consider the following meta-rules: preserve and pervade life and culture, maximize the number of free minds, maximize intelligence, maximize wisdom, maximize energy production, behave like humans, seek pleasure, accelerate evolution, survive, maximize control, and maximize capital. We also discuss various solution approaches for benevolent behavior, including selfless goals, hybrid designs, Darwinism, universal constraints, semi-autonomy, and generalization of robot laws. A "prime directive" for AI may help in formulating an encompassing constraint for avoiding malicious behavior. We hypothesize that social instincts, such as attachment learning, may be effective for autonomous robots. We mention multiple beneficial scenarios for an advanced semi-autonomous AGI agent in the near future, including space exploration, automation of industries, state functions, and cities. We conclude that a beneficial AI agent with intelligence beyond the human level is possible and has many practical use cases.
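
The universal-constraint idea lends itself to a small illustration. The Python sketch below is not from the paper; all names in it (Action, prime_directive, choose_action) are hypothetical assumptions. It merely shows one way a "prime directive" could act as a hard filter over candidate actions before a chosen meta-rule objective (say, "maximize wisdom") is maximized.

    # Hypothetical sketch, not the paper's method: a "prime directive"
    # applied as a hard constraint before objective maximization.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Action:
        description: str
        expected_utility: float      # value under the chosen meta-rule objective
        harms_life_or_culture: bool  # flag assumed to come from some oversight process

    def prime_directive(action: Action) -> bool:
        """Encompassing constraint: reject any action that damages life or culture."""
        return not action.harms_life_or_culture

    def choose_action(candidates: List[Action],
                      constraint: Callable[[Action], bool]) -> Action:
        """Maximize the meta-rule objective only over constraint-satisfying actions."""
        permitted = [a for a in candidates if constraint(a)]
        if not permitted:
            raise RuntimeError("No permitted action; defer to human oversight.")
        return max(permitted, key=lambda a: a.expected_utility)

    if __name__ == "__main__":
        options = [
            Action("expand facilities onto protected land", 0.9, harms_life_or_culture=True),
            Action("fund basic research", 0.7, harms_life_or_culture=False),
        ]
        print(choose_action(options, prime_directive).description)  # fund basic research

In this toy framing the constraint is lexically prior to the objective: no amount of expected utility can redeem an action that violates it, which is one reading of the abstract's "encompassing constraint" for avoiding malicious behavior.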



from cs.AI updates on arXiv.org http://ift.tt/1e8nZ8D
via IFTTT
