Verification and validation of agentic behavior have been suggested as important research priorities in efforts to reduce risks associated with the creation of general artificial intelligence (Russell et al. 2015). In this paper we question the appropriateness of using the language of certainty with respect to efforts to manage that risk. We begin by establishing a very general formalism to characterize agentic behavior and to describe standards of acceptable behavior. We show that determining whether an agent meets any particular standard is not computable. We discuss the extent of the burden associated with verification by manual proof and by automated behavioral governance. We show that to ensure decidability of the behavioral standard itself, one must further limit the capabilities of the agent. We then demonstrate that if our concerns relate to outcomes in the physical world, attempts at validation are futile. Finally, we show that layered architectures aimed at making these challenges tractable mistakenly equate intentions with actions or outcomes, and so fail to provide any guarantees. We conclude with a discussion of why the language of certainty should be eradicated from the conversation about the safety of general artificial intelligence.
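The non-computability claim is in the spirit of the classical halting-problem diagonalization. The sketch below is purely illustrative and is not the paper's formalism: it assumes a hypothetical total procedure `meets_standard` that inspects an agent's source code and decides whether the agent satisfies some behavioral standard, and then shows how an agent with access to that procedure defeats it.

```python
# Illustrative sketch (assumed names, not the paper's construction):
# a halting-problem-style diagonalization suggesting why no total procedure
# can decide whether an arbitrary agent program meets a behavioral standard.

def meets_standard(agent_source: str) -> bool:
    """Hypothetical verifier: returns True iff the agent described by
    `agent_source` never performs a forbidden action. Assumed to halt
    on every input."""
    raise NotImplementedError  # the argument shows no such total procedure exists

# An adversarial agent that consults the verifier about its own source
# and then does the opposite of whatever the verifier predicts.
ADVERSARIAL_AGENT = """
def act():
    if meets_standard(ADVERSARIAL_AGENT):
        perform_forbidden_action()   # violate the standard iff judged safe
    # otherwise do nothing, i.e. trivially meet the standard
"""

# If meets_standard(ADVERSARIAL_AGENT) returned True, the agent would violate
# the standard; if it returned False, the agent would comply. Either answer
# is wrong, so the assumed verifier cannot exist for agents expressive enough
# to run (or simulate) the verifier on themselves.
```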
from cs.AI updates on arXiv.org http://ift.tt/1WmQbti
via IFTTT