December 30, 2019

274 words 2 mins read

Maintaining human control of artificial intelligence

Although not a universally held goal, maintaining human-centric artificial intelligence is necessary for society's long-term stability. Joanna Bryson discusses why this is so and explores both the technological and policy mechanisms by which it can be achieved.

Talk Title Maintaining human control of artificial intelligence
Speakers Joanna Bryson (University of Bath)
Conference O’Reilly Artificial Intelligence Conference
Conf Tag Put AI to Work
Location New York, New York
Date April 16-18, 2019
URL Talk Page
Slides Talk Slides

Although not a universally held goal, maintaining human-centric artificial intelligence is necessary for society's long-term stability. Fortunately, the legal and technological problems of maintaining control are fairly well understood and amenable to engineering. The real problem is establishing the social and political will to assign and maintain accountability for artifacts whenever those artifacts are generated or used. Joanna Bryson discusses the necessity and tractability of maintaining human control and explores both the technological and policy mechanisms by which it can be achieved.

What makes the problem most interesting—and most threatening—is that achieving consensus around such an approach requires at least some measure of agreement on broad existential concerns. But without clear accountability across the sector, AI will be used to facilitate fraud, with AI legal persons serving as the ultimate technology for both shell companies and bought votes. AI is not a newly discovered species; it is a software engineering technique that has, to date, often been implemented in ways woefully lacking in DevOps discipline. We can have and use extremely complex AI systems so long as we provide enough documentation, live testing, and ring-fencing to ensure that we can demonstrate due diligence and lack of liability.
