The open letter recently posted online by the Future of Life Institute (FLI) has met with unprecedented success among researchers. More than 5,000 people have already signed the letter, which highlights the dangers posed by the acceleration of research in artificial intelligence. The initiative might seem anecdotal were it not for the profile of the signatories. All the industry leaders in artificial intelligence answered the call: Google, Facebook, DeepMind, Vicarious, IBM. The most prestigious universities, such as MIT, Harvard, Yale, and Berkeley, joined them, along with a few emblematic figures like Elon Musk and Stephen Hawking, who had been among the first to warn of the dangers of runaway artificial intelligence research.
An initiative to steer artificial intelligence research away from the ‘dark side’
The scientific community's support for the initiative seems solid. The FLI, the Future of Life Institute, was created by Jaan Tallinn, the co-founder of Skype, together with researchers from MIT, Harvard, the University of Santa Cruz, and Boston University. The institute aims to stimulate research and initiatives that safeguard life, to use its official terminology, and to develop an optimistic vision of the future. Elon Musk recently donated $10 million to the institute to fund research programs in this direction. Its scientific advisory board counts some ten scientific luminaries, including Elon Musk and Stephen Hawking, as well as the actors Alan Alda and Morgan Freeman, well known in the United States for presenting popular-science television programs.
The FLI's first priority is the study of the economic impact of artificial intelligence. Which markets will be most strongly disrupted by intelligent agents? Financial markets, insurance, industry, and the labour market will no doubt experience major upheavals as many new tasks are automated. Similarly, at the level of governments, decision-making will be increasingly influenced by data and may one day even be delegated to agents. How will our societies evolve in an economy built on software agents?
A second research axis focuses on the legal and ethical implications of the rise of artificial intelligence in our daily lives. What laws should govern autonomous vehicles, what international regulations should apply to autonomous weapons, and how can we protect our privacy from software agents? These are questions that urgently need answers, because what were still only research topics a few years ago is materializing today.
Finally, the FLI will act on the research front itself. Beyond funding artificial intelligence research that improves humanity's lot, the FLI will also study ways to make artificial intelligence as safe as possible for humans. Researchers are working on methods to verify the behaviour of AI systems before they are fitted into cars or drones or deployed on the financial markets, as well as on ways to validate and certify them in order to guarantee our security.
A Skynet-style scenario is not inevitable, and the FLI will strive to ensure that artificial intelligence never one day turns against its creator.
Translation: Bing Translator
Source: “Research priorities for robust and beneficial artificial intelligence”, Future of Life Institute release, January 23, 2015