On 20 April 2018, the VCDNP, in collaboration with the International Institute for Peace (IIP), organized a seminar entitled "Rapidly Emerging Technologies: What are the Ethical and Legal Challenges?" that brought together experts in the fields of artificial intelligence (AI), machine learning and predictive applications. In the aftermath of the Facebook/Cambridge Analytica scandal, which revealed the political misuse of Big Data, many began to question how much further advanced technology could go and where the moral and ethical boundaries of its development lie. Can artificial intelligence improve the world in which we live? Do emerging technologies pose a threat to society, or do they offer viable solutions to current challenges?
Moderated by VCDNP Senior Fellow Angela Kane, the speakers Sean Legassick, Co-Lead of the Ethics and Society Group at DeepMind, and Jane Zavalishina, President and co-founder of Mechanica AI, discussed the possible impacts of rapidly emerging technologies on society, ethics and law. VCDNP Executive Director Laura Rockwood and IIP President Dr. Hannes Swoboda opened the seminar with comments on the current security challenges associated with the development of new technologies.
Mr. Legassick presented the main aims of his research team, DeepMind Ethics and Society, which brings together AI researchers with the social mission of making the world a better place and the goal of increasing problem-solving capacity in fields such as health care and climate change. Raising awareness among technologists of the ethical and social impact of their work is the team's major objective. He referred to the astonishing progress in AI research, citing by way of example the contribution AI made to reducing Google's data centre energy consumption by 40 percent. However, he cautioned that, in striving for further technological advancement, ethical and social questions regarding the impacts of AI implementation must not be ignored. Mr. Legassick stressed the importance of helping society to anticipate and control the effects of AI. In his view, we should all participate in addressing the key ethical challenges in areas such as privacy, transparency, fairness, governance and accountability.
Ms. Zavalishina opened with a rhetorical question: "What is moral?" She underlined the fact that AI is just an algorithm, a programme, not a super-mind, and that we should not expect it to evaluate the morality of its actions. Technology only produces what people want it to produce. Artificial intelligence inherits bias from humans, whose ethics there is no clear way to measure. A more balanced approach is required from both scholars and the general public. According to Ms. Zavalishina, a paranoid attitude towards AI could be counterproductive: emerging technologies could provide solutions to many challenges, so we should not stigmatize their application. She posed the further question of whether declining to implement AI might itself be even more immoral and unethical.
During the question-and-answer period, an engaged audience inquired about the possibility of losing human intellectual capacity as a result of excessive use of artificial intelligence. Both speakers expressed optimistic views regarding AI implementation. Ms. Zavalishina stressed that AI is a knowledge-based model and that the human factor in AI will therefore always be necessary. Mr. Legassick likewise noted that emerging technologies can serve as a tool for maintaining human capacity rather than a threat to it. Questions were also raised about the use of AI in autonomous weapons and the possibility of unintended consequences connected to the use of emerging technologies. Mr. Legassick emphasized that, in his opinion, machines should not be given any capacity to identify targets, at least not human ones.