IIIM News

News about the institution, its research & recent advances

Civilian AI Policy

We are proud to announce our brand-new Civilian AI Ethics Policy for Peaceful R&D. The policy takes aim at two major threats to societal prosperity and peace. On the one hand, military spending continues to increase throughout the world, including spending on automated weapons development. Justified by “growing terrorist threats”, these actions are themselves resulting in increased use of undue and unjustified force, military and otherwise, the very thing they are meant to suppress. On the other hand, it is gradually becoming too acceptable for governments to wield advanced technologies to spy on their law-abiding citizens in numerous ways, in many cases through documented efforts, sidestepping long-accepted public policy and human rights intended to protect private lives from public exposure. In the coming years and decades, artificial intelligence (AI) technologies, and powerful automation in general, have the potential to make these matters significantly worse.

In the U.S., a large part of research funding for AI has come from the military. After WWII, Japan took a clear-cut stance against military-oriented research in its universities, standing for over half a century as a shining example of how a whole nation could take the pacifist high road. Instead of more countries following its lead, the exact reverse is now happening: Japan is relaxing these constraints (1), while funding for military activities continues to grow in the U.S., China, and elsewhere. And were it not for the extremely brave actions of a single individual, Edward Snowden, we might still be in the dark about the NSA’s pervasive breach of the U.S. Constitution, trampling on civil rights that took centuries to establish.

In the past few years the ubiquity of AI systems, starting with Google’s powerful search engine, Apple’s Siri, and IBM’s question-answering system Watson, has resulted in increased interest in AI across the globe, and in increased funding for such technology in all its forms. We should expect a speedup, not a status quo or slowdown, in global advances in and adoption of AI technologies across all industries. It is becoming increasingly important for researchers and laboratories to take a stance on who is to benefit from their R&D efforts: just a few individuals, groups, and governments, or the people of planet Earth in general? This is what we are doing today. This is why our Ethics Policy for Peaceful R&D exists. As far as we know, no other R&D laboratory has initiated such a policy.

Reference

1. Eric Pfanner and Chieko Tsuneoka, “Japan Looks to End Taboo on Military Research at Universities: Government Wants to Tap Best Scientists to Bolster Defenses,” March 24, 2015.

IIIM’s AI Ethics Policy for Peaceful R&D

The Board of Directors of IIIM believes that the freedom of researchers to explore and uncover the principles of intelligence, automation, and autonomy, and to recast these as the mechanized runtime principles of man-made computing machinery, is a promising approach for producing advanced software with commercial and public applications, for solving numerous difficult challenges facing humanity, and for answering important questions about the nature of human thought.

A significant part of all past artificial intelligence (AI) research in the world is and has been funded by military authorities, or by funds assigned to various militaristic purposes, indicating its importance and applicability to military operations. A large portion of the world’s most advanced AI research is still supported by such funding, as opposed to projects directly and exclusively targeting peaceful civilian purposes. As a result, a large and disconcerting imbalance exists between AI research with a focus on hostile applications and AI research with an explicitly peaceful agenda. Increased funding for military research has a built-in potential to fuel a continual arms race; reducing this imbalance may lessen the chances of conflict due to international tension, distrust, unfriendly espionage, terrorism, undue use of military force, and unjust use of power.

Just as AI has the potential to enhance military operations, the utility of AI technology for enabling the perpetration of unlawful or generally undemocratic acts is unquestioned. While this threat is less obvious at present than the military use of AI and other advanced technologies, the falling cost of computing is likely to make highly advanced automation technology increasingly accessible to anyone who wants it. The potential for all technology of this kind to do harm is therefore increasing.

For these reasons, and as a result of IIIM’s sincere goal to focus its research on topics and challenges of obvious benefit to the general public, and for the betterment of society, human livelihood, and life on Earth, IIIM’s Board of Directors hereby states the Institute’s stance on such matters clearly and concisely, by establishing the following Ethics Policy for all current and future activities of IIIM:

1 – IIIM’s aim is to advance scientific understanding of the world, and to enable the application of this knowledge for the benefit and betterment of humankind.

2 – IIIM will not undertake any project or activity intended to (2a) cause bodily injury or severe emotional distress to any person, (2b) invade the personal privacy or violate the human rights of any person, as defined by the United Nations’ Universal Declaration of Human Rights, (2c) be applied to unlawful activities, or (2d) commit or prepare for any act of violence or war.

2.1 – IIIM will not participate in projects for which there exists any reasonable evidence of activities 2a, 2b, 2c, or 2d listed above, whether alone or in collaboration with governments, institutions, companies, organizations, individuals, or groups.

2.2 – IIIM will not accept military funding for its activities. ‘Military funding’ is defined as any and all funds designated to support the activities of governments, institutions, companies, organizations, and groups, explicitly intended for furthering a military agenda, or to prepare for or commit to any act of war.

2.3 – IIIM will not collaborate with any institution, company, group, or organization whose existence or operation is explicitly, whether in part or in whole, sponsored by military funding as described in 2.2 or controlled by military authorities. For civilian institutions with a history of undertaking military-funded projects, a 5-15 rule will be applied: if, over the past 5 years, 15% or more of their projects were sponsored by such funds, they will not be considered as IIIM collaborators.
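To make the 5-15 rule in clause 2.3 concrete, here is a minimal sketch in Python of how such a check might be expressed. It assumes eligibility is judged by counting an institution’s projects over the preceding five years; the function name, data layout, and that counting convention are our own illustrative assumptions, not part of the policy text.

from dataclasses import dataclass

LOOKBACK_YEARS = 5            # window stated in the 5-15 rule
MILITARY_SHARE_LIMIT = 0.15   # 15% threshold stated in the 5-15 rule

@dataclass
class Project:
    year: int                 # calendar year the project ran (assumed granularity)
    military_funded: bool     # True if sponsored by military funds per clause 2.2

def passes_5_15_rule(projects, current_year):
    """Return True if the institution would be eligible as an IIIM collaborator."""
    recent = [p for p in projects if current_year - p.year < LOOKBACK_YEARS]
    if not recent:
        return True  # no recent projects, so nothing weighs against the rule
    military_share = sum(p.military_funded for p in recent) / len(recent)
    return military_share < MILITARY_SHARE_LIMIT

Under this reading, an institution with 20 projects in the window, 3 of which were military-funded (15%), would not qualify, while one with 2 such projects out of 20 (10%) would.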

IIIM’s New Ethics Policy Aims for Peaceful Use of Artificial Intelligence

Reykjavik, Iceland, August 31st, 2015 – The Icelandic Institute of Intelligent Machines (IIIM) has formulated the first research policy that repudiates the development of technologies intended for military operations. The Ethics Policy for Peaceful R&D outlines founding principles for a strong stance against collaboration with any organization even partially funded by the military within the last five years. The policy has been several years in the making and has the support of 100% of the Institute’s employees and Board of Directors.

Fact or Fiction: The Perils of the Path to Artificial Intelligence

Developments and research within the field of artificial intelligence (AI) have been widely discussed in recent times, especially in the context of how these developments affect modern society and its foundations.

While participating in the news program Spegillinn on Icelandic radio, Dr. Kristinn R. Thórisson shared his views and expertise on AI, saying that Iceland is at the forefront of developing a machine capable of independent thinking.

Towards True AI: Artificial General Intelligence (Video from AI Festival 2014)

Dr. Kristinn R. Thórisson gave a talk on Artificial General Intelligence at IIIM’s and CADIA’s AI Festival, where he emphasised that truly intelligent systems will not be anything like the software we know today, for at least two reasons: today’s software cannot “figure stuff out for itself” and it has no “life of its own”, since, whenever its environment changes even slightly, it relies completely and utterly on its designers to make things right.