A Civilian A.I. Policy

We are proud to announce our new A.I. Policy for Peaceful R&D. The policy takes aim at two major threats to societal prosperity and peace. On the one hand, military spending continues to increase throughout the world, including spending on automated weapons development. Justified by “growing terrorist threats”, these programs are themselves resulting in increased undue and unjustified use of force, military and otherwise: the very thing they aim to suppress. On the other hand, it has gradually become too acceptable for governments to wield advanced technologies to spy on their law-abiding citizens in numerous ways, sidestepping long-accepted public policy intended to protect private lives from public exposure; in many cases such efforts are clearly documented. In the coming years and decades, artificial intelligence (AI) technologies, and powerful automation in general, have the potential to make these matters significantly worse.

In the US, a large part of research funding for AI has historically come from the military. Since WWII, Japan has taken a clear-cut stance against military-oriented research in its universities, standing for over half a century as a shining example of how a whole nation could take the pacifist high road. Instead of more countries following its lead, the exact reverse is now happening: Japan is relaxing these constraints [1], while funding for military activities continues to grow in the U.S., China, and elsewhere. And were it not for the actions of extremely brave individuals, we might still be in the dark about governments’ breaches of their countries’ constitutions, laws, and regulations, and their trampling on civil rights that took centuries to establish.

In the past few years, the ubiquity of AI systems such as OpenAI’s ChatGPT, DeepMind’s AlphaFold, and a variety of humanoid robots has increased public interest in AI across the globe. As a result, development of applied AI has sped up, and there is no reason to expect it to slow down globally, across all industries. However, not all applications of AI are for the betterment of humanity; it is becoming increasingly important for researchers and laboratories to take a stance on who is to benefit from their R&D efforts: is it the general population of planet Earth, just a handful of countries or companies, or perhaps only a few governments, institutions, or individuals? This is why we created our Ethics Policy for Peaceful R&D in 2015. As far as we know, it was the first ethics policy of its kind, and it may still hold a unique place in the world.

REFERENCE

[1] Eric Pfanner and Chieko Tsuneoka, “Japan Looks to End Taboo on Military Research at Universities — Government wants to tap best scientists to bolster defenses”, March 24, 2015.

IIIM’s Civilian A.I. Policy

UPDATED: Apr. 6, 2025

IIIM’s Board of Directors believes that the freedom of researchers to explore and uncover the principles of intelligence, automation, and autonomy, and to recast these as the mechanized runtime principles of man-made computing machinery, is a promising approach for producing advanced software with commercial and public applications, for solving numerous difficult challenges facing humanity, and for answering important questions about the nature of human thought.

While applied research for civilian use has increased greatly in the past decade, a notable part of applied artificial intelligence (AI) research in the world is still funded by military authorities or by funds assigned to various militaristic purposes. A significant portion of the world’s basic AI research is likewise supported by such funding, as opposed to projects directly and exclusively targeting peaceful civilian purposes. A large and disconcerting imbalance therefore exists between advanced AI research focused on hostile applications and AI research with an explicitly peaceful agenda. Increased funding for military research has a built-in potential to fuel a continual arms race; reducing this imbalance may lessen the chances of conflict due to international tension, distrust, unfriendly espionage, terrorism, undue use of military force, and unjust use of power.

Just as AI has the potential to enhance military operations, its utility for enabling the perpetration of unlawful or otherwise undemocratic acts is unquestioned. While less obvious at present than the military use of AI and other advanced technologies, the falling cost of computing is likely to make highly advanced automation technology increasingly accessible to anyone who wants it. The potential for this kind of technology to do harm is therefore growing.

 

For these reasons, and as a result of IIIM’s sincere goal to focus its research on topics and challenges of obvious benefit to the general public, and for the betterment of society, human livelihood, and life on Earth, IIIM’s Board of Directors hereby states the Institute’s stance on such matters clearly and concisely, by establishing the following Ethical Policy for all current and future activities of IIIM:

1 – IIIM’s aim is to advance scientific understanding of the world, and to enable the application of this knowledge for the benefit and betterment of humankind.

2 – IIIM will not undertake any project or activity intended to (2a) cause bodily injury or severe emotional distress to any person, (2b) invade the personal privacy or violate the human rights of any person, as defined by the United Nations Declaration of Human Rights, (2c) be applied to unlawful activities, or (2d) commit or prepare for any act of violence or war.

2.1 – IIIM will not participate in projects for which there exists any reasonable evidence of activities 2a, 2b, 2c, or 2d listed above, whether alone or in collaboration with governments, institutions, companies, organizations, individuals, or groups.

2.2 – IIIM will not accept military funding for its activities. ‘Military funding’ is defined as any and all funds designated to support the activities of governments, institutions, companies, organizations, and groups, explicitly intended for furthering a military agenda, or to prepare for or commit to any act of war.

2.3 – IIIM will not collaborate with any institution, company, group, or organization whose existence or operation is explicitly, whether in part or in whole, sponsored by military funding as described in 2.2 or controlled by military authorities. For civilian institutions with a history of undertaking military-funded projects, a 5-15 rule will be applied: if, over the past 5 years, 15% or more of their projects were sponsored by such funds, they will not be considered as IIIM collaborators.
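As an illustration only, the 5-15 rule in clause 2.3 amounts to a simple threshold check. The sketch below is not part of the policy; the function name and the data shape (a list of projects tagged by age and funding source) are assumptions made for this example:

```python
def passes_5_15_rule(projects):
    """Illustrative check of the 5-15 rule from clause 2.3.

    An institution is eligible as a collaborator only if fewer than 15%
    of its projects over the past 5 years were sponsored by military
    funding (as defined in clause 2.2).

    `projects` is a list of (years_ago, militarily_funded) pairs,
    where years_ago counts back from the present (0 = this year).
    """
    # Keep only projects from the past 5 years.
    recent = [funded for years_ago, funded in projects if years_ago < 5]
    if not recent:
        return True  # no recent projects, nothing to disqualify
    military_share = sum(recent) / len(recent)
    return military_share < 0.15

# Example: 1 military-funded project out of 10 in the past 5 years (10%).
print(passes_5_15_rule([(i % 5, i == 0) for i in range(10)]))  # True
```

Projects older than five years are ignored entirely; only the share within the five-year window matters.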
