IIIM News

News about the institution, its research & recent advances

Fractured Perception: AI, Groupthink, and the Fragility of Thought

The frantic hum of modern life is orchestrated by an ever-present conductor: the information we consume daily. Some of it looks mundane—headlines, emails—but much is far more insidious, slipping into our consciousness through suggestion, repetition, and emotional appeal. The human brain, despite its capacity for reason, remains highly susceptible to manipulation. In an era where artificial intelligence amplifies these tactics, cognitive warfare—the battle for perception and belief—has become as significant as any physical conflict. If, as Dr. James Giordano said, “the brain is the battlefield of the twenty-first century,” do we want to sit idly by as the very technologies designed to inform and connect us are weaponized to undermine our ability to think critically, erode our democracies, and weaken the institutions we rely on?

For centuries, strategists have understood that shaping perception can be more powerful than brute force. From the deception of the Trojan Horse to modern propaganda, those who design, curate, frame, and shape how we perceive reality inevitably influence the beliefs we adopt and the decisions we make. In today’s interconnected world, these techniques are no longer restricted to warfare or to geographically bound communities; they saturate social media feeds, news cycles, and digital conversations, crossing entire continents in the blink of an eye. A false narrative, once seeded, can sweep across the globe in minutes, reinforced by sheer repetition rather than veracity. And so it goes that when outrage becomes the currency that sustains viral momentum, sensational narratives, founded or not, outpace and overshadow measured debate and fact-checking efforts.

Our susceptibility to these tactics arises from deeply ingrained cognitive instincts—confirmation bias, groupthink, social proof, among others—that have always shaped human behavior. We like to believe our choices are independent, carefully reasoned, yet when uncertainty looms and consensus emerges, those boundaries begin to blur. At the collective level, these individual blind spots can snowball into herding phenomena; information cascades thrive in this space between individual perception and collective influence. They occur when individuals abandon their own judgment in favor of following the observable actions of others. As an evolutionary mechanism, this tendency can be highly adaptive—if a few members of a group detect genuine danger, following them without overthinking might save everyone. But it’s a double-edged sword: the same impulses that help us adapt and make rapid decisions under uncertainty can also leave us vulnerable to mass errors, from consumer fads and harmful viral trends on social media to speculative manias in finance and public health crises.
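The cascade mechanism is easy to make concrete. Below is a minimal Python sketch of the classic sequential-choice model of herding, in a deliberately simplified form where each agent weighs the choices it has observed and its own private signal equally; the function name, parameters, and the 70% signal accuracy are illustrative assumptions, not details from this article.

```python
import random

def simulate_cascade(n_agents=30, signal_accuracy=0.7, seed=1):
    """Simplified sequential-choice model of an information cascade.

    The true state is 'A'. Each agent receives a private signal that is
    correct with probability `signal_accuracy`, observes all previous
    public choices, and picks whichever option the combined evidence
    favors (ties broken by the private signal). Once one option leads
    by two or more, no single private signal can outweigh the crowd,
    and a cascade locks in.
    """
    rng = random.Random(seed)
    choices = []
    for _ in range(n_agents):
        signal = "A" if rng.random() < signal_accuracy else "B"
        lead = choices.count("A") - choices.count("B")
        # Combined evidence: observed lead plus the agent's own signal.
        votes = lead + (1 if signal == "A" else -1)
        if votes > 0:
            choice = "A"
        elif votes < 0:
            choice = "B"
        else:
            choice = signal  # balanced evidence: follow own signal
        choices.append(choice)
    return choices

if __name__ == "__main__":
    for seed in range(5):
        print(seed, "".join(simulate_cascade(seed=seed)))
```

Running it across several seeds illustrates the double-edged sword described above: most runs converge quickly on the correct choice ‘A’, but an unlucky pair of early misleading signals can lock the entire population into ‘B’—a mass error produced by individually reasonable behavior.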

Artificial intelligence serves as the ultimate force multiplier in this domain, from personalized content and algorithmic persuasion to generative AI blurring the line between fact and fabrication. Moreover, the everyday architecture of our digital lives often works hand in hand with those who seek to manipulate it. This dynamic has transformed perception manipulation into a potent tool wielded not just by state actors but by advertisers, political campaigns, and influencers—each vying to shape our understanding of reality, imagination, and self. The erosion of trust in what we see and hear fosters a precarious environment where misinformation can trigger unrest before the truth has a chance to surface.

A potential danger is not merely that people might believe a false story, but that the constant swirl of contradictory claims and digitally concocted evidence induces a kind of resignation. Observers might sink into the belief that “nothing is reliable anyway, so why bother?” This cynicism is a potent weapon. If a critical mass of society decides that truth itself is always suspect, it undermines the foundation upon which collective decisions are made. Elections, public health directives, environmental policies, humanity at large—all rely on at least some shared consensus about reality. The moment people reject the possibility of consensus is also the moment a democracy loses its core mechanisms: the ability to deliberate, weigh evidence, and pursue the common good.

Meanwhile, the private sector’s embrace of AI in data analytics adds another layer of complexity. Consider how companies analyzing consumer preferences can also glean insights into psychological vulnerabilities: the subtle triggers that might push one demographic toward a conspiracy-laden documentary or another toward extremist political content. Advertisers have done this for decades, but AI supercharges the process with pattern recognition at a scale and speed beyond any marketing department of the past. The result can be an ecosystem in which people are constantly nudged, cajoled, or frightened into certain behaviors without ever realizing they are being orchestrated. Traditional ideals of autonomy and informed choice become fragile illusions when entire digital landscapes are curated to fit our emotional and cognitive weak spots.

The collective erosion of trust, fueled by AI-driven misinformation, has set the stage for a future where fact battles fiction in an endless arms race, leaving us to question whether democratic discourse can endure under such strain.

Are we truly helpless, or can we reclaim a digital ecosystem that so powerfully shapes our perceptions? In Part 2, we’ll explore strategies and frameworks that might reorient technology toward collective intelligence rather than collective manipulation.

Laughable Madness: AGI ‘Manhattan Project’ Proposed by USCC to U.S. Congress

By Kristinn R. Thórisson

On November 19th, 2024, the U.S.-China Economic and Security Review Commission (USCC) delivered a report to Congress recommending it initiate a “Manhattan Project-like program” to develop artificial general intelligence (AGI). This is not only laughable, it is madness. Let me explain why. It took 70 years for contemporary generative artificial intelligence (Gen-AI) technologies to mature. Countless scientific questions have yet to be answered about how human intelligence works. To think that AGI is so “just around the corner” that it can be forced into existence by a bit of extra funding reveals a lack of understanding of the issues involved. Below, I compress into 2,000 words what has taken me over 40 years to comprehend. Enjoy!

The USCC’s AGI recommendation to Congress, 2024:

I. Congress establish and fund a Manhattan Project-like program dedicated to racing to and acquiring an Artificial General Intelligence (AGI) capability. AGI is generally defined as systems that are as good as or better than human capabilities across all cognitive domains and would surpass the sharpest human minds at every task. Among the specific actions the Commission recommends for Congress:
▶ Provide broad multiyear contracting authority to the executive branch and associated funding for leading artificial intelligence, cloud, and data center companies and others to advance the stated policy at a pace and scale consistent with the goal of U.S. AGI leadership; and
▶ Direct the U.S. secretary of defense to provide a Defense Priorities and Allocations System “DX Rating” to items in the artificial intelligence ecosystem to ensure this project receives national priority. (USCC 2024 REPORT TO CONGRESS, p.10)

This blurb not only gets the concept of AGI wrong, it reveals a deep misunderstanding of how science-based innovation works. The Manhattan Project was rooted in physics, a field in which a new theory about nuclear energy had recently been proposed and verified by a host of trusted scientists. The theory, although often attributed to a single man, was of course the result of a long history of arduous work by generations of academics, as every historian who understands the workings of science knows. Additionally, physics is our oldest and most developed field of science. Sure, funding was needed to make the Manhattan Project happen, but the prerequisite foundational knowledge on which it rested certainly did not exist because large business conglomerates wanted to apply nuclear power to their products and services, as is the current situation with AI. For anyone interested in creating a ‘Manhattan Project’ for AGI: Without a proper theory of intelligence, it will never work!

Continue reading Laughable Madness: AGI ‘Manhattan Project’ Proposed by USCC to U.S. Congress

Iceland’s AI readiness: IIIM’s unique role

The pivotal role the Icelandic Institute for Intelligent Machines (IIIM) played in Iceland’s AI readiness, and its contribution to Iceland’s post-crisis recovery, were recently detailed in a thorough report produced by the Canadian research outfit Small Globe.

Following the 2008 worldwide financial meltdown, Iceland needed innovative solutions to rebuild its economy. The establishment of IIIM in 2009 turned out to play an important part in that process, leveraging artificial intelligence and robotics in economic revitalization as a strategy for long-term growth.

Since its inception, IIIM has developed into a self-sustaining, world-renowned research facility that bridges the gap between academic research and industry-driven development. Many of its projects are open-source, allowing for wide-reaching impact across organizations and countries, which aligns with its mission to provide AI solutions for the betterment of society.

IIIM has also taken a leadership role in ethical AI development and advising European governments about AI strategy. Its Civilian AI Ethics Policy, introduced in 2015, underscores its commitment to ensuring that AI research and development are conducted responsibly, balancing both technical and ethical considerations.

The institute has demonstrated its value by providing AI-driven solutions tailored to Icelandic industries. One such achievement was the development of an AI tool aimed at tackling youth substance use, a pressing issue in Iceland. This highlights not only IIIM’s technical capabilities but also its commitment to applying technology to societal problems. The institute’s ability to work across sectors—from private enterprises to public institutions—has helped redefine the role of AI in the Icelandic economy, showing it as a tool for both innovation and societal progress.

IIIM’s influence now extends beyond Iceland, with nearly half of its research collaborations involving international partners. Through partnerships with universities such as Reykjavik University and the University of Camerino, IIIM has created numerous opportunities for young researchers. These collaborations have boosted Iceland’s global presence in AI research, attracting international talent and fostering the next generation of AI and robotics experts. They also ensure that Iceland remains connected to the broader European R&D community, securing its place in AI research for years to come.


Resources:

Thorsteinsdóttir, H. (2024). Impact Analysis: Strategic Initiative on Centres of Excellence and Clusters. Small Globe Inc., Rannís. https://www.rannis.is/media/rannsoknasjodur/Small-Globe-Impact-Analysis-Centres-of-Excellence-Initiative.pdf

Are Super-Intelligent Machines Coming?
Dr. Kristinn R. Thórisson in MIT Tech Review

A lot of talk about super-intelligent machines has been circulating on social media and in news reports in the past few months, fueled by recent advances in applied Gen-AI technologies. One of the most trusted and revered sources of discussion on this topic is MIT Technology Review. In its March German issue, reporters dive into questions surrounding this hot topic, including whether general machine intelligence – also called AGI – is anywhere on the horizon. To answer this question they contacted a few respected researchers in AI, including Dr. David Chalmers of New York University, Dr. Jürgen Schmidhuber of IDSIA, Dr. Katharina Zweig of TU Kaiserslautern, and IIIM Director Dr. Kristinn R. Thórisson.

In the issue Dr. Thórisson says: “A valid AGI test would need to measure an AI’s capacity to learn autonomously, innovate, and pursue new objectives, while also being able to explain, predict, create, and simulate various phenomena.” His team’s AGI-aspiring system AERA (Autocatalytic Endogenous Reflective Architecture) demonstrates these capabilities: it learns from experience, is capable of what Dr. Thórisson calls ‘machine understanding,’ and corrects its own understanding when it gets things wrong. Thórisson continues: “[When learning from experience] we can misunderstand things. When a piece of a puzzle is missing, we seldom choose to start [learning] from scratch – instead, we adjust our existing knowledge based on what we’ve identified as incorrect.”

Continue reading Are Super-Intelligent Machines Coming?
Dr. Kristinn R. Thórisson in MIT Tech Review