All posts by Kristinn Thórisson

Laughable Madness: AGI ‘Manhattan Project’ Proposed by USCC to U.S. Congress

By Kristinn R. Thórisson

On November 19th, 2024, the U.S.-China Economic and Security Review Commission delivered a report to Congress recommending that it initiate a “Manhattan Project-like program” to develop artificial general intelligence (AGI). This is not only laughable, it is madness. Let me explain why. It took 70 years for contemporary generative artificial intelligence (Gen-AI) technologies to mature, and countless scientific questions about how human intelligence works have yet to be answered. To think that AGI is so “just around the corner” that it can be forced into existence by a bit of extra funding reveals a lack of understanding of the issues involved. Below I compress into 2000 words what has taken me over 40 years to comprehend. Enjoy!

US-Congress Commission AGI recommendation 2024

I. Congress establish and fund a Manhattan Project-like program dedicated to racing to and acquiring an Artificial General Intelligence (AGI) capability. AGI is generally defined as systems that are as good as or better than human capabilities across all cognitive domains and would surpass the sharpest human minds at every task. Among the specific actions the Commission recommends for Congress:
▶ Provide broad multiyear contracting authority to the executive branch and associated funding for leading artificial intelligence, cloud, and data center companies and others to advance the stated policy at a pace and scale consistent with the goal of U.S. AGI leadership; and
▶ Direct the U.S. secretary of defense to provide a Defense Priorities and Allocations System “DX Rating” to items in the artificial intelligence ecosystem to ensure this project receives national priority. (USCC 2024 REPORT TO CONGRESS, p.10)

This blurb not only gets the concept of AGI wrong, it reveals a deep misunderstanding of how science-based innovation works. The Manhattan Project took place in the field of physics, where a new theory about nuclear energy had recently been proposed and verified by a host of trusted scientists. That theory, although often attributed to a single man, was of course the result of a long history of arduous work by generations of academics, as every historian who understands the workings of science knows. Additionally, physics is our oldest and most developed field of science. Sure, funding was needed to make the Manhattan Project happen, but the prerequisite foundational knowledge on which it rested certainly did not arise because large business conglomerates wanted to apply nuclear power to their products and services, as is the current situation with AI. For anyone interested in creating a ‘Manhattan Project’ for AGI: without a proper theory of intelligence, it will never work!


Are Super-Intelligent Machines Coming?
Dr. Kristinn R. Thórisson in MIT Tech Review

A lot of talk about super-intelligent machines has been circulating in social media and news reports in the past few months, fueled by recent advances in applied Gen-AI technologies. One of the most trusted and revered sources of discussion on this topic is MIT Technology Review. In the March issue of its German edition, reporters dive into questions surrounding this hot topic, including whether general machine intelligence – also called AGI – is anywhere on the horizon. To answer this question they contacted a few respected researchers in AI, including Dr. David Chalmers of New York University, Dr. Jürgen Schmidhuber of IDSIA, Dr. Katharina Zweig of TU Kaiserslautern and IIIM Director Dr. Kristinn R. Thórisson.

In the issue, Dr. Thórisson says “A valid AGI test would need to measure an AI’s capacity to learn autonomously, innovate, and pursue new objectives, while also being able to explain, predict, create, and simulate various phenomena.” These are capabilities that his team’s AGI-aspiring system, AERA (Autocatalytic Endogenous Reflective Architecture), demonstrates. AERA learns from experience, is capable of what Dr. Thórisson calls ‘machine understanding,’ and corrects its own understanding when it gets things wrong. Thórisson continues: “[when learning from experience] we can misunderstand things. When a piece of a puzzle is missing, we seldom choose to start [learning] from scratch – instead, we adjust our existing knowledge based on what we’ve identified as incorrect.”


Cisco Debuts New System Capable of Cumulative Learning – ‘a New Dawn in Video Analytics’

In traditional image analytics, each frame – whether a photo or a frame from a video – is analyzed in a single pass. If more analysis would be desirable, incorrect classifications were made, or important details missed, so be it: there is no second look. Not so in the revolutionary new computer vision system that Cisco, in collaboration with a number of academic and industry partners, plans to release as open-source software in the coming days.

One of these collaborators is IIIM, whose research on novel AI approaches has, for the past three years, been funded in part by Cisco Systems. The system, called Ethosight, uses reasoning to enhance the ability of traditional ANN-based large language models to dissect and classify objects and events in images and video in real time. Like a human looking for more clues about what is happening in a scene, the system can improve the quality and depth of its analysis the longer it looks, collecting more information about what may be going on. The system is possibly the first of its kind to demonstrate what has been called cumulative learning, that is, the ability to autonomously improve its knowledge about a particular thing over time. A preprint of a paper describing Ethosight has been published in the arXiv repository.
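Conceptually, this kind of cumulative analysis can be pictured as a loop in which each pass over a scene adds evidence and refines the current interpretation instead of starting from scratch. The short sketch below is a hypothetical, simplified illustration in Python; the clue labels and helper functions are invented for the example and do not reflect Ethosight’s actual code or interfaces.

# Conceptual sketch only: iterative, cumulative analysis of a scene.
# (Hypothetical illustration; NOT Ethosight's actual code or API.)
from typing import List, Set

def analyze_pass(new_clues: List[str], evidence: Set[str]) -> Set[str]:
    """One analysis pass: fold newly observed clues into accumulated evidence."""
    return evidence | set(new_clues)

def interpret(evidence: Set[str]) -> str:
    """Derive the current interpretation from all evidence gathered so far."""
    if {"child_near_stove", "stove_on", "adult_absent"} <= evidence:
        return "urgent: unsupervised child near an active stove"
    if {"child_near_stove", "stove_on"} <= evidence:
        return "potential hazard: child near an active stove"
    if "child_near_stove" in evidence:
        return "child near a stove; monitoring"
    return "nothing notable yet"

# Toy 'scene': each successive pass reveals more clues than the last.
passes = [
    ["child_near_stove"],
    ["stove_on"],
    ["adult_absent"],
]

evidence: Set[str] = set()
for i, clues in enumerate(passes, 1):
    evidence = analyze_pass(clues, evidence)
    print(f"pass {i}: {interpret(evidence)}")

The point of the toy loop is only that the interpretation deepens as evidence accumulates across passes, rather than being fixed after a single look at the frame.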

The situations Ethosight can address involve, for instance, a variety of social scenarios, such as a child playing near a hot stove or opening a closet where chemicals are stored. According to the blog of Hugo Latapie, Cisco Principal Engineer and first author of the paper (see here), Ethosight breaks away from the traditional limitations of AI systems, being positioned “…not just as a real-time video analysis tool but as a vanguard in the continuous learning paradigm…”.

-RT

Resources

Ethosight arXiv paper
Hugo Latapie’s Cisco blog
Paper on cumulative learning

A Grounded Assessment of the Generative A.I. Explosion

by Helgi Páll Helgason

We now live in a world where generative AI can conjure photorealistic images of pretty much anything we can think of, with results that are often indistinguishable from the real thing (this comes with its own set of problems, but that’s a topic for another time). Then we have highly potent Large Language Models (LLMs) that can service very complex requests phrased in natural language, with OpenAI’s GPT-4 reigning supreme at the moment. Consider that you can make absurd requests, such as…

The image was generated with Midjourney 5.1. The prompt used was simply “a man looking at the generative AI explosion”.

“Prove the Pythagorean theorem in a German poem and then list the elements in the periodic table in Chinese”

… and the model will usually generate a correct result from scratch in mere seconds. The same goes for more useful requests, such as writing a piece of code, or reviewing, rewriting, or even generating written content on almost any topic. These examples just begin to scratch the surface of what is possible.

It is clear that LLMs, at their present state of development, can already create significant business value, but these models have limitations that are sometimes overlooked.

In the midst of the storm of progress and activity currently taking place with Generative AI, I’d like to stop for a moment to reflect, and offer a grounded and practical assessment.

LLMs are very large artificial neural networks. It is sometimes said that they simulate the inner workings of the human brain, but this is true to a far lesser extent than commonly perceived. Since the resurgence of neural networks in the 1980s, it has been well understood that they are function approximators. Even with the introduction of new features (e.g. attention) and new architectures (e.g. transformers), this fundamental nature remains unchanged. In simplified terms, you can think of them as learning to map a set of training data points to their correct output values and then interpolating between these data points when given novel data. While often very effective, there is no guarantee that this will always produce correct results. An approximation of a function is not the same as the actual function. As the statistician George Box famously said, “All models are wrong, but some are useful”.
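To make the interpolation point concrete, here is a minimal sketch (assuming NumPy and scikit-learn are available; the network size, data, and test points are arbitrary choices for illustration, not anything from the article) in which a small neural network is fitted to noisy samples of a sine curve. Within the range covered by the training data its predictions tend to be close to the true values; outside that range there is no such guarantee.

# Minimal illustration: a neural network as a function approximator that
# interpolates well between training points but offers no guarantees beyond them.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# "Training data points": noisy samples of sin(x) on the interval [0, 2*pi].
x_train = rng.uniform(0.0, 2.0 * np.pi, size=(200, 1))
y_train = np.sin(x_train).ravel() + rng.normal(0.0, 0.05, size=200)

# The network simply learns a mapping that fits these points.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
net.fit(x_train, y_train)

# Inside the training range: interpolation, usually close to the true function.
x_inside = np.array([[1.0], [3.0], [5.0]])
print("inside :", net.predict(x_inside), "true:", np.sin(x_inside).ravel())

# Outside the training range: extrapolation, with no guarantee of correctness.
x_outside = np.array([[8.0], [10.0], [12.0]])
print("outside:", net.predict(x_outside), "true:", np.sin(x_outside).ravel())

The exact numbers will vary with training, but the qualitative pattern is the point: the approximation mimics the data it was fitted to and can drift arbitrarily far from the true function where it has seen no data.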
