
The report on this research is called “The Iceberg Index: Measuring Skills-Focused Exposure in the AI Economy”, but it also has its own dedicated page, “Project Iceberg”, that lives on the MIT website. Compared to the research paper, the project page has a lot more emojis. And whereas the paper comes off as a warning about AI technology, the project page, titled “Can AI Work with You?”, feels like an advertisement for AI, partly thanks to text like this:
“AI is transforming work. We’ve spent years making AI smart – they can read, write, make songs, shop for us. But what happens when they interact? When millions of smart AIs work together, intelligence emerges not from individual agents but from the protocols that coordinate them. Project Iceberg explores this new frontier: how AI agents coordinate with each other and with humans at scale.”
The title “Iceberg Index” comes from an AI simulation the paper uses, called a “large population model,” that apparently runs on hardware housed at the federally funded Oak Ridge National Laboratory, a Department of Energy facility.
Legislators and CEOs appear to be the target audience, and they are invited to use Project Iceberg to “identify exposure hotspots, prioritize training and infrastructure investments, and test interventions before investing billions for implementation.”
The Large Population Model (should we start shortening it to LPM?) claims to be able to digitally track the behavior of 151 million human workers “as autonomous agents” with 32,000 trackable “skills,” along with other factors such as geography.
The director of the AI program at Oak Ridge explained the project to CNBC this way: “Basically, we’re creating a digital twin for the American labor market.”
The researchers’ overall conclusion is that current AI adoption accounts for 2.2% of “labor market wage value,” but that 11.7% of labor is “exposed,” meaning substitutable based on the model’s understanding of what a human can currently do that an AI software widget can also do.
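Those two percentages are measured against different bases, which is easy to miss. A toy sketch can make the distinction concrete. This is not the study’s actual model; the worker records and wages below are invented purely for illustration, and the point is only that the share of workers exposed and the share of wage value exposed can diverge sharply.

```python
# Toy illustration (NOT the Iceberg Index model): why "share of workers
# exposed" and "share of wage value exposed" are different numbers.
# All worker records below are invented for illustration.

workers = [
    # (annual_wage, ai_substitutable)
    (30_000, True),
    (35_000, True),
    (120_000, False),
    (200_000, False),
    (45_000, True),
]

total_wages = sum(wage for wage, _ in workers)
exposed_wages = [wage for wage, substitutable in workers if substitutable]

exposed_worker_share = len(exposed_wages) / len(workers)
exposed_wage_share = sum(exposed_wages) / total_wages

print(f"{exposed_worker_share:.1%} of workers exposed")   # → 60.0%
print(f"{exposed_wage_share:.1%} of wage value exposed")  # → 25.6%
```

In this invented example, most of the workers are exposed but most of the wage value is not, because the substitutable jobs happen to be the lower-paid ones; the study’s 11.7%-versus-2.2% gap reflects the same kind of base mismatch.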
It should be noted that in real jobs humans constantly work outside their job descriptions, handle extraordinary and non-routine situations, and—for now—are uniquely capable of handling many of the social aspects of a given job. It is not clear what the model accounts for, although it notes that its findings are correlational, not causal, and says, “External factors—state investment, infrastructure, regulation—mediate how capacity translates into impacts.”
However, the paper says, “Policymakers cannot wait for evidence of disruption before formulating responses.” In other words, according to the study, it is very important not to get stuck on the limitations of AI studies.