'Intelition' changes everything: AI is no longer a tool you invoke

Thinking together
AI is evolving faster than our vocabulary for describing it. We may need some new words. We have "thinking" and "feeling," but no word for what happens when human and machine intelligence work together to understand, decide, create, and act. Let's call that process Intelition.

Intelition is not an attribute; it is the organizing principle for the next wave of software, in which humans and AI work inside the same shared model of the enterprise. Today's systems treat AI models as things you call from the outside. You act as a "user" who prompts for responses, or as a step in an agentic workflow, the "human in the loop." But this is evolving into co-production: people and agents reasoning, deciding, and shaping actions together in real time.

Read on for details on the three forces driving this new paradigm.

A unified ontology is just the beginning

In a recent shareholder letter, Palantir CEO Alex Karp wrote that "all the value in the market is going to go into chips and what we call ontology," and argued that this shift "is only the beginning of something larger and more important." By ontology, Karp means a shared model of objects (customers, policies, assets, events) and their relationships. It also includes what Palantir calls the "kinetic layer" of the ontology, which defines the actions and security permissions that connect objects.

In the SaaS era, each enterprise application creates its own object and process models. With many legacy systems and often chaotic models, enterprises face the difficult task of tying it all together, a task riddled with redundancies, incomplete structures, and missing data. The reality: no matter how many data warehouse or data lake projects are undertaken, few enterprises come close to creating a cohesive enterprise ontology.

Today's agentic AI tools require a unified ontology. As organizations link and federate ontologies, a new software paradigm emerges: agentic AI can reason and act not just within an app, but across suppliers, regulators, customers, and operations.

As Karp describes it, the aim is to "connect the power of artificial intelligence to objects and relationships in the real world."
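The core idea of an ontology with a kinetic layer can be made concrete in a few lines. This is a minimal hypothetical sketch of the concept, not Palantir's actual API; all class and action names are invented for illustration:

```python
# Minimal sketch of a unified enterprise ontology: typed objects,
# relationships, and a "kinetic layer" of actions gated by permissions.
# All names are hypothetical illustrations, not Palantir's API.
from dataclasses import dataclass, field

@dataclass
class Obj:
    kind: str                      # e.g. "customer", "policy", "asset"
    id: str
    props: dict = field(default_factory=dict)

class Ontology:
    def __init__(self):
        self.objects = {}          # id -> Obj
        self.links = []            # (source_id, relation, target_id)
        self.actions = {}          # name -> (handler, allowed_roles)

    def add(self, obj):
        self.objects[obj.id] = obj

    def link(self, src, relation, dst):
        self.links.append((src, relation, dst))

    def register_action(self, name, handler, allowed_roles):
        # The kinetic layer: which actions exist, and who may invoke them.
        self.actions[name] = (handler, set(allowed_roles))

    def act(self, name, actor_role, obj_id):
        handler, roles = self.actions[name]
        if actor_role not in roles:
            raise PermissionError(f"{actor_role} may not {name}")
        return handler(self.objects[obj_id])

ont = Ontology()
ont.add(Obj("customer", "c1", {"name": "Acme"}))
ont.add(Obj("policy", "p1", {"status": "lapsed"}))
ont.link("c1", "holds", "p1")
ont.register_action(
    "reinstate_policy",
    lambda o: o.props.update(status="active") or o.props["status"],
    allowed_roles=["underwriter", "agent_ai"],
)

# A human or an AI agent acts through the same governed layer:
print(ont.act("reinstate_policy", "agent_ai", "p1"))  # -> active
```

The point of the sketch is that humans and agents share one model: the same objects, the same relationships, and the same permissioned verbs.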

World model and continuous learning

Today's models can hold broader context, but possessing information is not the same as learning from it. Continuous learning requires accumulating understanding rather than resetting with each retraining.

To that end, Google recently proposed "nested learning" as a possible solution, built directly on existing LLM architectures and training data. The authors do not claim to have solved the challenge of building world models. But nested learning can supply the raw material for them: durable memory and continuous learning built into the system, which could eventually make wholesale retraining obsolete.
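The distinction between holding context and accumulating understanding can be illustrated with a toy sketch. This is a hypothetical illustration of the principle, not Google's nested-learning architecture: memory that lives outside the weights survives a model swap, so learning accumulates instead of resetting.

```python
# Toy illustration: durable memory persists across model replacements,
# so understanding accumulates. Hypothetical sketch only; not the
# nested-learning architecture itself.
class DurableMemory:
    def __init__(self):
        self.facts = {}

    def learn(self, key, value):
        self.facts[key] = value     # understanding accumulates over time

class Model:
    def __init__(self, version, memory):
        self.version = version
        self.memory = memory        # memory lives outside the weights

    def answer(self, key):
        return self.memory.facts.get(key, "unknown")

memory = DurableMemory()
m1 = Model("v1", memory)
memory.learn("customer c1 prefers", "email")

# "Retrain": replace the model, keep the memory.
m2 = Model("v2", memory)
print(m2.answer("customer c1 prefers"))  # -> email
```

In today's systems the analogue of `memory` is the prompt window, which vanishes between sessions; making it durable and continuously updated is the open research problem.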

In June 2022, Meta's chief AI scientist Yann LeCun published a blueprint for "Autonomous Machine Intelligence," featuring a hierarchical approach that uses joint embeddings to make predictions with world models. He called the technique H-JEPA, and later stated plainly: "LLMs are good at manipulating language, but not at thinking."

Over the past three years, LeCun and his colleagues at Meta have put the JEPA principle into practice with the open-source models I-JEPA and V-JEPA, which learn image and video representations of the world.

The personal intelligence interface

The third force in this agentic, ontology-driven world is the personal interface. It puts people at the center rather than "users" on the periphery. This is not just another app; it is the individual's primary way of participating in the next era of work and life. Rather than treating AI as something reached through a chat window or API call, the personal intelligence interface will be always on, aware of our context, preferences, and goals, and able to act on our behalf across the entire federated economy.

Let's look at how it's already coming together.

In May, Jony Ive sold his AI device company io to OpenAI to accelerate the new AI device category. He said at the time: "If you create something new, if you innovate, there will be unexpected consequences, and some will be wonderful, and some will be harmful. While some of the less positive consequences were unintentional, I still feel responsibility. And the expression of that is a determination to try and be useful." In other words, getting personal intelligence tools right is about much more than a lucrative enterprise opportunity.

Apple is looking beyond LLMs for on-device approaches that require less processing power and deliver lower latency in AI apps built to understand "user intent." Last year, it introduced UI-JEPA, an innovation that performs "on-device analysis" of what the user wants. This strikes directly at the business models of today's digital economy, where centralized profiling of "users" turns intent and behavior data into huge revenue streams.

Tim Berners-Lee, inventor of the World Wide Web, recently said: "The user has been turned into a consumable product for advertisers… There is still time to create machines that work for humans, not the other way around." Moving user intent onto the device will increase interest in Solid, a secure personal data management standard that Berners-Lee and his colleagues have been developing since 2022. The standard is well suited to connecting with new personal AI devices. For example, Inrupt, Inc., a company founded by Berners-Lee, recently integrated Solid with Anthropic's MCP standard in its agent wallet. Personal control is more than a feature of this paradigm; it is an architectural safeguard as the system continuously acquires the ability to learn and act.
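What "personal control as architecture" means can be sketched briefly. The following is a hypothetical illustration loosely in the spirit of Solid's pod-and-grant model, not Inrupt's or Anthropic's actual API: the person, not a platform, grants and revokes an agent's access to each piece of data.

```python
# Hypothetical sketch of a personal data pod: the individual grants and
# revokes an agent's access field by field. Loosely inspired by Solid;
# not Inrupt's or Anthropic's actual API.
class PersonalPod:
    def __init__(self, data):
        self.data = data
        self.grants = {}            # agent -> set of permitted fields

    def grant(self, agent, fields):
        self.grants.setdefault(agent, set()).update(fields)

    def revoke(self, agent):
        self.grants.pop(agent, None)

    def read(self, agent, field):
        if field not in self.grants.get(agent, set()):
            raise PermissionError(f"{agent} has no grant for {field}")
        return self.data[field]

pod = PersonalPod({"intent": "compare insurance quotes",
                   "address": "private"})
pod.grant("quote_agent", {"intent"})

print(pod.read("quote_agent", "intent"))  # -> compare insurance quotes
# pod.read("quote_agent", "address") would raise PermissionError,
# and pod.revoke("quote_agent") withdraws access entirely.
```

The inversion from centralized profiling is that intent data never leaves the pod without an explicit, revocable grant.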

Ultimately, these three forces are converging faster than most people think. Enterprise ontologies provide the nouns and verbs, world-model research supplies durable memory and learning, and the personal interface becomes the permissioned point of control. The next software era is not coming. It's already here.

Brian Mulconrey is SVP at Sureify Labs.


