Databricks' serverless database slashes app development from months to days as companies prep for agentic AI

Five years ago, Databricks coined the term ‘data lakehouse’ to describe a new type of data architecture that combines a data lake with a data warehouse. The term and the architecture are now common practice in the data industry for analytics workloads.

Now, Databricks is once again looking to create a new category with its Lakebase service, which becomes generally available today. While the data lakehouse is about OLAP (Online Analytical Processing) databases, Lakebase is about OLTP (Online Transaction Processing) and operational databases. Lakebase has been in development since June 2025 and is based on technology Databricks gained through its acquisition of PostgreSQL database provider Neon. It was further extended through the October 2025 acquisition of Mooncake, which brought the ability to connect PostgreSQL with lakehouse data formats.

Lakebase is a serverless operational database that represents a fundamental rethinking of how databases work in the age of autonomous AI agents. Early adopters, including easyJet, Hafnia and Warner Music Group, are cutting application delivery times by 75 to 95%, but the deeper architectural innovation positions databases as short-lived, self-service infrastructure that AI agents can provision and manage without human intervention.

It’s not just another managed Postgres service. Lakebase treats operational databases as lightweight, disposable compute running on data lake storage rather than as monolithic systems requiring careful capacity planning and database administrator (DBA) oversight.

"In fact, for the vibe coding trend to take off, you need developers to believe that they can actually create new apps very quickly, but you also need the central IT team or DBAs to be comfortable with a tsunami of apps and databases," Databricks co-founder Reynold Xin told VentureBeat. "Classic databases simply won’t scale that well because you can’t afford to have a DBA per database and per app."

92% faster delivery: two months to five days

The production numbers demonstrate immediate impact beyond the agent-provisioning vision. Hafnia cut delivery time for production-ready applications from two months to five days – a 92% reduction – by using Lakebase as the transaction engine for its internal operations portal. The shipping company moved beyond static BI reports to real-time business applications for fleet, commercial and finance workflows.

EasyJet consolidated more than 100 Git repositories into just two and reduced the development cycle from nine months to four months – a 56% reduction – while building a web-based revenue management center on Lakebase to replace a decade-old desktop app and one of Europe’s largest legacy SQL Server environments.

Warner Music Group is using the unified foundation to move insights directly into production systems, while Quantum Capital Group uses Lakebase to maintain consistent, governed data for identifying and evaluating oil and gas investments – eliminating the duplication that previously forced teams to maintain multiple copies of data in different formats.

The acceleration comes from eliminating two major bottlenecks: cloning databases for test environments and maintaining ETL pipelines to keep operational and analytical data in sync.

Technical architecture: why it’s not just managed Postgres

Traditional databases combine storage and compute – organizations provision a database instance with attached storage and scale by adding more instances or storage. AWS Aurora innovated by separating these layers using proprietary storage, but the storage remained locked inside AWS’s ecosystem and not freely accessible for analysis.

Lakebase takes storage-compute separation to its logical conclusion by putting storage directly into the data lakehouse. The compute layer runs essentially vanilla PostgreSQL – maintaining full compatibility with the Postgres ecosystem – but each write lands in lakehouse storage, in formats that Spark, Databricks SQL and other analytics engines can query immediately without ETL.
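The shared-storage idea can be sketched in a self-contained toy model. This is not Lakebase code: here, sqlite stands in for the Postgres compute layer and a local file stands in for lakehouse storage, purely to illustrate why a write on the operational path is immediately visible on the analytical path with no pipeline in between.

```python
import os
import sqlite3
import tempfile

# Toy model of the architecture described above: the "operational" engine
# and the "analytics" engine open the same storage layer, so a committed
# write is queryable immediately with no ETL step. sqlite and a local file
# are stand-ins; names are illustrative only.
storage = os.path.join(tempfile.mkdtemp(), "shared_storage.db")

# Operational write path (OLTP): insert a transaction.
oltp = sqlite3.connect(storage)
oltp.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
oltp.execute("INSERT INTO orders VALUES (1, 42.50)")
oltp.commit()

# Analytical read path (OLAP): a separate engine opens the same storage
# and aggregates the freshly written row directly.
olap = sqlite3.connect(storage)
total = olap.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(total)  # 42.5
```

In the real system the two engines are PostgreSQL and Spark/Databricks SQL rather than two sqlite connections, but the design choice is the same: one storage layer, many compute layers.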

"The unique technical insight was that data lakes separated storage from compute, which was great, but we need to introduce data management capabilities like governance and transaction management into the data lake," Xin explained. "We’re not really that different from the Lakehouse concept, but we’re building lightweight, short-lived compute for an OLTP database on top."

Databricks built Lakebase on the technology it acquired with Neon, but Xin emphasized that Databricks expanded Neon’s core capabilities enough to create something fundamentally different.

"They didn’t have enterprise experience, and they didn’t have cloud scale," Xin said. "We brought the innovative architectural ideas of the Neon team with the strength of the Databricks infrastructure and combined them. So now we have built a super scalable platform."

From hundreds to millions of databases built for agentic AI

Xin outlined a perspective, tied directly to the economics of AI coding tools, that explains why Lakebase makes sense beyond current use cases. As development costs continue to decline, enterprises will move from purchasing hundreds of SaaS applications to building millions of custom applications in-house.

"As the cost of software development continues to decline, which we are seeing today due to AI coding tools, it will shift from the proliferation of SaaS over the last 10 to 15 years to the proliferation of in-house application development," Xin said. "Perhaps instead of creating hundreds of applications, they will create millions of specialized applications over time."

This creates an impossible fleet management problem for traditional approaches: you can’t hire enough DBAs to manually provision, monitor and troubleshoot thousands of databases. Xin’s solution: treat database management as a data problem rather than an operations problem.

Lakebase stores all telemetry and metadata – query performance, resource usage, connection patterns, error rates – directly in the lakehouse, where it can be analyzed with standard data engineering and data science tools. Instead of configuring dashboards in database-specific monitoring tools, data teams query the telemetry with SQL or analyze it with machine learning models to spot outliers and predict issues.

"Instead of creating a dashboard for every 50 or 100 databases, you can actually look at the charts to understand if something went wrong," Xin explained. "Database management will look very similar to an analytical problem. You look at outliers, you look at trends, you try to understand why things happen. This is how you manage at scale when agents are programmatically creating and destroying databases."
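The "fleet management as outlier detection" idea is easy to sketch. Assuming per-database metrics land in queryable tables as described above, a short analytical pass can replace per-instance dashboards; the records and field names below are hypothetical, and a simple median-based rule stands in for whatever statistical or ML model a team would actually use.

```python
import statistics

# Hypothetical per-database telemetry rows, as they might look once every
# instance's metrics are stored in lakehouse tables (illustrative names).
telemetry = [
    {"db": "app-001", "p95_latency_ms": 12.0},
    {"db": "app-002", "p95_latency_ms": 15.5},
    {"db": "app-003", "p95_latency_ms": 11.2},
    {"db": "app-004", "p95_latency_ms": 14.1},
    {"db": "app-005", "p95_latency_ms": 480.0},  # the misbehaving instance
    {"db": "app-006", "p95_latency_ms": 13.3},
]

# Flag any database whose p95 latency exceeds 3x the fleet median --
# one analytical query over the whole fleet instead of a dashboard
# per 50 or 100 databases.
median = statistics.median(row["p95_latency_ms"] for row in telemetry)
outliers = [row["db"] for row in telemetry
            if row["p95_latency_ms"] > 3 * median]
print(outliers)  # ['app-005']
```

The same query works whether the fleet holds six databases or six million, which is the point of treating management as analytics rather than operations.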

The implication extends to autonomous agents themselves. An AI agent experiencing performance issues can query the telemetry to diagnose the problem – treating database operations as just another analytical task rather than one requiring specialized DBA knowledge. Database management becomes something agents can do for themselves with the same data analysis capabilities they already have.

What this means for enterprise data teams

The creation of Lakebase signals a fundamental shift in how enterprises should think about operational databases – not as precious, carefully managed infrastructure requiring specialized DBAs, but as ephemeral, self-service resources that scale programmatically, like cloud compute.

Whether or not autonomous agents materialize as quickly as Databricks envisions, the underlying architectural principle – treating database management as an analytical problem rather than an operations problem – changes the skill sets and team structures enterprises will need.

Data leaders should pay attention to the convergence of operational and analytical data happening across the industry. When writes to an operational database can be queried immediately by analytics engines without ETL, the traditional boundary between transactional systems and data warehouses blurs. This integrated architecture reduces the operational overhead of maintaining separate systems, but it also requires rethinking data team structures built around that separation.

When the lakehouse launched, competitors rejected the concept before eventually adopting it themselves. Xin expects a similar trajectory for Lakebase.

"It just makes sense to separate storage and compute and put all storage in the lake – it enables a lot of capabilities and possibilities," he said.



