
Intercom is taking an unusual gamble for a legacy software company: building its own AI model.
The 15-year-old, Dublin-based customer service platform giant on Thursday announced Fin Apex 1.0, a smaller, purpose-built AI model that the company claims outperforms OpenAI's and Anthropic's leading frontier models on the metrics that matter most for customer support.
The model powers Intercom’s existing Fin AI agent, which already handles more than one million customer conversations weekly.
According to benchmarks shared with VentureBeat, Fin Apex 1.0 achieves a 73.1% resolution rate – the percentage of customer issues completely resolved without human intervention – compared to 71.1% for both GPT-5.4 and Claude Opus 4.5, and 69.6% for Claude Sonnet 4.6. A difference of roughly 2 percentage points may seem modest, but it is wider than the typical gap between successive generations of frontier models.
"If you’re running a large service operation at scale and you have 10 million customers or a billion dollars in revenue, a delta of 2% or 3% is really a huge amount of customers and interactions and revenue," Intercom CEO Eoghan McCabe told VentureBeat in a video call interview earlier this week.
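McCabe's point is simple arithmetic. As a back-of-envelope illustration (the deployment volume here is hypothetical; the resolution rates are the benchmark figures cited above), a 2-point delta compounds quickly at scale:

```python
# Hypothetical volume; resolution rates from the cited benchmarks.
weekly_conversations = 1_000_000
apex_rate = 0.731       # Fin Apex 1.0
frontier_rate = 0.711   # GPT-5.4 / Claude Opus 4.5

# Additional issues resolved without a human, per week and per year.
extra_per_week = weekly_conversations * (apex_rate - frontier_rate)
extra_per_year = extra_per_week * 52

print(f"Extra resolutions per week: {extra_per_week:,.0f}")
print(f"Extra resolutions per year: {extra_per_year:,.0f}")
```

At a million conversations a week, the 2-point gap works out to roughly 20,000 additional automated resolutions weekly – over a million a year that would otherwise land on human agents.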
The model also shows significant improvements in speed and accuracy. Fin Apex responds in 3.7 seconds – 0.6 seconds faster than the next-fastest competitor – and shows a 65% reduction in hallucinations compared to Claude Sonnet 4.6.
Perhaps most attractive to enterprise buyers: It costs about one-fifth as much as calling a frontier model directly, and it is included in Intercom's existing per-resolution pricing for current customer plans.
What is the base model? Does it even matter?
But there is a catch. When asked which base model Apex 1.0 was built on – and how many parameters it has – Intercom declined to say.
"We are not sharing the base model used for Apex 1.0 – for competitive reasons and also because we plan to change the base model over time," a company spokesperson told VentureBeat. The company would only confirm that the model is "hundreds of millions of parameters in size."
That makes it a remarkably small model. For comparison, Meta's Llama 3.1 ranges from 8 billion to 405 billion parameters; even efficient open-weight models like Mistral 7B dwarf the sub-billion scale Intercom describes.
Whether Apex's performance claims hold up against that context – or whether the benchmarks merely reflect optimizations for a narrow, domain-specific task – remains an open question.
Intercom says it learned from the backlash AI coding startup Cursor faced from critics who accused it of burying the fact that its Composer 2 model was built on a fine-tuned open-weight model rather than proprietary technology. But the lesson Intercom drew may not satisfy skeptics: The company is transparent that it used an open-weight base, but not about which one.
"We are very transparent that an open-weight model was used," the spokesperson said. Yet refusing to name the model while claiming transparency is a tension likely to draw scrutiny – especially as more companies market "proprietary" AI that is, in effect, a post-trained open-weight foundation.
Post-training as the new frontier
Intercom’s argument is that the base model doesn’t matter much anymore.
"Pre-training is now a commodity," McCabe said. "The frontier is really post-training. Post-training is the hard part. You need proprietary data. You need proprietary sources of truth."
The company trained its chosen foundation model on proprietary customer service data accumulated over years through Fin, which now handles 2 million customer queries per week. That process involved more than just feeding transcripts into a model. Intercom built a reinforcement learning system based on real resolution outcomes, teaching models what successful customer service actually looks like – appropriate tone, judgment calls, conversation structure, and, critically, how to recognize when an issue has truly been resolved versus when a customer is still frustrated.
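Intercom has not published how its reward signal is constructed, but the idea of scoring conversations on real resolution outcomes can be sketched in miniature. Everything below – the field names, the weights, the penalty terms – is a hypothetical illustration, not Intercom's implementation:

```python
# Hypothetical sketch of a resolution-outcome reward for RL fine-tuning.
# All names and weights are illustrative; Intercom has not disclosed its design.
from dataclasses import dataclass

@dataclass
class ConversationOutcome:
    resolved: bool             # was the issue fully resolved?
    escalated_to_human: bool   # was a human handoff required?
    customer_sentiment: float  # -1.0 (frustrated) .. 1.0 (satisfied)
    turns: int                 # conversation length

def reward(o: ConversationOutcome) -> float:
    """Reward genuine resolutions; penalize escalations and long,
    frustrating conversations."""
    r = 1.0 if o.resolved else -0.5
    if o.escalated_to_human:
        r -= 0.5
    r += 0.25 * o.customer_sentiment   # tone / satisfaction signal
    r -= 0.02 * max(0, o.turns - 5)    # mild penalty for dragging on
    return r

# A crisp resolution outscores a frustrated escalation.
good = ConversationOutcome(True, False, 0.8, 4)
bad = ConversationOutcome(False, True, -0.6, 12)
assert reward(good) > reward(bad)
```

The key design point such a signal captures is the one McCabe emphasizes: the model is graded not on sounding helpful, but on whether the customer's issue actually ended resolved.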
"Generic models are trained on generic data on the Internet. Specialized models are trained on hyper-specific domain data," McCabe explained. "So it stands to reason that the intelligence of general models is general, and the intelligence of specialized models is domain-specific and therefore works far better for that use case."
If McCabe is correct that the magic is entirely post-training, then the reluctance to name the base becomes hard to justify. If the foundation is truly interchangeable, what competitive advantage does secrecy protect?
The $100 million bet pays off
The announcement comes as Intercom's AI-first pivot appears to be working. Fin's annual recurring revenue is approaching $100 million and growing at 3.5x, making it the fastest-growing segment of the company's $400 million ARR business. Fin is projected to represent half of Intercom's total revenue by early next year.
That trajectory represents a remarkable change. When Fin launched, its resolution rate was just 23%. The average among customers today is 67%, with some large enterprise deployments having rates as high as 75%.
To get there, Intercom has grown its AI team from about 6 researchers to 60 over the past three years – a significant investment for a company McCabe says was "in a very bad place" before its AI pivot. The average growth rate of public software companies sits at around 11%; Intercom expects to grow 37% this year.
"We are so far the first in the category to train our own models," McCabe said. "No one else will have this for a year or more."
The speciation of AI
McCabe's thesis aligns with a broader trend that Andrej Karpathy, former AI lead at Tesla and OpenAI, recently described as the "speciation" of AI models – a proliferation of specialized systems optimized for narrow tasks rather than general intelligence.
McCabe argues that customer service is uniquely suited to this approach. It is one of only two or three enterprise AI use cases with real economic momentum so far, alongside coding assistants and potentially legal AI. It has attracted more than a billion dollars in venture funding and created room for competitors like Decagon and Sierra in a market that is, in McCabe's words, "brutally competitive."
The question is whether domain-specific models represent a sustainable advantage or a temporary arbitrage that the frontier labs will eventually close. McCabe believes the labs face structural limitations.
"Perhaps the future is that Anthropic has a larger offering of many different niche models. Maybe it looks like that," he said. "But the reality is that I don't think generic models will be able to keep up with domain-specific models anytime soon."
Beyond cost savings to better experiences
Early enterprise AI adoption focused heavily on cost reduction – replacing expensive human agents with cheaper automated agents. But McCabe sees the conversation moving toward the quality of the experience.
"Basically it was like, 'Oh crap, we can actually do this for a lot cheaper.' And now they're thinking, 'Wait, no, we can give customers a much better experience,'" he said.
This approach extends beyond simple query resolution. McCabe envisions AI agents that act as consultants — a shoe retailer’s bot that not only answers shipping questions but offers styling advice and shows customers how different options might look on them.
"Customer service has always been very poor," McCabe said bluntly. "Even for the best brands you have to wait for a call, you have to go around to different departments. Now is the opportunity to deliver a truly great customer experience."
Pricing and Availability
For existing Fin customers, the upgrade to Apex comes at no additional cost. Intercom confirmed that customer pricing remains unchanged – users continue to pay $0.99 per resolved interaction, as before, and automatically benefit from the new model.
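Under outcome-based pricing, a customer's bill scales with how many conversations the model actually resolves, not with token usage. A quick illustration (the deployment size is hypothetical; the price and resolution rate are the figures reported above):

```python
# Outcome-based pricing: pay only for resolved conversations.
price_per_resolution = 0.99     # Intercom's stated per-resolution price
monthly_conversations = 100_000  # hypothetical deployment size
resolution_rate = 0.731          # Fin Apex 1.0 benchmark figure

resolved = monthly_conversations * resolution_rate
monthly_bill = resolved * price_per_resolution

print(f"Resolved conversations: {resolved:,.0f}")
print(f"Monthly bill: ${monthly_bill:,.2f}")
```

Notably, under this scheme a better-resolving model raises the bill even as it lowers the load on human agents – the vendor and the customer are both paid in resolutions.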
Apex is not available as a standalone model or through an external API. It is accessible only through Fin, meaning businesses cannot license the model or integrate it into their own products. This constraint may limit Intercom's ability to monetize the model beyond its existing customer base – but it also keeps the technology proprietary in a practical sense, regardless of the underlying base model.
What comes next
Intercom plans to extend Fin beyond customer service into sales and marketing – positioning it as a direct competitor to Salesforce's Agentforce vision, which aims to provide AI agents throughout the customer lifecycle.
For the broader SaaS industry, Intercom's move raises uncomfortable questions. If a 15-year-old customer service company can build a model that outperforms OpenAI and Anthropic in its domain, what does that mean for vendors still relying on generic API calls? And if post-training is the new frontier, as McCabe insists, will companies claiming success face pressure to show their work – or will competitors keep hiding behind secrecy while trumpeting transparency?
McCabe’s answer to the first question, posed in a recent LinkedIn post, is clear: "If you can’t become an agent company, the future of your CRUD app business will be weak."
The answer to the second remains to be seen.