What to be thankful for in AI in 2025

Hello dear readers. Belated Thanksgiving and Happy Black Friday!

This year has felt like living inside a permanent launch day. Every week, some lab introduces a new model, a new agent framework, or a new “this changes everything” demo. That pace is exhilarating. But this is also the first year I feel that AI is finally diversifying – not just one or two frontier models in the cloud, but an entire ecosystem: open and closed, huge and small, Western and Chinese, cloud and local.

So for this Thanksgiving edition, here’s what I’m really grateful for in AI in 2025 — releases that feel like they’ll matter in 12-24 months, not just during this week’s hype cycle.

1. OpenAI Keeps Shipping Strong: GPT-5, GPT-5.1, Atlas, Sora 2 and Open Weights

As the company that unquestionably kicked off the generative AI era with its viral hit ChatGPT in late 2022, OpenAI arguably had one of the toughest tasks of any AI company in 2025: continuing its growth path even as well-funded competitors like Google, with its Gemini models, and startups like Anthropic fielded their own highly competitive offerings.

Thankfully, OpenAI rose to the challenge and then some. Its headline release was GPT-5, unveiled in August as its next flagship reasoning model, followed in November by GPT-5.1 with new Instant and Thinking variants, which dynamically adjust how much “thinking time” they spend per task.

In practice, the GPT-5 launch was bumpy – VentureBeat documented early math and coding failures and a stronger-than-expected community backlash in “OpenAI’s GPT-5 rollout isn’t going smoothly.” But the issues were quickly fixed based on user feedback, and as a daily user of the model, I am personally pleased and impressed with it.

Plus, enterprises that actually use the models are reporting solid results. For example, Zendesk says that GPT-5-powered agents now resolve more than half of customer tickets, with some customers seeing 80-90% resolution rates. That’s the bigger story: these models may not always captivate the chattering classes on X, but they are starting to move real KPIs.

On the tooling side, OpenAI finally gave developers a serious AI engineer with GPT-5.1-Codex-Max, a new coding model that can run longer agentic workflows and is already the default in OpenAI’s Codex environment. VentureBeat reported that the model has already completed 24-hour-long work sessions internally.

Then there’s ChatGPT Atlas, a full Chromium-based browser with ChatGPT built in – sidebar summaries, on-page assistance, and search tightly integrated into regular browsing. This is the clearest sign yet that “assistant” and “browser” are on a collision course.

On the media side, Sora 2 transformed the original Sora video model into a full video-and-audio model with improved physics, synchronized sound and dialogue, and greater control over style and shot composition, along with a dedicated Sora app with a full social networking component, allowing any user to create their own TV network in their pocket.

Finally – and perhaps most symbolically – OpenAI released gpt-oss-120B and gpt-oss-20B, open-weight MoE reasoning models, under an Apache 2.0 license. Whatever you think of their quality (and early open-source users have been vocal about their complaints), this is the first time since GPT-2 that OpenAI has released serious open weights to the general public.

2. China’s open-source wave goes mainstream

If 2023-24 was about Llama and Mistral, 2025 belongs to China’s open-weight ecosystem.

A study by MIT and Hugging Face found that China is now slightly ahead of the US in global open-model downloads, thanks largely to DeepSeek and Alibaba’s Qwen family.

The highlights:

  • DeepSeek-R1, launched in January as an open-source reasoning model to rival OpenAI’s o1, with a family of MIT-licensed weights and distilled smaller models. VentureBeat has followed the story from its release to its cybersecurity implications to performance-tuned R1 variants.

  • Kimi K2 Thinking, from Moonshot AI, a “thinking” open-source model that does step-by-step reasoning with tools, very much in the o1/R1 mold, and is positioned as the best open reasoning model in the world to date.

  • Z.ai’s GLM-4.5 and GLM-4.5-Air, posted on GitHub as “agentic” models, with open-sourced base and hybrid reasoning variants.

  • Baidu’s ERNIE 4.5 family, which arrived under Apache 2.0 as a fully open-source, multimodal MoE suite, including a 0.3B dense model and visual “thinking” variants focused on charts, STEM, and tool use.

  • Alibaba’s Qwen3 line – which includes Qwen3-Coder, large reasoning models, and the Qwen3-VL series released over the summer and fall of 2025 – continues to set a high bar for open weights in coding, translation, and multimodal reasoning, and led me to declare last summer “the summer of Qwen.”

VentureBeat has been tracking these changes, including Chinese math and logic models like Light-R1-32B and Weibo’s tiny VibeThinker-1.5B, which outperforms DeepSeek baselines on a shoestring training budget.

If you care about open ecosystems or on-premises options, this is the year China’s open-weight scene stopped being a curiosity and became a serious option.

3. Small and local models grow up

Another thing I’m grateful for: we’re finally getting good small models, not just toys.

Liquid AI spent 2025 advancing its Liquid Foundation Models (LFM2) and LFM2-VL vision-language variants, which were designed from day one for low-latency, device-aware deployments — edge boxes, robots, and constrained servers, not just giant clusters. The new LFM2-VL-3B targets embedded robotics and industrial autonomy, with a demo planned at ROSCon.

On the big-tech side, Google’s Gemma 3 line made a strong case that small models can still be capable. Gemma 3 spans from 270M parameters all the way up to 27B, with open weights and multimodal support in the larger variants.

Most notable is Gemma 3 270M, a compact model designed for fine-tuning and structured text tasks — think custom formatters, routers, and watchdogs — that has featured in both Google’s developer blog and community discussions in local-LLM circles.

These models may never trend on X, but they are the ones you can actually fine-tune and run on your own hardware.

4. Meta + Midjourney: Aesthetics as a Service

One of this year’s strange twists: Meta partnered with Midjourney instead of trying to defeat it.

In August, Meta announced a deal to license Midjourney’s “aesthetic technology” – its image and video generation stack – and integrate it into Meta’s future models and products, from Facebook and Instagram feeds to Meta AI features.

VentureBeat covered the partnership in “Meta is partnering with Midjourney and will license its technology for future models and products,” which raises the obvious question: does this slow down or reshape Midjourney’s own API roadmap? I’m still waiting for an answer there, but Midjourney’s previously stated plans for an API release have not yet materialized, suggesting the deal may already be having an effect.

For creators and brands, however, the immediate implication is simple: Midjourney-grade visuals begin to appear in mainstream social tools instead of being locked away in Discord bots. That could normalize high-quality AI art for a much broader audience — and force rivals like OpenAI, Google, and Black Forest Labs to raise the bar.

5. Google’s Gemini 3 and Nano Banana Pro

Google answered GPT-5 with Gemini 3, its most capable model to date, with better reasoning, coding, and multimodal understanding, as well as a new Deep Think mode for slower, harder problems.

VentureBeat’s coverage, “Google unveils Gemini 3, claiming lead in math, science, multimodal, and agentic AI,” framed it as a direct shot at frontier benchmarks and agentic workflows.

But the surprise hit is Nano Banana Pro (Gemini 3 Pro Image), Google’s new flagship image generator. It specializes in infographics, diagrams, multi-subject visuals, and multilingual text that renders cleanly at 2K and 4K resolutions.

In the world of enterprise AI – where charts, product plans, and “explain this system visually” images matter more than fantasy dragons – this is a big deal.

6. Wild cards I have my eye on

A few more releases I’m grateful for, even if they don’t fit neatly into one bucket:

  • Black Forest Labs’ FLUX.2 image model, which launched earlier this week with the ambition of challenging both Nano Banana Pro and Midjourney on quality and control. VentureBeat covered the details in “Black Forest Labs launches FLUX.2 AI image models to challenge Nano Banana Pro and MidJourney.”

  • Anthropic’s Claude Opus 4.5, a new flagship aimed at cheaper, more efficient coding and longer-horizon task execution, covered in “Anthropic’s Claude Opus 4.5: cheap AI, infinite chat, and coding skills that outperform humans.”

  • A steady drumbeat of open math/logic models – from Light-R1 to VibeThinker and others – showing that you don’t need $100M training runs to move the needle.

Final Thoughts (for now)

If 2024 was the year of “a big model in the cloud,” then 2025 is the year the map exploded: multiple frontier labs at the top, China leading the way in open models, small and efficient systems rapidly maturing, and creative ecosystems like Midjourney being pulled into the big-tech fold.

I’m not just grateful for any one model, but for the fact that we now have options – closed and open, local and hosted, reasoning-first and media-first. For journalists, builders, and enterprises, that diversity is the real story of 2025.

Happy Holidays and best wishes to you and your loved ones!


