
One of the key challenges in building effective AI agents is teaching them when to call external tools and when to rely on their internal knowledge. But large language models are often trained to apply tools blindly, which degrades inference through latency bottlenecks, unnecessary API costs, and noise injected into the model’s context.
To address this challenge, Alibaba researchers introduced Hierarchical Decoupled Policy Optimization (HDPO), a reinforcement learning framework that trains agents to balance both execution efficiency and task accuracy.
Metis, a multimodal model they trained using this framework, reduces the rate of unnecessary tool invocations from 98% to just 2% while setting new state-of-the-art reasoning accuracy on key industry benchmarks. The framework helps create AI agents that are not trigger-happy with tools and know when to skip them, enabling more responsive and cost-effective agent systems.
The metacognitive deficit
Current agentic models suffer from what the researchers call a “profound metacognitive deficit”: they have difficulty deciding when to rely on their internal parametric knowledge and when to query an external tool. As a result, they blindly invoke tools and APIs such as web search or code execution, even when the user’s prompt already contains all the information needed to solve the task.
This trigger-happy tool-calling behavior creates serious operational bottlenecks for real-world applications. Because models are trained to focus almost entirely on task completion, they are indifferent to latency and exhibit excessive tool-call rates. Each unnecessary external API call adds a sequential processing delay, turning a technically capable AI into a sluggish system that frustrates users and burns through tool budgets.
The cost is not just wasted compute. Redundant tool interactions also inject noise into the model’s context, which can distract the model, derail an otherwise sound chain of reasoning and actively degrade the final output.
To address the latency and cost issues of blind tool invocation, previous reinforcement learning methods attempted to penalize excessive tool usage by folding task accuracy and execution efficiency into a single reward signal. However, this coupled design creates an intractable optimization dilemma. If the efficiency penalty is too aggressive, the model becomes overly conservative and suppresses necessary tool usage, sacrificing accuracy on difficult tasks. If the penalty is too light, the signal loses its force and does nothing to prevent tool overuse on simple tasks.
Furthermore, the shared reward creates semantic ambiguity: an incorrect trajectory with zero tool calls may yield the same reward as a correct trajectory with excessive tool usage. Because the accuracy and efficiency training signals are entangled, the model cannot learn to control its tool use without degrading its core reasoning abilities.
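As a rough illustration of this ambiguity, consider a hypothetical coupled reward that subtracts a fixed penalty per tool call from a binary correctness score (the exact reward used in prior work differs; the numbers here are made up):

```python
# Hypothetical coupled reward: binary correctness minus a per-call penalty.
# This only illustrates the ambiguity; it is not any paper's actual reward.

def coupled_reward(correct: bool, tool_calls: int, penalty: float = 0.1) -> float:
    return (1.0 if correct else 0.0) - penalty * tool_calls

# A wrong answer with zero tool calls...
print(coupled_reward(correct=False, tool_calls=0))   # 0.0
# ...earns the same reward as a correct answer with ten redundant tool calls.
print(coupled_reward(correct=True, tool_calls=10))   # 0.0
```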
Hierarchical decoupled policy optimization
To solve the optimization dilemma of coupled rewards, the researchers introduced HDPO, which separates accuracy and efficiency into two independent optimization channels. The accuracy channel focuses on maximizing task correctness across the model’s rollouts, while the efficiency channel optimizes the economics of execution, such as the number of tool calls.
HDPO computes the training signals for these two channels independently and combines them only at the final step of the loss calculation. Crucially, the efficiency signal is conditioned on the accuracy channel, so an incorrect response is never rewarded simply for being faster or using fewer tools. This decoupling prevents the accuracy and efficiency gradients from canceling each other out, giving the model clean learning signals for both goals.
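A minimal sketch of what such a decoupled, correctness-gated advantage could look like in a group-relative (GRPO-style) setup, assuming binary correctness scores and per-rollout tool-call counts; the paper’s actual HDPO formulation may differ in its details:

```python
import numpy as np

def decoupled_advantages(correct, tool_calls, eff_weight=0.5):
    """Sketch of a decoupled, correctness-gated advantage for one group of
    rollouts. Illustrative only; not the paper's exact HDPO formulation."""
    correct = np.asarray(correct, dtype=float)        # 1.0 = correct, 0.0 = wrong
    tool_calls = np.asarray(tool_calls, dtype=float)

    # Accuracy channel: group-normalized advantage on task correctness.
    acc_adv = (correct - correct.mean()) / (correct.std() + 1e-6)

    # Efficiency channel: fewer tool calls is better, but the signal is
    # computed only among correct rollouts and zeroed elsewhere, so a wrong
    # answer is never rewarded just for being cheap.
    eff_adv = np.zeros_like(tool_calls)
    mask = correct == 1.0
    if mask.sum() > 1:
        calls = tool_calls[mask]
        eff_adv[mask] = (calls.mean() - calls) / (calls.std() + 1e-6)

    # The two channels are combined only at this final step of the loss.
    return acc_adv + eff_weight * eff_adv
```

In a sketch like this, the efficiency term only contributes once a group contains several correct rollouts to compare, which is one way the implicit curriculum described below can emerge.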
The most powerful emergent property of this decoupled design is that it creates an implicit cognitive curriculum. Early in training, when the model still struggles with a task, optimization is dominated by the accuracy objective, forcing the model to prioritize learning correct reasoning and knowledge. As the model’s reasoning matures and it consistently arrives at correct answers, the efficiency signal smoothly ramps up. The model therefore first masters solving the task, and only then refines its self-sufficiency by avoiding unnecessary, expensive API calls.
To complement HDPO, the researchers developed a rigorous, multi-step data curation regime that addresses serious shortcomings in existing tool-augmented datasets. Their curation pipeline covers both the supervised fine-tuning (SFT) and reinforcement learning (RL) stages.
For the SFT stage, they collected publicly available tool-augmented multimodal trajectories and filtered out low-quality examples containing execution failures or feedback anomalies. They also aggressively removed any training samples the base model could solve directly without tools. Finally, using Google’s Gemini 3.1 Pro as an automated judge, they pruned the SFT corpus to keep only examples that demonstrated strategic tool use.
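A simplified sketch of that filtering logic follows; the three predicate callables are hypothetical stand-ins, since the team’s actual tooling and judging prompts are not reproduced here:

```python
# Illustrative SFT-corpus filter. The predicate callables are hypothetical
# stand-ins for the execution-failure check, the "base model solves it without
# tools" probe, and the Gemini-based strategic-tool-use judge described above.

def filter_sft_corpus(trajectories, has_execution_failure, base_model_solves,
                      judge_says_strategic):
    kept = []
    for traj in trajectories:
        # Drop low-quality traces with failed tool executions or odd feedback.
        if has_execution_failure(traj):
            continue
        # Drop tasks the base model already answers correctly without tools,
        # so tool use in the surviving data is actually necessary.
        if base_model_solves(traj):
            continue
        # Keep only traces the LLM judge labels as strategic tool use.
        if judge_says_strategic(traj):
            kept.append(traj)
    return kept
```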
For the RL phase, curation focuses on ensuring a stable optimization signal. The team filtered out corrupted sequences and samples with semantic ambiguity. Because HDPO depends on comparing correct and incorrect responses, a task that is trivially easy (the model always gets it right) or prohibitively difficult (the model always fails) produces no meaningful variation to learn from. The team therefore retained only prompts that yielded a non-trivial mix of successes and failures, guaranteeing actionable gradient signals, as the sketch below shows.
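In code, that retention rule is essentially a variance check over sampled rollouts (an illustrative sketch, not the team’s implementation):

```python
def keep_prompt_for_rl(rollout_correctness):
    """Keep a prompt only if its sampled rollouts mix successes and failures.
    All-correct or all-wrong prompts give zero group-relative advantage and
    therefore no usable gradient. Illustrative sketch only."""
    successes = sum(rollout_correctness)
    return 0 < successes < len(rollout_correctness)

print(keep_prompt_for_rl([1, 1, 1, 1]))  # False: trivially easy, no signal
print(keep_prompt_for_rl([0, 0, 0, 0]))  # False: hopelessly hard, no signal
print(keep_prompt_for_rl([0, 1, 0, 1]))  # True: informative mix
```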
Metis Agent: HDPO in action
To put HDPO into action, the researchers used the framework to develop Metis, a multimodal reasoning agent equipped with coding and search tools. Metis is built on top of the Qwen3-VL-8B-Instruct vision-language model and was trained in two stages. First, SFT on the curated data provided a cold-start initialization. Then, RL with the HDPO framework exposed the model to multi-turn interactions in which it could invoke tools such as Python code execution, text search, and image search.
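The paper does not publish its rollout loop in this form, but a minimal sketch of the kind of multi-turn, tool-calling rollout described above might look like this (the model and tool interfaces are assumed placeholders, not real APIs):

```python
# Minimal sketch of a multi-turn rollout with the tools mentioned above.
# `model.generate` and `execute_tool` are assumed interfaces supplied by the caller.

TOOLS = {"python", "text_search", "image_search"}

def rollout(model, execute_tool, task, max_turns=8):
    messages = [{"role": "user", "content": task}]
    tool_calls = 0
    for _ in range(max_turns):
        reply = model.generate(messages)              # assumed chat-style API
        if reply.get("tool") in TOOLS:
            tool_calls += 1
            result = execute_tool(reply["tool"], reply.get("args"))
            messages.append(reply)
            messages.append({"role": "tool", "content": result})
        else:
            # Final answer: return it with the tool-call count so both the
            # accuracy and efficiency channels can be scored afterwards.
            return reply.get("content"), tool_calls
    return None, tool_calls
```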
The researchers pitted Metis against standard open-source vision models such as LLaVA-OneVision, text-only reasoners, and state-of-the-art agentic models including DeepEyes V2 and the 30-billion-parameter Skywork-R1V4. The evaluation covered two main areas: visual perception and document comprehension benchmarks such as HRBench and V*Bench, and rigorous mathematical and logical reasoning tasks such as VMath and MathVista.
Across all tasks, Metis achieved state-of-the-art or highly competitive performance, outperforming existing agentic models on both visual perception and reasoning tasks, including the much larger 30-billion-parameter Skywork-R1V4.
Equally important is the behavior Metis exhibited in experiments. For example, when presented with an image of a museum sign and asked what the center text says, standard agentic models waste time writing a Python script to crop the image just to read it. Metis, by contrast, recognizes that the text in the original image is already clearly legible, skips the tool altogether and answers in a single inference pass.
In another experiment, the model was given a complex chart and asked to identify the second-highest line at a specific data point within a small subplot. Metis recognized that this fine-grained visual analysis exceeded its native resolution and that it could not reliably distinguish the overlapping lines. Instead of guessing from the full image, it invoked Python to crop and zoom in on that specific subplot, enabling it to correctly identify the line. Metis treats code as a precision instrument to be deployed only when the visual evidence genuinely demands it, not as a default fallback.
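The cropping step the agent writes for itself is mundane Python; something along these lines, with made-up coordinates and filenames:

```python
from PIL import Image  # Pillow

# Hypothetical crop-and-zoom snippet of the sort Metis generates when a
# subplot is too small to read at the chart's native resolution.
img = Image.open("chart.png")
left, top, right, bottom = 620, 410, 900, 640        # subplot bounding box (made up)
subplot = img.crop((left, top, right, bottom))
subplot = subplot.resize((subplot.width * 3, subplot.height * 3), Image.LANCZOS)
subplot.save("subplot_zoom.png")                      # inspected in the next turn
```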
The researchers released Metis with the code for HDPO under the permissive Apache 2.0 license.
“Our results demonstrate that strategic tool use and stronger reasoning performance are not a trade-off; rather, eliminating noisy, unnecessary tool calls directly contributes to improved accuracy,” the researchers concluded. “More broadly, our work suggests a paradigm shift in tool-enhanced learning: from simply teaching how to execute tools, to developing meta-cognitive knowledge of when to step away from them.”