Understanding Python’s Coroutines vs Tasks — Mergify

Every engineer has that moment during a review when a comment sticks in their mind longer than it should.

In my case, it was a simple suggestion:

“You should add more locks here: this code is async, so anything can interleave.”

The code in question touched a shared cache, and on the surface the comment made sense. Multiple async functions were operating on the same structure, and the function modifying it was async. Shouldn’t this mean I need more locks?

That review sent me down a rabbit hole. Not about the cache (it was small) but about the mental model many engineers (including experienced ones) bring to Python’s async system. A model shaped by JavaScript or C#: languages where await means “yield to the runtime now.”

But Python is not those languages, and getting this fundamental difference wrong leads to unnecessary locking, accidental complexity, and subtle bugs.

This post is the clarification I wish more engineers had.

The misconception: await gives up control (in every language… right?)

If you’re coming from JavaScript, the rule is simple:

  • Every await always yields to the event loop.

  • Every async function always returns a task (a promise).

  • The moment you write await, the runtime can schedule something else.

In C#, the story is almost the same:

  • async functions return Task or Task<T>.

  • await always represents a potential suspension point.

  • The runtime decides when you resume.

In Java’s virtual-thread world (Project Loom), the principle is much the same: when you submit work to run asynchronously, usually through an ExecutorService backed by virtual threads, you are creating tasks. And when you call Future.get(), the virtual thread suspends until the result is ready. Suspension is cheaper, but it is still a real scheduling boundary.

So developers internalize one big rule:

“Every async boundary is a suspension point.”

And then they bring that rule into Python.

But Python is different: it has two async concepts

Python breaks things down into:

1. Coroutines

Defined with async def, but not scheduled. Calling such a function returns a coroutine object: a state machine with potential suspension points.

When you run:
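For instance (a minimal sketch; `child` and `main` are stand-in names):

```python
import asyncio

async def child():
    print("inside child")  # executed synchronously once awaited

async def main():
    coro = child()  # nothing runs yet: this is just a coroutine object
    await coro      # now Python steps into child's body, synchronously

asyncio.run(main())  # prints "inside child"
```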

Python immediately steps into the coroutine and executes it inside the current task, synchronously, until it either finishes or reaches a real suspension point (an await on something that is not ready).

There is no event-loop scheduling here.

2. Tasks

Created with asyncio.create_task(coro). Tasks in Python are the unit of concurrency. The event loop interleaves tasks, not coroutines.
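A small illustration with made-up names: create_task schedules the coroutine on the loop immediately, and awaiting the resulting tasks is where interleaving happens.

```python
import asyncio

async def work(name):
    await asyncio.sleep(0)  # a real suspension point
    return name

async def main():
    t1 = asyncio.create_task(work("a"))  # scheduled on the loop right away
    t2 = asyncio.create_task(work("b"))
    # awaiting a task yields to the event loop, which interleaves t1 and t2
    print([await t1, await t2])

asyncio.run(main())  # prints ['a', 'b']
```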

This difference is not cosmetic: it is why many developers misunderstand Python’s async semantics.

The key truth: awaiting a coroutine does not yield to the event loop

This sentence is the entire post:

Awaiting a coroutine does not return control to the event loop. Awaiting a task does.

A coroutine is like a nested function call that can pause, but by default it does not. It yields only when it reaches an await on something that is not ready.

Contrast this with JavaScript, C#, and Java: those languages never expose this distinction. There, an “async function” is always a task. You never await a “bare coroutine”. Every await is a potential context switch.

Python breaks that assumption.

Concrete example 1: awaiting a coroutine is synchronous

Let’s make the behavior painfully clear.
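A minimal version of such an example (names and print strings are my own; the prints mark the ordering):

```python
import asyncio

async def child():
    print("child start")
    # no await between these prints: the body runs without suspending
    print("child end")
    await asyncio.sleep(0)  # the first real suspension point

async def background():
    print("background ran")

async def main():
    asyncio.create_task(background())  # scheduled, but not yet running
    print("parent: before await child()")
    await child()
    print("parent: after await child()")

asyncio.run(main())
# parent: before await child()
# child start
# child end
# background ran
# parent: after await child()
```

The background task only gets a chance to run once child() itself suspends at asyncio.sleep(0).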


Notice what did not happen:

  • No other tasks ran between “child start” and “child end”.

  • await child() didn’t give the event loop a chance to schedule anything else until child() itself awaited asyncio.sleep.

await child() simply inlined the coroutine’s body.

JavaScript doesn’t behave like this. C# does not behave this way. Java doesn’t behave like this.

Concrete Example 2: Tasks actually introduce concurrency

Change one line:

Now the output is interleaved depending on the scheduler:
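A minimal sketch with illustrative names, where the one changed line wraps the coroutine in asyncio.create_task:

```python
import asyncio

async def child():
    print("child start")
    print("child end")
    await asyncio.sleep(0)

async def background():
    print("background ran")

async def main():
    asyncio.create_task(background())
    print("parent: before awaiting child")
    await asyncio.create_task(child())  # the changed line: coroutine -> task
    print("parent: after awaiting child")

asyncio.run(main())
# On CPython's event loop this prints:
# parent: before awaiting child
# background ran
# child start
# child end
# parent: after awaiting child
```

Awaiting the task suspends main immediately, so background gets to run before child even starts. The exact interleaving is a scheduler detail; the point is that the await boundary now yields.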

Because now we have a task, and awaiting a task does yield to the event loop.

Tasks are where concurrency comes in, not coroutines.

This single difference is where most incorrect locking recommendations arise.

Suspension points define concurrency, not async or await

Let us now derive the general rules:

  • An async def function is not automatically concurrent.

  • await is not a scheduling point unless the inner awaitable actually suspends.

  • Concurrency exists only between tasks, and only at actual suspension points.

This is why the code review suggestion I received, “Add more locks, it’s async!”, was based on the wrong mental model.

My mutation block contained no await; the only await happened before acquiring the lock, so:

  • The critical section was atomic relative to the event loop.

  • No other task could run during the mutation.

  • More locks would not add any safety.
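The shape of the code in question was roughly this (an illustrative sketch, not the actual cache code):

```python
import asyncio

cache: dict[str, int] = {}

async def compute(key: str) -> int:
    await asyncio.sleep(0)  # stands in for real async work
    return len(key)

async def update(key: str) -> None:
    value = await compute(key)  # suspension point: other tasks may run here
    # From here to the end of the function there is no await, so no other
    # task can interleave: the mutation is atomic relative to the event
    # loop and needs no lock.
    cache[key] = value

async def main():
    await asyncio.gather(update("alpha"), update("beta"))
    print(cache)

asyncio.run(main())  # prints {'alpha': 5, 'beta': 4}
```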

The cache was not the problem. The mental model was.

Why did Python choose this design?

Python’s async model evolved from generators (yield, yield from), instead of green threads or promises. Coroutines are an evolution of these primitives.

This inheritance leads to:

  • A clearer boundary between structured control flow and scheduled concurrency.

  • The ability to write async code that behaves synchronously until the actual suspension occurs.

  • Fine-grained control over when interleaving can occur.

It also causes confusion among developers coming from JavaScript, Java, or C#, languages where async automatically means “this is a task.”

Python leaves the “is this a task?” decision up to you.

Putting It All Together: A Mental Model That Actually Works

Here is the model I now advocate when reviewing async code:

  1. Coroutines are callables with potential suspension points: They are not concurrent by themselves.

  2. Only tasks introduce concurrency: If you never call asyncio.create_task, you may have no concurrency at all.

  3. Concurrency occurs only at suspension points: No await inside a block → no interleaving → no need for locks there.

  4. Locks should protect data across tasks, not coroutines: Lock where suspension is possible, not wherever the async keyword appears.

Practical Guidelines for Real Codebases

  • Audit where tasks are created: Every asyncio.create_task() is a concurrency boundary.

  • Scan critical sections for suspension points: If there is no await inside the lock, the block is atomic relative to the event loop.

  • Prefer “compute outside, mutate inside”: Compute values before acquiring the lock, then make the mutation quickly inside.

  • Teach the difference explicitly: A surprising number of experienced engineers have not internalized the coroutine vs. task separation.
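A sketch of the “compute outside, mutate inside” guideline using asyncio.Lock (illustrative names; the lock only matters once a critical section spans an await, but the pattern keeps the locked region minimal either way):

```python
import asyncio

cache: dict[str, int] = {}
lock = asyncio.Lock()

async def fetch(key: str) -> int:
    await asyncio.sleep(0)  # stands in for real I/O
    return len(key)

async def refresh(key: str) -> None:
    value = await fetch(key)  # compute outside: no lock held across I/O
    async with lock:          # mutate inside: a short, await-free section
        cache[key] = value

async def main():
    await asyncio.gather(refresh("alpha"), refresh("beta"))
    print(cache)

asyncio.run(main())  # prints {'alpha': 5, 'beta': 4}
```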

Conclusion: Python async is not JavaScript async

Once you’ve internalized this:

  • JavaScript: async function → always a task

  • C#: async → always a task

  • Java (Loom virtual threads): async → always a task

  • Python: async def → just a coroutine; task creation is explicit

Then the whole model makes sense.

Python’s await is not a context switch. It is structured control flow with a possible suspension.

That difference is why I did not add more locks to my cache code. And it is why I now review Python async code by asking a better question:

“Where exactly can this code suspend?”

That single question catches more bugs and eliminates more unnecessary complexity than any blanket rule about locking in async systems.


