Adopting Python 3’s native coroutines

07.06.18 by Adam Serafini

4 min read

At Lieferheld and Pizza.de, most of our server-side platform code was written in Python 2.7 until last year. I want to share some of our experience migrating elements of this codebase to Python 3 and, in particular, taking advantage of the asyncio module and native coroutines available in Python 3.5.

Why asyncio?

In traditional WSGI-style app scaling, I/O calls, including network, database and cache calls, are blocking: the process cannot answer other requests while it waits for the database, cache or upstream service. This tends not to matter because we scale the application by running many processes simultaneously: if one process is blocked, the request can be handled by another.

Figure 1: traditional WSGI-style app scaling – run a lot of processes.

We’ve used and continue to use this scaling approach extensively – running up to 60 uwsgi processes of our core application on each 64-core server.

This model offers broadly acceptable performance when there are only a couple of blocking calls per request (for example: to a cache and then a database). However, it becomes more cumbersome when the response must be composed of a large number of upstream services.

In the worst case, some of our API endpoints must make 16 network calls to form a response, but many of those calls are independent and can be made concurrently. As we will see in this post, Python 3 and asyncio offer much better developer ergonomics for this style of networking.

Using the new keywords: async and await

The asyncio module introduces many new concepts including event loops, awaitables, coroutines, futures, tasks, handles, executors, transports and protocols.

Despite this explosion of concepts, understanding two new keywords and one function is enough to demonstrate the main benefits of asyncio in the context of writing HTTP services.

The two new keywords, introduced in Python 3.5, are async and await.

The async keyword turns a normal function into a coroutine, which must be called with the new await keyword. Coroutines allow flow control to be passed cooperatively between different routines (functions) without returning. Let’s compare a normal function with its async version:

Figure 2: a normal Python function

def handle_request():

    # Next line is blocking. Nothing
    # happens in this process until it returns.
    data = some_io_fetch()
    # enrich data
    result = transform(data)
    return result

def some_io_fetch():
    # Call database, cache, or another service.
    ...

Figure 3: an async Python function

async def handle_request():

    # Next line is non-blocking. Another request can be handled
    # while we are waiting. This coroutine is suspended until it returns.
    data = await some_io_fetch()
    # enrich data
    result = transform(data)
    return result

async def some_io_fetch():
    # Call database, cache, or another service.
    ...

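The Figure 3 coroutine can be driven end to end with an event loop. Below is a minimal, runnable sketch: some_io_fetch is simulated with asyncio.sleep (a stand-in for a real database or service call), and transform is a hypothetical enrichment step; both bodies are assumptions for illustration only.

```python
import asyncio

async def some_io_fetch():
    # Stand-in for a real database / cache / service call.
    # asyncio.sleep yields control to the event loop, just as
    # real non-blocking I/O would.
    await asyncio.sleep(0.01)
    return {"restaurant": "pizza place"}

def transform(data):
    # Hypothetical enrichment step.
    return {**data, "enriched": True}

async def handle_request():
    # Coroutine is suspended here until some_io_fetch completes.
    data = await some_io_fetch()
    return transform(data)

# Drive the coroutine to completion on an event loop.
loop = asyncio.new_event_loop()
result = loop.run_until_complete(handle_request())
loop.close()
print(result)  # {'restaurant': 'pizza place', 'enriched': True}
```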
The innocuous introduction of the async and await keywords radically changes the timeline of events for a single process of the application. In the blocking version, the green request must finish before work on the red one can start. In the asyncio version, the red request can be handled even while the green one is waiting for some_io_fetch() to return.

Figure 4: blocking timeline vs. coroutine timeline where request handling is interleaved

This is already an improvement over our blocking version: it allows each process to handle more requests because it works on multiple requests at the same time – in other words, it increases throughput.
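The interleaving described in Figure 4 can be observed directly. In the sketch below (an illustration, not production code), two simulated requests wait 0.2 and 0.1 seconds respectively; because the loop switches to the red request while the green one is suspended, the total wall-clock time is roughly 0.2 seconds (the longest wait), not 0.3 seconds (the sum).

```python
import asyncio
import time

async def handle(colour, delay):
    # Simulated request handler: the blocking I/O call is
    # replaced by a non-blocking sleep.
    await asyncio.sleep(delay)
    return colour

loop = asyncio.new_event_loop()
# Schedule both "requests" on the same loop before running it.
green = loop.create_task(handle("green", 0.2))
red = loop.create_task(handle("red", 0.1))
start = time.monotonic()
done, pending = loop.run_until_complete(asyncio.wait({green, red}))
elapsed = time.monotonic() - start
loop.close()
# elapsed is roughly 0.2s (the longest wait), not 0.3s (the sum)
```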

Making requests in parallel with asyncio.gather

In addition to the new keywords, the asyncio module offers a convenient function that improves not only throughput but also latency – asyncio.gather. As mentioned previously, many of our network calls are independent. For example, to serve a list of restaurants we need to know which restaurants deliver to a given location, and also the user’s order history so that we can personalise our response.

With asyncio.gather we can simply call multiple services, databases or caches simultaneously and wait for the result without blocking the process.

Figure 5: calling multiple upstream services simultaneously
async def do_two_things_at_once():
    # The coroutine is suspended until both services have returned.
    resp1, resp2 = await asyncio.gather(call_service1(), call_service2())
    return merge_results(resp1, resp2)

The simplicity of this code, which uses only functionality from Python’s stdlib, compares favourably with WSGI-style apps, which would normally require additional dependencies like Gevent to perform simultaneous network requests in a single process.
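The latency benefit is easy to measure. In this runnable sketch (service names and delays are invented for illustration), two simulated upstream calls of 0.1 seconds each complete in roughly 0.1 seconds total, not 0.2, because both are in flight at the same time.

```python
import asyncio
import time

async def call_service(name, delay):
    # Simulated upstream call: a non-blocking wait of `delay` seconds.
    await asyncio.sleep(delay)
    return name

async def do_two_things_at_once():
    # Both calls start immediately; the coroutine resumes only
    # once the slower of the two has returned.
    resp1, resp2 = await asyncio.gather(
        call_service("restaurants", 0.1),
        call_service("order_history", 0.1),
    )
    return resp1, resp2

loop = asyncio.new_event_loop()
start = time.monotonic()
results = loop.run_until_complete(do_two_things_at_once())
elapsed = time.monotonic() - start
loop.close()
# elapsed is close to 0.1s (the slowest call), not 0.2s (the sum)
```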

Conclusion

Adopting asyncio has allowed us to deliver two new services that compose responses from many network calls without introducing much additional cognitive burden on our developers. We don’t need to think about callbacks or promises and can easily make independent requests in parallel, further improving performance. This functionality is part of Python’s stdlib and doesn’t introduce new dependencies.

While there are some downsides – asyncio client libs and frameworks are not as mature as their blocking equivalents – there’s no question that the Python community is coalescing around the asyncio module for asynchronous IO. We’ll continue to adopt this approach for new Python 3 projects at Lieferheld and Pizza.de.

Adam Serafini
Principal Software Engineer