Python REST frameworks performance comparison


10.08.2020

Introduction

Modern DevOps solutions allow us to scale applications quickly. Scaling doesn’t only mean steadily gaining customers and gradually adding more instances or servers. Scaling up can also happen periodically, as the demand for our app might be higher in the evenings or on weekends.

There is also the possibility of unexpected traffic peaks related to particular events, or occurring when our app goes viral. Whatever the reason, it is essential to keep up with these changes. Keep in mind that scaling tools work best if the architecture of the app supports scaling in the first place.

One example of an architecture that scales well is microservices. According to a study published by Mike Loukides and Steve Swoyer, over 76% of organizations use microservices. Each microservice is deployed separately in its own container. In the case of higher traffic or demand, we can easily replicate a microservice and assign more resources to it. When the demand decreases again, we can remove the replicas. Microservices typically communicate with each other using RESTful APIs, so it is essential to select the right framework for a particular microservice. The focus of this blog post is a comparison of Python REST frameworks in terms of performance.

Libraries under test

We gathered 11 popular Python frameworks that can be used as REST API servers and put them under the same synthetic test. The table below shows the selected Python REST frameworks with the most up-to-date version available at the time of the analysis (a matching dependency sketch follows the table).

Library Version
aiohttp 3.6.2
bottle 0.12.18
djangorestframework 3.11.0
eve 1.1.2
falcon 2.0.0
fastapi 0.60.1
Flask 1.1.2
hug 2.6.1
pyramid 1.10.4
sanic 20.6.3
tornado 6.0.4
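
If you want to reproduce the setup with the same versions, the pinned dependencies might look like this (package names as published on PyPI; note that some frameworks need companion packages to actually serve requests, e.g., Django for djangorestframework or an ASGI server such as uvicorn for fastapi):

aiohttp==3.6.2
bottle==0.12.18
djangorestframework==3.11.0
eve==1.1.2
falcon==2.0.0
fastapi==0.60.1
Flask==1.1.2
hug==2.6.1
pyramid==1.10.4
sanic==20.6.3
tornado==6.0.4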

Test conditions

Tests were performed on a single machine (i.e., the REST API server and client were executed on localhost, and requests were performed over the loopback interface). The hardware that ran the tests was as follows:

CPU: Intel Core i7-8550U @ 8x 4GHz

MEM: 16 GB

OS: Ubuntu 20.04 focal

Kernel: x86_64 Linux 5.4.0-40-generic

All servers were run in parallel, each on a separate port. Outputs were redirected to /dev/null so that printing any additional information wouldn’t slow down the responses. Every server ran a similar, minimal “Hello World” example with a single endpoint (‘/’), and requests were made using the GET method.
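
The exact server code was not published with the post, but a minimal endpoint of this kind might look as follows, shown here for Flask and sanic (port numbers are arbitrary examples, not the ones used in the tests):

# minimal Flask "Hello World" endpoint (a sketch, not the original test code)
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World"

if __name__ == "__main__":
    app.run(port=8000)

# the equivalent sketch for sanic, which serves the handler asynchronously
from sanic import Sanic
from sanic.response import text

app = Sanic("hello")

@app.route("/")
async def hello(request):
    return text("Hello World")

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=8001)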

Although the servers were all running at the same time, they were tested separately. To limit the influence of varying load from OS processes, tests were repeated ten times using a round-robin approach, as shown in the Figure below: in each round, response times from all servers were gathered one after another, and then the whole process was repeated.
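
Such a round-robin run can be scripted; the sketch below (with a hypothetical port mapping, not the exact one from these tests) benchmarks each server once per round by calling wrk from Python:

import subprocess

# one port per framework; this mapping is illustrative only
PORTS = {"sanic": 8000, "aiohttp": 8001, "fastapi": 8002}

for run in range(10):                    # ten full rounds
    for name, port in PORTS.items():     # each server is tested once per round
        result = subprocess.run(
            ["wrk", "-d10s", "-t5", "-c20", f"http://localhost:{port}"],
            capture_output=True, text=True,
        )
        print(run, name, result.stdout)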

To measure the number of requests that a particular framework can handle, we used the wrk tool. It allows us to spawn several threads, where each thread can use multiple connections (clients) to perform requests. One of the outputs of wrk is the number of requests per second, which is the metric of most interest in this comparison.

Test results

Let’s take a look at the most exciting part of this blog post, namely the results. The Figure shows the mean value and standard deviation of the 10 test runs. As the small standard deviations show, the results were repeatable, so excellent performance is not a matter of luck.

Sanic is by far the best framework in terms of requests per second, followed by aiohttp and fastapi. On the other hand, Flask, eve (which is powered by Flask), and djangorestframework might not be the best choice if you are a speed-oriented kind of developer.

Library Requests/sec
sanic 10377
aiohttp 7947
fastapi 7501
falcon 4904
bottle 3900
tornado 3819
hug 3512
pyramid 3346
Flask 2272
eve 1401
djangorestframework 584

To better grasp the differences between the libraries, take a look at the table below, which takes the slowest framework as a baseline and shows how many times faster the other libraries are (a snippet reproducing these ratios follows the table):

Library Relative performance
sanic 17.77
aiohttp 13.61
fastapi 12.84
falcon 8.4
bottle 6.68
tornado 6.54
hug 6.01
pyramid 5.73
Flask 3.89
eve 2.4
djangorestframework 1
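
For reference, these ratios are simply each framework’s mean requests per second divided by the value of the slowest one; they can be reproduced with a few lines of Python:

# requests/sec values from the results table above
rps = {
    "sanic": 10377, "aiohttp": 7947, "fastapi": 7501, "falcon": 4904,
    "bottle": 3900, "tornado": 3819, "hug": 3512, "pyramid": 3346,
    "Flask": 2272, "eve": 1401, "djangorestframework": 584,
}

baseline = min(rps.values())  # djangorestframework, the slowest
for name, value in sorted(rps.items(), key=lambda kv: -kv[1]):
    print(f"{name:22s} {value / baseline:.2f}")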

NOTE 1: The exact values of requests per second might vary based on OS, hardware, load, and many other factors. The purpose of these tests is to compare different frameworks in terms of performance and point out the ones that are noticeably faster than the others. If you are required to reach, e.g., a minimum of 4000 requests per second, make sure that the selected library can handle such traffic under your conditions.

NOTE 2: The processing done by the library was minimal (the “Hello World” example) with default settings. If you add logic to your endpoints, data processing, database connections, and so on, your results will be different.

Bonus

If you are interested in repeating these tests, you might find the following bash command useful:

wrk --duration 10s --threads 5 --connections 20 http://localhost:8000 | awk '/Requests\/sec:/{printf "%s\n", $2;}'

It performs wrk-based benchmarking using five threads and 20 connections on localhost:8000 (change it to match your setup) and then extracts only the requests-per-second value from the whole wrk output.

Then you can easily repeat this test several times using a loop in bash:

for run in {1..10}; do wrk -d10s -t5 -c20 http://localhost:8000 | awk '/Requests\/sec:/{printf "%s\n", $2;}'; done
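
Once you have collected the ten readings for a framework, the mean and standard deviation reported above can be computed, for example, with Python’s statistics module (the values below are placeholders, not measurements from these tests):

from statistics import mean, stdev

# ten requests/sec readings for one framework (placeholder values)
runs = [10215.3, 10402.1, 10390.8, 10288.5, 10451.0,
        10333.2, 10298.7, 10420.4, 10365.9, 10311.6]

print(f"mean = {mean(runs):.1f} rps, stdev = {stdev(runs):.1f} rps")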


Other materials

If you are interested in more details about the aiohttp framework, take a look at Routing order in aiohttp library in Python.

Author

Mateusz Buczkowski

Mateusz Buczkowski received his M.Sc. degree from Poznan University of Technology in 2012. Since then, he has been employed as a teaching assistant at the Chair of Telecommunication Systems and Optoelectronics in the Faculty of Electronics and Telecommunications. He is pursuing his PhD in the field of image processing; his research interests cover a wide spectrum of image and video processing, in particular image quality assessment, which is his PhD topic. As an R&D engineer, he took part in two FP7 EU projects, namely 5GNOW and SOLDER, where he worked on solutions that can be used in 5th-generation wireless networks, which are to come in 2020. At Grandmetric, he is involved in wireless systems research.
