Suggested changes to improve performance under production workloads #1414
Labels: enhancement, feature, web server
Hi, I recently bumped into this framework and am very interested in its idea. In the past I've worked with (and contributed to) a couple of microservice frameworks. I would like to propose the list of changes below:
Change 1: Allow running multiple HTTP services on the same port.

Currently, tomodachi lets me run multiple services in a single process by importing them in the same file (e.g. `services.py`) and running it with `tomodachi run services.py --production`. This behaviour works great for AMQP services, since there are no port conflicts when running more than one service. For HTTP services, however, access becomes difficult since each service needs to be configured on a different port, especially when running inside a Docker container with the default `bridge` isolation network (multiple ports need to be forwarded to the host, or an nginx instance has to be configured to route each service path to its corresponding port). If the framework could somehow merge all routes and their corresponding handlers from multiple HTTP services onto a single socket bound to the same (address, port), it would eliminate this problem. Most of the time different services have different path routing anyway, so conflicting on the same path is not really a big issue. If there is a conflict, the developer should still have the option to run the services on different ports (the current behaviour).

Change 2: Automatically set the socket
`SO_REUSEPORT` option to True on Linux platforms.

As of today, all cloud providers offer virtual machines with hyper-threaded CPUs, so you get a minimum of 2 vCPUs by default. To fully utilise these CPUs for a Python application, a standard production setup will always run the application under a process manager such as `supervisord`, or under a web server such as `gunicorn`, with the number of processes/workers set to (n+1) or (2n+1). Currently this is not possible without the socket `SO_REUSEPORT` flag set to True. Together with change 1 above, this would allow multiple tomodachi HTTP services to be easily deployed to production using `supervisord` with a simple config inside a Docker container.

Change 3: Set a default prefetch_count for AMQP services.
The current implementation of the tomodachi framework doesn't set a `prefetch_count` on the channel when consuming messages from a queue. If this number is not set (or is set to 0), then whenever a message is successfully published to a queue, it is immediately delivered to the tomodachi AMQP service currently consuming that queue, regardless of how busy the service worker is. This behaviour is only optimal for services that are very lightweight (very low memory consumption per request and low overall CPU consumption). When an AMQP service does something heavy, this behaviour instead causes the service to bottleneck itself under average/high load (and may cause out-of-memory situations as well). I suggest setting a default `prefetch_count` of 100 in the framework and allowing the developer to override this number in the service configuration (see: https://www.rabbitmq.com/confirms.html#channel-qos-prefetch-throughput).

P.S. I will personally create the PRs for changes 2 and 3. Change 1 requires touching/rewriting part of the HTTP core of this framework, so I feel it's better implemented by the framework owner, if he agrees to the change.
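To make change 1 concrete, here is a rough sketch of the route-merging idea in plain Python, with made-up service names rather than tomodachi's actual internals: each service contributes a `(method, path) → handler` table, the tables are merged so a single listening socket can dispatch for all of them, and a duplicate route is reported as a conflict.

```python
# Hypothetical sketch of change 1: merge the route tables of several HTTP
# services so they can share one (address, port). Not tomodachi's real API.

def merge_routes(*services):
    """Merge {(method, path): handler} tables; a duplicate (method, path) is a conflict."""
    merged = {}
    for service in services:
        for key, handler in service.items():
            if key in merged:
                # Conflicting path: fall back to the current behaviour
                # (run the services on separate ports instead).
                raise ValueError(f"route conflict on {key}")
            merged[key] = handler
    return merged

# Two toy "services", each exposing its own paths (names are made up):
users_service = {("GET", "/users"): lambda request: "user list"}
orders_service = {("GET", "/orders"): lambda request: "order list"}

routes = merge_routes(users_service, orders_service)
print(routes[("GET", "/orders")](None))  # → order list
```

Since services usually route on distinct paths, the conflict branch should rarely trigger in practice.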
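Change 2 can be demonstrated with the standard library alone: with `SO_REUSEPORT` enabled, two sockets in the same process (or in separate worker processes) can bind the exact same address and port, which is what multi-worker setups under `supervisord` or `gunicorn` rely on. The `hasattr` guard is there because the constant only exists on platforms that support the option (e.g. Linux 3.9+).

```python
import socket

# Demonstration of change 2: opt in to SO_REUSEPORT so several listening
# sockets can bind the same (address, port).
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
if hasattr(socket, "SO_REUSEPORT"):
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    sock.bind(("127.0.0.1", 0))  # port 0: let the kernel pick a free port
    port = sock.getsockname()[1]

    # A second socket (here in the same process, but the same works across
    # worker processes of the same user) can now bind the exact same port.
    sibling = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sibling.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    sibling.bind(("127.0.0.1", port))
    sibling.close()
sock.close()
```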
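The effect of change 3 can be illustrated with a toy model (not real AMQP code; the `max_in_flight` helper is made up for illustration): with prefetch unset, the broker pushes an entire burst of published messages into the consumer's memory at once, while a prefetch window caps the number of unacknowledged messages held by the worker at any time.

```python
def max_in_flight(n_messages, prefetch_count, process_per_tick=1):
    """Crude model of broker delivery: each tick the broker tops the consumer
    up to its prefetch window (or sends everything when the window is
    0/unlimited), then the consumer acks `process_per_tick` messages.
    Returns the peak number of unacked messages held in the worker."""
    queue = n_messages  # messages waiting on the broker
    unacked = 0         # messages sitting in the consumer's memory
    peak = 0
    while queue or unacked:
        window = queue if prefetch_count == 0 else min(queue, prefetch_count - unacked)
        unacked += window
        queue -= window
        peak = max(peak, unacked)
        unacked -= min(unacked, process_per_tick)
    return peak

# Unlimited prefetch: the whole burst lands in the worker's memory at once.
print(max_in_flight(10_000, prefetch_count=0))    # → 10000
# With a prefetch_count of 100, the worker never holds more than 100 messages.
print(max_in_flight(10_000, prefetch_count=100))  # → 100
```

This is exactly the out-of-memory risk described above: the peak backlog of the unlimited case grows with the publish rate, while the limited case stays bounded regardless of load.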