
Best WSGI service for handling webhooks with few resources

I am currently working on a virtual server with 2 CPUs and 4 GB of RAM, running Flask + uWSGI + nginx to host the web server. The server needs to accept roughly 10 of the ~2,500 webhook requests it receives per day. The requests that fail the check average about 2 ms, yet the queue is consistently backed up. The problems I have been encountering lately are both speed and duplication: the accepted webhooks are forwarded to another server, and I either get duplicates or completely miss a batch.

[uwsgi]
module = wsgi

master = true
processes = 4
enable-threads = true
threads = 2

socket = API.sock
chmod-socket = 660
vacuum = true

harakiri = 10
die-on-term = true
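Since the symptom is a backed-up queue, one thing worth checking (an assumption on my part, not something the question confirms) is the socket listen backlog, which uWSGI defaults to 100; the built-in stats server can also show how many requests are queued per worker. A hedged sketch of the extra options:

```ini
[uwsgi]
# ... existing options from the .ini above ...

# Raise the socket listen backlog (uWSGI default is 100). The OS limit
# (net.core.somaxconn on Linux) must be at least this high or uWSGI
# will refuse to start.
listen = 1024

# Expose runtime metrics (busy workers, queued requests) that tools
# like uwsgitop can read.
stats = /tmp/uwsgi-stats.sock
```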

This is my current .ini file. I have experimented with harakiri and spent countless hours reading the uWSGI documentation and trying different settings; it is unbelievably frustrating.

[Screenshot of systemctl status API]

The check looks similar to this (some info redacted):

from flask import Flask, request

app = Flask(__name__)

@app.route('/api', methods=['POST'])
def handle_callee():
    authorization = request.headers.get('authorization')
    # SECRET and check_callee are defined elsewhere (redacted).
    if authorization == SECRET and check_callee(request.json):
        data = request.json
        name = data["payload"]["object"]["caller"]["name"]
        create_json(name, data)

        return 'success', 200
    else:
        # Note: a 204 ("No Content") response should not carry a body.
        return 'failure', 204
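One common cause of webhook duplicates (an assumption here, since the sender isn't named in the question) is the sender retrying when the response is slow: if the handler does all the processing inline and exceeds the sender's timeout, the same event gets redelivered. A frequently used pattern is to validate, enqueue, and return 200 immediately, then process in the background. A minimal in-process sketch, where `process_webhook` is a hypothetical stand-in for the real `create_json` pipeline:

```python
import queue
import threading

# Shared queue between the request handler and a background worker.
work_queue = queue.Queue()
processed = []

def process_webhook(data):
    # Hypothetical stand-in for the slow part (create_json and the
    # downstream parsing functions).
    return data["payload"]["object"]["caller"]["name"]

def worker():
    while True:
        data = work_queue.get()
        if data is None:  # sentinel to stop the worker
            break
        processed.append(process_webhook(data))
        work_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

# Inside the Flask handler you would only validate and enqueue, then
# return 'success', 200 right away instead of doing the work inline.
work_queue.put({"payload": {"object": {"caller": {"name": "alice"}}}})
work_queue.join()  # wait until the worker has drained the queue
```

For anything that must survive a restart you would want a persistent queue (e.g. Redis plus a task runner) rather than an in-memory one, but even this simple decoupling keeps response times low enough that senders stop retrying.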

The JSON is then parsed through a number of functions. This is my first time deploying a WSGI service, and I don't know if my configuration is incorrect; I have poured hours of research into trying to fix this. Should I try switching to Gunicorn? I asked this question differently a couple of days ago, but to no avail, so I am adding more context in the hope that someone can point me in the right direction. I don't even know whether `req: 12/31` in the systemctl status shows how many requests that PID has handled so far versus how many are queued. Any insight into this situation would make my week. I have been unable to fix this after about two weeks of trying different configs: increasing workers and processes, adjusting harakiri, disabling logging. None of it has gotten the requests to process at the speed I need.

Thank you to anyone who took the time to read this, I am still learning and have tried to add as much context as possible. If you need more I will gladly respond. I just can't wrap my head around this issue.

You need to take a systematic approach to figure out:

  • How many requests per second you can handle
  • What your app's bottlenecks and scaling factors are

CloudBees has written a great article on performance tuning for uWSGI + Flask + nginx.

To give an overview of the steps to tune your service here is what it might look like:

First, make sure you have the required tooling, particularly a benchmarking tool such as Apache Bench, k6, etc.

  1. Establish a baseline. Configure your application with the minimum setup that runs, i.e., a single process and a single thread, no multi-threading. Run the benchmark and record the results.
  2. Start tweaking the setup: add threads, processes, etc.
  3. Benchmark after the tweaks.
  4. Repeat steps 2 and 3 until you hit the upper limits and understand the service's characteristics: are you CPU- or I/O-bound?
  5. Try changing the hardware/VM, as some offerings come with performance penalties due to CPU shared with other tenants, bandwidth limits, etc.
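The baseline-then-tweak loop above can be sketched with a tiny in-process harness. This is only an illustration: the `handler` below is a hypothetical stand-in for one request (sleeping ~2 ms, the handler time mentioned in the question), and real numbers must come from ab or k6 hitting the live endpoint:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handler(_):
    # Stand-in for one request; replace with a real HTTP call against
    # the live server for meaningful results.
    time.sleep(0.002)  # ~2 ms, matching the question

def benchmark(concurrency, requests):
    """Run `requests` calls with `concurrency` workers; return req/sec."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        list(pool.map(handler, range(requests)))
    elapsed = time.perf_counter() - start
    return requests / elapsed

baseline = benchmark(concurrency=1, requests=50)  # step 1: minimal setup
tuned = benchmark(concurrency=4, requests=50)     # steps 2-3: add concurrency
```

Comparing `baseline` and `tuned` after each change is the whole loop: when adding concurrency stops improving throughput, you have found a bottleneck worth investigating.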

Tip: Try to run the benchmark tool from a different system than the one you are benchmarking, since it also consumes resources and loads the system further.

In your code sample you have two methods, create_json(name, data) and check_callee(request.json); do you know their performance?
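A quick way to answer that question is to time the two functions in place. A minimal sketch using a decorator (the `create_json` body below is a hypothetical placeholder, since the real one is redacted):

```python
import functools
import time

def timed(fn):
    """Print how long each call takes; wrap check_callee/create_json with this."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"{fn.__name__} took {elapsed_ms:.2f} ms")
        return result
    return wrapper

@timed
def create_json(name, data):
    # Placeholder body; the real function is redacted in the question.
    return {"name": name, **data}

result = create_json("alice", {"id": 1})
```

If either function turns out to dominate the 2 ms average (file I/O in `create_json` would be a likely suspect), that is where tuning effort should go before touching uWSGI settings.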

Note: Can't comment so had to write this as an answer.
