Running out of memory when deploying an extremely simple Flask app in Heroku

I want to deploy a simple machine learning model (resnet34) made with Fast AI to Heroku.

My whole flask app is a single file:

from flask import Flask
import requests
from fastai.vision.all import *

app = Flask(__name__)

learn = load_learner("./export.pkl")

@app.route("/<path:image_url>")
def hello_world(image_url):
    print(image_url)
    response = requests.get(image_url)
    img = PILImage.create(response.content)
    predictions = learn.predict(img)
    print(predictions)
    return predictions[0]

It works fine a couple of times, but then Heroku starts logging out-of-memory errors: [screenshot of Heroku log output]

I don't understand why this is happening; my intuition says the garbage collector should take care of everything here.
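One way to confirm whether memory really grows on every request is to log the process's peak resident set size inside the handler. This is a diagnostic sketch (the helper name and log format are my own, not from the original post) using only the stdlib `resource` module:

```python
import resource

def log_peak_memory(tag):
    # ru_maxrss is the peak resident set size of this process:
    # reported in kilobytes on Linux (which Heroku dynos run),
    # but in bytes on macOS.
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print(f"{tag}: peak RSS = {peak}")
    return peak
```

Calling `log_peak_memory(image_url)` at the top of the route makes a leak visible: the number climbs on every request instead of plateauing after the first few.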

Here is my requirements.txt:

-f https://download.pytorch.org/whl/torch_stable.html
torch==1.8.1+cpu
torchvision==0.9.1+cpu
fastai>=2.3.1
Flask==2.0.1
gunicorn==20.1.0
Pillow
requests==2.26.0

EDIT: The answer I posted myself below is not completely right. The real root cause was that I wasn't closing the images. The corrected handler:

@app.route("/<path:image_url>")
def hello_world(image_url):
    print(image_url)
    response = requests.get(image_url)
    img = PILImage.create(response.content)
    predictions = learn.predict(img)
    img.close()
    return predictions[0]
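Note that if `learn.predict` raises, the `img.close()` line above is never reached and the image still leaks. A more robust pattern releases the image in a `finally` block. The sketch below uses plain Pillow and a placeholder `predict` callable so it runs without fastai (`PILImage` subclasses `PIL.Image.Image`, so `close()` behaves the same way):

```python
import io
from PIL import Image

def predict_from_bytes(data, predict):
    """Open raw image bytes, run predict on the image, and always
    release the underlying buffer, even if predict() raises."""
    img = Image.open(io.BytesIO(data))
    try:
        return predict(img)
    finally:
        img.close()  # runs on both the success and the exception path
```

In the Flask route this would be `predict_from_bytes(response.content, learn.predict)[0]`, keeping the cleanup guarantee without cluttering the handler.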

My original theory was that the `__pycache__` bytecode cache was getting bigger and bigger.

Be sure to run your app with the following env var set:

PYTHONDONTWRITEBYTECODE=1
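On Heroku that variable can be set once as a config var instead of per process (the app name `my-app` here is a placeholder):

```shell
heroku config:set PYTHONDONTWRITEBYTECODE=1 --app my-app
```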
