How to use a linear regression model in Lambda using a pkl file?
I have trained a linear regression model and saved it to a pkl file with the following code:
import pickle
# save the model
filename = 'linear_model.pkl'
pickle.dump(mod, open(filename, 'wb'))
# load the model
load_model = pickle.load(open(filename, 'rb'))
After that I tried to use the model in Lambda by importing the pkl file. I did a lot of research and made many attempts, but could not figure out how to do this. My last attempt was this one:
from io import BytesIO
import pickle
import boto3
import base64
import json
s3_client = boto3.client('s3')
def lambda_handler(event, context):
    # get bucket and object key from the event object
    source_bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']
    data = BytesIO()
    s3_client.download_fileobj(source_bucket, key, data)
    data.seek(0)  # move back to the beginning after writing
    print("Data", data.read())
    load_model = pickle.load(open(data, 'rb'))
    print("load_model", load_model)
    y_pred = load_model.predict([[140000]])
[ERROR] TypeError: expected str, bytes or os.PathLike object, not _io.BytesIO
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 25, in lambda_handler
    load_model = pickle.load(open(data, 'rb'))
I tried to fix this error using data.read(), but if you do that, no file is found in the directory.
data is a BytesIO, but you try to open it like a file. You cannot do that with a BytesIO. What you have is a data stream, so you need to use the loads function instead. Updated version of the lambda function:
from io import BytesIO
import pickle
import boto3
import base64
import json
s3_client = boto3.client('s3')
def lambda_handler(event, context):
    # get bucket and object key from the event object
    source_bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']
    data = BytesIO()
    s3_client.download_fileobj(source_bucket, key, data)
    data.seek(0)  # move back to the beginning after writing
    load_model = pickle.loads(data.read())
    print("load_model", load_model)
    y_pred = load_model.predict([[140000]])
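The core issue can be reproduced without S3 or Lambda at all: open() expects a filesystem path, while a BytesIO is an in-memory stream. A minimal stdlib-only sketch, using a plain dict as a stand-in for the pickled model bytes that download_fileobj would write into the buffer:

```python
import pickle
from io import BytesIO

# Stand-in for the pickled model; in Lambda these bytes come from S3.
obj = {"coef": 1.5, "intercept": 3.0}
data = BytesIO(pickle.dumps(obj))

# open(data, 'rb') would raise TypeError: open() takes a path, not a BytesIO.
# Instead, read the raw bytes and deserialize them with pickle.loads ...
data.seek(0)
restored = pickle.loads(data.read())
print(restored)  # {'coef': 1.5, 'intercept': 3.0}

# ... or pass the file-like object straight to pickle.load,
# which accepts any object with a read() method.
data.seek(0)
restored2 = pickle.load(data)
print(restored2 == obj)  # True
```

So in the Lambda handler, pickle.load(data) after data.seek(0) works just as well as pickle.loads(data.read()), without the intermediate read().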