
Invoking Endpoint in AWS SageMaker for Scikit Learn Model

After deploying a scikit-learn model on AWS SageMaker, I invoke my model with the following:

import io
import json

import boto3
import pandas as pd

# Read the test data and serialize it to CSV (no header, no index)
payload = pd.read_csv('test3.csv')
payload_file = io.StringIO()
payload.to_csv(payload_file, header=None, index=None)

# Send the CSV body to the deployed endpoint
client = boto3.client('sagemaker-runtime')
response = client.invoke_endpoint(
    EndpointName=endpoint_name,
    Body=payload_file.getvalue(),
    ContentType='text/csv')

result = json.loads(response['Body'].read().decode())
print(result)

The above code works perfectly, but when I try:

payload = np.array([[100,5,1,2,3,4]])

I get the error:

ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received server error (500) from container-1 with message 
"<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"> <title>500 Internal Server Error</title> <h1>
Internal Server Error</h1> <p>The server encountered an internal error and was unable to complete your request.  
Either the server is overloaded or there is an error in the application.</p> 

It was mentioned in Scikit-learn SageMaker Estimators and Models that the

SageMaker Scikit-learn model server provides a default implementation of input_fn. This function deserializes JSON, CSV, or NPY encoded data into a NumPy array.

I would like to know how I can modify the default input_fn to accept a 2D numpy array so it can be used for real-time prediction.

Any suggestions?

I have tried using Inference Pipeline with Scikit-learn and Linear Learner as a reference, but could not replace the Linear Learner with a scikit-learn model; I received the same error.

If anyone has found a way to change the default input_fn, predict_fn, and output_fn to accept a numpy array or a string, please do share; a rough, untested sketch of what overriding them in the entry-point script might look like is below.
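(The function names and signatures follow the SageMaker Scikit-learn serving convention; the model.joblib artifact name and the JSON branch are my assumptions.)

# sketch of an inference entry-point script (untested)
import json
import os

import joblib
import numpy as np

def model_fn(model_dir):
    # Load the model artifact saved by the training job
    return joblib.load(os.path.join(model_dir, 'model.joblib'))

def input_fn(request_body, request_content_type):
    # Turn the request payload into a 2D numpy array
    if isinstance(request_body, bytes):
        request_body = request_body.decode('utf-8')
    if request_content_type == 'text/csv':
        rows = [row.split(',') for row in request_body.strip().split('\n')]
        return np.array(rows, dtype=float)
    if request_content_type == 'application/json':
        return np.array(json.loads(request_body), dtype=float)
    raise ValueError('Unsupported content type: {}'.format(request_content_type))

def predict_fn(input_data, model):
    # input_data is whatever input_fn returned
    return model.predict(input_data)

def output_fn(prediction, accept):
    # Return the predictions as a JSON list
    return json.dumps(prediction.tolist())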

But I did find a way of doing this with the default implementation:

import numpy as np
import pandas as pd

df = pd.DataFrame(np.array([[100.0,0.08276299999999992,77.24,0.0008276299999999992,43.56,
                             6.6000000000000005,69.60699488825647,66.0,583.0,66.0,6.503081996847735,44.765133295284,
                             0.4844340723821271,21.35599999999999],
                            [100.0,0.02812099999999873,66.24,0.0002855600000003733,43.56,6.6000000000000005,
                             1.6884635296354735,66.0,78.0,66.0,6.754543287329573,47.06480204081666,
                             0.42642318733140017,0.4703999999999951],
                            [100.0,4.374382,961.36,0.043743819999999996,25153.96,158.6,649.8146514292529,120.0,1586.0
                             ,1512.0,-0.25255116297020636,1.2255274408634853,-2.5421402801039323,614.5056]]),
                  columns=['a', 'b', 'c','d','e','f','g','h','i','j','k','l','m','n'])
import io

# Serialize the DataFrame to CSV (no header, no index), matching what
# the default input_fn expects for text/csv
test_file = io.StringIO()
df.to_csv(test_file, header=None, index=None)

Then:

import json

import boto3

client = boto3.client('sagemaker-runtime')
response = client.invoke_endpoint(
    EndpointName=endpoint_name,
    Body=test_file.getvalue(),
    ContentType='text/csv')

result = json.loads(response['Body'].read().decode())
print(result)

But if there is a better solution, it would be really helpful.
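For what it is worth, the same text/csv body can also be built straight from a 2D numpy array without going through a DataFrame, e.g. with np.savetxt (a sketch, using the endpoint_name from above and the small array from the question; the shape must of course match what the model expects):

import io

import boto3
import numpy as np

# Write the 2D array as CSV text directly
payload = np.array([[100, 5, 1, 2, 3, 4]])
buf = io.StringIO()
np.savetxt(buf, payload, delimiter=',', fmt='%g')

client = boto3.client('sagemaker-runtime')
response = client.invoke_endpoint(
    EndpointName=endpoint_name,
    Body=buf.getvalue(),
    ContentType='text/csv')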

You should be able to set a serializer/deserializer for the predictor returned by your model.deploy(). There is an example of doing so in the FM example notebook here:

https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/factorization_machines_mnist/factorization_machines_mnist.ipynb

Please try this and let me know if that works for you!
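Roughly, that would look like the following with the SageMaker Python SDK (a sketch: CSVSerializer and JSONDeserializer are the v2-style classes, older SDK versions use csv_serializer and json_deserializer from sagemaker.predictor instead, and predictor here is whatever model.deploy(...) returned):

import numpy as np
from sagemaker.serializers import CSVSerializer
from sagemaker.deserializers import JSONDeserializer

# predictor is the object returned by model.deploy(...)
predictor.serializer = CSVSerializer()        # numpy array -> text/csv request body
predictor.deserializer = JSONDeserializer()   # JSON response body -> Python objects

result = predictor.predict(np.array([[100, 5, 1, 2, 3, 4]]))
print(result)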
