
Using custom trained Keras model with SageMaker endpoint results in ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation

I am trying to make predictions by loading a pre-trained model in SageMaker, but I get the following error:

ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400) from primary with message "{"error": "Session was not created with a graph before Run()!"}"

My code:

def convert_h5_to_aws(loaded_model):
    """
    Given a pre-trained Keras model, this function converts it to the TF protobuf
    (SavedModel) format and saves it in the file structure which AWS expects.
    """
    import tensorflow as tf
    if tf.executing_eagerly():
        tf.compat.v1.disable_eager_execution()
    from tensorflow.python.saved_model import builder
    from tensorflow.python.saved_model.signature_def_utils import predict_signature_def
    from tensorflow.python.saved_model import tag_constants
    
    # This is the file structure which AWS expects. Cannot be changed. 
    model_version = '1'
    export_dir = 'export/Servo/' + model_version
    
    # Build the Protocol Buffer SavedModel at 'export_dir'
    builder = builder.SavedModelBuilder(export_dir)
    
    # Create prediction signature to be used by TensorFlow Serving Predict API
    signature = predict_signature_def(
        inputs={"inputs": loaded_model.input}, outputs={"score": loaded_model.output})
    
    from keras import backend as K
    with K.get_session() as sess:
        # Save the meta graph and variables
        builder.add_meta_graph_and_variables(
            sess=sess, tags=[tag_constants.SERVING], signature_def_map={"serving_default": signature})
        builder.save()
    
    #create a tarball/tar file and zip it
    import tarfile
    with tarfile.open('model.tar.gz', mode='w:gz') as archive:
        archive.add('export', recursive=True)
        
convert_h5_to_aws(model)



import sagemaker

sagemaker_session = sagemaker.Session()
inputs = sagemaker_session.upload_data(path='model.tar.gz', key_prefix='model')

!touch train.py #create an empty python file
import boto3, re
from sagemaker import get_execution_role

# the (default) IAM role you created when creating this notebook
role = get_execution_role()

# Create a Sagemaker model (see AWS console>SageMaker>Models)
from sagemaker.tensorflow.model import TensorFlowModel
sagemaker_model = TensorFlowModel(model_data = 's3://' + sagemaker_session.default_bucket() + '/model/model.tar.gz',
                                  role = role,
                                  framework_version = '1.12',
                                  entry_point = 'train.py')


# Deploy the SageMaker model to an endpoint
predictor = sagemaker_model.deploy(initial_instance_count=1,
                                   instance_type='ml.m4.xlarge')
                                 

# Create a predictor which uses this new endpoint
import sagemaker
from sagemaker.tensorflow.model import TensorFlowModel

#endpoint = '' #get endpoint name from SageMaker > endpoints

predictor=sagemaker.tensorflow.model.TensorFlowPredictor(endpoint, sagemaker_session)
# .predict send the data to our endpoint
data = X_test #<-- update this to have inputs for your model
predictor.predict(data)

I have also tried using different versions of TensorFlowModel.

Is all of this code running in a notebook? You want to make sure you are tarring the model artifacts and inference code correctly. Make sure the saved model's metadata is stored properly, and if you have an inference script with inference functions (handling pre-processing and post-processing), that script should also be included in a code directory inside the tar file. Here is an example of deploying a pre-trained Sklearn model on SageMaker (linked below); you can do the same for a pre-trained TensorFlow model.
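For reference, a minimal sketch of how the archive might be packaged; the layout below is an assumption based on common SageMaker TensorFlow Serving conventions (the exact structure depends on the container/framework version), and code/inference.py is an illustrative name rather than something taken from the question:

# Assumed layout of model.tar.gz for a SageMaker TensorFlow Serving container:
#
#   export/Servo/1/             # SavedModel under a numeric version directory
#       saved_model.pb
#       variables/
#   code/                       # optional: inference script and dependencies
#       inference.py            # pre-/post-processing handlers (hypothetical name)
#       requirements.txt
#
import tarfile

with tarfile.open('model.tar.gz', mode='w:gz') as archive:
    archive.add('export', recursive=True)   # the SavedModel tree
    archive.add('code', recursive=True)     # inference script directory, if you have one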

Sklearn pre-trained example: https://github.com/RamVegiraju/Pre-Trained-Sklearn-SageMaker
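Following the same pattern as the Sklearn example, a rough sketch of deploying a pre-trained TensorFlow SavedModel with the newer SageMaker Python SDK (v2) could look like this; the S3 path is a placeholder and framework_version is an assumption that should match the TensorFlow version the model was saved with:

import sagemaker
from sagemaker.tensorflow import TensorFlowModel

role = sagemaker.get_execution_role()

# model.tar.gz holds the SavedModel under a numeric version directory
# (e.g. 1/saved_model.pb and 1/variables/), plus an optional code/inference.py
tf_model = TensorFlowModel(
    model_data='s3://<your-bucket>/model/model.tar.gz',  # placeholder path
    role=role,
    framework_version='2.3',  # assumed: match the TF version used to save the model
)

predictor = tf_model.deploy(initial_instance_count=1,
                            instance_type='ml.m4.xlarge')

# TF Serving expects JSON-serializable input matching the serving signature
result = predictor.predict({'instances': X_test.tolist()})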
