
Register SageMaker model in MLflow

MLflow can be used to track (hyper)parameters and metrics while training machine learning models. It stores the trained model as an artifact of each experiment run. These models can then be deployed directly as SageMaker endpoints.

Is it also possible to do the reverse, i.e. register a model trained in SageMaker into MLflow?

Getting `dtype of input object does not match expected dtype <U0` when invoking MLflow-deployed NLP model in SageMaker

I deployed a Huggingface Transformer model in SageMaker using MLflow's `sagemaker.deploy()`.

When logging the model, I used `infer_signature(np.array(test_example), loaded_model.predict(test_example))` to infer the input and output signatures.

The model deployed successfully. When trying to query the model, I got a `ModelError` (full traceback below).

To query the model, I used exactly the same `test_example` that I used for `infer_signature()`:

```python
test_example = [['This is the subject', 'This is the body']]
```

The only difference is that when querying the deployed model, I did not wrap the test example in `np.array()`, because it is not JSON-serializable.

To query the model, I tried two different approaches:

```python
import json

import boto3
import pandas as pd

SAGEMAKER_REGION = 'us-west-2'
MODEL_NAME = '...'

client = boto3.client("sagemaker-runtime", region_name=SAGEMAKER_REGION)

# Approach 1
client.invoke_endpoint(
    EndpointName=MODEL_NAME,
    Body=json.dumps(test_example),
    ContentType="application/json",
)

# Approach 2
client.invoke_endpoint(
    EndpointName=MODEL_NAME,
    Body=pd.DataFrame(test_example).to_json(orient="split"),
    ContentType="application/json; format=pandas-split",
)
```

But they both result in the same error.

Would appreciate your advice. Thanks!

Note: I am using Python 3 and all strings are unicode.

```
---------------------------------------------------------------------------
ModelError                                Traceback (most recent call last)
<ipython-input-89-d09862a5f494> in <module>
      2     EndpointName=MODEL_NAME,
      3     Body=test_example,
----> 4     ContentType="application/json; format=pandas-split",
      5 )

~/anaconda3/envs/amazonei_tensorflow_p36/lib/python3.6/site-packages/botocore/client.py in _api_call(self, *args, **kwargs)
    393                     "%s() only accepts keyword arguments." % py_operation_name)
    394             # The "self" in this scope is referring to the BaseClient.
--> 395             return self._make_api_call(operation_name, kwargs)
    396
    397         _api_call.__name__ = str(py_operation_name)

~/anaconda3/envs/amazonei_tensorflow_p36/lib/python3.6/site-packages/botocore/client.py in _make_api_call(self, operation_name, api_params)
    723             error_code = parsed_response.get("Error", {}).get("Code")
    724             error_class = self.exceptions.from_code(error_code)
--> 725             raise error_class(parsed_response, operation_name)
    726         else:
    727             return parsed_response

ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400) from primary with message "{"error_code": "BAD_REQUEST", "message": "dtype of input object does not match expected dtype <U0"}". See https://us-west-2.console.aws.amazon.com/cloudwatch/home?region=us-west-2#logEventViewer:group=/aws/sagemaker/Endpoints/bec-sagemaker-model-test-app in account 543052680787 for more information.
```

Environment info:

```python
{'channels': ['defaults', 'conda-forge', 'pytorch'],
 'dependencies': ['python=3.6.10',
                  'pip==21.3.1',
                  'pytorch=1.10.2',
                  'cudatoolkit=10.2',
                  {'pip': ['mlflow==1.22.0',
                           'transformers==4.17.0',
                           'datasets==1.18.4',
                           'cloudpickle==1.3.0']}],
 'name': 'bert_bec_test_env'}
```
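For context on the error message: `<U0` is a NumPy fixed-width Unicode string dtype (kind `'U'`). Wrapping the test example in `np.array()`, as the signature-inference step does, produces such a dtype, while a JSON payload deserialized on the server side carries no NumPy dtype. A NumPy-only check of what `infer_signature` would have seen (the exact printed width depends on the longest string):

```python
import numpy as np

test_example = [['This is the subject', 'This is the body']]

# np.array() of Python strings yields a fixed-width Unicode dtype,
# sized to the longest element.
arr = np.array(test_example)
print(arr.dtype)       # e.g. <U19
print(arr.dtype.kind)  # 'U'
```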


No answers yet.

Disclaimer: The technical posts on this site are licensed under CC BY-SA 4.0. If you repost, please credit this site's URL or the original source. For any questions, contact: yoyou2525@163.com.

 