
Using pipelines with a local model

I am trying to use a simple pipeline offline. I am only allowed to download files directly from the web.

I went to https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english/tree/main and downloaded all the files into a local folder C:\Users\me\mymodel

However, when I try to load the model I get a strange error

from transformers import pipeline

classifier = pipeline(task="sentiment-analysis",
                      model="C:\\Users\\me\\mymodel",
                      tokenizer="C:\\Users\\me\\mymodel")

ValueError: unable to parse C:\Users\me\mymodel\modelcard.json as a URL or as a local path

What is the issue here? Thanks!

It must be one of two cases:

  • You didn't download all the required files properly
  • The folder path is wrong

FYI, here are the required contents of the directory (a quick check for these files is sketched after the list):

  • config.json
  • pytorch_model.bin / tf_model.h5
  • special_tokens_map.json
  • tokenizer.json
  • tokenizer_config.json
  • vocab.txt
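
Not part of the original answer, but a minimal sketch that verifies a local folder actually contains the files listed above before handing it to pipeline(); the folder path is the one from the question.

import os

model_dir = r"C:\Users\me\mymodel"  # folder from the question

# Files every such checkpoint folder should contain.
required = ["config.json", "tokenizer_config.json",
            "special_tokens_map.json", "tokenizer.json", "vocab.txt"]
# At least one set of weights is needed (PyTorch or TensorFlow).
weights = ["pytorch_model.bin", "tf_model.h5"]

missing = [f for f in required if not os.path.isfile(os.path.join(model_dir, f))]
if missing:
    print("Missing files:", missing)
if not any(os.path.isfile(os.path.join(model_dir, w)) for w in weights):
    print("No model weights found: expected pytorch_model.bin or tf_model.h5")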

The solution was slightly indirect:

  1. Load the model on a computer with internet access.
  2. Save the model with save_pretrained().
  3. Transfer the resulting folder to the offline machine and point the pipeline call at its path.

The folder will contain all the expected files.
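
A minimal sketch of those three steps, assuming the same distilbert-base-uncased-finetuned-sst-2-english checkpoint from the question; the folder name save_dir is just an example:

# On a machine with internet access: download the checkpoint and save it locally.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased-finetuned-sst-2-english"
save_dir = "./my_local_model"  # example folder name

AutoModelForSequenceClassification.from_pretrained(model_name).save_pretrained(save_dir)
AutoTokenizer.from_pretrained(model_name).save_pretrained(save_dir)

# After copying save_dir to the offline machine, point the pipeline at the folder.
from transformers import pipeline

classifier = pipeline(task="sentiment-analysis", model=save_dir, tokenizer=save_dir)
print(classifier("This works offline!"))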
