Problem when uploading a file to Google Drive with its Python API
I am trying to upload a file to Google Drive using its Python API, since I need a script that uploads automatic backup copies from my server to Google Drive without user interaction. I have the following code, which I extracted from the Google Drive documentation.
Code of my script:
from __future__ import print_function
import pickle
import os.path
from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
from apiclient.http import MediaFileUpload

# If modifying these scopes, delete the file token.pickle.
SCOPES = ['https://www.googleapis.com/auth/drive.metadata.readonly']

def main():
    """Shows basic usage of the Drive v3 API.
    Prints the names and ids of the first 10 files the user has access to.
    """
    creds = None
    # The file token.pickle stores the user's access and refresh tokens, and is
    # created automatically when the authorization flow completes for the first
    # time.
    if os.path.exists('token.pickle'):
        with open('token.pickle', 'rb') as token:
            creds = pickle.load(token)
    # If there are no (valid) credentials available, let the user log in.
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file(
                'credentials.json', SCOPES)
            creds = flow.run_local_server()
        # Save the credentials for the next run
        with open('token.pickle', 'wb') as token:
            pickle.dump(creds, token)
    service = build('drive', 'v3', credentials=creds)
    # Call the Drive v3 API
    results = service.files().list(
        pageSize=10, fields="nextPageToken, files(id, name)").execute()
    items = results.get('files', [])
    if not items:
        print('No files found.')
    else:
        print('Files:')
        for item in items:
            print(u'{0} ({1})'.format(item['name'], item['id']))

file_metadata = {
    'name': 'report.csv',
    'mimeType': 'application/vnd.google-apps.spreadsheet'
}
media = MediaFileUpload('files/report.csv',
                        mimetype='text/csv',
                        resumable=True)
file = drive_service.files().create(body=file_metadata,
                                    media_body=media,
                                    fields='id').execute()
print("File ID: %s" % file.get("id"))

main()
The errors it shows me are these:
Traceback (most recent call last):
  File "gdriveprueba.py", line 55, in <module>
    resumable=True)
  File "/home/servicioweb/.local/lib/python2.7/site-packages/googleapiclient/_helpers.py", line 130, in positional_wrapper
    return wrapped(*args, **kwargs)
  File "/home/servicioweb/.local/lib/python2.7/site-packages/googleapiclient/http.py", line 554, in __init__
    fd = open(self._filename, 'rb')
IOError: [Errno 2] No such file or directory: 'files/report.csv'
I created the files directory manually in Google Drive, but it keeps telling me that it cannot find it. What could be happening that I cannot see? I have spent two days on this and I have not been able to upload the files from the script.
You are confusing the parameters on line 50 and line 53. The parameter name, which goes in the file_metadata structure, refers to the name of the file on Google Drive. The first parameter to the MediaFileUpload constructor refers to the path on the local drive. For your code to work, this local file needs to exist.

Also, you are referring to an undefined variable, drive_service, on line 56. You can either redefine the variable service, which is defined in the main function, as a global variable, or move the code that requests the API upload (starting on line 49) into the function main. In either case, main needs to be called before your upload code so that the service object is actually created.
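The IOError in the traceback follows from exactly this distinction: MediaFileUpload opens the given path with open(path, 'rb') on the local filesystem, so the folder you created on Drive is irrelevant to it. A minimal pre-flight check could look like this (the helper name local_file_ready is illustrative, not part of the Drive API):

```python
import os

def local_file_ready(path):
    # MediaFileUpload opens this path on the *local* disk; this is the
    # check that failed with IOError [Errno 2] in the traceback above.
    return os.path.isfile(path)

# e.g. guard the upload:
# if not local_file_ready('files/report.csv'):
#     raise SystemExit("create files/report.csv locally before uploading")
```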
If you just want to upload this to the root of your drive, you can create the file files/report.csv relative to this script, and you will have the file report.csv created at the root of your drive.
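Either way, files/report.csv must exist on the local disk before MediaFileUpload can open it. A sketch that creates the local directory and an empty placeholder CSV if they are missing (the helper name ensure_local_report is my own; written to work on both Python 2 and 3):

```python
import os

def ensure_local_report(path):
    # Create the parent directory and an empty CSV locally so that
    # MediaFileUpload(path, ...) can later open it with open(path, 'rb').
    d = os.path.dirname(path)
    if d and not os.path.isdir(d):
        os.makedirs(d)
    if not os.path.isfile(path):
        with open(path, 'w') as fh:
            fh.write('')  # placeholder content
    return path
```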
To create the file files/report.csv on Drive, you need to find the fileId of the directory files on your Google Drive and send it as a parameter to the create API call.
To find the fileId, run this code:
dirp = "files" # Name of directory to find.
parent_id = "" # The id we are looking for.
query = ("name='%s'" % (dirp))
resp = service.files().list(
q=query,
fields="files(id, name)",
pageToken=None).execute()
files = resp.get('files', [])
if len(files) > 0:
parent_id = files[0].get('id')
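One caveat with the query above: it matches anything named files, including regular files. The Drive v3 search syntax also supports filtering by mimeType and trashed status, so if you want to be sure parent_id really is a folder, a stricter query string can be built like this (the helper name build_folder_query is my own):

```python
def build_folder_query(name):
    # Drive v3 'q' syntax: restrict the match to non-trashed folders.
    return ("name='%s' and "
            "mimeType='application/vnd.google-apps.folder' and "
            "trashed=false" % name)

# used as: service.files().list(q=build_folder_query("files"), ...)
```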
Now use the variable parent_id in the API request to create the file:
media = MediaFileUpload('report.csv',
mimetype='text/csv',
resumable=True)
meta_data= { 'name': 'report.csv',
'mimeType' : 'application/vnd.google-apps.spreadsheet',
'parents': [parent_id] }
f = service.files().create(
body=meta_data,
media_body=media,
fields='id').execute()
if f is not None: print("[*] uploaded %s" % (f.get('id')))
Here is more info on the parameters for the create function.
The working code would look like this:
from __future__ import print_function
import pickle
import os.path
from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
from apiclient.http import MediaFileUpload
# If modifying these scopes, delete the file token.pickle.
# Uploading requires write access; drive.metadata.readonly is not enough.
SCOPES = ['https://www.googleapis.com/auth/drive']
service = None
def main():
"""Shows basic usage of the Drive v3 API.
Prints the names and ids of the first 10 files the user has access to.
"""
global service
creds = None
# The file token.pickle stores the user's access and refresh tokens, and is
# created automatically when the authorization flow completes for the first
# time.
if os.path.exists('token.pickle'):
with open('token.pickle', 'rb') as token:
creds = pickle.load(token)
# If there are no (valid) credentials available, let the user log in.
if not creds or not creds.valid:
if creds and creds.expired and creds.refresh_token:
creds.refresh(Request())
else:
flow = InstalledAppFlow.from_client_secrets_file(
'credentials.json', SCOPES)
creds = flow.run_local_server()
# Save the credentials for the next run
with open('token.pickle', 'wb') as token:
pickle.dump(creds, token)
service = build('drive', 'v3', credentials=creds)
# Call the Drive v3 API
results = service.files().list(
pageSize=10, fields="nextPageToken, files(id, name)").execute()
items = results.get('files', [])
if not items:
print('No files found.')
else:
print('Files:')
for item in items:
print(u'{0} ({1})'.format(item['name'], item['id']))
main()
# Retrieve the parent ID of the files/ directory
dirp = "files" # Name of directory to find.
parent_id = "" # The id we are looking for.
query = ("name='%s'" % (dirp))
resp = service.files().list(
q=query,
fields="files(id, name)",
pageToken=None).execute()
files = resp.get('files', [])
# Create a file object for file 'report.csv' on your local drive.
media = MediaFileUpload('report.csv',
mimetype='text/csv',
resumable=True)
# Upload the file.
if len(files) > 0:
parent_id = files[0].get('id')
meta_data= { 'name': 'report.csv',
'parents': [parent_id],
'mimeType' : 'application/vnd.google-apps.spreadsheet' }
f = service.files().create(
body=meta_data,
media_body=media,
fields='id').execute()
if f is not None: print("[*] uploaded %s" % (f.get('id')))
else: print("The folder files/ does not exist on your drive.")
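Since the goal is unattended server backups, larger files benefit from driving the resumable upload in chunks and logging progress. The loop below assumes only what the client library provides: the request object returned by service.files().create(...) with a resumable MediaFileUpload has a next_chunk() method returning (status, response), where response stays None until the upload completes. The helper name execute_resumable is my own sketch:

```python
def execute_resumable(request):
    # Drive a resumable upload to completion, chunk by chunk.
    # next_chunk() returns (status, response); response is None until done.
    response = None
    while response is None:
        status, response = request.next_chunk()
        if status:
            print("uploaded %d%%" % int(status.progress() * 100))
    return response

# used as:
# request = service.files().create(body=meta_data, media_body=media, fields='id')
# f = execute_resumable(request)
```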