
How can I add a header to urllib.request.urlretrieve keeping my variables?

I'm trying to download a file from a website, but it looks like the site detects urllib and blocks the download (I'm getting the error "urllib.error.HTTPError: HTTP Error 403: Forbidden").

How can I fix this? I found online that I need to add a header, but the answers I found didn't fit my case (they used Request, and I couldn't find any argument to urllib.request.urlretrieve() for adding a header).

I'm using Python 3.6

Here's the code:

import urllib.request
filelink = 'https://randomwebsite.com/changelog.txt'
filename = filelink.rsplit('/', 1)
filename = str(filename[1])
urllib.request.urlretrieve(filelink, filename)

I want to include a header to get permission to download the file, but I need to keep a line like the last one, using the two variables (one for the link to the file and one for the name, which is derived from the link).
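For what it's worth, one way to keep the urlretrieve() call exactly as written is to install a global opener whose requests carry a browser-like User-Agent header; urlretrieve then uses that opener under the hood. This is only a sketch, and the 'Mozilla/5.0' header value is an illustrative placeholder, not a requirement:

```python
import urllib.request

def install_header_opener(user_agent='Mozilla/5.0'):
    """Install a global opener so every urllib call, including
    urllib.request.urlretrieve, sends a User-Agent header."""
    opener = urllib.request.build_opener()
    opener.addheaders = [('User-Agent', user_agent)]
    urllib.request.install_opener(opener)

# Usage (the URL is the asker's placeholder):
# install_header_opener()
# filelink = 'https://randomwebsite.com/changelog.txt'
# filename = filelink.rsplit('/', 1)[1]
# urllib.request.urlretrieve(filelink, filename)
```

After install_opener(), the original two-variable code works unchanged, because urlretrieve routes through the installed opener.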

Thanks already for your help!

Check the below link: https://stackoverflow.com/a/7244263/5903276

The most correct way to do this would be to use the urllib.request.urlopen function to return a file-like object that represents an HTTP response and copy it to a real file using shutil.copyfileobj.
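A sketch of that approach, keeping the asker's two variables (link and derived filename); the 'Mozilla/5.0' User-Agent value is an arbitrary example, not something the answer mandates:

```python
import shutil
import urllib.request

def download(filelink):
    """Fetch filelink with a browser-like User-Agent header and save it
    under the name taken from the last URL segment; returns that name."""
    filename = filelink.rsplit('/', 1)[1]
    req = urllib.request.Request(filelink,
                                 headers={'User-Agent': 'Mozilla/5.0'})
    # urlopen returns a file-like response object; copy its body to disk.
    with urllib.request.urlopen(req) as response, \
            open(filename, 'wb') as out_file:
        shutil.copyfileobj(response, out_file)
    return filename

# Usage (the URL is the asker's placeholder):
# download('https://randomwebsite.com/changelog.txt')
```

Unlike urlretrieve, Request lets you pass headers directly, which is what gets past the 403 when the server rejects the default urllib User-Agent.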

