
Python3 to Download csv files from URL in txt file

The code below parses JSON from the URL to retrieve 10 URLs and appends them to an output.txt file.

import json
import urllib.request

# fetch the JSON and parse it (decode the bytes, then a single json.loads is enough)
response = urllib.request.urlopen('https://json-test.com/test').read()
jsonResponse = json.loads(response.decode('utf-8'))

# append each URL from the 'results' array to output.txt, one per line
with open("C:\\Users\\test\\Desktop\\test\\output.txt", "a") as out:
    for child in jsonResponse['results']:
        print(child['content'], file=out)

Now that there are 10 links to csv files in output.txt, I am trying to figure out how to download and save the 10 files. I tried doing something like this, but it is not working.

urllib.request.urlretrieve(['content'], "C:\\Users\\test\\Desktop\\test\\test1.csv")  

Even if I get the above working, it only handles 1 file, and there are 10 file links in output.txt. Any ideas?
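A minimal sketch of one possible approach, assuming the same hypothetical https://json-test.com/test endpoint and JSON shape as above, and that each child['content'] is a direct link to a CSV file, is to download every URL as it is parsed instead of going through output.txt:

import json
import urllib.request

# sketch only: the endpoint and JSON structure are the ones assumed above
response = urllib.request.urlopen('https://json-test.com/test').read()
jsonResponse = json.loads(response.decode('utf-8'))

for i, child in enumerate(jsonResponse['results'], start=1):
    # save each linked CSV as test1.csv, test2.csv, ... next to output.txt
    urllib.request.urlretrieve(child['content'],
                               "C:\\Users\\test\\Desktop\\test\\test" + str(i) + ".csv")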

Here is an exhaustive guide on how to download files over HTTP.

If the text file contains one link per line, you can iterate through the lines like this:

import urllib.request

with open('path/to/file.ext', 'r') as file:
    for id, line in enumerate(file):
        url = line.strip()  # drop the trailing newline before using the line as a URL
        # ... some regex checking if the text is actually a valid url
        urllib.request.urlretrieve(url, 'path/to/file' + str(id) + '.ext')
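Applied to the question, 'path/to/file.ext' would be the output.txt written above, and the destination name could be something like "C:\\Users\\test\\Desktop\\test\\test" + str(id) + ".csv", so that each of the 10 links in output.txt is saved to its own CSV file.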
