
Python web-scraping and downloading specific zip files in Windows

I'm trying to download and stream the contents of specific zip files on a web page.

The web page has labels and links to zip files that use a table structure and appear like this:

Filename                   Flag  Link
testfile_20190725_csv.zip  Y     zip
testfile_20190725_xml.zip  Y     zip
testfile_20190724_csv.zip  Y     zip
testfile_20190724_xml.zip  Y     zip
testfile_20190723_csv.zip  Y     zip
testfile_20190723_xml.zip  Y     zip
(etc.)

The word 'zip' above is the link to the zip file. I'd like to download ONLY the CSV zip files and only the first x (say 7) that appear on the page - but none of the XML zip files.

A sample of the webpage code is here:

<tr>
 <td class="labelOptional_ind">
  testfile_20190725_csv.zip
 </td>
 <td class="labelOptional" width="15%">
  <div align="center">
  Y
  </div>
 </td>
 <td class="labelOptional" width="15%">
  <div align="center">
   <a href="/test1/servlets/mbDownload?doclookupId=671334586">
    zip
   </a>
  </div>
 </td>
</tr>
<tr>
 <td class="labelOptional_ind">
  testfile_20190725_xml.zip
 </td>
 <td class="labelOptional" width="15%">
  <div align="center">
  N
  </div>
 </td>
 <td class="labelOptional" width="15%">
  <div align="center">
   <a href="/test1/servlets/mbDownload?doclookupId=671190392">
    zip
   </a>
  </div>
 </td>
</tr>
<tr>
 <td class="labelOptional_ind">
  testfile_20190724_csv.zip
 </td>
 <td class="labelOptional" width="15%">
  <div align="center">

I think I'm almost there, but need a bit of help. What I've been able to do so far is:

1. Check for the existence of a local download folder and create it if it's not there
2. Set up BeautifulSoup, read from the webpage all of the main labels (the first column of the table), and read all the zip links - ie. the 'a hrefs'
3. For testing, manually set a variable to one of the labels and another to its corresponding zip file link, download the file, and stream the CSV contents of the zip file

What I need help with is: downloading all main labels AND their corresponding links, then looping through each, skipping any XML labels/links, and downloading/streaming only the CSV labels/links.

Here's the code I have:

# Read zip files from page, download file, extract and stream output
from io import BytesIO
from zipfile import ZipFile
import urllib.request
import os, sys, requests, csv
from bs4 import BeautifulSoup

# check for download directory existence; create if not there
if not os.path.isdir('f:\\temp\\downloaded'):
    os.makedirs('f:\\temp\\downloaded')

# Get labels and zip file download links
mainurl = "http://www.test.com/"
url = "http://www.test.com/thisapp/GetReports.do?Id=12331"

# get page and setup BeautifulSoup
r = requests.get(url)
soup = BeautifulSoup(r.content, "html.parser")

# Get all file labels and filter so only use CSVs
mainlabel = soup.find_all("td", {"class": "labelOptional_ind"})
for td in mainlabel:
    if "_csv" in td.text:
        print(td.text)

# Get all <a href> urls
for link in soup.find_all('a'):
    print(mainurl + link.get('href'))

# QUESTION: HOW CAN I LOOP THROUGH ALL FILE LABELS AND FIND ONLY THE
# CSV LABELS AND THEIR CORRESPONDING ZIP DOWNLOAD LINK, SKIPPING ANY
# XML LABELS/LINKS, THEN LOOP AND EXECUTE THE CODE BELOW FOR EACH, 
# REPLACING zipfilename WITH THE MAIN LABEL AND zipurl WITH THE ZIP 
# DOWNLOAD LINK?

# Test downloading and streaming
zipfilename = 'testfile_20190725_xml.zip'
zipurl = 'http://www.test.com/thisdownload/servlets/thisDownload?doclookupId=674992379'
outputFilename = "f:\\temp\\downloaded\\" + zipfilename

# Download the zip file
url = urllib.request.urlopen(zipurl)
zippedData = url.read()

# Save zip file to disk
print ("Saving to ",outputFilename)
output = open(outputFilename,'wb')
output.write(zippedData)
output.close()

# Unzip and stream CSV file
with ZipFile(BytesIO(zippedData)) as my_zip_file:
    for contained_file in my_zip_file.namelist():
        with open("unzipped_and_read_" + contained_file + ".file", "wb") as output:
            for line in my_zip_file.open(contained_file).readlines():
                output.write(line)  # write each line out as well as printing it
                print(line)

To get all the required links, you can use the find_all() method with a custom function. The function will search for <td> tags whose text ends with "csv.zip".

Here, data is the HTML snippet from the question:

from bs4 import BeautifulSoup

soup = BeautifulSoup(data, 'html.parser')

for td in soup.find_all(lambda tag: tag.name=='td' and tag.text.strip().endswith('csv.zip')):
    link = td.find_next('a')
    print(td.get_text(strip=True), link['href'] if link else '')

Prints:

testfile_20190725_csv.zip /test1/servlets/mbDownload?doclookupId=671334586
testfile_20190724_csv.zip 
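
Note that the second line prints an empty href only because the sample HTML is truncated before the third row's <a> tag; on the full page, find_next('a') picks up the zip link that follows each matched cell. The question also asks for just the first x (say 7) CSV files. A minimal way to cap the count, assuming the rows appear on the page in the desired order, is to slice the result of find_all() before looping:

from bs4 import BeautifulSoup

soup = BeautifulSoup(data, 'html.parser')

# keep only the first 7 <td> cells whose text ends with "csv.zip"
csv_cells = soup.find_all(
    lambda tag: tag.name == 'td' and tag.text.strip().endswith('csv.zip')
)[:7]

for td in csv_cells:
    link = td.find_next('a')  # the first <a> after a CSV label is that row's zip link
    print(td.get_text(strip=True), link['href'] if link else '')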

Instead of creating two separate lists for labels and URLs, you can capture the whole row, check whether the label is a CSV, and then use the URL in that row to download the file.

# Using the class name to identify the correct labels
mainlabel = soup.find_all("td", {"class": "labelOptional_ind"})

# find the containing row <tr> for each label
fullrows = [label.find_parent('tr') for label in mainlabel]

Now you can test the label and download the file using:

for row in fullrows:
    if "_csv" in row.text:
        print(mainurl + row.find('a').get('href')) # download this!
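
Putting this together with the download-and-stream code from the question, the whole task might look like the sketch below. It reuses the names from the question (mainurl, zipfilename, zippedData, and so on), assumes the hrefs on the page are relative to mainurl, and stops after the first 7 CSV zips:

import os
from io import BytesIO
from zipfile import ZipFile

import requests
from bs4 import BeautifulSoup

mainurl = "http://www.test.com/"
url = "http://www.test.com/thisapp/GetReports.do?Id=12331"

soup = BeautifulSoup(requests.get(url).content, "html.parser")

downloaded = 0
for label in soup.find_all("td", {"class": "labelOptional_ind"}):
    zipfilename = label.get_text(strip=True)
    if "_csv" not in zipfilename:  # skip the XML zips
        continue
    link = label.find_parent('tr').find('a')
    if link is None:  # row without a download link
        continue
    zipurl = mainurl.rstrip('/') + link['href']

    # download the zip file into memory
    zippedData = requests.get(zipurl).content

    # save a copy to disk
    outputFilename = os.path.join('f:\\temp\\downloaded', zipfilename)
    print("Saving to", outputFilename)
    with open(outputFilename, 'wb') as output:
        output.write(zippedData)

    # unzip and stream the CSV contents
    with ZipFile(BytesIO(zippedData)) as my_zip_file:
        for contained_file in my_zip_file.namelist():
            for line in my_zip_file.open(contained_file):
                print(line)

    downloaded += 1
    if downloaded == 7:  # only the first 7 CSV zip files
        break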
