
Automatically Downloading Files from an ASPX Website in Python

This is my first time posting on Stack Overflow, and I look forward to getting more involved with the community. I need to download, rename, and save many Excel files from an ASPX website, but I cannot access these Excel files directly via a URL (i.e., the URL does not end with "excelfilename.csv"). What I can do is go to a URL that initiates the download of the file. An example of such a URL is below.

https://www.websitename.com/something/ASPXthing.aspx?ReportName=ExcelFileName&Date=SomeDate&reportformat=csv

The inputs that I want to vary via loops are "ExcelFileName" and "SomeDate". I know one can fetch these files with urllib when the Excel files can be accessed directly via a URL, but how can I do it with a URL like this one?

Thanks in advance for helping out!

Using the requests library, you can fetch the file with a streaming GET request and iterate over the response in chunks, writing each chunk to a local file:

import requests

report_names = ["Filename1", "Filename2"]
dates = ['2016-02-22', '2016-02-23']  # as strings
for report_name in report_names:
    for date in dates:
        # Stream the download so large files are not loaded into memory all at once
        response = requests.get(
            'https://www.websitename.com/something/ASPXthing.aspx?ReportName=%s&Date=%s&reportformat=csv' % (report_name, date),
            stream=True)

        if not response.ok:
            # Something went wrong; skip this report/date combination
            continue

        with open('%s_%s_fetched.csv' % (report_name.split('.')[0], date), 'wb') as handle:
            # Write the response body to disk in 1 KB chunks
            for block in response.iter_content(1024):
                handle.write(block)
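As a side note, requests can also build the query string for you via its params argument, which avoids manual string formatting and handles URL encoding of the report name and date. Below is a minimal sketch for a single file, assuming the same placeholder URL and parameter names (ReportName, Date, reportformat) from the question:

import requests

# Placeholder base URL from the question; replace with the real ASPX endpoint
BASE_URL = 'https://www.websitename.com/something/ASPXthing.aspx'

response = requests.get(
    BASE_URL,
    params={'ReportName': 'Filename1', 'Date': '2016-02-22', 'reportformat': 'csv'},
    stream=True)
response.raise_for_status()  # raises requests.HTTPError on a 4xx/5xx response

with open('Filename1_2016-02-22_fetched.csv', 'wb') as handle:
    for block in response.iter_content(1024):
        handle.write(block)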
