
How to stop Scrapy from overwriting the CSV export file after every crawl

Currently, I use Scrapy to crawl multiple pages of a website and export the data to a CSV file. Each day the spider crawls the pages and saves the data; however, it overwrites the data from the previous days. I was wondering how I could program the pipeline so that it appends to the end of the same CSV file. That way I can keep all of my previously scraped data in one place.

Generally, just change the mode argument in your file-open call from write to append.

Change

f = open('filename.txt','w')

to

f = open('filename.txt','a')

Of course, if we could see your original code, it would help us be more specific.
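
Since the question mentions a pipeline, here is a minimal sketch of a Scrapy item pipeline that appends to a CSV file across runs. The file name output.csv, the field list, and the class name AppendCsvPipeline are assumptions for illustration; adjust them to match your item definition.

import csv
import os

class AppendCsvPipeline:
    FILENAME = 'output.csv'    # hypothetical output path
    FIELDS = ['title', 'url']  # hypothetical item fields

    def open_spider(self, spider):
        # Mode 'a' appends, so data from previous runs is preserved.
        is_new_file = not os.path.exists(self.FILENAME)
        self.file = open(self.FILENAME, 'a', newline='', encoding='utf-8')
        self.writer = csv.DictWriter(self.file, fieldnames=self.FIELDS)
        if is_new_file:
            # Write the header row only once, when the file is first created.
            self.writer.writeheader()

    def close_spider(self, spider):
        self.file.close()

    def process_item(self, item, spider):
        # Works for plain dicts and scrapy.Item, both of which support .get().
        self.writer.writerow({f: item.get(f) for f in self.FIELDS})
        return item

Enable it in settings.py with something like ITEM_PIPELINES = {'myproject.pipelines.AppendCsvPipeline': 300} (the module path here is hypothetical). Also note that Scrapy's built-in feed export appends when you pass -o output.csv, while -O (available since Scrapy 2.0) overwrites the file.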
