
Accepting cookies while scraping a page with requests and BeautifulSoup

I wrote a script that tracks the price of a product across many different pages. The problem is that some websites use cookie-consent banners, and you have to click "accept cookies" before you can see the price.

This probably won't help, but this is the website. It's in Swedish, so many of you won't understand it.

How do I accept cookies while web scraping?

No cookie acceptance is involved in making a plain request, so you shouldn't face any problem doing a GET or POST request.
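If the site does gate the price behind a consent cookie, you can often skip the banner entirely by sending the cookie yourself. A minimal sketch with a `requests.Session` (the cookie name, value, and domain below are hypothetical and site-specific; find the real ones in your browser's dev tools):

```python
import requests

# A Session persists cookies across requests automatically, so any
# Set-Cookie header the server sends is replayed on later requests.
session = requests.Session()

# Some sites only need the consent cookie to *exist*, not the click.
# Name/value/domain here are made up -- inspect the real cookie in
# your browser's dev tools after clicking "accept" once.
session.cookies.set("cookie_consent", "accepted", domain="example.com")

# Every request made through this session now carries that cookie:
# r = session.get("https://example.com/product-page")
```

If that works, you avoid driving a real browser at all, which keeps the scraper fast.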

Edit: try this piece of code:

import requests

r = requests.get('https://www.google.com/')

# The 'with' block closes the file automatically; no explicit close needed.
with open('test.html', 'w') as f:
    f.write(r.text)

Open the test.html file in your web browser and look for the difference. test.html is what your code sees, which is different from what a person sees in a normal browser with the full GUI.

When you scrape a site you usually don't have to accept those cookies. But if you do want to accept them, you can simulate a click on the site's "accept" button. To do that:

Get the button's XPath: right-click the cookie button on the website and choose Inspect.
