
How can I scrape prices from the next pages?

I'm new to Python and web scraping. I wrote some code using requests and BeautifulSoup. One script scrapes prices, names, and links, and it works fine. It is as follows:

from bs4 import BeautifulSoup
import requests

urls = "https://www.meisamatr.com/fa/product/cat/2-%D8%A2%D8%B1%D8%A7%DB%8C%D8%B4%DB%8C.html#/pagesize-24/order-new/stock-1/page-1"
source = requests.get(urls).text
soup = BeautifulSoup(source, 'lxml')

for figcaption in soup.find_all('figcaption'):
    price = figcaption.div.text
    name = figcaption.find('a', class_='title').text
    link = figcaption.find('a', class_='title')['href']

    print(price)
    print(name)
    print(link)

and another one for building the other URLs that I need to scrape, which also prints the correct URLs when I use print():

x = 0
counter = 1

for x in range(0, 70):
    urls = "https://www.meisamatr.com/fa/product/cat/2-%D8%A2%D8%B1%D8%A7%DB%8C%D8%B4%DB%8C.html#/pagesize-24/order-new/stock-1/page-" + str(counter)
    counter += 1
    x += 1
    print(urls)

But when I try to combine these two, so that it scrapes a page, changes the URL to the next one, and scrapes that, it just gives me the information from the first page 70 times. Please guide me through this. The whole code is as follows:

from bs4 import BeautifulSoup
import requests

x = 0
counter = 1
for x in range(0, 70):
    urls = "https://www.meisamatr.com/fa/product/cat/2-%D8%A2%D8%B1%D8%A7%DB%8C%D8%B4%DB%8C.html#/pagesize-24/order-new/stock-1/page-" + str(counter)
    source = requests.get(urls).text
    soup = BeautifulSoup(source, 'lxml')
    counter += 1
    x += 1
    print(urls)

    for figcaption in soup.find_all('figcaption'):
        price = figcaption.div.text
        name = figcaption.find('a', class_='title').text
        link = figcaption.find('a', class_='title')['href']

        print(price)
        print()
        print(name)
        print()
        print(link)

Your x = 0 and then incrementing it by 1 is redundant and not needed, as the for loop already iterates through range(0, 70). I'm also not sure why you have a counter, as you don't need that either; the loop variable can build the URL on its own. Here's how you would do it, with the full working version further below.
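As a quick illustration, the URL-building loop from the question collapses to this (a sketch of the same loop with the extra counters removed):

for x in range(0, 70):
    # x runs 0..69, so the page number is simply x + 1
    urls = "https://www.meisamatr.com/fa/product/cat/2-%D8%A2%D8%B1%D8%A7%DB%8C%D8%B4%DB%8C.html#/pagesize-24/order-new/stock-1/page-" + str(x + 1)
    print(urls)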

HOWEVER, I believe the issue is not with the iteration or looping, but with the URL itself. If you manually go to the two pages listed below, the content doesn't change:

https://www.meisamatr.com/fa/product/cat/2-%D8%A2%D8%B1%D8%A7%DB%8C%D8%B4%DB%8C.html#/pagesize-24/order-new/stock-1/page-1

and then

https://www.meisamatr.com/fa/product/cat/2-%D8%A2%D8%B1%D8%A7%DB%8C%D8%B4%DB%8C.html#/pagesize-24/order-new/stock-1/page-2
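You can confirm this from Python, too: everything after the # is a URL fragment, which the browser handles locally and which requests never sends to the server, so both requests below should come back with the same document (a quick check, not part of the scraper itself):

import requests

base = "https://www.meisamatr.com/fa/product/cat/2-%D8%A2%D8%B1%D8%A7%DB%8C%D8%B4%DB%8C.html"
# The fragment after '#' is stripped before the request goes out,
# so both responses should be identical (barring per-request tokens in the HTML).
page_1 = requests.get(base + "#/pagesize-24/order-new/stock-1/page-1").text
page_2 = requests.get(base + "#/pagesize-24/order-new/stock-1/page-2").text
print(page_1 == page_2)  # expected: True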

Since the site is dynamic, you'll need to find a different way to iterate from page to page, or figure out what the exact URL is. So try:

from bs4 import BeautifulSoup
import requests

for x in range(0, 70):
    try:
        # Build the AJAX URL for this page; the page number is x + 1
        urls = 'https://www.meisamatr.com/fa/product/cat/2-%D8%A2%D8%B1%D8%A7%DB%8C%D8%B4%DB%8C.html&pagesize[]=24&order[]=new&stock[]=1&page[]=' + str(x + 1) + '&ajax=ok?_=1561559181560'
        source = requests.get(urls).text
        soup = BeautifulSoup(source, 'lxml')

        print('Page: %s' % (x + 1))

        for figcaption in soup.find_all('figcaption'):

            price = figcaption.find('span', {'class': 'new_price'}).text.strip()
            name = figcaption.find('a', class_='title').text
            link = figcaption.find('a', class_='title')['href']

            print('%s\n%s\n%s' % (price, name, link))
    except:
        # Stop once a page fails to load or parse (e.g. past the last page)
        break

You can find that link by going to the website and opening the dev tools (Ctrl+Shift+I, or right-click and choose 'Inspect') -> Network -> XHR.

When I did that and then physically clicked to the next page, I could see how that data was rendered, and found the reference URL.

[Screenshot: dev tools Network -> XHR tab showing the AJAX request for the next page]
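To double-check that the XHR endpoint really returns the product markup before wiring it into the loop, you can fetch a single page of it directly. This is just a sketch: it assumes the URL format shown above, and it drops the trailing _=1561559181560 parameter, which is jQuery's cache-busting timestamp and is most likely optional:

from bs4 import BeautifulSoup
import requests

# Same AJAX URL as above, for page 1 only, without the cache-busting timestamp
ajax_url = ("https://www.meisamatr.com/fa/product/cat/2-%D8%A2%D8%B1%D8%A7%DB%8C%D8%B4%DB%8C.html"
            "&pagesize[]=24&order[]=new&stock[]=1&page[]=1&ajax=ok")
source = requests.get(ajax_url).text
soup = BeautifulSoup(source, 'lxml')

# If the endpoint works, this prints the name of the first product on page 1.
first = soup.find('figcaption')
if first:
    print(first.find('a', class_='title').text)
else:
    print('No products returned - check the URL in the Network/XHR tab again.')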
