
Why does my crawler neither fetch any data nor throw any error?

I have made a crawler to parse product names from Amazon, but when I run it, it neither returns any results nor shows any errors. As far as I know, the XPaths are okay, and I can't find any mistake I've made. I hope somebody can look into it.

import requests
from lxml import html

def Startpoint():
    url = "https://www.amazon.com/Best-Sellers/zgbs"
    response = requests.get(url)
    tree = html.fromstring(response.text)
    titles = tree.xpath('//ul[@id="zg_browseRoot"]')
    for title in titles:
        items = title.xpath('.//li/a/@href')
        for item in items:
            Endpoint(item)

def Endpoint(links):
    response = requests.get(links)
    tree = html.fromstring(response.text)
    titles = tree.xpath('//div[@class="a-section a-spacing-none p13n-asin"]')
    for title in titles:
        try:
            name = title.xpath('.//div[@class="p13n-sc-truncated-hyphen p13n-sc-truncated"]/text()')[0]
            print(name)
        except:
            continue

Startpoint()
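For context, `xpath()` returns an empty list when nothing matches, so if the page served to `requests` lacks the expected elements, the loops simply never run: no output and no exception. A minimal sketch (the HTML snippet is an assumption for illustration, not the real Amazon markup):

```python
from lxml import html

# Hypothetical page that does not contain the expected category list.
doc = html.fromstring("<html><body><div>no category list here</div></body></html>")

# When the XPath matches no nodes, the result is an empty list, so the
# for-loops in Startpoint() are silently skipped.
titles = doc.xpath('//ul[@id="zg_browseRoot"]')
print(titles)  # []
```

Printing the intermediate results like this is a quick way to see whether the problem is the XPath or the page content itself.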

You don't get any errors because you have a try/except block in your script.
If you want to display errors, change this:

except:
    continue

to:

except Exception as e:
    print(e)
    continue
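To illustrate with a small hypothetical helper (a stand-in for the scraping loop body, not the author's code): printing the exception object shows the error instead of swallowing it, whereas `e.message` would itself raise an `AttributeError` on Python 3.

```python
def first_title(titles):
    """Return the first match, printing any error rather than hiding it."""
    try:
        return titles[0]
    except Exception as e:
        # On Python 3, exceptions have no .message attribute; print(e) is safe.
        print(e)  # e.g. "list index out of range"
        return None

first_title([])          # prints the IndexError message
first_title(["Widget"])  # returns "Widget"
```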

Note:

It's best to have a separate except block for every expected exception (KeyError, ValueError, etc.) if you plan to handle those cases differently.
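As a sketch of that advice, a hypothetical `parse_price` helper (the name and fields are assumptions, not from the original code) that handles each expected exception separately:

```python
def parse_price(record):
    """Extract a price from a dict, handling each failure mode explicitly."""
    try:
        return float(record["price"])
    except KeyError:
        return None   # the field is missing entirely
    except ValueError:
        return 0.0    # the field is present but not a number

print(parse_price({"price": "19.99"}))  # 19.99
print(parse_price({}))                  # None
print(parse_price({"price": "n/a"}))    # 0.0
```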

Thanks to @David Metcalfe for this suggestion.
