
Python requests module gets stuck on requests.get() and gets timed out

I've been trying to web scrape from the following site: "https://www.india.ford.com/cars/aspire/"

import requests
from bs4 import BeautifulSoup
import csv

response = requests.get("https://www.india.ford.com/cars/aspire/", timeout=5)

if response.status_code != 200:
    print("error!")
else:
    print(response.status_code)

Without a timeout, the execution gets stuck indefinitely.

On adding timeout=5 (as in the code above), I get the following error:

[screenshot of the error traceback]

I'm new to this, so sorry if it's a noob question. Any help is highly appreciated. :P

To handle the timeout, wrap the request in a try/except block.

This page also requires you to disguise the request as a browser by sending a User-Agent header.

import requests

# Some servers hang or reject requests that lack a browser-like
# User-Agent header, so disguise the client as a browser.
headers = {
    'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.111 Safari/537.36',
}

try:
    response = requests.get("https://www.india.ford.com/cars/aspire/", headers=headers, timeout=5)

    if response.status_code != 200:
        print("error!")
    else:
        print(response.status_code)
except requests.exceptions.Timeout:
    # Raised when the server does not respond within the 5-second limit.
    print('time out')
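
Once the request succeeds, you can hand response.text to BeautifulSoup, which the question already imports. A minimal sketch of that next step, assuming you just want the page <title> (the html.parser backend and the title lookup are illustrative choices, not taken from this page's actual markup):

import requests
from bs4 import BeautifulSoup

headers = {
    'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.111 Safari/537.36',
}

try:
    response = requests.get("https://www.india.ford.com/cars/aspire/", headers=headers, timeout=5)
    response.raise_for_status()  # raise for 4xx/5xx instead of checking status_code by hand
    soup = BeautifulSoup(response.text, 'html.parser')
    print(soup.title.string if soup.title else 'no <title> found')
except requests.exceptions.RequestException as error:
    # RequestException is the base class, so one handler covers
    # Timeout, ConnectionError, HTTPError, etc.
    print('request failed:', error)

Catching requests.exceptions.RequestException keeps a single handler for timeouts, connection resets, and HTTP errors alike; catch the narrower Timeout if you want to treat it separately.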
