
Can't parse a Google search result page using BeautifulSoup

I'm parsing webpages using BeautifulSoup from bs4 in Python. When I inspected the elements of a Google search page, this was the division containing the 1st result:

(screenshot of the inspected `div` element)

and since it had class = 'r' I wrote this code:

import requests
from bs4 import BeautifulSoup

site = requests.get('https://www.google.com/search?client=firefox-b-d&ei=CLtgXt_qO7LH4-EP6LSzuAw&q=%22narendra+modi%22+%22scams%22+%22frauds%22+%22corruption%22+%22modi%22+-lalit+-nirav&oq=%22narendra+modi%22+%22scams%22+%22frauds%22+%22corruption%22+%22modi%22+-lalit+-nirav&gs_l=psy-ab.3...5077.11669..12032...5.0..0.202.2445.1j12j1......0....1..gws-wiz.T_WHav1OCvk&ved=0ahUKEwjfjrfv94LoAhWy4zgGHWjaDMcQ4dUDCAo&uact=5')
page = BeautifulSoup(site.content, 'html.parser')
results = page.find_all('div', class_="r")
print(results)

But the command prompt returned just [].

What could've gone wrong, and how can I correct it?

Also, here's the webpage.

EDIT 1: I edited my code accordingly by adding the dictionary for headers, yet the result is the same []. Here's the new code:

import requests
from bs4 import BeautifulSoup

headers = {
    'User-Agent' : 'Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:47.0) Gecko/20100101 Firefox/47.0'
}
site = requests.get('https://www.google.com/search?client=firefox-b-d&ei=CLtgXt_qO7LH4-EP6LSzuAw&q=%22narendra+modi%22+%22cams%22+%22frauds%22+%22corruption%22+%22modi%22+-lalit+-nirav&oq=%22narendra+modi%22+%22scams%22+%22frauds%22+%22corruption%22+%22modi%22+-lalit+-nirav&gs_l=psy-ab.3...5077.11669..12032...5.0..0.202.2445.1j12j1......0....1..gws-wiz.T_WHav1OCvk&ved=0ahUKEwjfjrfv94LoAhWy4zgGHWjaDMcQ4dUDCAo&uact=5', headers = headers)
page = BeautifulSoup(site.content, 'html.parser')
results = page.find_all('div', class_="r")
print(results)

NOTE: When I tell it to print the entire page, there's no problem, and when I take list(page.children), it works fine.

Some websites require the User-Agent header to be set in order to reject requests that don't come from a real browser. Fortunately, there's a way to pass headers to the request:

# Define a dictionary of http request headers
headers = {
  'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:47.0) Gecko/20100101 Firefox/47.0'
} 

# Pass in the headers as a parameterized argument
requests.get(url, headers=headers)

Note: A list of user agents can be found here.
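You can also confirm offline that the header is actually attached, by preparing the request without sending it (the URL below is just a placeholder, no network call is made):

```python
import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:47.0) Gecko/20100101 Firefox/47.0'
}

# Build and prepare the request without sending it, so the outgoing
# headers can be inspected before any network traffic happens
prepared = requests.Request(
    'GET', 'https://www.google.com/search?q=test', headers=headers
).prepare()
print(prepared.headers['User-Agent'])
```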

>>> give_me_everything = soup.find_all('div', class_='yuRUbf')
Prints a bunch of stuff.
>>> give_me_everything_v2 = soup.select('.yuRUbf')
Prints a bunch of stuff.

Note that you can't do something like this:

>>> give_me_everything = soup.find_all('div', class_='yuRUbf').text
AttributeError: ResultSet object has no attribute 'text'. You're probably treating a list of elements like a single element.
>>> for result in soup.find_all('div', class_='yuRUbf'):
    print(result.text)
Prints a bunch of stuff.
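The same ResultSet-versus-element distinction can be reproduced offline with a tiny hand-written snippet (the HTML below is a made-up stand-in for Google's markup, not the real page):

```python
from bs4 import BeautifulSoup

# Minimal stand-in for two search-result blocks
html = ('<div class="yuRUbf"><a href="https://example.com"><h3>First</h3></a></div>'
        '<div class="yuRUbf"><a href="https://example.org"><h3>Second</h3></a></div>')
soup = BeautifulSoup(html, 'html.parser')

# find_all returns a ResultSet (list-like), not a single element,
# so .text only works on each item, not on the set itself
results = soup.find_all('div', class_='yuRUbf')
for div in results:
    print(div.text)
```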

Code:

from bs4 import BeautifulSoup
import requests

headers = {
    'User-agent':
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) "
    "Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582"
}

html = requests.get('https://www.google.com/search?q="narendra modi" "scams" "frauds" "corruption" "modi" -lalit -nirav', headers=headers)
soup = BeautifulSoup(html.text, 'html.parser')

give_me_everything = soup.find_all('div', class_='yuRUbf')
print(give_me_everything)
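To go beyond printing raw divs, each title and link can be pulled out individually. This sketch assumes each `yuRUbf` div wraps an `<a>` containing an `<h3>` (true of Google's markup at the time of writing), demonstrated on a static stand-in snippet rather than the live page:

```python
from bs4 import BeautifulSoup

# Stand-in for one fetched result block; substitute html.text from a live response
html = '<div class="yuRUbf"><a href="https://example.com/page"><h3>Example title</h3></a></div>'
soup = BeautifulSoup(html, 'html.parser')

for div in soup.find_all('div', class_='yuRUbf'):
    title = div.h3.text    # the <h3> inside the block holds the title
    link = div.a['href']   # the wrapping <a> holds the result URL
    print(f'{title}\n{link}\n')
```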

Alternatively, you can do the same thing using the Google Search Engine Results API from SerpApi. It's a paid API with a free trial of 5,000 searches.

The main difference is that you don't have to come up with a different solution when something stops working, and thus don't have to maintain the parser.

Code to integrate:

from serpapi import GoogleSearch

params = {
  "api_key": "YOUR_API_KEY",
  "engine": "google",
  "q": 'narendra modi" "scams" "frauds" "corruption" "modi" -lalit -nirav',
}

search = GoogleSearch(params)
results = search.get_dict()

for result in results['organic_results']:
    title = result['title']
    link = result['link']
    displayed_link = result['displayed_link']
    print(f'{title}\n{link}\n{displayed_link}\n')

----------
Opposition Corners Modi Govt On Jay Shah Issue, Rafael ...
https://www.outlookindia.com/website/story/no-confidence-vote-opposition-corners-modi-govt-on-jay-shah-issue-rafael-deals-c/313790
https://www.outlookindia.com

Modi, Rahul and Kejriwal describe one another as frauds ...
https://www.business-standard.com/article/politics/modi-rahul-and-kejriwal-describe-one-another-as-frauds-114022400019_1.html
https://www.business-standard.com
...

Disclaimer: I work for SerpApi.
