
Python web crawling with Selenium & Beautiful Soup: all pages (go to next page)

from selenium import webdriver
from bs4 import BeautifulSoup

browser = webdriver.Firefox()
browser.get('http://www.megabox.co.kr/?show=detail&rtnShowMovieCode=013491')
html = browser.page_source
soup = BeautifulSoup(html, 'html.parser')
#Collect every tag with class "comment"
comment = soup.find_all(class_='comment')


for i, t in enumerate(comment,1):
    print('%2d: %s'%(i, t.text))

http://www.megabox.co.kr/?show=detail&rtnShowMovieCode=013491

I want to scrape all the comments on page 1, page 2, page 3, and so on, but I don't know how to do it. Could you explain?

I'm new to Python, so there may be bugs, but here is my attempt. I've done my best to add explanations in the comments.

from selenium import webdriver
from selenium.webdriver.common.by import By
from bs4 import BeautifulSoup

browser = webdriver.Chrome()
browser.get('http://www.megabox.co.kr/?show=detail&rtnShowMovieCode=013491')
html = browser.page_source
soup = BeautifulSoup(html, 'html.parser')
comment = soup.find_all(class_='comment')
#Same as the OP's code up to here (with Chrome instead of Firefox)

#Find the tag containing last page attributes 
lastpage = soup.find('a', {'class':'img_btn customer last'})

#Extracts digit in the "onclick" attribute (last page number)
last = int("".join(filter(str.isdigit, lastpage.get('onclick'))))
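#For example, if the onclick value were "goPage(137)" (hypothetical; inspect
#the real attribute in your browser), this yields last = 137. Note that it
#keeps *every* digit in the string, so it only works if the page number is
#the only number in the attribute; re.search(r'\d+', ...) would be safer.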

#Assign variables used to build each numbered page button's XPath
x = '//*[@id="'
y = '"]'

#The range starts at 2 because page 1 was already scraped above;
#last + 1 makes the range include the final page
for i in range(2, last + 1):

    try:

        #Concatenate the page number to form the XPath, e.g. '//*[@id="2"]'
        z = x + str(i) + y

        #Click the numbered button to go to the next page
        browser.find_element(By.XPATH, z).click()

        #Rinse and repeat your original code
        html = browser.page_source
        soup = BeautifulSoup(html, 'html.parser')

        #Scrape this page and extend the original comment list
        comment.extend(soup.find_all(class_='comment'))

    #If the numbered button is not found, we have probably reached the end
    #of the current block of 10 pages (please test and adjust in case of
    #issues; I had the patience to sit through just 100 pages)
    except Exception:

        #Find and click the button that moves to the next block of pages
        #(the Korean title means "view next 10 pages")
        browser.find_element(By.XPATH, '//*[@title="다음 10페이지 보기"]').click()

        #Rinse and repeat the same steps as in the try block
        html = browser.page_source
        soup = BeautifulSoup(html, 'html.parser')
        comment.extend(soup.find_all(class_='comment'))

    #I added the print below in lieu of a progress bar, so I know how many
    #pages have been done. You can omit it.
    finally:
        print('Page ' + str(i) + ' scraped!')

#OP's original output as-is
for i, t in enumerate(comment,1):
    print('%2d: %s'%(i, t.text))
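
For reference, the same pagination logic can be written with Selenium's explicit waits instead of a bare except, which makes the fallback condition specific. The sketch below is my own untested refactor, not part of the answer above; it assumes the same page structure: numbered paging buttons whose id is the page number, a "next 10 pages" button titled 다음 10페이지 보기, and comments marked with class "comment".

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
from bs4 import BeautifulSoup

browser = webdriver.Chrome()
browser.get('http://www.megabox.co.kr/?show=detail&rtnShowMovieCode=013491')
wait = WebDriverWait(browser, 10)

def scrape_current_page(comments):
    #Parse the current DOM and collect the comment tags
    soup = BeautifulSoup(browser.page_source, 'html.parser')
    comments.extend(soup.find_all(class_='comment'))

comments = []
scrape_current_page(comments)  #page 1

last = 100  #replace with the value parsed from the last-page button, as above
for page in range(2, last + 1):
    try:
        #Wait up to 10 s for the numbered button to be clickable, then click
        wait.until(EC.element_to_be_clickable(
            (By.XPATH, '//*[@id="%d"]' % page))).click()
    except TimeoutException:
        #The button is not in the current block of 10 pages, so advance
        #the pager first (same fallback as in the answer above)
        wait.until(EC.element_to_be_clickable(
            (By.XPATH, '//*[@title="다음 10페이지 보기"]'))).click()
    scrape_current_page(comments)
    print('Page %d scraped!' % page)

Using element_to_be_clickable as the probe keeps the try/except shape of the original answer while replacing the bare except Exception with the specific TimeoutException, and the wait gives an AJAX-driven pager time to update the DOM before it is read.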

I sincerely hope this helps.

