
Python requests.get() loop returns nothing

When trying to scrape multiple pages of this website, I get no content in return. I usually check to make sure all the lists I'm creating are of equal length, but all are coming back with len = 0.

I've used similar code to scrape other websites, so why does this code not work correctly?

Some solutions I've tried that haven't worked for my purposes: the requests.Session() approach suggested in this answer, and .json as suggested here.

for page in range(100, 350):

    page = requests.get("https://www.ghanaweb.com/GhanaHomePage/election2012/parliament.constituency.php?ID=" + str(page) + "&res=pm")

    page.encoding = page.apparent_encoding

    if not page:
        pass

    else:

        soup = BeautifulSoup(page.text, 'html.parser')

        ghana_tbody = soup.find_all('tbody')

        sleep(randint(2,10))

        for container in ghana_tbody:

            #### CANDIDATES ####
            candidate = container.find_all('div', class_='can par')
            for data in candidate:
                cand = data.find('h4')
                for info in cand:
                    if cand is not None:
                        can2 = info.get_text()
                        can.append(can2)

            #### PARTY NAMES ####
            partyn = container.find_all('h5')
            for data in partyn:
                if partyn is not None:
                    partyn2 = data.get_text()
                    pty_n.append(partyn2)

            #### CANDIDATE VOTES ####
            votec = container.find_all('td', class_='votes')
            for data in votec:
                if votec is not None:
                    votec2 = data.get_text()
                    cv1.append(votec2)

            #### CANDIDATE VOTE SHARE ####
            cansh = container.find_all('td', class_='percent')
            for data in cansh:
                if cansh is not None:
                    cansh2 = data.get_text()
                    cvs1.append(cansh2)

        #### TOTAL VOTES ####
        tfoot = soup.find_all('tr', class_='total')
        for footer in tfoot:
            fvote = footer.find_all('td', class_='votes')
            for data in fvote:
                if fvote is not None:
                    fvote2 = data.get_text()
                    fvoteindiv = [fvote2]
                    fvotelist = fvoteindiv * (len(pty_n) - len(vot1))
                    vot1.extend(fvotelist)

Thanks in advance for your help!

I've made some simplifying changes. The major changes that were needed:

  1. ghana_tbody = soup.find_all('table', class_='canResults')
  2. can2 = info # not info.get_text()

I have only tested this against page 112; life is too short.
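Why change 1 matters: a `<tbody>` tag is usually inserted by the browser when it renders a table and is often absent from the raw HTML that requests downloads, so `soup.find_all('tbody')` comes back empty. A minimal offline sketch of the difference (the HTML snippet below is a made-up stand-in, not the actual GhanaWeb markup):

```python
from bs4 import BeautifulSoup

# Raw HTML as a server typically sends it: no <tbody> written out.
raw_html = """
<table class="canResults">
  <tr><td class="votes">14,966</td></tr>
</table>
"""

soup = BeautifulSoup(raw_html, "html.parser")

# html.parser does not synthesize implied elements such as <tbody>,
# so searching for it finds nothing...
print(soup.find_all("tbody"))                            # -> []

# ...while selecting the table element directly succeeds.
print(len(soup.find_all("table", class_="canResults")))  # -> 1
```

(The html5lib parser, by contrast, does insert the implied `<tbody>`, which is why the element shows up in the browser's inspector but not in this scrape.)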

import requests
from bs4 import BeautifulSoup
from random import randint
from time import sleep

can = []
pty_n = []
cv1 = []
cvs1 = []
vot1 = []

START_PAGE = 112
END_PAGE = 112

for page in range(START_PAGE, END_PAGE + 1):
    page = requests.get("https://www.ghanaweb.com/GhanaHomePage/election2012/parliament.constituency.php?ID=" + str(page) + "&res=pm")
    page.encoding = page.apparent_encoding
    if not page:
        pass
    else:
        soup = BeautifulSoup(page.text, 'html.parser')
        ghana_tbody = soup.find_all('table', class_='canResults')
        sleep(randint(2,10))
        for container in ghana_tbody:

            #### CANDIDATES ####
            candidate = container.find_all('div', class_='can par')
            for data in candidate:
                cand = data.find('h4')
                for info in cand:
                    can2 = info # not info.get_text()
                    can.append(can2)

            #### PARTY NAMES ####
            partyn = container.find_all('h5')
            for data in partyn:
                partyn2 = data.get_text()
                pty_n.append(partyn2)


            #### CANDIDATE VOTES ####
            votec = container.find_all('td', class_='votes')
            for data in votec:
                votec2 = data.get_text()
                cv1.append(votec2)

            #### CANDIDATE VOTE SHARE ####
            cansh = container.find_all('td', class_='percent')
            for data in cansh:
                cansh2 = data.get_text()
                cvs1.append(cansh2)

        #### TOTAL VOTES ####
        tfoot = soup.find_all('tr', class_='total')
        for footer in tfoot:
            fvote = footer.find_all('td', class_='votes')
            for data in fvote:
                fvote2 = data.get_text()
                fvoteindiv = [fvote2]
                fvotelist = fvoteindiv * (len(pty_n) - len(vot1))
                vot1.extend(fvotelist)

print('can = ', can)
print('pty_n = ', pty_n)
print('cv1 = ', cv1)
print('cvs1 = ', cvs1)
print('vot1 = ', vot1)

Prints:

can =  ['Kwadwo Baah Agyemang', 'Daniel Osei', 'Anyang - Kusi Samuel', 'Mary Awusi']
pty_n =  ['NPP', 'NDC', 'IND', 'IND']
cv1 =  ['14,966', '9,709', '8,648', '969', '34292']
cvs1 =  ['43.64', '28.31', '25.22', '2.83', '\xa0']
vot1 =  ['34292', '34292', '34292', '34292']

Be sure to first change START_PAGE and END_PAGE to 100 and 350 respectively.
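A possible follow-up step, not part of the original answer: since the scraper fills parallel lists, you can zip them into per-candidate records. The field names here are my own choice, and the sample data is copied from the output above. Note that cv1 and cvs1 each carry one extra footer entry, which zip() silently drops because it stops at the shortest list:

```python
# Sample data copied from the printed output above.
can = ['Kwadwo Baah Agyemang', 'Daniel Osei', 'Anyang - Kusi Samuel', 'Mary Awusi']
pty_n = ['NPP', 'NDC', 'IND', 'IND']
cv1 = ['14,966', '9,709', '8,648', '969', '34292']   # last entry is the footer total
cvs1 = ['43.64', '28.31', '25.22', '2.83', '\xa0']   # last entry is a blank footer cell
vot1 = ['34292', '34292', '34292', '34292']

# zip() truncates to the shortest list (length 4), discarding the footer rows.
rows = [
    {
        "candidate": c,
        "party": p,
        "votes": int(v.replace(",", "")),  # strip thousands separators
        "share": s,
        "total": int(t),
    }
    for c, p, v, s, t in zip(can, pty_n, cv1, cvs1, vot1)
]
print(rows[0])
# -> {'candidate': 'Kwadwo Baah Agyemang', 'party': 'NPP', 'votes': 14966, 'share': '43.64', 'total': 34292}
```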
