
How can I iterate through the full web table with beautifulsoup?

I want to scrape a web table with selenium and beautifulsoup. The table contains 10 "resultMainRow" divs, each with 4 "resultMainCell" divs. In the 4th resultMainCell there are 8 span elements, each containing an img with a src attribute. The HTML below shows one of the table rows. So far I can only print out the relevant page source. How can I iterate through the full web table together with the img src values?

 <div class="resultMainTable">
   <div class="resultMainRow">
     <div class="resultMainCell_1 tableResult2"> <a href="javascript:genResultDetails(2);" title="Best of the date">20/006 </a></div>
     <div class="resultMainCell_2 tableResult2">21/01/2020</div>
     <div class="resultMainCell_3 tableResult2"></div>
     <div class="resultMainCell_4 tableResult2">
       <span class="resultMainCellInner"> <img height="25" src="/info/images/icon/no_3abc”> </span>
       <span class="resultMainCellInner"> <img height="25" src = "/info/images/icon/no_14 " ></span>
       <span class="resultMainCellInner"> <img height="25" src "/info/images/icon/no_21 " ></span>
       <span class="resultMainCellInner"> <img height="25" src="/info/images/icon/no_28 " ></span>
       <span class="resultMainCellInner"> <img height="25" src=" /info/images/icon/no_37 "></span>
       <span class="resultMainCellInner"> <img height="25" src="/info/images/icon/no_44 "></span>
       <span class="resultMainCellInner"> <img height="6" src="/info/images/icon_happy " ></span>
       <span class="resultMainCellInner" <img height="25" src="/info/images/icon/smile "></span>
     </div>
   </div>


My code is as follows:

soup = BeautifulSoup(driver.page_source, 'lxml')
sixsix = soup.findAll("div", {"class": "resultMainTable"})
print(sixsix)

for row in sixsix:
    images = soup.findAll('img')
    for image in images:
        if len(images) == 8:
            aaa = images[1].find('src')
            bbb = images[2].find('src')
            ccc = images[3].find('src')
            ddd = images[4].find('src')
            eee = images[5].find('src')
            fff = images[6].find('src')
            ggg = images[7].find('src')
            hhh = images[8].find('src')
            print((row.text), (image('src')))
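(Side note on the snippet above: `find()` searches for descendant *tags* by name, not for attributes, so `images[1].find('src')` always returns `None`. Attribute values are read by subscription or `.get()`, as this minimal demonstration shows:)

```python
from bs4 import BeautifulSoup

# find() looks for child tags named 'src' (there are none), so it
# returns None; the attribute itself is read with tag['src'] or tag.get('src').
img = BeautifulSoup('<img height="25" src="/info/images/icon/no_14">',
                    'html.parser').img
print(img.find('src'))   # None
print(img['src'])        # /info/images/icon/no_14
```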

You can try this script to iterate over all rows of the table, extracting the text from the first three cells and the 8 URLs from the src attributes:

from bs4 import BeautifulSoup

txt = '''
<div class="resultMainTable">
    <div class="resultMainRow">
       <div class="resultMainCell">text1</div>
       <div class="resultMainCell">text2</div>
       <div class="resultMainCell">text3</div>
       <div class="resultMainCell">
            <div>
                 <div>
                      <span>
                           <img src="1" />
                           <img src="2" />
                           <img src="3" />
                           <img src="4" />
                           <img src="5" />
                           <img src="6" />
                           <img src="7" />
                           <img src="8" />
                      </span>
                 </div>
            </div>
       </div>
    </div>
    <div class="resultMainRow">
       <div class="resultMainCell">text3</div>
       <div class="resultMainCell">text4</div>
       <div class="resultMainCell">text5</div>
       <div class="resultMainCell">
            <div>
                 <div>
                      <span>
                           <img src="9" />
                           <img src="10" />
                           <img src="11" />
                           <img src="12" />
                           <img src="13" />
                           <img src="14" />
                           <img src="15" />
                           <img src="16" />
                      </span>
                 </div>
            </div>
       </div>
    </div>
</div>'''   

soup = BeautifulSoup(txt, 'html.parser')

for row in soup.select('div.resultMainTable .resultMainRow'):
    v1, v2, v3, v4 = row.select('div.resultMainCell')
    imgs = [img['src'] for img in v4.select('img')]
    print(v1.text, v2.text, v3.text, *imgs)

Prints:

text1 text2 text3 1 2 3 4 5 6 7 8
text3 text4 text5 9 10 11 12 13 14 15 16
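If the goal is to keep the scraped values rather than just print them, the same selector loop can collect a list of dicts (a sketch reusing a trimmed one-row version of the sample above; the key names are illustrative, not from the original page):

```python
from bs4 import BeautifulSoup

html = '''<div class="resultMainTable">
  <div class="resultMainRow">
    <div class="resultMainCell">20/006</div>
    <div class="resultMainCell">21/01/2020</div>
    <div class="resultMainCell"></div>
    <div class="resultMainCell"><img src="1"><img src="2"></div>
  </div>
</div>'''

soup = BeautifulSoup(html, 'html.parser')
rows = []
for row in soup.select('div.resultMainTable .resultMainRow'):
    c1, c2, c3, c4 = row.select('div.resultMainCell')
    rows.append({
        'id': c1.get_text(strip=True),      # illustrative key names
        'date': c2.get_text(strip=True),
        'extra': c3.get_text(strip=True),
        'images': [img['src'] for img in c4.select('img')],
    })
print(rows)
```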

EDIT (using the real HTML code from the edited question):

from bs4 import BeautifulSoup

txt = '''<div class="resultMainTable">
   <div class="resultMainRow">
      <div class="resultMainCell_1 tableResult2">
           <a href="javascript:genResultDetails(2);" 
           title="Best of the date">20/006 </a></div>
      <div class="resultMainCell_2 tableResult2">21/01/2020</div>
      <div class="resultMainCell_3 tableResult2"></div>
      <div class="resultMainCell_4 tableResult2">
          <span class="resultMainCellInner"> 
              <img height="25" src="/info/images/icon/no_3abc"> </span>
          <span class="resultMainCellInner"> 
              <img height="25" src = "/info/images/icon/no_14 " ></span>
          <span class="resultMainCellInner"> 
               <img height="25" src "/info/images/icon/no_21 " ></span>
          <span class="resultMainCellInner">
               <img height="25" src="/info/images/icon/no_28 " ></span>
          <span class="resultMainCellInner">
               <img height="25" src=" /info/images/icon/no_37 "></span>
          <span class="resultMainCellInner">   
               <img height="25" src="/info/images/icon/no_44 "></span>
          <span class="resultMainCellInner">             
               <img height="6" src="/info/images/icon_happy " ></span>
          <span class="resultMainCellInner" 
               <img height="25" src="/info/images/icon/smile "></span>
    </div>
       </div>'''


soup = BeautifulSoup(txt, 'html.parser')

for row in soup.select('div.resultMainTable .resultMainRow'):
    v1, v2, v3, v4 = row.select('div[class^="resultMainCell"]')
    imgs = [img['src'] for img in v4.select('img')]
    print(v1.text, v2.text, v3.text, *imgs)

Prints:

20/006  21/01/2020  /info/images/icon/no_3abc /info/images/icon/no_14   /info/images/icon/no_28   /info/images/icon/no_37  /info/images/icon/no_44  /info/images/icon_happy 
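Note the stray spaces inside several src values above: the page's HTML carries whitespace inside the attributes, and two of the eight img tags are so malformed that their src is lost entirely. If clean URLs are needed, `.get()` plus `.strip()` guards against both (a sketch over a cut-down fragment of the question's HTML):

```python
from bs4 import BeautifulSoup

html = '''<div class="resultMainCell_4 tableResult2">
  <span><img src=" /info/images/icon/no_37 "></span>
  <span><img src="/info/images/icon/no_44 "></span>
  <span><img height="25"></span>
</div>'''

cell = BeautifulSoup(html, 'html.parser').div
# .get() returns a default instead of raising KeyError when src is missing;
# .strip() removes the stray whitespace seen in the scraped attributes.
imgs = [img.get('src', '').strip() for img in cell.select('img')]
imgs = [s for s in imgs if s]   # drop imgs whose src was missing or empty
print(imgs)
```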

