
Extracting deeply nested href in Python with Beautiful Soup

I am trying to extract a very deeply nested href. The structure is as follows:

<div id="main">
 <ol>
   <li class>
     <div class>
       <div class>
         <a class>
         <h1 class="title entry-title">
           <a href="http://wwww.link_i_want_to_extract.com">
           <span class>
         </h1>
        </div>
       </div>
     </li>

Then there are other <li class> elements with hrefs, so basically the parent-to-child order is:

li - div - div - h1 - a href

I tried the following approaches:

soup.select('li div div h1')

soup.find_all("h1", { "class" : "title entry-title" }) 

for item in soup.find_all("h1", attrs={"class" : "title entry-title"}):
        for link in item.find_all('a',href=TRUE):

None of these seem to work; I get [] or an empty .txt file.

Also, and more troubling: after defining soup and then running print(soup), I can't see the nested classes, only the top one, <div id="main">. Running print(soup.li) doesn't retrieve the li element either. I think BeautifulSoup isn't recognizing the li class and the others.
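One thing worth checking here is the parser: with valueless attributes and unclosed tags like the markup above, different parsers can rebuild very different trees. A minimal sketch comparing the three common parsers on a stripped-down fragment (lxml and html5lib are assumed to be installed):

from bs4 import BeautifulSoup

# A compressed version of the markup above: valueless attributes
# (`class` with no value) plus unclosed tags are exactly the kind of
# input that lenient and strict parsers rebuild differently.
fragment = '<div id="main"><ol><li class><div class><h1 class="title entry-title"><a href="http://example.com"></h1></div></li></ol></div>'

for parser in ("html.parser", "lxml", "html5lib"):
    soup = BeautifulSoup(fragment, parser)
    links = soup.find_all("a", href=True)
    # If a parser drops or re-nests the <li>, soup.li and the link count show it.
    print(parser, "-> li found:", soup.li is not None, "| links:", len(links))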

This works for me:

from bs4 import BeautifulSoup

html = '''
<div id="main">
   <ol>
      <li class>
         <div class>
            <div class>
               <a class>
               <h1 class="title entry-title">
                  <a href="http://www.link_i_want_to_extract.com">
                  <span class>
               </h1>
            </div>
         </div>
      </li>
      <li class>
         <div class>
            <div class>
               <a class>
               <h1 class="title entry-title">
                  <a href="https://other_link_i_want_to_extract.net">
                  <span class>
               </h1>
            </div>
         </div>
      </li>
   </ol>
</div>
'''

soup = BeautifulSoup(html, "lxml")
for h1 in soup.find_all('h1', class_="title entry-title"):
    print(h1.find("a")['href'])
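If one of the h1 elements ever lacked a link, h1.find("a") would return None and the ['href'] lookup would raise a TypeError; a slightly more defensive sketch of the same loop:

for h1 in soup.find_all('h1', class_="title entry-title"):
    a = h1.find("a", href=True)  # None if this h1 has no <a href=...>
    if a:
        print(a['href'])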

You have a typo: href=TRUE should be href=True.

s = """
<div id="main">
   <ol>
      <li class>
         <div class>
            <div class>
               <a class>
               <h1 class="title entry-title">
                  <a href="http://www.link_i_want_to_extract.com">
                  <span class>
               </h1>
            </div>
         </div>
      </li>
      <li class>
         <div class>
            <div class>
               <a class>
               <h1 class="title entry-title">
                  <a href="https://other_link_i_want_to_extract.net">
                  <span class>
               </h1>
            </div>
         </div>
      </li>
   </ol>
</div>
"""

from bs4 import BeautifulSoup
soup = BeautifulSoup(s, 'html.parser')

for item in soup.find_all("h1", attrs={"class": "title entry-title"}):
    for link in item.find_all('a', href=True):
        print('bs link:', link['href'])

Alternatively, you can use pyQuery, which provides a js/jQuery-like query syntax:

from pyquery import PyQuery as pq

d = pq(s)
for link in d('h1.title.entry-title > a'):
    print('pq link:', pq(link).attr('href'))

Output:

bs link: http://www.link_i_want_to_extract.com
bs link: https://other_link_i_want_to_extract.net
pq link: http://www.link_i_want_to_extract.com
pq link: https://other_link_i_want_to_extract.net
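For comparison, BeautifulSoup's own select() accepts the same CSS selector, so the pyquery dependency is optional; a minimal equivalent using the soup object built above:

for a in soup.select('h1.title.entry-title > a[href]'):
    print('css link:', a['href'])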

Use dot notation (.) to find the first descendant:

soup.find('div', id="main").h1.a['href']

Or use the h1 as the anchor:

soup.find("h1", { "class" : "title entry-title" }).a['href']

A simple way:

soup.select('a[href]')

Or:

soup.find_all('a', href=True)
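A quick sketch running all three approaches from this answer against the sample markup s defined in the earlier answer (assumed to be in scope):

from bs4 import BeautifulSoup

soup = BeautifulSoup(s, 'html.parser')

# dot notation: each attribute access jumps to the first matching descendant
print(soup.find('div', id="main").h1.a['href'])

# anchor on the h1 first, then take its first <a>
print(soup.find("h1", {"class": "title entry-title"}).a['href'])

# the simple way: every <a> carrying an href, anywhere in the tree
print([a['href'] for a in soup.select('a[href]')])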
