
Click on multiple links with Selenium in python

I'm trying to web-scrape data from a structure that looks like this:

<div class="tables">
        <div class="table1">
            <div class="row">
                <div class="data">Useful Data</div>
                <a href="url1">
            </div>
            <div class="row">
                <div class="data">Useful Data</div>
                <a href="url2">
            </div>
        </div>

        <div class="table2">
            <div class="row">
                <div class="data">Useful Data</div>
                <a href="url3">
            </div>
            <div class="row">
                <div class="data">Useful Data</div>
                <a href="url4">
            </div>
        </div>
     </div>

The data that I want is in the div "data", and also on some other pages accessible by clicking on the urls. I iterate through the 'tables' using BeautifulSoup, and I'm trying to click on the links with Selenium like so:

tables = soup.find_all('div', class_='tables')
for line in tables:
    row = line.find_all('div', class_='row')
    for element in row:
        link = driver.find_element_by_xpath('//a[contains(@href,"href")]')
        # some code

In my script, this line

link = driver.find_element_by_xpath('//a[contains(@href,"href")]')

always returns the first url, when I want it to 'follow' BeautifulSoup and move on to the following hrefs. So is there a way to modify the href in the XPath depending on the url from the source code? I should add that all my urls are pretty similar, except for the last part (ex.: url1 = questions/ask/1000, url2 = questions/ask/1001).
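One way to do that, sketched under the assumption that `driver` and `soup` are set up as in the snippets above and that the hrefs in the parsed source match the live DOM, is to interpolate each row's scraped href into the XPath instead of the literal string "href":

from selenium import webdriver
from bs4 import BeautifulSoup

driver = webdriver.Chrome()                       # placeholder driver setup
driver.get('http://www.example.com/page')         # hypothetical page with the structure above
soup = BeautifulSoup(driver.page_source, 'lxml')

for table in soup.find_all('div', class_='tables'):
    for row in table.find_all('div', class_='row'):
        href = row.find('a')['href']              # the href scraped by BeautifulSoup
        # use the scraped value in the XPath instead of the literal "href"
        link = driver.find_element_by_xpath(f'//a[@href="{href}"]')
        link.click()
        # ...scrape the opened page here...
        driver.back()                             # go back so the next link is present again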

I've also tried to find all the hrefs in the page to iterate through them using

links = self.driver.find_element_by_xpath('//a[@href]')

but that doesn't work either. Since the page contains a lot of links that aren't useful to me, I'm not sure if that's the best way to go.
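As an aside, `find_element_by_xpath` (singular) only ever returns the first match; the plural `find_elements_by_xpath` returns a list of all matches, and scoping the XPath to the tables container skips the unrelated links. A minimal sketch, assuming the `driver` and class names from above:

# plural find_elements_* returns a list instead of the first match;
# anchoring under the "tables" div filters out links elsewhere on the page
links = driver.find_elements_by_xpath('//div[@class="tables"]//a[@href]')
hrefs = [link.get_attribute('href') for link in links]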

Seems to be a bit complicated - why not extract the href with BeautifulSoup directly?

for a in soup.select('.tables a[href]'):
    link = a['href']

You can also modify it, concatenate it with a baseUrl, and store the results in a list to iterate over:

urls = [baseUrl+a['href'] for a in soup.select('.tables a[href]')]

Example

from bs4 import BeautifulSoup

baseUrl = 'http://www.example.com'

html = '''
<div class="tables">
        <div class="table1">
            <div class="row">
                <div class="data">Useful Data</div>
                <a href="/url1">
            </div>
            <div class="row">
                <div class="data">Useful Data</div>
                <a href="/url2">
            </div>
        </div>

        <div class="table2">
            <div class="row">
                <div class="data">Useful Data</div>
                <a href="/url3">
            </div>
            <div class="row">
                <div class="data">Useful Data</div>
                <a href="/url4">
            </div>
        </div>
     </div>'''
soup = BeautifulSoup(html, 'lxml')

urls = [baseUrl+a['href'] for a in soup.select('.tables a[href]')]

for url in urls:
    print(url)  # or request the website, ...

Output

http://www.example.com/url1
http://www.example.com/url2
http://www.example.com/url3
http://www.example.com/url4
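
If the detail pages need a browser to render (e.g. JavaScript-built content), one possible follow-up, assuming a Selenium `driver` has already been created, is to load each collected url and parse the rendered source the same way:

for url in urls:
    driver.get(url)                                   # navigate to the detail page
    page = BeautifulSoup(driver.page_source, 'lxml')  # parse the rendered HTML
    # ...extract the data you need from `page` here...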
