
Scrape all links from a website using Beautiful Soup or Selenium

I want to scrape all the links that a website has and filter them so that I can wget them later.

The problem is, given a URL, let's say

URL = "https://stackoverflow.com/questions/"

my scraper should crawl the site and return URLs such as:

https://stackoverflow.com/questions/51284071/how-to-get-all-the-link-in-page-using-selenium-python
https://stackoverflow.com/questions/36927366/how-to-get-the-link-to-all-the-pages-of-a-website-for-data-scrapping 
https://stackoverflow.com/questions/46468032/python-selenium-automatically-load-more-pages

Currently, I have borrowed this code from Stack Overflow:

import requests
from bs4 import BeautifulSoup

def recursiveUrl(url, link, depth):
    # stop recursing after 10 levels
    if depth == 10:
        return url
    else:
        page = requests.get(url + link['href'])
        soup = BeautifulSoup(page.text, 'html.parser')
        newlink = soup.find('a')
        # soup.find() returns None when the page has no <a> tag
        if newlink is None:
            return link
        else:
            return link, recursiveUrl(url, newlink, depth + 1)

def getLinks(url):
    page = requests.get(url)
    soup = BeautifulSoup(page.text, 'html.parser')
    links = soup.find_all('a')
    results = []
    # collect into a separate list: appending to `links` while iterating over it never finishes
    for link in links:
        try:
            results.append(recursiveUrl(url, link, 0))
        except Exception:
            pass
    return results
links = getLinks("https://www.businesswire.com/portal/site/home/news/")
print(links)

I think that instead of going through all the pages, it is only following the hyperlinks found on the starting page; a breadth-first crawl over the whole site, sketched below, is closer to what I want.
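For clarity, a minimal breadth-first crawl sketch illustrates the idea (the visited set, the queue, and the max_pages cap are illustrative assumptions, not code from the original post):

import requests
from collections import deque
from urllib.parse import urljoin, urlparse
from bs4 import BeautifulSoup

def crawl(start_url, max_pages=50):
    # breadth-first crawl that stays on the start URL's domain
    domain = urlparse(start_url).netloc
    visited = set()
    found_links = set()
    queue = deque([start_url])
    while queue and len(visited) < max_pages:
        url = queue.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            page = requests.get(url, timeout=10)
        except requests.RequestException:
            continue
        soup = BeautifulSoup(page.text, 'html.parser')
        for a in soup.find_all('a', href=True):
            link = urljoin(url, a['href'])  # resolve relative links against the current page
            found_links.add(link)
            if urlparse(link).netloc == domain:
                queue.append(link)  # only crawl pages on the same domain
    return found_links

print(crawl("https://stackoverflow.com/questions/"))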

I have also referred to this:

link = "https://www.businesswire.com/news"

from scrapy.selector import HtmlXPathSelector
from scrapy.spider import BaseSpider
from scrapy.http import Request

DOMAIN = link
URL = 'http://%s' % DOMAIN

class MySpider(BaseSpider):
    name = DOMAIN
    allowed_domains = [DOMAIN]
    start_urls = [
        URL
    ]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        for url in hxs.select('//a/@href').extract():
            if not ( url.startswith('http://') or url.startswith('https://') ):
                url= URL + url
            print (url)
            yield Request(url, callback=self.parse)

But this is too old and no longer works.
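For reference, roughly the same spider written against a current Scrapy release might look like this (scrapy.Spider, response.xpath and response.urljoin replace the removed BaseSpider and HtmlXPathSelector; this is an untested sketch, and the spider name is a placeholder):

import scrapy

class LinksSpider(scrapy.Spider):
    name = "links"
    allowed_domains = ["businesswire.com"]
    start_urls = ["https://www.businesswire.com/news"]

    def parse(self, response):
        for href in response.xpath('//a/@href').getall():
            # resolve relative links against the page URL
            url = response.urljoin(href)
            print(url)
            yield scrapy.Request(url, callback=self.parse)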

Scraping is new to me, so I might be stuck on some basic fundamentals.

Let me know how to approach this problem.

import unittest
import pytest
from selenium import webdriver

class TestHabilitado(unittest.TestCase):
  def setUp(self):
    self.driver = webdriver.Firefox()
    self.vars = {}

  def test_habilitado(self):
    self.driver.get("https://stackoverflow.com/questions")

    for link in self.driver.find_elements_by_xpath("//a[contains(@class,'question-hyperlink')]"):
        url=link.get_attribute("href")
        print(url)

if __name__ == "__main__":
    unittest.main()

I think this could work. You must install the Selenium dependencies and download the Firefox driver (geckodriver) for Selenium. Then execute this script.

OUTPUT:
python stackoverflow.py

https://stackoverflow.com/questions/61519440/how-do-i-trigger-a-celery-task-from-django-admin
https://stackoverflow.com/questions/61519439/how-to-add-rows-in-consecutive-blocks-in-excel
https://stackoverflow.com/questions/61519437/not-null-constraint-failed-api-userlog-browser-info-id-when-i-want-to-add-show
https://stackoverflow.com/questions/61519435/dart-parse-date-with-0000
https://stackoverflow.com/questions/61519434/is-there-a-way-to-reduce-the-white-pixels-in-a-invereted-image
https://stackoverflow.com/questions/61519433/querying-datastore-using-some-of-the-indexed-properties
https://stackoverflow.com/questions/61519431/model-checkpoint-doesnt-create-a-directory
https://stackoverflow.com/questions/61519430/why-is-the-event-dispatched-by-window-not-captured-by-other-elements
https://stackoverflow.com/questions/61519426/live-sass-complier-in-vs-code-unfortunately-stopped-working-while-coding
....
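Note that the find_elements_by_xpath helper was removed in Selenium 4; if the script above fails on a newer Selenium, the same lookup can be done with a By locator instead (a minimal sketch, assuming geckodriver is on the PATH):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("https://stackoverflow.com/questions")

# find_elements(By.XPATH, ...) is the Selenium 4 equivalent of find_elements_by_xpath
for link in driver.find_elements(By.XPATH, "//a[contains(@class,'question-hyperlink')]"):
    print(link.get_attribute("href"))

driver.quit()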

One solution using requests and bs4:

import requests
from bs4 import BeautifulSoup

url = "https://stackoverflow.com/questions/"
html = requests.get(url).content
soup = BeautifulSoup(html, "html.parser")

# Find all <a> tags in the HTML that have a non-empty 'href' attribute, and keep only the 'href' values.
links = [a["href"] for a in soup.find_all("a", href=True)]
print(links)

Output:

[
    "#",
    "https://stackoverflow.com",
    "#",
    "/teams/customers",
    "https://stackoverflow.com/advertising",
    "#",
    "https://stackoverflow.com/users/login?ssrc=head&returnurl=https%3a%2f%2fstackoverflow.com%2fquestions%2f",
    "https://stackoverflow.com/users/signup?ssrc=head&returnurl=%2fusers%2fstory%2fcurrent",
    "https://stackoverflow.com",
...

If you want to keep only the question links, then:

print(
    [
        link
        if link.startswith("https://stackoverflow.com")
        else f"https://stackoverflow.com{link}"
        for link in links
        if "/questions/" in link
    ]
)

Output:

[
    "https://stackoverflow.com/questions/ask",
    "https://stackoverflow.com/questions/61523359/assembly-nasm-print-ascii-table-using-a-range-determined-by-input",
    "https://stackoverflow.com/questions/tagged/assembly",
    "https://stackoverflow.com/questions/tagged/input",
    "https://stackoverflow.com/questions/tagged/range",
    "https://stackoverflow.com/questions/tagged/ascii",
    "https://stackoverflow.com/questions/tagged/nasm",
    "https://stackoverflow.com/questions/61523356/can-i-inject-an-observable-from-a-parent-component-into-a-child-component",
    "https://stackoverflow.com/questions/tagged/angular",
    "https://stackoverflow.com/questions/tagged/redux",
...
]
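If the goal is a list that can be fed straight to wget, one further refinement (an unverified sketch, continuing from the links list built above; the questions.txt filename is just an example) is to resolve relative URLs, keep only numbered question pages, and drop duplicates:

import re
from urllib.parse import urljoin

# keep only URLs that look like actual question pages, e.g. /questions/61523359/...
question_pattern = re.compile(r"/questions/\d+/")

question_links = sorted(
    {
        urljoin("https://stackoverflow.com", link)
        for link in links
        if question_pattern.search(link)
    }
)

# one URL per line, usable with: wget -i questions.txt
with open("questions.txt", "w") as f:
    f.write("\n".join(question_links))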
