
How to get all links of a webpage whether linked to webpage directly or indirectly using python?

I need to get all the links reachable from a given website's homepage URL: not just the links present on the homepage itself, but also any new links that can be reached by following the links found on the homepage.

I am using the BeautifulSoup Python library. I am also considering Scrapy. The code below extracts only the links that appear directly on the homepage.

from bs4 import BeautifulSoup
import requests


url = "https://www.dataquest.io"

def links(url):
    html = requests.get(url).content
    bsObj = BeautifulSoup(html, 'lxml')

    # collect every <a> tag found on the page
    finalLinks = set()
    for link in bsObj.find_all('a'):
        finalLinks.add(link)

    return finalLinks

all_links = links(url)  # fetch once and reuse the result
print(all_links)

for l in all_links:
    print(l)
    print("\n")

I need a list that includes every URL/link reachable via the homepage URL, whether it is linked to the homepage directly or indirectly.

This script will print all links found on the URL https://www.dataquest.io :

from bs4 import BeautifulSoup
import requests

url = "https://www.dataquest.io"

def links(url):
    html = requests.get(url).content
    bsObj = BeautifulSoup(html, 'lxml')

    links = bsObj.select('a[href]')

    final_links = set()

    for link in links:
        url_string = link['href'].rstrip('/')
        # skip javascript: pseudo-links and in-page anchors
        if 'javascript:' in url_string or url_string.startswith('#'):
            continue
        # relative URL: prepend the site root
        elif 'http' not in url_string and not url_string.startswith('//'):
            url_string = 'https://www.dataquest.io' + url_string
        # absolute URL pointing off-site: skip
        elif 'dataquest.io' not in url_string:
            continue
        final_links.add(url_string)

    return final_links

for l in sorted(links(url)):
    print(l)

Prints:

http://app.dataquest.io/login
http://app.dataquest.io/signup
https://app.dataquest.io/signup
https://www.dataquest.io
https://www.dataquest.io/about-us
https://www.dataquest.io/blog
https://www.dataquest.io/blog/learn-data-science
https://www.dataquest.io/blog/learn-python-the-right-way
https://www.dataquest.io/blog/the-perfect-data-science-learning-tool
https://www.dataquest.io/blog/topics/student-stories
https://www.dataquest.io/chat
https://www.dataquest.io/course
https://www.dataquest.io/course/algorithms-and-data-structures
https://www.dataquest.io/course/apis-and-scraping
https://www.dataquest.io/course/building-a-data-pipeline
https://www.dataquest.io/course/calculus-for-machine-learning
https://www.dataquest.io/course/command-line-elements
https://www.dataquest.io/course/command-line-intermediate
https://www.dataquest.io/course/data-exploration
https://www.dataquest.io/course/data-structures-algorithms
https://www.dataquest.io/course/decision-trees
https://www.dataquest.io/course/deep-learning-fundamentals
https://www.dataquest.io/course/exploratory-data-visualization
https://www.dataquest.io/course/exploring-topics
https://www.dataquest.io/course/git-and-vcs
https://www.dataquest.io/course/improving-code-performance
https://www.dataquest.io/course/intermediate-r-programming
https://www.dataquest.io/course/intro-to-r
https://www.dataquest.io/course/kaggle-fundamentals
https://www.dataquest.io/course/linear-algebra-for-machine-learning
https://www.dataquest.io/course/linear-regression-for-machine-learning
https://www.dataquest.io/course/machine-learning-fundamentals
https://www.dataquest.io/course/machine-learning-intermediate
https://www.dataquest.io/course/machine-learning-project
https://www.dataquest.io/course/natural-language-processing
https://www.dataquest.io/course/optimizing-postgres-databases-data-engineering
https://www.dataquest.io/course/pandas-fundamentals
https://www.dataquest.io/course/pandas-large-datasets
https://www.dataquest.io/course/postgres-for-data-engineers
https://www.dataquest.io/course/probability-fundamentals
https://www.dataquest.io/course/probability-statistics-intermediate
https://www.dataquest.io/course/python-data-cleaning-advanced
https://www.dataquest.io/course/python-datacleaning
https://www.dataquest.io/course/python-for-data-science-fundamentals
https://www.dataquest.io/course/python-for-data-science-intermediate
https://www.dataquest.io/course/python-programming-advanced
https://www.dataquest.io/course/r-data-cleaning
https://www.dataquest.io/course/r-data-cleaning-advanced
https://www.dataquest.io/course/r-data-viz
https://www.dataquest.io/course/recursion-and-tree-structures
https://www.dataquest.io/course/spark-map-reduce
https://www.dataquest.io/course/sql-databases-advanced
https://www.dataquest.io/course/sql-fundamentals
https://www.dataquest.io/course/sql-fundamentals-r
https://www.dataquest.io/course/sql-intermediate-r
https://www.dataquest.io/course/sql-joins-relations
https://www.dataquest.io/course/statistics-fundamentals
https://www.dataquest.io/course/statistics-intermediate
https://www.dataquest.io/course/storytelling-data-visualization
https://www.dataquest.io/course/text-processing-cli
https://www.dataquest.io/directory
https://www.dataquest.io/forum
https://www.dataquest.io/help
https://www.dataquest.io/path/data-analyst
https://www.dataquest.io/path/data-analyst-r
https://www.dataquest.io/path/data-engineer
https://www.dataquest.io/path/data-scientist
https://www.dataquest.io/privacy
https://www.dataquest.io/subscribe
https://www.dataquest.io/terms
https://www.dataquest.io/were-hiring
https://www.dataquest.io/wp-content/uploads/2019/03/db.png
https://www.dataquest.io/wp-content/uploads/2019/03/home-code-1.jpg
https://www.dataquest.io/wp-content/uploads/2019/03/python.png

EDIT: Changed the selector to a[href]
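A side note: string concatenation for relative URLs works for this particular site, but the standard library's urllib.parse.urljoin handles the edge cases (leading slashes, paths relative to the current page, protocol-relative URLs) more robustly. A minimal sketch, using a hypothetical base URL:

```python
from urllib.parse import urljoin

# hypothetical base URL for illustration
base = "https://www.dataquest.io/blog/"

# urljoin resolves a reference against a base URL per RFC 3986
print(urljoin(base, "/course/intro-to-r"))        # path from the site root
print(urljoin(base, "learn-data-science"))        # relative to the current page
print(urljoin(base, "//app.dataquest.io/login"))  # protocol-relative URL
```

This would replace the `'https://www.dataquest.io' + url_string` concatenation above without hard-coding the domain.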

EDIT2: A primitive recursive crawler:

def crawl(urls, seen=None):
    # NOTE: a mutable default argument (seen=set()) would persist across
    # separate calls, so the visited set is created on the first call instead
    if seen is None:
        seen = set()
    for url in urls:
        if url not in seen:
            print(url)
            seen.add(url)
            new_links = links(url)
            crawl(urls.union(new_links), seen)

starting_links = links(url)
crawl(starting_links)
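On a large site the recursive crawler can hit Python's recursion limit, since the call depth grows with the number of pages discovered. An iterative breadth-first variant with an explicit queue avoids that. Below is a sketch; the `get_links` parameter is an assumption introduced for illustration, standing in for the `links()` function above so the traversal can be checked against an in-memory link graph instead of live HTTP requests:

```python
from collections import deque

def crawl_bfs(start_url, get_links):
    """Breadth-first crawl. get_links(url) must return an iterable of
    URLs found on that page (e.g. the links() function above)."""
    seen = {start_url}        # URLs already queued or visited
    queue = deque([start_url])
    order = []                # URLs in the order they were crawled
    while queue:
        url = queue.popleft()
        order.append(url)
        for new_url in get_links(url):
            if new_url not in seen:
                seen.add(new_url)
                queue.append(new_url)
    return order

# quick check against a small in-memory link graph
graph = {
    "/": ["/a", "/b"],
    "/a": ["/b", "/c"],
    "/b": ["/"],
    "/c": [],
}
print(crawl_bfs("/", lambda u: graph.get(u, [])))
# ['/', '/a', '/b', '/c']
```

For a real crawl you would pass the `links()` function defined earlier as `get_links`; separating traversal from fetching also makes it easy to add politeness delays or a page limit in one place.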
