
Recursive function gives no output

I am scraping all the URLs of my domain with a recursive function, but it outputs nothing and gives no error.

#!/usr/bin/python

from bs4 import BeautifulSoup
import requests
import tldextract


def scrape(url):

    for links in url:
        main_domain = tldextract.extract(links)
        r = requests.get(links)
        data = r.text
        soup = BeautifulSoup(data)
    
        for href in soup.find_all('a'):
            href = href.get('href')
            if not href:
                continue
            link_domain = tldextract.extract(href)
        
            if link_domain.domain == main_domain.domain :
                problem.append(href)
    
            elif not href == '#' and link_domain.tld == '':
                new = 'http://www.'+ main_domain.domain + '.' + main_domain.tld + '/' + href
                problem.append(new)

        return len(problem)
        return scrape(problem)
        

problem = ["http://xyzdomain.com"]  
print(scrape(problem))

It works when I make a new list, but I don't want to have to make a new list for every loop each time.

You need to structure your code so that it fits the recursive pattern, which your current code does not. You should also avoid giving a variable the same name as a library object, e.g. href = href.get('href'), because the library object usually stops working once it has been shadowed by the variable. As it stands, your code will only ever return the len(), because that return is reached unconditionally before return scrape(problem). The pattern is:

def Recursive(Factorable_problem):
    if Factorable_problem is Simplest_Case:
        return AnswerToSimplestCase
    else:
        return Rule_For_Generating_From_Simpler_Case(Recursive(Simpler_Case))

For example:

def Factorial(n):
    """ Recursively Generate Factorials """
    if n < 2:
        return 1
    else:
        return n * Factorial(n-1)
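
Following that pattern, one way the original scrape could be restructured might look like the rough, untested sketch below; the visited set is an addition to stop revisits, and the relative-URL handling from the question is dropped for brevity:

import requests
import tldextract
from bs4 import BeautifulSoup


def scrape(to_visit, visited=None):
    if visited is None:
        visited = set()

    # Base case: no new links left to crawl, hand back everything we saw
    if not to_visit:
        return visited

    found = []
    for link in to_visit:
        visited.add(link)
        main_domain = tldextract.extract(link)
        soup = BeautifulSoup(requests.get(link).text)

        for tag in soup.find_all('a'):
            href = tag.get('href')
            if not href or href == '#':
                continue
            if tldextract.extract(href).domain == main_domain.domain:
                found.append(href)

    # Recursive case: a single, final return that recurses on the
    # links we have not visited yet
    return scrape([h for h in found if h not in visited], visited)


print(scrape(["http://xyzdomain.com"]))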

您好,我已經制作了一個非遞歸版本,它似乎可以獲取同一域上的所有鏈接。

I have tested the code below using the problem list included in it. Once the problems with the recursive version were solved, the next issue was hitting Python's recursion depth limit, so I rewrote it to run iteratively.
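
For context, that limit is CPython's recursion cap, which the standard sys module exposes; raising it only postpones the crash for a crawl of unknown depth, so a plain loop is the safer fix (illustrative values only):

import sys

print(sys.getrecursionlimit())   # commonly 1000 frames by default
sys.setrecursionlimit(5000)      # can be raised, at the risk of crashing the interpreter

The iterative code and results are below: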

from bs4 import BeautifulSoup
import requests
import tldextract


def print_domain_info(d):
    print "Main Domain:{0} \nSub Domain:{1} \nSuffix:{2}".format(d.domain,d.subdomain,d.suffix)

SEARCHED_URLS = []
problem = [ "http://Noelkd.neocities.org/", "http://youpi.neocities.org/"]
while problem:
    # Get a link from the stack of links
    link = problem.pop()
    # Check we haven't been to this address before
    if link in SEARCHED_URLS:
        continue
    # We don't want to come back here again after this point
    SEARCHED_URLS.append(link)
    # Try and get the website
    try:
        req = requests.get(link)
    except:
        # If its not working i don't care for it
        print "borked website found: {0}".format(link)
        continue
    # Now we get to this point worth printing something
    print "Trying to parse:{0}".format(link)
    print "Status Code:{0}  Thats: {1}".format(req.status_code, "A-OK" if req.status_code == 200 else "SOMTHINGS UP" )
    # Get the domain info
    dInfo = tldextract.extract(link)
    print_domain_info(dInfo)
    # I like utf-8
    data = req.text.encode("utf-8")
    print "Lenght Of Data Retrived:{0}".format(len(data))  # More info
    soup = BeautifulSoup(data)  # This was here before so i left it.
    print "Found {0} link{1}".format(len(soup.find_all('a')),"s" if len(soup.find_all('a')) > 1 else "")
    FOUND_THIS_ITERATION = []  # Getting the same links over and over was boring
    found_links = [x for x in soup.find_all('a') if x.get('href') not in SEARCHED_URLS]  # Find me all the links i don't got
    for href in found_links: 
        href = href.get('href') # You wrote this seems to work well
        if not href:
            continue
        link_domain = tldextract.extract(href) 
        if link_domain.domain == dInfo.domain: # JUST FINDING STUFF ON SAME DOMAIN RIGHT?!
            if href not in FOUND_THIS_ITERATION: # I'ma check you out next time 
                print "Check out this link: {0}".format(href)
                print_domain_info(link_domain)
                FOUND_THIS_ITERATION.append(href)
                problem.append(href)
            else: # I got you already
                print "DUPE LINK!"
        else: 
            print "Not on same domain moving on" 

    # Count down
    print "We have {0} more sites to search".format(len(problem))
    if problem:
        continue
    else:
        print "Its been fun"
        print "Lets see the URLS we've visited:"
        for url in SEARCHED_URLS:
            print url

After a lot of other logging for the neocities sites, this prints out the URLs that were visited!

What happens is that the script pops a value off the list of sites not yet visited, then gathers every link on that page which sits on the same domain. If those links point to pages we haven't visited, they are added to the list of links to visit. After that, the next page is popped and the same thing happens again, until there are no pages left to visit.
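
Stripped of the logging, that loop reduces to roughly the condensed sketch below (same idea, not a drop-in replacement; the error handling from the full version is omitted):

from bs4 import BeautifulSoup
import requests
import tldextract

to_visit = ["http://Noelkd.neocities.org/"]   # stack of pages still to crawl
visited = []

while to_visit:
    link = to_visit.pop()                     # take one unvisited page off the stack
    if link in visited:
        continue
    visited.append(link)
    domain = tldextract.extract(link).domain
    soup = BeautifulSoup(requests.get(link).text)
    for tag in soup.find_all('a'):
        href = tag.get('href')
        # queue same-domain links we have not seen before
        if href and href not in visited and tldextract.extract(href).domain == domain:
            to_visit.append(href)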

I think this is what you were looking for. If it doesn't work the way you want, or if anyone can improve it, please let us know in the comments.
