
Get list of all paginated URLs from links in a txt file in Python requests

Hi guys, I have defined a function to get a list of all paginated URLs from the pagination links at the bottom of each page, for the links in a txt file, in Python.

Here is an example of what I need to do.

Input URL

http://www.apartmentguide.com/apartments/Alabama/Hartselle/

Desired output

www.apartmentguide.com/apartments/Alabama/Hartselle/?page=2
www.apartmentguide.com/apartments/Alabama/Hartselle/?page=3
www.apartmentguide.com/apartments/Alabama/Hartselle/?page=4
www.apartmentguide.com/apartments/Alabama/Hartselle/?page=5
www.apartmentguide.com/apartments/Alabama/Hartselle/?page=6
www.apartmentguide.com/apartments/Alabama/Hartselle/?page=7
www.apartmentguide.com/apartments/Alabama/Hartselle/?page=8
www.apartmentguide.com/apartments/Alabama/Hartselle/?page=9

Like this, each input URL can have any number of paginated pages.

This is the function I have written so far, but it is not working properly, and I am not good with Python.

import requests
from scrapy import Selector as Se

# Read the input URLs, one per line
with open(r"C:\Users\Administrator\Desktop\3.txt", "r") as lists:
    line = lists.read().split("\n")


def get_links(urls):
    for each in urls:
        r = requests.get(each)
        sel = Se(text=r.text, type="html")
        # href of the "next page" link, if the page has one
        next_ = sel.xpath('//a[@class="next sprite"]//@href').extract()
        for next_1 in next_:
            next_2 = "http://www.apartmentguide.com" + next_1
            print next_2
            # Follow the "next" page recursively; pass a list, not a bare
            # string, since get_links iterates over its argument
            get_links([next_2])

get_links(line)

Here are two ways of doing this.

import mechanize

import requests
from bs4 import BeautifulSoup, SoupStrainer
import urlparse

import pprint

#-- Mechanize --
br = mechanize.Browser()

def get_links_mechanize(root):
    links = []
    br.open(root)

    for link in br.links():
        try:
            if dict(link.attrs)['class'] == 'page':
                links.append(link.absolute_url)
        except KeyError:
            # link has no 'class' attribute
            pass
    return links


#-- Requests / BeautifulSoup / urlparse --
def get_links_bs(root):
    links = []
    r = requests.get(root)

    for link in BeautifulSoup(r.text, 'html.parser', parse_only=SoupStrainer('a')):
        if link.has_attr('href') and link.has_attr('class') and 'page' in link.get('class'):
            links.append(urlparse.urljoin(root, link.get('href')))

    return links


#with open(r"C:\Users\Administrator\Desktop\3.txt", "r") as f:
#    for root in f:
#        links = get_links_bs(root.strip())
#        # <Do something with links>
root = 'http://www.apartmentguide.com/apartments/Alabama/Hartselle/'

print "Mech:"
pprint.pprint( get_links_mechanize(root) )
print "Requests/BS4/urlparse:"
pprint.pprint( get_links_bs(root) )

One of them uses mechanize. It is smarter with URLs, but it is a lot slower and may be overkill depending on what else you are doing.

The other uses requests to fetch the page (urllib2 would have sufficed), BeautifulSoup to parse the markup, and urlparse to form absolute URLs from the relative URLs listed in the page.
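
For instance, urlparse.urljoin resolves a relative href against the page it came from. A quick illustration (the href value below is a hypothetical example shaped like this site's pagination links):

import urlparse

root = 'http://www.apartmentguide.com/apartments/Alabama/Hartselle/'
href = '?page=2'   # hypothetical relative href from a pagination anchor

print urlparse.urljoin(root, href)
# -> http://www.apartmentguide.com/apartments/Alabama/Hartselle/?page=2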

Note that both of these functions return the following list:

['http://www.apartmentguide.com/apartments/Alabama/Hartselle/?page=2',
 'http://www.apartmentguide.com/apartments/Alabama/Hartselle/?page=3',
 'http://www.apartmentguide.com/apartments/Alabama/Hartselle/?page=4',
 'http://www.apartmentguide.com/apartments/Alabama/Hartselle/?page=5',
 'http://www.apartmentguide.com/apartments/Alabama/Hartselle/?page=2',
 'http://www.apartmentguide.com/apartments/Alabama/Hartselle/?page=3',
 'http://www.apartmentguide.com/apartments/Alabama/Hartselle/?page=4',
 'http://www.apartmentguide.com/apartments/Alabama/Hartselle/?page=5']

There are duplicates in there. You can eliminate the duplicates by changing

return links

to

return list(set(links))

in whichever method you choose.
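
Note that set() does not preserve order, so the pages may come back shuffled. If order matters, here is a minimal order-preserving dedup sketch (my addition, not part of the original answer):

def dedupe(links):
    # Keep only the first occurrence of each link, preserving page order
    seen = set()
    deduped = []
    for link in links:
        if link not in seen:
            seen.add(link)
            deduped.append(link)
    return deduped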

EDIT:

I noticed that the functions above only returned links to pages 2-5, and that you would have to navigate those pages to see that there are in fact 10 pages.

A completely different approach would be to scrape the number of results from the "root" page, predict how many pages that works out to, and then build the links from that.

Since there are 20 results per page, figuring out how many pages there are is straightforward. Consider:

import requests, re, math, pprint

def scrape_results(root):
    links = []
    r = requests.get(root)

    mat = re.search(r'We have (\d+) apartments for rent', r.text)
    num_results = int(mat.group(1))                     # 182 at the moment
    num_pages = int(math.ceil(num_results/20.0))        # ceil(182/20) => 10

    # Construct links for pages 1-10
    for i in range(num_pages):
        links.append("%s?page=%d" % (root, (i+1)))

    return links

pprint.pprint(scrape_results(root))

This would be the fastest of the three approaches, but it is possibly more error prone.
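
One way it could fail: if the site rewords the "We have N apartments for rent" sentence, re.search returns None and mat.group(1) raises an AttributeError. A hedged variant that guards for that (the empty-list fallback is my assumption, not part of the original answer):

import requests, re, math

def scrape_results_safe(root):
    r = requests.get(root)
    mat = re.search(r'We have (\d+) apartments for rent', r.text)
    if mat is None:
        # Wording changed or the count is missing; nothing to build
        return []
    num_pages = int(math.ceil(int(mat.group(1)) / 20.0))
    return ["%s?page=%d" % (root, i + 1) for i in range(num_pages)]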

EDIT 2:

Maybe something like this:

import re, math, pprint
import requests, urlparse
from bs4 import BeautifulSoup, SoupStrainer

def get_pages(root):
    links = []
    r = requests.get(root)

    mat = re.search(r'We have (\d+) apartments for rent', r.text)
    num_results = int(mat.group(1))                     # 182 at the moment
    num_pages = int(math.ceil(num_results/20.0))        # ceil(182/20) => 10

    # Construct links for pages 1-10
    for i in range(num_pages):
        links.append("%s?page=%d" % (root, (i+1)))

    return links

def get_listings(page):
    links = []
    r = requests.get(page)

    for link in BeautifulSoup(r.text, 'html.parser', parse_only=SoupStrainer('a')):
        if link.has_attr('href') and link.has_attr('data-listingid') and 'name' in link.get('class'):
            # Join against the page the listing link was found on
            links.append(urlparse.urljoin(page, link.get('href')))

    return links

root='http://www.apartmentguide.com/apartments/Alabama/Hartselle/'
listings = []
for page in get_pages(root):
    listings += get_listings(page)

pprint.pprint(listings)
print(len(listings))

I was not sure about re, so I tried xpath instead.

import csv
import math

import requests
from scrapy import Selector

# Read the input URLs, one per line
with open(r"C:\Users\ssamant\Desktop\Anida\Phase_II\Apartmentfinder\2.txt", "r") as links:
    line = links.read().split("\n")

for each in line:
    lines = []
    r = requests.get(each)
    sel = Selector(text=r.text, type="html")
    # The <h1><strong> text looks like u'182 apartments for rent'
    mat = sel.xpath('//h1//strong/text()').extract()[0]
    num_results = int(mat.split(" ")[0])
    # 20 results per page, so round up to get the page count
    num_pages = int(math.ceil(num_results / 20.0))
    for i in range(num_pages):
        lines.append("%s/Page%d" % (each, (i + 1)))
    # Append this URL's page links to the output csv
    with open(r'C:\Users\ssamant\Desktop\Anida\Phase_II\Apartmentfinder\test.csv', 'ab') as f:
        writer = csv.writer(f)
        for val in lines:
            writer.writerow([val])
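
For comparison, the same count can usually be pulled with re instead of xpath. A minimal sketch, assuming the page text contains a phrase like "182 apartments for rent" as in the answers above:

import re, math, requests

def count_pages(url):
    r = requests.get(url)
    mat = re.search(r'(\d+) apartments for rent', r.text)
    if mat is None:
        return 0
    # 20 results per page, as above
    return int(math.ceil(int(mat.group(1)) / 20.0))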
