How to extract the website URL from the redirect link with Scrapy Python

I wrote a script to scrape data from a website. I am having trouble collecting the website URL because the @href is a redirect link. How can I convert the redirect URL into the actual website it redirects to?
import scrapy
import logging


class AppSpider(scrapy.Spider):
    name = 'app'
    allowed_domains = ['www.houzz.in']
    start_urls = ['https://www.houzz.in/professionals/searchDirectory?topicId=26721&query=Design-Build+Firms&location=Mumbai+City+District%2C+India&distance=100&sort=4']

    def parse(self, response):
        lists = response.xpath('//li[@class="hz-pro-search-results__item"]/div/div[@class="hz-pro-search-result__info"]/div/div/div/a')
        for data in lists:
            link = data.xpath('.//@href').get()
            yield scrapy.Request(url=link, callback=self.parse_houses, meta={'Links': link})

        next_page = response.xpath('(//a[@class="hz-pagination-link hz-pagination-link--next"])[1]/@href').extract_first()
        if next_page:
            yield response.follow(response.urljoin(next_page), callback=self.parse)

    def parse_houses(self, response):
        link = response.request.meta['Links']
        firm_name = response.xpath('//div[@class="hz-profile-header__title"]/h1/text()').get()
        name = response.xpath('//div[@class="profile-meta__val"]/text()').get()
        phone = response.xpath('//div[@class="hz-profile-header__contact-info text-right mrm"]/a/span/text()').get()
        website = response.xpath('(//div[@class="hz-profile-header__contact-info text-right mrm"]/a)[2]/@href').get()
        yield {
            'Links': link,
            'Firm_name': firm_name,
            'Name': name,
            'Phone': phone,
            'Website': website
        }
You have to make a request to that target URL to see where it leads. In your case you can simply issue a HEAD request, which does not download any body from the target URL; that saves bandwidth and makes the script faster:
from scrapy import Request


    def parse_houses(self, response):
        link = response.request.meta['Links']
        firm_name = response.xpath('//div[@class="hz-profile-header__title"]/h1/text()').get()
        name = response.xpath('//div[@class="profile-meta__val"]/text()').get()
        phone = response.xpath('//div[@class="hz-profile-header__contact-info text-right mrm"]/a/span/text()').get()
        website = response.xpath('(//div[@class="hz-profile-header__contact-info text-right mrm"]/a)[2]/@href').get()
        yield Request(url=website,
                      method="HEAD",
                      callback=self.get_final_link,
                      meta={'data': {
                          'Links': link,
                          'Firm_name': firm_name,
                          'Name': name,
                          'Phone': phone,
                          'Website': website
                      }})

    def get_final_link(self, response):
        data = response.meta['data']
        # Scrapy header values are bytes, so decode before storing;
        # write back under the same 'Website' key used in the item above
        data['Website'] = response.headers['Location'].decode('utf-8')
        yield data
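One caveat: Scrapy's RedirectMiddleware follows redirects automatically by default, so a Location header is only visible in the callback if you disable that per request (e.g. meta={'dont_redirect': True}, together with handle_httpstatus_list for 301/302). Also, Location may be a relative URL. A minimal sketch of resolving it, assuming a hypothetical tracking URL just for illustration:

```python
from urllib.parse import urljoin

def resolve_location(request_url: str, location: bytes) -> str:
    """Resolve a (possibly relative) Location header against the request URL.

    Scrapy header values are bytes, so decode first.
    """
    return urljoin(request_url, location.decode("utf-8"))

# Hypothetical values, for illustration only:
print(resolve_location("https://www.houzz.in/trk/abc", b"https://example-firm.com/"))
# An absolute Location wins outright; a relative one such as b"/z"
# is resolved against the scheme and host of the request URL.
```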
If your goal is just to get the website, the actual website link is also present in the page source of each listing, so you can grab it with a regular expression and skip visiting the obfuscated redirect URL entirely:
import re


    def parse_houses(self, response):
        link = response.request.meta['Links']
        firm_name = response.xpath('//div[@class="hz-profile-header__title"]/h1/text()').get()
        name = response.xpath('//div[@class="profile-meta__val"]/text()').get()
        phone = response.xpath('//div[@class="hz-profile-header__contact-info text-right mrm"]/a/span/text()').get()
        website = re.findall(r"\"url\"\: \"(.*?)\"", response.text)[0]
        yield {'Links': link, 'Firm_name': firm_name, 'Name': name, 'Phone': phone, 'Website': website}
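To see what that regex actually captures, here is a self-contained sketch run against a made-up excerpt of embedded page JSON (the snippet and values are hypothetical, only the pattern comes from the answer above):

```python
import re

# Hypothetical excerpt of a profile page's embedded JSON (illustrative only):
sample_html = '{"name": "Some Firm", "url": "https://example-firm.com", "city": "Mumbai"}'

# Non-greedy capture of whatever sits between the quotes after "url":
matches = re.findall(r"\"url\"\: \"(.*?)\"", sample_html)
print(matches[0])  # https://example-firm.com
```

The non-greedy `(.*?)` stops at the first closing quote, which is what keeps the match from swallowing the rest of the line.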
You can do it like this:
class AppSpider(scrapy.Spider):
    base_url = 'www.houzz.in{}'
    .
    .
    .
    def foo(self):
        actual_url = self.base_url.format(redirect_url)
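String formatting only helps when the redirect link is a site-relative path. Many tracking links instead carry the destination as a query-string parameter; in that case `urllib.parse` can extract it without any extra request. A sketch under that assumption (the parameter name `url` and the sample link are guesses, not taken from houzz.in):

```python
from typing import Optional
from urllib.parse import urlparse, parse_qs

def target_from_tracking_link(link: str, param: str = "url") -> Optional[str]:
    """Pull the destination URL out of a tracking link's query string.

    The parameter name varies per site; 'url' is only an assumption here.
    parse_qs also percent-decodes the value for us.
    """
    values = parse_qs(urlparse(link).query).get(param)
    return values[0] if values else None

# Hypothetical tracking link (illustrative only):
print(target_from_tracking_link(
    "https://www.houzz.in/trk?url=https%3A%2F%2Fexample-firm.com%2F"))
```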