
How can I scrape data beyond the page limit on Zillow?

I wrote a script to scrape Zillow data and it works fine. My only problem is that it is limited to 20 pages, even though there are more results. Is there a way to bypass this page limit and scrape all the data?

I would also like to know whether there is a general solution to this problem, since I run into it on almost every site I want to scrape.

Thanks

from bs4 import BeautifulSoup
import requests
import json
import pandas as pd  # was missing, but pd.DataFrame is used below



headers = {
        "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36",
        "Accept-Language": "en-US,en;q=0.9"
    }   


search_link = 'https://www.zillow.com/homes/Florida--/'
response = requests.get(url=search_link, headers=headers)


pages_number = 19
def OnePage():
    soup = BeautifulSoup(response.text, 'lxml')
    data = json.loads(
        soup.select_one("script[data-zrr-shared-data-key]")
        .contents[0]
        .strip("!<>-")
    )
    all_data = data['cat1']['searchResults']['listResults']
    
    result = []

    for i in range(len(all_data)):
        property_link = all_data[i]['detailUrl']
        property_response = requests.get(url=property_link, headers=headers)
        property_page_source = BeautifulSoup(property_response.text, 'lxml')
        # parse the preloaded-data script tag once instead of twice
        preloaded = json.loads(property_page_source.find('script', {'id': 'hdpApolloPreloadedData'}).get_text())
        property_data_all = json.loads(preloaded['apiCache'])
        zp_id = str(preloaded['zpid'])
        property_data = property_data_all['ForSaleShopperPlatformFullRenderQuery{"zpid":'+zp_id+',"contactFormRenderParameter":{"zpid":'+zp_id+',"platform":"desktop","isDoubleScroll":true}}']["property"]
        home_info = {}  # must be a fresh dict per listing, not a list
        home_info["Broker Name"] = property_data['attributionInfo']['brokerName']
        home_info["Broker Phone"] = property_data['attributionInfo']['brokerPhoneNumber']
        result.append(home_info)
        
    return result
    


data = pd.DataFrame()
all_page_property_info = []
for page in range(pages_number):
    property_info_one_page = OnePage()
    search_link = 'https://www.zillow.com/homes/Florida--/'+str(page+2)+'_p'
    response = requests.get(url=search_link, headers=headers)
    all_page_property_info = all_page_property_info+property_info_one_page
    data = pd.DataFrame(all_page_property_info)
    data.to_csv(f"/Users//Downloads/Zillow Search Result.csv", index=False)
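As for the "general solution" part of the question: the loop mechanics of paging until a site runs out of results have a generic shape that is independent of Zillow. A minimal sketch follows; `fetch_page` is a hypothetical stand-in for whatever per-page scrape you use, and note this only removes a hard-coded page count — it cannot bypass a cap the server itself enforces (for that, you typically have to narrow the query, e.g. by price range, so each slice fits within the cap).

```python
def scrape_all_pages(fetch_page, max_pages=100):
    """Collect results page by page until a page comes back empty."""
    all_results = []
    for page in range(1, max_pages + 1):
        results = fetch_page(page)
        if not results:      # empty page -> no more data, stop paging
            break
        all_results.extend(results)
    return all_results

# Demo with a fake fetcher exposing 3 pages of 2 items each:
fake_data = {1: ["a", "b"], 2: ["c", "d"], 3: ["e", "f"]}
print(scrape_all_pages(lambda p: fake_data.get(p, [])))
# ['a', 'b', 'c', 'd', 'e', 'f']
```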

Actually, you can't pull most of the data from Zillow with bs4 alone, because it is loaded dynamically by JS, and bs4 can't render JS; only 6 to 8 of the data items are static. However, all of the data sits in a script tag, as JSON inside an HTML comment. How do you pull the data you need? In that case you can follow the next example, which lets you extract all the items. Pulling the rest of the data items is your task, or just add your own items here. Zillow is one of the most famous and smartest websites, so we should respect its terms and conditions.

Example:

import requests
import re
import json
import pandas as pd

url = 'https://www.zillow.com/fl/{page}_p/?searchQueryState=%7B%22usersSearchTerm%22%3A%22FL%22%2C%22mapBounds%22%3A%7B%22west%22%3A-94.21964006249998%2C%22east%22%3A-80.68448381249998%2C%22south%22%3A22.702203494269085%2C%22north%22%3A32.23788425255877%7D%2C%22regionSelection%22%3A%5B%7B%22regionId%22%3A14%2C%22regionType%22%3A2%7D%5D%2C%22isMapVisible%22%3Afalse%2C%22filterState%22%3A%7B%22sort%22%3A%7B%22value%22%3A%22days%22%7D%2C%22ah%22%3A%7B%22value%22%3Atrue%7D%7D%2C%22isListVisible%22%3Atrue%2C%22mapZoom%22%3A6%2C%22pagination%22%3A%7B%22currentPage%22%3A2%7D%7D'
lst = []
for page in range(1, 21):
    r = requests.get(url.format(page=page), headers={'User-Agent': 'Mozilla/5.0'})
    # the search results live in a JSON blob inside an HTML comment
    data = json.loads(re.search(r'!--(\{"queryState".*?)-->', r.text).group(1))

    for item in data['cat1']['searchResults']['listResults']:
        lst.append({'price': item['price']})

# to_csv returns None, so build the DataFrame first, then save it
df = pd.DataFrame(lst)
df.to_csv('out.csv', index=False)
print(df)

Output:

       price
0      $354,900
1      $164,900
2      $155,000
3      $475,000
4      $245,000
..          ...
795    $295,000
796     $10,000
797    $385,000
798  $1,785,000
799  $1,550,000

[800 rows x 1 columns]
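The same comment-extraction step can be checked offline on a small self-contained sample, which also shows how to pull extra fields per listing. The field names `address` and `beds` here are illustrative assumptions — inspect `data['cat1']['searchResults']['listResults']` yourself to see which keys the live page actually carries:

```python
import re
import json

# Hand-made sample mimicking the structure of Zillow's embedded comment.
sample_html = '''<html><body>
<script><!--{"queryState": {}, "cat1": {"searchResults": {"listResults":
[{"price": "$354,900", "address": "123 Main St", "beds": 3}]}}}--></script>
</body></html>'''

# re.S lets .*? cross line breaks in this multi-line sample.
match = re.search(r'!--(\{"queryState".*?)-->', sample_html, re.S)
if match:  # guard: the regex may not match if the page layout changes
    data = json.loads(match.group(1))
    rows = [
        {'price': it.get('price'),      # .get avoids KeyError on
         'address': it.get('address'),  # listings missing a field
         'beds': it.get('beds')}
        for it in data['cat1']['searchResults']['listResults']
    ]
    print(rows)
```

Guarding the `re.search` result and using `.get()` keeps the loop from crashing on the occasional listing that lacks a field.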
