
How can I scrape data beyond the page limit on Zillow?

I wrote a script to scrape Zillow data, and it works fine. My only problem is that it is capped at 20 pages, even when there are more results. Is there a way to get past this page limit and scrape all the data?

I would also like to know whether there is a general solution to this problem, since I run into it on almost every site I want to scrape.

Thanks

from bs4 import BeautifulSoup
import requests
import lxml
import json
import pandas as pd  # used below but missing from the original imports



headers = {
        "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36",
        "Accept-Language": "en-US,en;q=0.9"
    }   


search_link = 'https://www.zillow.com/homes/Florida--/'
response = requests.get(url=search_link, headers=headers)


pages_number = 19
def OnePage():
    soup = BeautifulSoup(response.text, 'lxml')
    data = json.loads(
        soup.select_one("script[data-zrr-shared-data-key]")
        .contents[0]
        .strip("!<>-")
    )
    all_data = data['cat1']['searchResults']['listResults']
    
    result = []
    
    for i in range(len(all_data)):
        property_link = all_data[i]['detailUrl']
        property_response = requests.get(url=property_link, headers=headers)
        property_page_source = BeautifulSoup(property_response.text, 'lxml')
        property_data_all = json.loads(json.loads(property_page_source.find('script', {'id': 'hdpApolloPreloadedData'}).get_text())['apiCache'])
        zp_id = str(json.loads(property_page_source.find('script', {'id': 'hdpApolloPreloadedData'}).get_text())['zpid'])
        property_data = property_data_all['ForSaleShopperPlatformFullRenderQuery{"zpid":'+zp_id+',"contactFormRenderParameter":{"zpid":'+zp_id+',"platform":"desktop","isDoubleScroll":true}}']["property"]
        home_info = {}  # must be a fresh dict per listing; a list cannot take string keys
        home_info["Broker Name"] = property_data['attributionInfo']['brokerName']
        home_info["Broker Phone"] = property_data['attributionInfo']['brokerPhoneNumber']
        result.append(home_info)
        
    return result
    


data = pd.DataFrame()
all_page_property_info = []
for page in range(pages_number):
    # parse the page fetched above, then request the next results page for the following iteration
    property_info_one_page = OnePage()
    search_link = 'https://www.zillow.com/homes/Florida--/'+str(page+2)+'_p'
    response = requests.get(url=search_link, headers=headers)
    all_page_property_info = all_page_property_info+property_info_one_page
    data = pd.DataFrame(all_page_property_info)
    data.to_csv(f"/Users//Downloads/Zillow Search Result.csv", index=False)
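On the general question of page limits: a common pattern is to keep requesting successive pages and stop as soon as a page comes back empty, instead of hard-coding a page count. The sketch below is a hedged illustration, not Zillow-specific; `collect_all_pages` and `fetch_page` are hypothetical names, and `fetch_page` stands in for whatever request-and-parse step a given site needs:

```python
from typing import Callable

def collect_all_pages(fetch_page: Callable[[int], list], max_pages: int = 100) -> list:
    """Request pages 1, 2, ... and stop at the first empty page (or at max_pages)."""
    results = []
    for page in range(1, max_pages + 1):
        items = fetch_page(page)
        if not items:          # an empty page means there are no more results
            break
        results.extend(items)
    return results

# Demo with a fake fetcher: two full pages, one partial page, then nothing.
fake_pages = {1: ["a", "b"], 2: ["c", "d"], 3: ["e"]}
items = collect_all_pages(lambda p: fake_pages.get(p, []))
print(items)  # ['a', 'b', 'c', 'd', 'e']
```

Note that this only helps when the site actually serves more pages; if the server itself caps results at 20 pages, you have to narrow the query (e.g. by region or price range) so each search stays under the cap.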

Actually, you can't pull any of this data from Zillow with bs4 alone, because it is loaded dynamically by JS and bs4 can't render JS; only 6 to 8 data items are static. All of the data, however, sits in a script tag, as JSON inside an HTML comment. How do you pull the required data? In that case, you can follow the next example; this way you can extract all the items. Pulling the rest of the data items is left to you, or just add your own fields here. Zillow is one of the best-known and smartest websites, so we should respect its terms and conditions.

Example:

import requests
import re
import json
import pandas as pd

url='https://www.zillow.com/fl/{page}_p/?searchQueryState=%7B%22usersSearchTerm%22%3A%22FL%22%2C%22mapBounds%22%3A%7B%22west%22%3A-94.21964006249998%2C%22east%22%3A-80.68448381249998%2C%22south%22%3A22.702203494269085%2C%22north%22%3A32.23788425255877%7D%2C%22regionSelection%22%3A%5B%7B%22regionId%22%3A14%2C%22regionType%22%3A2%7D%5D%2C%22isMapVisible%22%3Afalse%2C%22filterState%22%3A%7B%22sort%22%3A%7B%22value%22%3A%22days%22%7D%2C%22ah%22%3A%7B%22value%22%3Atrue%7D%7D%2C%22isListVisible%22%3Atrue%2C%22mapZoom%22%3A6%2C%22pagination%22%3A%7B%22currentPage%22%3A2%7D%7D'
lst=[]
for page in range(1,21):
    r = requests.get(url.format(page=page),headers = {'User-Agent':'Mozilla/5.0'})
    # the result set is embedded as JSON inside an HTML comment: <!--{"queryState"...}-->
    data = json.loads(re.search(r'!--(\{"queryState".*?)-->', r.text).group(1))

    for item in data['cat1']['searchResults']['listResults']:
        price = item['price']
        lst.append({'price': price})

df = pd.DataFrame(lst)  # to_csv() returns None, so keep the DataFrame to print it
df.to_csv('out.csv', index=False)
print(df)

Output:

       price
0      $354,900
1      $164,900
2      $155,000
3      $475,000
4      $245,000
..          ...
795    $295,000
796     $10,000
797    $385,000
798  $1,785,000
799  $1,550,000

[800 rows x 1 columns]
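The prices above come back as display strings like `$354,900`. If you need numbers for sorting or filtering, a small helper can normalise them; this is a sketch added here for illustration (`parse_price` is a hypothetical name, not part of the answer's code):

```python
import re
from typing import Optional

def parse_price(text: str) -> Optional[int]:
    """Strip '$', ',' and any other non-digits; return None when no digits remain."""
    digits = re.sub(r"\D", "", text)
    return int(digits) if digits else None

print(parse_price("$1,785,000"))  # 1785000
```

Applied to the scraped frame, something like `df['price'].map(parse_price)` would give a numeric column.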
