
Scraping each element from website with BeautifulSoup

I wrote some code for scraping a real estate website. This is the link:

https://www.nekretnine.rs/stambeni-objekti/stanovi/lista/po-stranici/10/

From this page I can only get the location, size and price of each apartment. Is it possible to write code that goes to each apartment's own page and scrapes the values from there, since it contains much more info? Check this link:

https://www.nekretnine.rs/stambeni-objekti/stanovi/arena-bulevar-arsenija-carnojevica-97m-2-lode-energoprojekt/NkvJK0Ou5tV/

I have posted my code below. I noticed that the URL changes when I click on a specific listing. For example:

arena-bulevar-arsenija-carnojevica-97m-2-lode-energoprojekt/NkvJK0Ou5tV/

I thought about creating a for loop, but there is no way to know how the URL changes, because it has some id number at the end:

NkvJK0Ou5tV

This is the code that I have:

from bs4 import BeautifulSoup
import requests

website = "https://www.nekretnine.rs/stambeni-objekti/stanovi/lista/po-stranici/10/"

html = requests.get(website).text
my_html = BeautifulSoup(html, 'lxml')

lokacija = my_html.find_all('p', class_='offer-location text-truncate')
ukupna_kvadratura = my_html.find_all('p', class_='offer-price offer-price--invert')
ukupna_cena = my_html.find_all('div', class_='d-flex justify-content-between w-100')
ukupni_opis = my_html.find_all('div', class_='mt-1 mb-1 mt-lg-0 mb-lg-0 d-md-block offer-meta-info offer-adress')


for lok, kvadratura, cena_stana, sumarno in zip(lokacija, ukupna_kvadratura, ukupna_cena, ukupni_opis):

    lok = lok.text.split(',')[0]  # location

    kv = kvadratura.span.text.split(' ')[0]   # size
    jed = kvadratura.span.text.split(' ')[1]  # unit of measure

    cena = cena_stana.span.text  # price

    sumarno = sumarno.text

    datum = sumarno.split('|')[0].strip()   # date
    status = sumarno.split('|')[1].strip()  # status
    opis = sumarno.split('|')[2].strip()    # description

    print(lok, kv, jed, cena, datum, status, opis)

You can get the href from the div with class="placeholder-preview-box ratio-4-3"; from there you can find the URL of each listing's detail page.
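A minimal sketch of that approach: the `placeholder-preview-box ratio-4-3` class name comes from this answer, but the assumption that it wraps an `<a href=...>` tag should be verified against the live HTML in your browser's inspector.

```python
from urllib.parse import urljoin

from bs4 import BeautifulSoup
import requests

BASE = 'https://www.nekretnine.rs'

def extract_detail_urls(listing_html):
    """Collect the detail-page URL of every offer on one listing page.

    The 'placeholder-preview-box ratio-4-3' class is taken from this
    answer; that it contains an <a href=...> is an assumption to check.
    """
    page = BeautifulSoup(listing_html, 'html.parser')
    urls = []
    for box in page.find_all('div', class_='placeholder-preview-box ratio-4-3'):
        link = box.find('a', href=True)
        if link:
            urls.append(urljoin(BASE, link['href']))  # hrefs are site-relative
    return urls

if __name__ == '__main__':
    listing = 'https://www.nekretnine.rs/stambeni-objekti/stanovi/lista/po-stranici/10/'
    for url in extract_detail_urls(requests.get(listing).text):
        print(url)
```

This sidesteps the id-number problem from the question entirely: you never construct `NkvJK0Ou5tV`-style URLs yourself, you just follow the hrefs the listing page already contains.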

You can iterate over the links provided by the pagination at the bottom of the page:

from bs4 import BeautifulSoup as soup
import requests

d = soup(requests.get('https://www.nekretnine.rs/stambeni-objekti/stanovi/lista/po-stranici/10/').text, 'html.parser')

def scrape_page(page):
    # Each offer sits in a <div class="row offer">; pull its title,
    # location and price.
    return [{'title': i.h2.get_text(strip=True),
             'loc': i.p.get_text(strip=True),
             'price': i.find('p', {'class': 'offer-price'}).get_text(strip=True)}
            for i in page.find_all('div', {'class': 'row offer'})]

result = [scrape_page(d)]
# Follow the right-arrow pagination link until it disappears on the last page.
while d.find('a', {'class': 'pagination-arrow arrow-right'}):
    next_href = d.find('a', {'class': 'pagination-arrow arrow-right'})['href']
    d = soup(requests.get(f'https://www.nekretnine.rs{next_href}').text, 'html.parser')
    result.append(scrape_page(d))
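The pagination loop above can be extended to visit each offer's detail page, which is what the question asks for. A hedged sketch: the `row offer` class is taken from the code above, the `/stambeni-objekti/` substring filter is a heuristic for picking out listing links, and the detail-page parser only grabs the `<h1>` heading, since the rest of that page's selectors would need to be inspected in the browser first.

```python
import time
from urllib.parse import urljoin

from bs4 import BeautifulSoup
import requests

BASE = 'https://www.nekretnine.rs'

def offer_links(listing_html):
    """Pull every detail-page URL out of one listing page.

    Filtering hrefs on '/stambeni-objekti/' is a heuristic assumption,
    not something confirmed by the question.
    """
    page = BeautifulSoup(listing_html, 'html.parser')
    urls = set()
    for offer in page.find_all('div', class_='row offer'):
        for a in offer.find_all('a', href=True):
            if '/stambeni-objekti/' in a['href']:
                urls.add(urljoin(BASE, a['href']))
    return sorted(urls)

def scrape_detail(detail_html):
    """Sketch of parsing one detail page: only the <h1> title for now;
    add more fields once you know the detail page's real selectors."""
    page = BeautifulSoup(detail_html, 'html.parser')
    h1 = page.find('h1')
    return {'title': h1.get_text(strip=True) if h1 else None}

if __name__ == '__main__':
    listing = 'https://www.nekretnine.rs/stambeni-objekti/stanovi/lista/po-stranici/10/'
    html = requests.get(listing).text
    for url in offer_links(html):
        print(scrape_detail(requests.get(url).text))
        time.sleep(1)  # be polite to the server between requests
```

Keeping the link extraction and the page parsing as pure functions of HTML strings makes them easy to test offline before pointing them at the live site.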
