The problem is that I can't correctly count all the elements across two web pages in Selenium.
prices = []
pages = driver.find_elements(By.CLASS_NAME, "page")  # get the list of page links
for x in pages:  # loop through this list
    x.click()  # click on the page number
    prices += driver.find_elements(By.CLASS_NAME, "final-price")  # add this page's elements
print(len(prices))
The result is wrong: it adds page 1 to page 1, not page 1 to page 2. I tried inserting a wait, but it did not help :( I want to note that the page does not reload; it just scrolls up and shows new results after clicking on page 2, so it must be using AJAX.
You can use the requests and bs4 libraries to scrape a website.
For example:
a) I will count how many links there are on a site;
b) I will extract the href value of each link.
import requests, bs4

my_request = requests.get('https://www.bellezaculichi.com')
# request one page
my_request.raise_for_status()
# raise an exception on an HTTP error
my_html = bs4.BeautifulSoup(my_request.text, 'html.parser')
# parse the page
my_links = my_html.select('a')
# stores all links in the list my_links
total_links = len(my_links)
# how many links were found
print(total_links)
# show the count
for link in my_links:
    print(link.get('href'))
    # print each link's href value
With bs4 you can count any element on a web page: how many divs, ps, or as there are. You can also count elements by ID, by class, or by any attribute.
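As a sketch of those counting options, here is a small self-contained example. The inline HTML snippet (and the class/attribute names in it) is made up purely for illustration; with a real page you would pass `my_request.text` to BeautifulSoup instead:

```python
import bs4

# A tiny inline document standing in for a fetched page (illustration only).
html = """
<div id="content">
  <p class="final-price">10</p>
  <p class="final-price">20</p>
  <a href="/page2" data-page="2">next</a>
</div>
"""
soup = bs4.BeautifulSoup(html, "html.parser")

print(len(soup.find_all("p")))           # count by tag name
print(len(soup.select("#content")))      # count by ID (CSS selector)
print(len(soup.select(".final-price")))  # count by class
print(len(soup.select("a[data-page]")))  # count by attribute
```

The `select` method accepts CSS selectors, so `#id`, `.class`, and `tag[attr]` all work the same way as in a stylesheet.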