
Is there a way to scrape through multiple pages on a website in R

I am new to R and web scraping. For practice I am trying to scrape book titles from a practice website with multiple pages ('http://books.toscrape.com/catalogue/page-1.html'), and then calculate certain metrics based on the book titles. There are 20 books on each page and 50 pages in total. I have managed to scrape and calculate metrics for the first 20 books; however, I want to calculate the metrics for the full 1000 books on the website.

The current output looks like this:

 [1] "A Light in the Attic"                                                                          
 [2] "Tipping the Velvet"                                                                            
 [3] "Soumission"                                                                                    
 [4] "Sharp Objects"                                                                                 
 [5] "Sapiens: A Brief History of Humankind"                                                         
 [6] "The Requiem Red"                                                                               
 [7] "The Dirty Little Secrets of Getting Your Dream Job"                                            
 [8] "The Coming Woman: A Novel Based on the Life of the Infamous Feminist, Victoria Woodhull"       
 [9] "The Boys in the Boat: Nine Americans and Their Epic Quest for Gold at the 1936 Berlin Olympics"
[10] "The Black Maria"                                                                               
[11] "Starving Hearts (Triangular Trade Trilogy, #1)"                                                
[12] "Shakespeare's Sonnets"                                                                         
[13] "Set Me Free"                                                                                   
[14] "Scott Pilgrim's Precious Little Life (Scott Pilgrim #1)"                                       
[15] "Rip it Up and Start Again"                                                                     
[16] "Our Band Could Be Your Life: Scenes from the American Indie Underground, 1981-1991"            
[17] "Olio"                                                                                          
[18] "Mesaerion: The Best Science Fiction Stories 1800-1849"                                         
[19] "Libertarianism for Beginners"                                                                  
[20] "It's Only the Himalayas"

I want this to be 1000 book titles long instead of 20, which will allow me to use the same code to calculate the metrics for all 1000 books.

Code:

library(rvest)

url <- 'http://books.toscrape.com/catalogue/page-1.html'

# Scrape the 20 book titles on the first page
titles <- url %>%
  read_html() %>%
  html_nodes('h3 a') %>%
  html_attr('title')
titles

What would be the best way to scrape every book from the website and make the list 1000 book titles long instead of 20? Thanks in advance.

Generate the 50 URLs, then iterate over them, e.g. with purrr::map:

library(rvest)

# All 50 catalogue pages
urls <- paste0('http://books.toscrape.com/catalogue/page-', 1:50, '.html')

# Returns a list of 50 character vectors (20 titles each)
titles <- purrr::map(
  urls,
  . %>%
    read_html() %>%
    html_nodes('h3 a') %>%
    html_attr('title')
)
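Since purrr::map() returns a list of 50 character vectors rather than one long vector, you can flatten it afterwards; a minimal sketch:

library(rvest)

# Flatten the list of 50 vectors into a single character vector of 1000 titles
all_titles <- unlist(titles)
length(all_titles)  # should be 1000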

something like this perhaps?

library(tidyverse)
library(rvest)
library(data.table)

# Vector with the URLs of all 50 pages to scrape
url <- paste0("http://books.toscrape.com/catalogue/page-", 1:50, ".html")

# Scrape each page into a one-column data.table
L <- lapply(url, function(x) {
  print(paste0("scraping: ", x, " ... "))
  data.table(titles = read_html(x) %>%
               html_nodes('h3 a') %>%
               html_attr('title'))
})

# Bind the list into a single data.table of 1000 titles
books <- data.table::rbindlist(L, use.names = TRUE, fill = TRUE)
books
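The question doesn't say which metrics are needed, so as a purely hypothetical example (using the `books` data.table assigned above and stringr), the combined titles could be summarised like this:

library(data.table)
library(stringr)

# Hypothetical metrics on the 1000 scraped titles:
# character count and word count per title, then overall averages
books[, n_chars := nchar(titles)]
books[, n_words := str_count(titles, "\\S+")]
books[, .(mean_chars = mean(n_chars), mean_words = mean(n_words))]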
