I'm looking to automate my HTML table scrape in Scrapy. This is what I have so far:
import scrapy
import pandas as pd


class XGSpider(scrapy.Spider):
    name = 'expectedGoals'
    start_urls = [
        'https://fbref.com/en/comps/9/schedule/Premier-League-Scores-and-Fixtures',
    ]

    def parse(self, response):
        matches = []
        for row in response.xpath('//*[@id="sched_ks_3232_1"]//tbody/tr'):
            match = {
                'home': row.xpath('td[4]//text()').extract_first(),
                'homeXg': row.xpath('td[5]//text()').extract_first(),
                'score': row.xpath('td[6]//text()').extract_first(),
                'awayXg': row.xpath('td[7]//text()').extract_first(),
                'away': row.xpath('td[8]//text()').extract_first()
            }
            matches.append(match)

        x = pd.DataFrame(
            matches, columns=['home', 'homeXg', 'score', 'awayXg', 'away'])
        yield x.to_csv("xG.csv", sep=",", index=False)
It works fine; however, as you can see, I am hardcoding the keys (home, homeXg, etc.) for the match object. I'd like to automate scraping the keys into a list and then initialize a dict with keys from said list. The problem is, I don't know how to loop through XPath by index. As an example,
headers = []
for row in response.xpath('//*[@id="sched_ks_3260_1"]/thead/tr'):
    yield {
        'first': row.xpath('th[1]/text()').extract_first(),
        'second': row.xpath('th[2]/text()').extract_first()
    }
Is it possible to stick th[1], th[2], th[3], etc. into a for loop, with the numbers as indexes, and append the values to a list? E.g. row.xpath('th[i]/text()').extract_first()?
Not tested but should work:
column_index = 1
columns = {}
for column_node in response.xpath('//*[@id="sched_ks_3260_1"]/thead/tr/th'):
    column_name = column_node.xpath('./text()').extract_first()
    columns[column_name] = column_index
    column_index += 1

matches = []
for row in response.xpath('//*[@id="sched_ks_3232_1"]//tbody/tr'):
    match = {}
    for column_name in columns:
        match[column_name] = row.xpath(
            './td[{index}]//text()'.format(index=columns[column_name])).extract_first()
    matches.append(match)
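For completeness, here is one way this could be folded back into the original spider. This is a sketch, not tested against the live page; it assumes the header row and the body rows come from the same table, sched_ks_3232_1, and that every body cell is a td (on this site some tables render the first cell of each row as a th, which would shift the indexes by one). enumerate(..., start=1) replaces the manual counter, since XPath positions are 1-based:

import scrapy
import pandas as pd


class XGSpider(scrapy.Spider):
    name = 'expectedGoals'
    start_urls = [
        'https://fbref.com/en/comps/9/schedule/Premier-League-Scores-and-Fixtures',
    ]

    def parse(self, response):
        table = response.xpath('//*[@id="sched_ks_3232_1"]')

        # Map each header name to its 1-based column position,
        # so the td[i] expressions below line up with the headers.
        columns = {}
        for index, header in enumerate(table.xpath('./thead/tr/th'), start=1):
            name = header.xpath('./text()').extract_first()
            if name:  # skip empty/spacer header cells
                columns[name] = index

        # Build one dict per row by formatting the index into the XPath
        # string, which is exactly the th[i]/td[i] loop asked about above.
        matches = []
        for row in table.xpath('.//tbody/tr'):
            match = {
                name: row.xpath('./td[{}]//text()'.format(index)).extract_first()
                for name, index in columns.items()
            }
            matches.append(match)

        pd.DataFrame(matches, columns=list(columns)).to_csv(
            "xG.csv", sep=",", index=False)

Note that to_csv returns None when given a file path, so there is nothing useful to yield from it; writing the file as a plain statement (or, more idiomatically in Scrapy, yielding each match dict as an item and letting a feed export produce the CSV) avoids yielding None from parse.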