How to write the scraped data in CSV format?
Hi, I'm new to Python and I don't know how to save my scraped data in CSV format. Here is my program:
import requests
import urllib.request
from bs4 import BeautifulSoup
import pandas
url = 'https://menupages.com/restaurants/ny-new-york/2'
response = requests.get(url)
soup = BeautifulSoup(response.text, "html.parser")
all_links = soup.find_all("a")
for link in all_links:
    print(link.get("href"))
rows = soup.find_all('tr')
print(rows[:10])
It scrapes the output I want, and now I would like to save that output to a CSV file. Can anyone please help?
You can find the following example in the Python csv module documentation.
import csv
with open('eggs.csv', 'w', newline='') as csvfile:
    spamwriter = csv.writer(csvfile, delimiter=' ',
                            quotechar='|', quoting=csv.QUOTE_MINIMAL)
    spamwriter.writerow(['Spam'] * 5 + ['Baked Beans'])
    spamwriter.writerow(['Spam', 'Lovely Spam', 'Wonderful Spam'])
As you can see, all you need to do is convert each row of your data into a list and pass it to the writerow method.
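For instance, applying that pattern to scraped data looks like this (a minimal sketch: the rows and the output filename `links.csv` are illustrative stand-ins for whatever you actually scraped):

```python
import csv

# Hypothetical scraped data: each inner list becomes one CSV row.
scraped_rows = [
    ["Restaurant", "Cuisine"],
    ["Joe's Diner", "American"],
    ["La Tavola", "Italian"],
]

with open("links.csv", "w", newline="") as csvfile:
    writer = csv.writer(csvfile)      # default comma delimiter
    writer.writerows(scraped_rows)    # write every row in one call
```

`writerows` is just a convenience that calls `writerow` once per inner list.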
Alternatively, you can store the scraped links in a Python list and then create the CSV file by building a pandas DataFrame:
import requests
import pandas
from bs4 import BeautifulSoup

url = 'https://menupages.com/restaurants/ny-new-york/2'
response = requests.get(url)
soup = BeautifulSoup(response.text, "html.parser")

# Collect every href on the page into a list.
all_links = soup.find_all("a")
list_links = []
for link in all_links:
    list_links.append(link.get("href"))

df = pandas.DataFrame({'WebLinks': list_links})
df.to_csv('/home/stackoverflow/links.csv', index=False)  # index=False omits the row-number column
Output file:
WebLinks
https://menupages.com/
https://menupages.com/
https://menupages.com/restaurants/cities
https://menupages.com/info/about-us
https://menupages.com/info/contact-us
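Your original code also collected the `<tr>` table rows; those can go into a DataFrame the same way by extracting each cell's text. A sketch using illustrative HTML in place of the live page (the table markup and the `Name`/`Cuisine` column names are assumptions, not the real page structure):

```python
import pandas
from bs4 import BeautifulSoup

# Illustrative HTML standing in for the scraped page's table markup.
html = """
<table>
  <tr><td>Joe's Diner</td><td>American</td></tr>
  <tr><td>La Tavola</td><td>Italian</td></tr>
</table>
"""
soup = BeautifulSoup(html, "html.parser")

# Turn each <tr> into a list of its cells' text.
table_rows = [[td.get_text(strip=True) for td in tr.find_all("td")]
              for tr in soup.find_all("tr")]

df = pandas.DataFrame(table_rows, columns=["Name", "Cuisine"])
df.to_csv("rows.csv", index=False)  # index=False drops the row-number column
```

Each list of cell texts becomes one row of the DataFrame, and `to_csv` writes the header plus those rows.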