Writing Python list values to a CSV file
I'm trying to export the generated list to a CSV file, where each row of the website's table corresponds to a new row in the file and each value goes in its own cell, such as:
NAME      ICO DATE    ICO PRICE    CURR. PRICE    24 HR ROI
Stratis   06/20/16    $0.007       $7.480         +38.80%
The current output looks like this:
['Patientory\n05/31/17\n$0.104\n$0.274\n+46.11%\n+25.54%\nN/A']
import csv
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait as wait

csvrows = []

def get_css_sel(selector):
    posts = browser.find_elements_by_css_selector(selector)
    for post in posts:
        print(post.text)
        csvrows.append([post.text])

browser = webdriver.Chrome(executable_path=r'C:\Scrapers\chromedriver.exe')
browser.get("https://icostats.com")
wait(browser, 20).until(EC.presence_of_element_located((By.CSS_SELECTOR, "#app > div > div.container-0-16 > div.table-0-20 > div.tbody-0-21 > div:nth-child(2) > div:nth-child(8)")))
get_css_sel("#app > div > div.container-0-16 > div.table-0-20 > div.tableheader-0-50")  # fetch header of table
get_css_sel("#app > div > div.container-0-16 > div.table-0-20 > div.tbody-0-21 > div")  # fetch rows of table

def create_csv(thelist):
    with open('ICO.csv', 'w') as myfile:
        for i in thelist:
            wr = csv.writer(myfile, quoting=csv.QUOTE_ALL)
            wr.writerow([i])

create_csv(csvrows)
In get_css_sel(), each post.text contains the row text separated by newlines (\n), same as your example of the output. So appending [post.text] appends a one-item list holding the entire row. Change that to:

csvrows.append(post.text.split('\n'))  # no extra brackets: split() already returns a list

Ex:
>>> y = 'Patientory\n05/31/17\n$0.104\n$0.274\n+46.11%\n+25.54%\nN/A'
>>> y.split('\n')
['Patientory', '05/31/17', '$0.104', '$0.274', '+46.11%', '+25.54%', 'N/A']
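Writing that split list with csv.writer then puts each field in its own cell. Here's a minimal sketch of that step using Python's built-in csv module; the io.StringIO buffer is only there so the result can be printed without touching a file:

```python
import csv
import io

# Same sample row as above, split into individual fields
row = 'Patientory\n05/31/17\n$0.104\n$0.274\n+46.11%\n+25.54%\nN/A'.split('\n')

buf = io.StringIO()
wr = csv.writer(buf, quoting=csv.QUOTE_ALL)
wr.writerow(row)  # each list item becomes its own quoted cell
print(buf.getvalue())
# "Patientory","05/31/17","$0.104","$0.274","+46.11%","+25.54%","N/A"
```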
Additionally, in your writing loop, you shouldn't re-create the csv.writer for every row; create it once, before looping over thelist. And since csvrows already holds all the rows you want, you can use csvwriter.writerows directly:
def create_csv(thelist):
    with open('ICO.csv', 'w', newline='') as myfile:  # newline='' avoids blank rows on Windows
        wr = csv.writer(myfile, quoting=csv.QUOTE_ALL)
        wr.writerows(thelist)
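As a quick check of the writerows approach, here is a self-contained run with two sample rows (the rows are made up for illustration; they have the same shape csvrows would have after split('\n')):

```python
import csv

# Sample rows standing in for the scraped csvrows list
rows = [
    ['NAME', 'ICO DATE', 'ICO PRICE', 'CURR. PRICE', '24 HR ROI'],
    ['Stratis', '06/20/16', '$0.007', '$7.480', '+38.80%'],
]

with open('ICO.csv', 'w', newline='') as myfile:
    wr = csv.writer(myfile, quoting=csv.QUOTE_ALL)
    wr.writerows(rows)  # one call writes every row

# Reading it back shows one list per row, one item per cell
with open('ICO.csv', newline='') as myfile:
    print(list(csv.reader(myfile)))
```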
Try this code:
import csv
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait as wait

csvrows = []

def get_css_sel(selector):
    posts = browser.find_elements_by_css_selector(selector)
    for post in posts:
        print(post.text)
        csvrows.append(post.text)

browser = webdriver.Chrome(executable_path=r'//Users/Pranavtadepalli/Downloads/chromedriver')
browser.get("https://icostats.com")
wait(browser, 20).until(EC.presence_of_element_located((By.CSS_SELECTOR, "#app > div > div.container-0-16 > div.table-0-20 > div.tbody-0-21 > div:nth-child(2) > div:nth-child(8)")))
get_css_sel("#app > div > div.container-0-16 > div.table-0-20 > div.tableheader-0-50")  # fetch header of table
get_css_sel("#app > div > div.container-0-16 > div.table-0-20 > div.tbody-0-21 > div")  # fetch rows of table

# Join each row's newline-separated text with commas, one line per row
new = [",".join(elem.split("\n")) for elem in csvrows]
with open("csvfile.csv", 'w') as newfile:
    for elem in new:
        newfile.write(elem + '\n')
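One caveat with the manual ",".join() approach in this second snippet: it breaks as soon as a field itself contains a comma, whereas csv.writer quotes such fields automatically. A small sketch (the token name here is invented to demonstrate the problem):

```python
import csv
import io

# A field containing a comma breaks naive ",".join() output
row = ['Token, Inc.', '06/20/16', '$0.007']

print(",".join(row))  # Token, Inc.,06/20/16,$0.007  -> parses as four cells

buf = io.StringIO()
csv.writer(buf).writerow(row)  # default QUOTE_MINIMAL quotes the comma-bearing field
print(buf.getvalue())  # "Token, Inc.",06/20/16,$0.007
```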