
Unable to write results in a csv file in some customized manner

I've created a script to parse singers and their concerning links, and actors and their concerning links, from different containers of a webpage. The script is doing fine. What I can't do is write the results in the csv file accordingly.

Webpage link

I've tried with:

import csv
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

base_url = 'https://www.hindigeetmala.net'
link = 'https://www.hindigeetmala.net/movie/2_states.htm'

res = requests.get(link)
soup = BeautifulSoup(res.text,"lxml")

with open("hindigeetmala.csv","w",newline="") as f:
    writer = csv.writer(f)
    writer.writerow(['singer_records','actor_records'])

    for item in soup.select("tr[itemprop='track']"):
        try:
            singers = [i.get_text(strip=True) for i in item.select("span[itemprop='byArtist']") if i.get_text(strip=True)]
        except Exception: singers = ""

        try:
            singer_links = [urljoin(base_url,i.get("href")) for i in item.select("a:has(> span[itemprop='byArtist'])") if i.get("href")]
        except Exception: singer_links = ""
        singer_records = [i for i in zip(singers,singer_links)]

        try:
            actors = [i.get_text(strip=True) for i in item.select("a[href^='/actor/']") if i.get("href")]
        except Exception: actors = ""
        try:
            actor_links = [urljoin(base_url,i.get("href")) for i in item.select("a[href^='/actor/']") if i.get("href")]
        except Exception: actor_links = ""
        actor_records = [i for i in zip(actors,actor_links)]
        song_name = item.select_one("span[itemprop='name']").get_text(strip=True)
        writer.writerow([singer_records,actor_records,song_name])
        print(singer_records,actor_records,song_name)

If I execute the script as is, this is the output I get.

When I try writer.writerow([*singer_records,*actor_records,song_name]) instead, I get this type of output, which writes only the first pair of tuples.

This is my expected output.

How can I write the results in the csv file with the names and their links, as in the third image?

PS: For brevity, all the output images represent the first column of the csv file.
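The two writerow variants behave differently because csv.writer converts whatever object occupies each cell to its str() representation. A minimal sketch of the difference, using hypothetical sample records (the names and links below are made up for illustration, not taken from the actual page):

```python
import csv
import io

# Hypothetical (name, link) records mirroring the scraper's output shape
singer_records = [("Singer A", "https://www.hindigeetmala.net/singer/a.htm")]
actor_records = [("Actor B", "https://www.hindigeetmala.net/actor/b.htm"),
                 ("Actor C", "https://www.hindigeetmala.net/actor/c.htm")]

buf = io.StringIO()
writer = csv.writer(buf)

# Passing the lists directly: each cell holds the repr of an entire
# list of tuples, e.g. "[('Singer A', 'https://...')]"
writer.writerow([singer_records, actor_records, "Song"])

# Star-unpacking: every tuple becomes its own cell, so rows have a
# varying number of columns depending on how many tuples each list holds
writer.writerow([*singer_records, *actor_records, "Song"])

print(buf.getvalue())
```

In the first row, each column stays aligned but contains Python reprs rather than clean text; in the second, the column positions shift from row to row, which is why the output looks like only some of the tuples were written.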

Based on SIM's feedback, I think this is what you are looking for (I just added a function for formatting your lists):

import csv
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

base_url = 'https://www.hindigeetmala.net'
link = 'https://www.hindigeetmala.net/movie/2_states.htm'

res = requests.get(link)
soup = BeautifulSoup(res.text, "lxml")


def merge_results(inpt):
    return [','.join(nested_items for nested_items in
                     [','.join("'" + tuple_item + "'" for tuple_item in item)
                      for item in inpt])]


with open("hindigeetmala.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(['singer_records', 'actor_records'])

    for item in soup.select("tr[itemprop='track']"):
        try:
            singers = [i.get_text(strip=True) for i in item.select(
                "span[itemprop='byArtist']") if i.get_text(strip=True)]
        except Exception:
            singers = ""

        try:
            singer_links = [urljoin(base_url, i.get("href")) for i in item.select(
                "a:has(> span[itemprop='byArtist'])") if i.get("href")]
        except Exception:
            singer_links = ""
        singer_records = [i for i in zip(singers, singer_links)]

        try:
            actors = [i.get_text(strip=True) for i in item.select(
                "a[href^='/actor/']") if i.get("href")]
        except Exception:
            actors = ""
        try:
            actor_links = [urljoin(base_url, i.get("href")) for i in item.select(
                "a[href^='/actor/']") if i.get("href")]
        except Exception:
            actor_links = ""
        actor_records = [i for i in zip(actors, actor_links)]
        song_name = item.select_one(
            "span[itemprop='name']").get_text(strip=True)
        writer.writerow(merge_results(singer_records) +
                        merge_results(actor_records)+[song_name])
        print(singer_records, actor_records, song_name)
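To see what merge_results produces on its own, here is a small usage sketch with hypothetical (name, link) tuples (the values are made up for illustration):

```python
# merge_results as defined in the answer above
def merge_results(inpt):
    return [','.join(nested_items for nested_items in
                     [','.join("'" + tuple_item + "'" for tuple_item in item)
                      for item in inpt])]

# Hypothetical records: each entry is a (name, link) tuple
records = [("Singer A", "https://www.hindigeetmala.net/singer/a.htm"),
           ("Singer B", "https://www.hindigeetmala.net/singer/b.htm")]

# Returns a one-element list holding a single quoted, comma-joined string,
# so csv.writer places all names and links in one cell of the row
print(merge_results(records))
```

Because the function returns a one-element list, merge_results(singer_records) + merge_results(actor_records) + [song_name] always yields exactly three cells per row, regardless of how many tuples each record list contains.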

