
Exporting continuous data to a JSON file

I wrote a script for web scraping, and it scrapes the data successfully. The only problem is with exporting the data to a JSON file:

import json

def scrape_post_info(url):
    content = get_page_content(url)
    title, description, post_url = get_post_details(content, url)
    job_dict = {}
    job_dict['title'] = title
    job_dict['Description'] = description
    job_dict['url'] = post_url

    # here the JSON mechanism
    json_job = json.dumps(job_dict)
    with open('data.json', 'r+') as f:
        f.write("[")
        f.seek(0)
        f.write(json_job)
        txt = f.readline()
        if txt.endswith("}"):
            f.write(",")

def crawl_web(url):
    while True:
        post_url = get_post_url(url)
        for urls in post_url:
            scrape_post_info(urls)

# Execute the main function 'crawl_web'
if __name__ == '__main__':
    crawl_web('www.examp....com')

The data is exported to the JSON file, but it is not valid JSON. I expect the data to look like this:

[
{
    "title": "this is title",
    "Description": " Fendi is an Italian luxury labelarin. ",
    "url": "https:/~"
},

{
    "title": " - Furrocious Elegant Style", 
    "Description": " the Italian luxare vast. ", 
    "url": "https://www.s"
},

{
    "title": "Rome, Fountains and Fendi Sunglasses",
    "Description": " Fendi started off as a store. ",
    "url": "https://www.~"
},

{
    "title": "Tipsnglasses",
    "Description": "Whether irregular orn season.", 
    "url": "https://www.sooic"
}

]

How can I achieve this?

How about:

import json

def scrape_post_info(url):
    content = get_page_content(url)
    title, description, post_url = get_post_details(content, url)
    return {"title": title, "Description": description, "url": post_url}


def crawl_web(url):
    while True:
        jobs = []
        post_urls = get_post_url(url)
        # use a distinct loop variable so the outer 'url' argument is not shadowed
        for post_url in post_urls:
            jobs.append(scrape_post_info(post_url))
            with open("data.json", "w") as f:
                # json.dumps only builds a string; json.dump writes it to the file
                json.dump(jobs, f, indent=4)


# Execute the main function 'crawl_web'
if __name__ == "__main__":
    crawl_web("www.examp....com")

Note that this will rewrite your entire file on each iteration of "post_urls", so it might become quite slow with large files and slow I/O.

Depending on how long your job is running and how much memory you have, you might want to move the file writing out of the for loop, and only write it out once.
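
For example, here is a minimal sketch of that variant, reusing the same get_post_url and scrape_post_info helpers from above and assuming the crawl terminates at some point instead of looping forever:

import json

def crawl_web(url):
    jobs = []
    for post_url in get_post_url(url):
        jobs.append(scrape_post_info(post_url))
    # Write the accumulated list once, after all posts have been scraped
    with open("data.json", "w") as f:
        json.dump(jobs, f, indent=4)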

Note: if you really want streaming JSON writes, you might want to look at something like this package: https://pypi.org/project/jsonstreams/ . However, I'd suggest choosing another format such as CSV, which is much better suited to streaming writes.
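
For instance, a rough sketch of the CSV approach using the standard library's csv module (again assuming the same helpers as above); each row is written as soon as it is scraped, so nothing ever has to be rewritten:

import csv

def crawl_web(url):
    with open("data.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["title", "Description", "url"])
        writer.writeheader()
        for post_url in get_post_url(url):
            # each dict returned by scrape_post_info becomes one CSV row
            writer.writerow(scrape_post_info(post_url))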
