
Creating multiple text files with unique file names from scraped data

I took an introductory Python course this semester and am now working on a project. However, I don't really know what code to write to create multiple .txt files, each with a different file name.

I scraped all the terms and definitions from http://www.hogwartsishere.com/library/book/99/. The title of each .txt file should be the term, for example 'Aconite.txt', and the content of the file should be the term and its definition. Every term with its definition sits in a separate p-tag, and the term itself is a b-tag within the p-tag. Can I use this structure to write my code?
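
For reference, a minimal sketch of how that structure reads with BeautifulSoup; the HTML string below is a made-up stand-in for one entry on the page:

import bs4

# hypothetical stand-in for a single entry: the term is a <b> tag inside the <p> tag
html = '<p><b>Aconite</b> - Aconite is a plant with magical properties.</p>'
soup = bs4.BeautifulSoup(html, 'html.parser')

p = soup.find('p')
term = p.find('b').text.strip()  # 'Aconite' -> would become the file name
entry = p.text.strip()           # full 'term - definition' line -> file content
print(term, '|', entry)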

I suppose I will need a for-loop for this, but I don't really know where to start. I searched Stack Overflow and found several solutions, but they all contain code I am not familiar with and/or relate to a different issue.

This is what I have so far:

#!/usr/bin/env python

import os
import requests
import bs4

def download(url):
    r = requests.get(url) 
    html = r.text
    soup = bs4.BeautifulSoup(html, 'html.parser')
    terms_definition = []

    #for item in soup.find_all('p'): # narrow this selector down further
    items = soup.find_all("div", {"class" : "font-size-16 roboto"})
    for item in items:
        terms = item.find_all("p")
        for term in terms:
            #print(term)
            if term.text:  # skip empty paragraphs
                #print(term.text)
                #print("\n")
                term_split = term.text.split()
                print(term_split)

            if term.text is not None and len(term.text) > 1:
                if '-' in term.text.split():
                    print(term.text)
                    print('\n')
                if term.find('b'):  # real entries mark the term with a b-tag
                    terms_definition.append(term.text)
                print(terms_definition)

    return terms_definition

def create_url(start, end):
    list_url = []
    base_url = 'http://www.hogwartsishere.com/library/book/99/chapter/'
    for x in range(start, end):
        list_url.append(base_url + str(x))

    return list_url

def search_all_url(list_url):
    for url in list_url:
        download(url)

#write data into separate text files: the word in front of the dash becomes the title of the document, and the term plus its definition become the content of the file
#all terms and definitions are in separate p-tags; the title is a b-tag within the p-tag
#(a sketch of these two functions follows the code below)
def name_term(entry_text):
    pass  # should return the word in front of the dash


def text_files(entry_text):
    path_write = os.path.join('data', name_term(entry_text) + '.txt')  # file name comes from the scraped term

    with open(path_write, 'w') as f:
        f.write(entry_text)
#for loop? in front of dash = title / everything before and after dash = text (file content) / empty line = new file



if __name__ == '__main__':
    download('http://www.hogwartsishere.com/library/book/99/chapter/1')
    #list_url = create_url(1, 27)
    #search_all_url(list_url)
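
For the two stubs above, one possible sketch, assuming each entry's text has the shape 'Term - definition' as described earlier (entry_text is a made-up parameter name):

import os

def name_term(entry_text):
    # everything before the first ' - ' becomes the file name
    return entry_text.split(' - ')[0].strip()

def text_files(entry_text):
    os.makedirs('data', exist_ok=True)  # create the target folder if it does not exist yet
    path_write = os.path.join('data', name_term(entry_text) + '.txt')
    with open(path_write, 'w', encoding='utf-8') as f:
        f.write(entry_text)  # term and definition together as the file content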

Thanks in advance!

You can iterate over all pages (1-27) to get their content, then parse each page with bs4 and save the results to files:

import requests
import bs4
import re

for i in range(1, 28):  # range's upper bound is exclusive, so 28 covers chapters 1-27
    r = requests.get('http://www.hogwartsishere.com/library/book/99/chapter/{}/'.format(i)).text
    soup = bs4.BeautifulSoup(r, 'html.parser')
    items = soup.find_all("div", {"class": "font-size-16 roboto"})
    for item in items:
        terms = item.find_all("p")
        for term in terms:
            m = re.match('^(.*) -', term.text)
            if not m:  # skip paragraphs that do not follow the 'Term - definition' shape
                continue
            title = m.group(1).replace('/', '-')
            with open(title + '.txt', 'w', encoding='utf-8') as f:
                f.write(term.text)

Running this produces a separate .txt file for each term.

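Two details in the snippet are worth noting: the if not m: continue guard skips paragraphs that do not follow the 'Term - definition' pattern instead of crashing with an AttributeError, and .replace('/', '-') keeps slashes out of the file names, since open() would otherwise treat them as directory separators.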
