
Get website's contents from a list of URLs in a text-file

I have a list of URLs in a text file as below:

File URL.txt

https://url2.html
https://url3.html
...
https://urln.html

I want to get the content of those URLs into a text file, line by line, as shown below.

Expected file Content.txt:

Content of web from url2.html
Content of web from url3.html
...
Content of web from urln.html

Please help me find a solution to this problem. Can I use Python or Java code for this?

Thank you for your consideration!

Your question is a bit unclear, but I will assume for now that you want to read a single line from a text file hosted online at a given URL. If this is not what you wanted to know, please let me know and I will do my best to help you further. Anyhow, here is a simple way of doing this in pure Java using java.io.InputStreamReader and java.net.URL#openStream():

/**
 * Reads a text file from a URL and returns the first line as a string.
 * @param url web location of the text file to read
 * @return the first line, or {@code null} if an error occurred
 */
static String downloadStringLine(java.net.URL url) {

    // try-with-resources closes the streams even if reading fails
    try (java.io.InputStreamReader stream = new java.io.InputStreamReader(url.openStream());
         java.io.BufferedReader reader = new java.io.BufferedReader(stream)) {
        return reader.readLine();
    }
    catch (java.io.IOException e) {
        System.out.printf("Unable to download string from %s%n", url);
        return null;
    }
}
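
For completeness, here is a minimal usage sketch (it reuses the sample text-file URL from the longer example further below; any plain-text URL would do):

public static void main(String[] args) throws java.net.MalformedURLException {
    // Prints the first line of the remote text file, or "null" if the download failed
    java.net.URL url = new java.net.URL("https://www.w3.org/TR/PNG/iso_8859-1.txt");
    System.out.println(downloadStringLine(url));
}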

EDIT: Since you wanted a way to read all the text content from a URL, here is how to do that by iterating over the lines of a BufferedReader and storing them in a local text file using PrintWriter:

public class Main {

/**
 * Reads text-based content from the given url and writes it to a file
 * @param url web location of the content to store
 * @param file local file the content is written to
 */
private static void storeURLContent(java.net.URL url, java.io.File file) {

    // try-with-resources closes the reader and writer even if an exception is thrown
    try (java.io.BufferedReader reader = new java.io.BufferedReader(
             new java.io.InputStreamReader(url.openStream()));
         java.io.PrintWriter writer = new java.io.PrintWriter(file)) {

        System.out.println("Reading contents of " + url);
        java.util.Iterator<String> iter = reader.lines().iterator();
        while (iter.hasNext()) {
            writer.println(iter.next());
        }
        System.out.println("Done, contents have been saved to " + file.getPath());
    }
    catch (java.io.IOException e) {
        e.printStackTrace();
    }
}

public static void main(String[] args) {

    try {
        java.net.URL url = new java.net.URL("https://www.w3.org/TR/PNG/iso_8859-1.txt");
        java.io.File file = new java.io.File("contents.txt");

        storeURLContent(url, file);
    }
    catch (java.net.MalformedURLException e) {
        e.printStackTrace();
    }
}

}
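
To tie this back to the original question — a list of URLs in URL.txt whose contents should end up in Content.txt — a minimal sketch in the same style could loop over the input file and reuse the reading logic above. The file names are the ones from the question, each page's raw text is appended one after another, and error handling is deliberately kept simple:

public class BatchDownload {

    public static void main(String[] args) {
        // Read URL.txt line by line and write each page's text into a single Content.txt
        try (java.io.BufferedReader urls = new java.io.BufferedReader(new java.io.FileReader("URL.txt"));
             java.io.PrintWriter writer = new java.io.PrintWriter("Content.txt")) {

            String line;
            while ((line = urls.readLine()) != null) {
                if (line.trim().isEmpty()) continue;   // skip blank lines
                java.net.URL url = new java.net.URL(line.trim());
                try (java.io.BufferedReader page = new java.io.BufferedReader(
                         new java.io.InputStreamReader(url.openStream()))) {
                    page.lines().forEach(writer::println);
                }
            }
        }
        catch (java.io.IOException e) {
            e.printStackTrace();
        }
    }
}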

You can try out the following Python script.

import requests

filepath = 'url.txt'
cnt = 0
with open(filepath) as fp, open("content.txt", "w") as f:
    for line in fp:
        file_url = line.strip()   # drop the trailing newline
        if not file_url:
            continue
        cnt = cnt + 1
        f.write("Content of web from url%d.html\n" % cnt)
        r = requests.get(file_url)
        f.write(r.text)            # r.text is the decoded response body; r.content is raw bytes

Thank you everyone for helping. I got an answer from my friend, and this is exactly what I want.

I am happy to receive your support. Best regards.

import requests, sys, webbrowser, bs4
import codecs

def get_content(link):
  # Download the page and concatenate the text of all <p> tags
  page = requests.get(link)
  soup = bs4.BeautifulSoup(page.content, 'html.parser')
  all_p = soup.find_all('p')
  content = ''
  for p in all_p:
    content += p.get_text().strip('\n')
  return content

in_path = "link.txt"
out_path = "outputData.txt"

# One URL per input line; each page's content goes on its own output line
with open(in_path, 'r') as fin:
  links = fin.read().splitlines()
with open(out_path, 'w') as fout:
  for link in links:
    fout.write(get_content(link) + '\n')
