
Unable to rectify - ValueError: unknown url type: Link

I am currently running this code to scrape article URL links into a CSV file, and then to read those URLs back from the CSV file and scrape the corresponding information into a text file.

I am able to scrape the links into the CSV file, but I am unable to read the CSV file back to scrape the further information (the text file is never created), and I encounter a ValueError.

import csv
from lxml import html
from time import sleep
import requests
from bs4 import BeautifulSoup
import urllib
import urllib2 
from random import randint

outputFile = open("All_links.csv", 'wb')
fileWriter = csv.writer(outputFile)

fileWriter.writerow(["Link"])
#fileWriter.writerow(["Sl. No.", "Page Number", "Link"])

url1 = 'https://www.marketingweek.com/page/'
url2 = '/?s=big+data'

sl_no = 1

#iterating from page 1 through page 360
for i in xrange(1, 361):

    #generating final url to be scraped using page number
    url = url1 + str(i) + url2

    #Fetching page
    response = requests.get(url)
    sleep(randint(10, 20))
    #using html parser
    htmlContent = html.fromstring(response.content)

    #Capturing all 'a' tags under h2 tag with class 'hentry-title entry-title'
    page_links = htmlContent.xpath('//div[@class = "archive-constraint"]//h2[@class = "hentry-title entry-title"]/a/@href')
    for page_link in page_links:
        print page_link
        fileWriter.writerow([page_link])
        sl_no += 1

with open('All_links.csv', 'rb') as f1:
    f1.seek(0)
    reader = csv.reader(f1)

    for line in reader:
        url = line[0]       
        soup = BeautifulSoup(urllib2.urlopen(url))


        with open('LinksOutput.txt', 'a+') as f2:
            for tag in soup.find_all('p'):
                f2.write(tag.text.encode('utf-8') + '\n')

This is the error I encounter:

  File "c:\users\rrj17\documents\visual studio 2015\Projects\webscrape\webscrape\webscrape.py", line 47, in <module>
    soup = BeautifulSoup(urllib2.urlopen(url))
  File "C:\Python27\lib\urllib2.py", line 154, in urlopen
    return opener.open(url, data, timeout)
  File "C:\Python27\lib\urllib2.py", line 421, in open
    protocol = req.get_type()
  File "C:\Python27\lib\urllib2.py", line 283, in get_type
    raise ValueError, "unknown url type: %s" % self.__original
ValueError: unknown url type: Link

Requesting some help on this.

Try skipping the first line in your csv file... you're likely unknowingly trying to parse the header.
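As a quick check, you can reproduce the error in isolation: the first cell in the file is the literal header text "Link" written by fileWriter.writerow(["Link"]), and that string has no scheme, so urlopen refuses it. (Sketch in Python 3, where urllib.request.urlopen is the equivalent of Python 2's urllib2.urlopen; the behaviour is the same in both versions.)

```python
# Python 3 sketch; urllib.request.urlopen replaces Python 2's urllib2.urlopen.
from urllib.request import urlopen

try:
    # "Link" is the CSV header cell, not a URL: it has no scheme
    # (http://, https://, ...), so urlopen cannot determine the URL type.
    urlopen("Link")
except ValueError as e:
    print(e)  # unknown url type: 'Link'
```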

with open('All_links.csv', 'rb') as f1:
    reader = csv.reader(f1)
    next(reader) # read the header and send it to oblivion

    for line in reader: # NOW start reading
        ...

You also don't need f1.seek(0), since f1 automatically points to the start of the file when opened in read mode.
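Putting the fix together, here is a minimal self-contained sketch of the reading side (Python 3 syntax, with an in-memory CSV standing in for All_links.csv so it runs standalone): next(reader) consumes the header row, and only the actual URLs are left for the loop.

```python
import csv
import io

# Stand-in for All_links.csv: a "Link" header row followed by data rows.
# The two URLs below are illustrative placeholders, not real article links.
fake_csv = io.StringIO(
    "Link\r\n"
    "https://www.marketingweek.com/a\r\n"
    "https://www.marketingweek.com/b\r\n"
)

reader = csv.reader(fake_csv)
next(reader)  # discard the "Link" header row so it is never treated as a URL

urls = [line[0] for line in reader]
print(urls)  # ['https://www.marketingweek.com/a', 'https://www.marketingweek.com/b']
```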
