
Want to get all links in a webpage using urllib.request

When I test it, it keeps printing out (None, 0) even though the url I used has several <a href= tags.

import urllib.request as ur
def getNextlink(url): 
    sourceFile = ur.urlopen(url)
    sourceText = sourceFile.read()
    page = str(sourceText)

    startLink = page.find('<a href=')
    if startLink == -1:
        return None, 0
    startQu = page.find('"', startLink)
    endQu = page.find('"', startQu+1)
    url = page[startQu +1:endQu]
    return url, endQu
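For reference, here is a minimal sketch (not part of the original question) of how the same find()-based scan could be continued past the first match to collect every link; decoding the bytes with utf-8 instead of wrapping them in str() is an assumption:

import urllib.request as ur

def getAllLinks(url):
    # decode the raw bytes instead of converting them with str()
    page = ur.urlopen(url).read().decode('utf-8', errors='replace')
    links = []
    pos = 0
    while True:
        startLink = page.find('<a href=', pos)
        if startLink == -1:
            return links
        startQu = page.find('"', startLink)
        endQu = page.find('"', startQu + 1)
        links.append(page[startQu + 1:endQu])
        pos = endQu  # continue scanning after the previous match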

You should use Beautiful Soup instead; it works smoothly together with requests for this requirement. An example is given below:

from bs4 import BeautifulSoup
import requests

def links(url):
    html = requests.get(url).content
    bsObj = BeautifulSoup(html, 'lxml')

    links = bsObj.find_all('a')
    finalLinks = set()
    for link in links:
        # some <a> tags have no href attribute, so guard the lookup
        if 'href' in link.attrs:
            finalLinks.add(link.attrs['href'])
    return finalLinks
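A quick usage example (the URL is a placeholder):

print(links('https://example.com'))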

Try This

import urllib.request

import re

# pass any url; the one below is a placeholder
url = "https://example.com"

urllist = re.findall(r"""<\s*a\s+href=["']([^=]+)["']""", urllib.request.urlopen(url).read().decode("utf-8"))

print(urllist)
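Note that the ([^=]+) capture group cannot match an = character, so any href containing one (for example /search?q=x) will be skipped entirely; a capture group such as ([^"']+) avoids this.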

Here is another solution:

from urllib.request import urlopen

url = ''
html = str(urlopen(url).read())

# scan for the start of each "<a " tag, then print everything up to the closing </a>
for i in range(len(html) - 3):
    if html[i] == '<' and html[i+1] == 'a' and html[i+2] == ' ':
        pos = html[i:].find('</a>')
        print(html[i: i+pos+4])

Define your url first. Hope this helps, and don't forget to upvote and accept.

How about one of these solutions?

import requests
from bs4 import BeautifulSoup

research_later = "giraffe"
goog_search = "https://www.google.co.uk/search?sclient=psy-ab&client=ubuntu&hs=k5b&channel=fs&biw=1366&bih=648&noj=1&q=" + research_later

r = requests.get(goog_search)
print(r)

soup = BeautifulSoup(r.text, "html.parser")
print(soup)

import requests
from bs4 import BeautifulSoup
r = requests.get("http://www.flashscore.com/soccer/netherlands/eredivisie/results/")
soup = BeautifulSoup(r.content, "html.parser")
htmltext = soup.prettify()
print(htmltext)

import requests
from bs4 import BeautifulSoup

url = "http://www.cricbuzz.com/cricket-stats/icc-rankings/batsmen-rankings"
r = requests.get(url)
soup = BeautifulSoup(r.content, "html.parser")

maindiv = soup.find_all("div", {"class": "text-center"})
for div in maindiv:
    print(div.text)
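Since the question asks for the links rather than the div text, a small extension of the same soup object could collect every href and resolve relative paths with urljoin (a sketch, not part of the original answer):

from urllib.parse import urljoin

# collect the href of every anchor tag, resolving relative URLs against the page URL
for a in soup.find_all("a", href=True):
    print(urljoin(url, a["href"]))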

Sometimes BeautifulSoup and requests are not what you want to use.

In some cases the website in question may block the requests library from scraping (you get a 403 response), so you have to use urllib.request instead.

Here is how you can get all links (hrefs) listed on a webpage that you are trying to scrape using urllib.request.

from urllib.request import Request, urlopen
import re

# get the full html code from a website, sending a User-Agent header
# so that the server does not reject the request with a 403
req = Request('https://www.your_url.com', headers={'User-Agent': 'Mozilla/5.0'})
html = urlopen(req).read().decode("utf-8")
print(html)

# create a list of all links/href tags
urllist = re.findall(r"href=[\"'](.*?)[\"']", html)

print(urllist)

# print each link on a separate line
for elem in urllist:
    print(elem)

In the code we use bytes.decode(x) with the chosen plaintext encoding x to convert the raw HTML bytes into a plaintext string. The standard encoding is utf-8; you may need to change it if the website you are trying to scrape uses a different encoding.
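For example, if a site declares ISO-8859-1 in its Content-Type header, the decode call changes accordingly (a minimal sketch; the URL is a placeholder):

from urllib.request import urlopen

raw = urlopen("https://www.your_url.com").read()  # raw bytes from the server
text = raw.decode("iso-8859-1")  # decode with the site's declared encoding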

We find the links with the help of regular expressions: calling re.findall(pattern, string) with the pattern href=["'](.*?)["'] on the plaintext string matches every href attribute but extracts only the URL text between the quotes, returning a list of the links contained in href tags.
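A quick demonstration of that pattern on a made-up sample string:

import re

sample = '<a href="https://example.com/a">A</a> <a href=\'/b\'>B</a>'
print(re.findall(r"href=[\"'](.*?)[\"']", sample))  # ['https://example.com/a', '/b']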

Try requests-html, which can parse the HTML so that we can search for any tag, class, or ID:

from requests_html import HTMLSession
session = HTMLSession()
r = session.get(url)
r.html.links

If you want the absolute links, use

r.html.absolute_links
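Both r.html.links and r.html.absolute_links return a set of strings, so duplicate links are removed automatically.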
