
Scraping a website and collecting all the hyperlinks using Python

I am making a program that can extract information from any website, but the program is not working.

Example: the website is naukri.com, and I want to collect all the hyperlinks on a page:

import urllib.request, urllib.parse, urllib.error
from bs4 import BeautifulSoup
import ssl

# Ignore SSL certificate errors
isc = ssl.create_default_context()
isc.check_hostname = False
isc.verify_mode = ssl.CERT_NONE

# Fetch the page (the URL must be one unbroken string)
url = 'https://www.naukri.com/job-listings-Python-Developer-Cloud-Analogy-Softech-Pvt-Ltd-Noida-Sector-63-Noida-1-to-2-years-250718003152src=jobsearchDesk&sid=15325422374871&xp=1&px=1&qp=python%20developer&srcPage=s'
html = urllib.request.urlopen(url, context=isc).read()  # avoid calling this 'open'; that shadows the built-in
soup = BeautifulSoup(html, 'html.parser')

# Find all <a> tags and print their href attributes
tags = soup('a')

for tag in tags:
    print(tag.get('href', None))

I would use requests and bs4. I was able to get this to work and I think it has the desired outcome. Try this:

import requests
from bs4 import BeautifulSoup

url = 'https://www.naukri.com/job-listings-Python-Developer-Cloud-Analogy-Softech-Pvt-Ltd-Noida-Sector-63-Noida-1-to-2-years-250718003152src=jobsearchDesk&sid=15325422374871&xp=1&px=1&qp=python%20developer&srcPage=s'
response = requests.get(url)
page = response.text
soup = BeautifulSoup(page, 'html.parser')

# Only match <a> tags that actually have an href attribute
links = soup.find_all('a', href=True)

for each in links:
    print(each.get('href'))
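
Many of the hrefs printed this way are relative (e.g. /jobs-in-delhi), so if you want absolute URLs you can resolve each one against the page URL with urllib.parse.urljoin. Here is a minimal sketch building on the same page; the set is only there to drop duplicate links, and it is an optional choice rather than part of the original answer:

from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

url = 'https://www.naukri.com/job-listings-Python-Developer-Cloud-Analogy-Softech-Pvt-Ltd-Noida-Sector-63-Noida-1-to-2-years-250718003152src=jobsearchDesk&sid=15325422374871&xp=1&px=1&qp=python%20developer&srcPage=s'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')

# Resolve every href against the page URL and drop duplicates with a set
absolute_links = {urljoin(url, a['href']) for a in soup.find_all('a', href=True)}

for link in sorted(absolute_links):
    print(link)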
