
I'm facing a problem with my web scraping code and I don't really know what the problem is

I'm facing a problem with my web scraping code and I don't really know what the problem is. Could anyone help me, please? This code is used to scrape data from a jobs website; I used Python and some libraries such as BeautifulSoup.

import csv
import requests
from bs4 import BeautifulSoup
from itertools import zip_longest

job_titles = []
company_names = []
locations = []
links = []
salaries = []
#using requests to fetch the URL :
result = requests.get('https://wuzzuf.net/search/jobs/?q=python&a=hpb')

#saving page's content/markup :
src = result.content

#create soup object to parse content 
soup = BeautifulSoup(src, 'lxml')
#print(soup)

#Now we're looking for the elements that contain the info we need (job title, job skills, company name, location)
job_title = soup.find_all("h2",{"class":"css-m604qf"})
company_name = soup.find_all("a", {"class": "css-17s97q8"})
location = soup.find_all("span", {"class": "css-5wys0k"})

#Making a loop over returned lists to extract needed info into other lists 
for I in range(len(job_title)):
    job_titles.append(job_title[I].text)
    links.append(job_title[I].find("a").attrs['href'])
    company_names.append(company_name[I].text)
    locations.append(location[I].text)
for link in links :
    results = requests.get(link)
    src = results.content
    soup = BeautifulSoup(src, 'lxml')
    salary = soup.find("a", {"class": "css-4xky9y"})
    salaries.append(salary.text)
#Creating a CSV file to store our values 
file_list = [job_titles, company_names, locations, links, salaries]
exported = zip_longest(*file_list)
with open("C:\\Users\\NOUFEL\\Desktop\\scraping\\wazzuf\\jobs.csv", "w") as myfile :
    wr = csv.writer(myfile)
    wr.writerow(["job title", "company name", "location", "links", "salaries"])
    wr.writerows(exported)

The problem is:

PS C:\Users\NOUFEL> & C:/Users/NOUFEL/AppData/Local/Microsoft/WindowsApps/python3.10.exe c:/Users/NOUFEL/Desktop/ScrapeWuzzuf.py
Traceback (most recent call last):
  File "c:\Users\NOUFEL\Desktop\ScrapeWuzzuf.py", line 33, in <module>
    results = requests.get(link)
  File "C:\Users\NOUFEL\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\requests\api.py", line 75, in get
    return request('get', url, params=params, **kwargs)
  File "C:\Users\NOUFEL\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\requests\api.py", line 61, in request
    return session.request(method=method, url=url, **kwargs)
  File "C:\Users\NOUFEL\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\requests\sessions.py", line 515, in request
    prep = self.prepare_request(req)
  File "C:\Users\NOUFEL\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\requests\sessions.py", line 443, in prepare_request
    p.prepare(
  File "C:\Users\NOUFEL\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\requests\models.py", line 318, in prepare
    self.prepare_url(url, params)
  File "C:\Users\NOUFEL\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\requests\models.py", line 392, in prepare_url
    raise MissingSchema(error)
requests.exceptions.MissingSchema: Invalid URL '/jobs/p/1XOMELtShdah-Flask-Python-Backend-Developer-Virtual-Worker-Now-Cairo-Egypt?o=1&l=sp&t=sj&a=python|search-v3|hpb': No scheme supplied. Perhaps you meant http:///jobs/p/1XOMELtShdah-Flask-Python-Backend-Developer-Virtual-Worker-Now-Cairo-Egypt?o=1&l=sp&t=sj&a=python|search-v3|hpb?

thanks in advance

If you read the error message:

requests.exceptions.MissingSchema: Invalid URL '/jobs/p/1XOMELtShdah-Flask-Python-Backend-

or if you printed link, you would see that you get a relative link like /jobs/p/1XOMELtShdah-Flask-Python-... and you have to add https://wuzzuf.net at the beginning to get an absolute link.

results = requests.get("https://wuzzuf.net" + link)
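Alternatively, urllib.parse.urljoin from the standard library can build the absolute URL from the page URL and the relative href. A minimal sketch, using the link from the error message as an example:

import requests
from urllib.parse import urljoin

base_url = "https://wuzzuf.net/search/jobs/?q=python&a=hpb"
link = "/jobs/p/1XOMELtShdah-Flask-Python-Backend-Developer-Virtual-Worker-Now-Cairo-Egypt?o=1&l=sp&t=sj&a=python|search-v3|hpb"

# urljoin resolves the relative href against the base page URL.
absolute_link = urljoin(base_url, link)
print(absolute_link)  # https://wuzzuf.net/jobs/p/1XOMELtShdah-Flask-Python-Backend-Developer-...
results = requests.get(absolute_link)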

To get the required data, you need to find a "container" selector that contains all the information about the job we need as child elements. In our case, this is the .css-1gatmva selector. Have a look at the SelectorGadget Chrome extension to easily pick selectors by clicking on the desired element in your browser (it doesn't always work perfectly).

Problems with parsing the site may arise because, when you request it, the site may decide the request comes from a bot. To prevent that, you need to send headers that contain a user-agent with the request; the site will then assume you're a regular user and return the information.

The request might also be blocked if you use requests with its default user-agent, which is python-requests. An additional step could be to rotate the user-agent, for example switching between PC, mobile, and tablet, as well as between browsers, e.g. Chrome, Firefox, Safari, Edge, and so on, as in the sketch below.
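A minimal sketch of per-request user-agent rotation, assuming a small pool of example strings (the exact user-agent values below are illustrative, not a definitive list):

import random
import requests

# Example user-agent strings for a desktop, a macOS and a mobile browser
# (values are illustrative; use real, up-to-date strings in practice).
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.102 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.6 Safari/605.1.15",
    "Mozilla/5.0 (iPhone; CPU iPhone OS 15_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.6 Mobile/15E148 Safari/604.1",
]

# Pick a different user-agent for each request.
headers = {"user-agent": random.choice(USER_AGENTS)}
response = requests.get("https://wuzzuf.net/search/jobs/", params={"q": "python"}, headers=headers, timeout=30)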

Check the code in an online IDE.

import requests, lxml, json
from bs4 import BeautifulSoup

# https://docs.python-requests.org/en/master/user/quickstart/#passing-parameters-in-urls
params = {
    "q": "python"   # query
}

# https://docs.python-requests.org/en/master/user/quickstart/#custom-headers
# https://www.whatismybrowser.com/detect/what-is-my-user-agent
headers = {
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.102 Safari/537.36"
}

html = requests.get("https://wuzzuf.net/search/jobs/", params=params, headers=headers, timeout=30)
soup = BeautifulSoup(html.text, "lxml")

data = []

for result in soup.select(".css-1gatmva"):                            
    title = result.select_one(".css-m604qf .css-o171kl").text         
    company_name = result.select_one(".css-17s97q8").text             
    adding_time = result.select_one(".css-4c4ojb, .css-do6t5g").text  
    location = result.select_one(".css-5wys0k").text                  
    employment = result.select_one(".css-1lh32fc").text               
    snippet = result.select_one(".css-1lh32fc+ div").text             

    data.append({
      "title" : title,
      "company_name" : company_name,
      "adding_time" : adding_time,
      "location" : location,
      "employment" : employment,
      "snippet" : snippet    
    })
    print(json.dumps(data, indent=2))

Example output

[
  {
    "title": "Python Developer For Job portal",
    "company_name": "Fekra Technology Solutions and Construction -",
    "adding_time": "24 days ago",
    "location": "Dokki, Giza, Egypt ",
    "employment": "Full TimeWork From Home",
    "snippet": "Experienced \u00b7 4+ Yrs of Exp \u00b7 IT/Software Development \u00b7 Engineering - Telecom/Technology \u00b7 backend \u00b7 Computer Science \u00b7 Django \u00b7 Flask \u00b7 Git \u00b7 Information Technology (IT) \u00b7 postgres \u00b7 Python"
  },
  {
    "title": "Senior Python Linux Engineer",
    "company_name": "El-Sewedy Electrometer -",
    "adding_time": "1 month ago",
    "location": "6th of October, Giza, Egypt ",
    "employment": "Full Time",
    "snippet": "Experienced \u00b7 3 - 5 Yrs of Exp \u00b7 IT/Software Development \u00b7 Engineering - Telecom/Technology \u00b7 Software Development \u00b7 Python \u00b7 C++ \u00b7 Information Technology (IT) \u00b7 Computer Science \u00b7 SQL \u00b7 Programming \u00b7 Electronics"
  }
]
[
  {
    "title": "Senior Python Developer",
    "company_name": "Trufla -",
    "adding_time": "2 days ago",
    "location": "Heliopolis, Cairo, Egypt ",
    "employment": "Full Time",
    "snippet": "Experienced \u00b7 4+ Yrs of Exp \u00b7 IT/Software Development \u00b7 Engineering - Telecom/Technology \u00b7 Agile \u00b7 APIs \u00b7 AWS \u00b7 Computer Science \u00b7 Git \u00b7 Linux \u00b7 Python \u00b7 REST"
  },
      # ...
]
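If you also want to save the results to a CSV file, as in the original script, a minimal sketch assuming the data list built above and an output file named jobs.csv:

import csv

# Column names match the dictionary keys used when building `data` above.
fieldnames = ["title", "company_name", "adding_time", "location", "employment", "snippet"]

# newline="" avoids blank lines between rows on Windows.
with open("jobs.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(data)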

You need to use results = requests.get("https://wuzzuf.net" + link):

for link in links :
    results = requests.get("https://wuzzuf.net"+link)
    src = results.content
    soup = BeautifulSoup(src, 'lxml')
    salary = soup.find("a", {"class": "css-4xky9y"})
    salaries.append(salary.text)
