
LinkedIn profile name scraping

I have been trying to scrape only the profile names from a bunch of LinkedIn URLs that I have. I am using bs4 (BeautifulSoup) with Python, but no matter what I do, it returns an empty list. What is happening?

import requests
from bs4 import BeautifulSoup
import numpy as np
import pandas as pd
import re
r1 = requests.get("https://www.linkedin.com/in/agazdecki/")
coverpage = r1.content
soup1 = BeautifulSoup(coverpage, 'html5lib')
name_container = soup1.find_all("li", class_="inline t-24 t-black t-normal break-words")
print(name_container)

If you try to load the page without JavaScript, you will see that the element you are looking for doesn't exist. In other words, the whole LinkedIn page is rendered with JavaScript (like a single-page application). BeautifulSoup is actually working as expected: it parses the page it receives, which contains only the JavaScript bootstrap code, not the rendered page you expected.

>>> coverpage = r1.content
>>> coverpage
b'<html><head>\n<script type="text/javascript">\nwindow.onload =
function() {\n  // Parse the tracking code from cookies.\n  var trk =
"bf";\n  var trkInfo = "bf";\n  var cookies = document.cookie.split(";
");\n  for (var i = 0; i < cookies.length; ++i) {\n    if
((cookies[i].indexOf("trkCode=") == 0) && (cookies[i].length > 8)) {\n
 trk = cookies[i].substring(8);\n    }\n    else if
((cookies[i].indexOf("trkInfo=") == 0) && (cookies[i].length > 8)) {\n
 trkInfo = cookies[i].substring(8);\n    }\n  }\n\n  if
(window.location.protocol == "http:") {\n    // If "sl" cookie is set,
redirect to https.\n    for (var i = 0; i < cookies.length; ++i) {\n
 if ((cookies[i].indexOf("sl=") == 0) && (cookies[i].length > 3)) {\n
 window.location.href = "https:" +
window.location.href.substring(window.location.protocol.length);\n
 return;\n      }\n    }\n  }\n\n  // Get the new domain. For international
domains such as\n  // fr.linkedin.com, we convert it to www.linkedin.com\n
 var domain = "www.linkedin.com";\n  if (domain != location.host) {\n
 var subdomainIndex = location.host.indexOf(".linkedin");\n    if
(subdomainIndex != -1) {\n      domain = "www" +
location.host.substring(subdomainIndex);\n    }\n  }\n\n
 window.location.href = "https://" + domain + "/authwall?trk=" + trk +
"&trkInfo=" + trkInfo +\n      "&originalReferer=" +
document.referrer.substr(0, 200) +\n      "&sessionRedirect=" +
encodeURIComponent(window.location.href);\n}\n</script>\n</head></html>'
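The empty list from the question can be reproduced offline. A minimal sketch, using a stand-in for the script-only response shown above (the stand-in HTML is an assumption for illustration, not LinkedIn's actual markup):

```python
from bs4 import BeautifulSoup

# Stand-in for the raw response: only a redirect <script>, no profile markup
raw = '<html><head><script>window.location.href = "/authwall";</script></head></html>'
soup = BeautifulSoup(raw, "html.parser")

# find_all returns an empty list because the <li> never existed in the served HTML
matches = soup.find_all("li", class_="inline t-24 t-black t-normal break-words")
print(matches)  # []
```

This is exactly what happens against the live page: the parser is fine, the element simply isn't in the HTML it was given.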

You could try to use something like Selenium.

  1. First mistake: you are using requests to fetch the page, but you must be logged in first, so you need to use a session.

  2. Second mistake: you are using a CSS selector to get an element that is dynamically generated by JavaScript and rendered by the browser. If you view the source code of the page, you won't find that li tag, that class, or the profile name anywhere except inside a code tag, in a JSON object.

Assuming you are logged in with a session:

import requests, re, json
from bs4 import BeautifulSoup

# A Session keeps cookies between requests; requests.Session must be
# instantiated before calling .get() on it.
session = requests.Session()
r1 = session.get("https://www.linkedin.com/in/agazdecki/",
                 headers={"User-Agent": "Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36"})
soup = BeautifulSoup(r1.content, 'html.parser')

# The profile data lives in a JSON blob inside a <code> tag
info_tag = soup.find('code', string=re.compile('"data":{"firstName":'))
data = json.loads(info_tag.text)

first_name = data['data']['firstName']
last_name = data['data']['lastName']
occupation = data['data']['occupation']
print('First Name :', first_name)
print('Last Name :', last_name)
print('occupation :', occupation)

Output:

First Name : Andrew
Last Name : Gazdecki
occupation : Chief Revenue Officer @ Spiff. Inc. 30 under 30 Entrepreneur.
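The `<code>`-tag extraction above can be exercised offline with a stand-in for the kind of JSON blob LinkedIn embeds (the structure below is simplified from the answer's output, not the live payload):

```python
import json
import re
from bs4 import BeautifulSoup

# Simplified stand-in for the JSON blob embedded in a <code> tag
html = '<code>{"data":{"firstName":"Andrew","lastName":"Gazdecki","occupation":"CRO"}}</code>'
soup = BeautifulSoup(html, "html.parser")

# Locate the <code> tag whose text contains the profile JSON, then parse it
tag = soup.find("code", string=re.compile('"data":{"firstName":'))
data = json.loads(tag.text)
print(data["data"]["firstName"], data["data"]["lastName"])  # Andrew Gazdecki
```

The regex only needs to match a distinctive substring of the blob; on the live page there are many `<code>` tags, and this filter picks out the one holding the profile data.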

I recommend scraping the data using Selenium.
Download the Chrome WebDriver from here

from selenium import webdriver

driver = webdriver.Chrome("Path to your Chrome WebDriver")

# Log in using the WebDriver so the profile page is served fully rendered
driver.get('https://www.linkedin.com/login?trk=guest_homepage-basic_nav-header-signin')
username = driver.find_element_by_id('username')
username.send_keys('your email_id here')
password = driver.find_element_by_id('password')
password.send_keys('your password here')
sign_in_button = driver.find_element_by_xpath('//*[@type="submit"]')
sign_in_button.click()

# Now the browser has an authenticated session, so the element exists in the DOM
driver.get('https://www.linkedin.com/in/agazdecki/')  # change profile URL here

name = driver.find_element_by_xpath('//li[@class = "inline t-24 t-black t-normal break-words"]').text
print(name)
