
How to web-scrape a password-protected website

I have a website from which I need to scrape some data (the website is https://www.merriam-webster.com/ and I want to scrape the saved words).

This website is password-protected, and I also think there is some JavaScript involved that I don't understand (I believe certain elements are rendered by the browser, since they don't show up when I wget the HTML).

I currently have a working solution using Selenium, but it requires Firefox to be open, and I would really like a solution that runs as a console-only program in the background.

How would I achieve this, ideally using Python's requests library and as few additional third-party libraries as possible?

Here is the code for my selenium solution:

from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
import json

# Create new driver
browser = webdriver.Firefox()
browser.get('https://www.merriam-webster.com/login')

# Find fields for email and password
username = browser.find_element(By.ID, "ul-email")
password = browser.find_element(By.ID, "ul-password")
# Find button to log in
send = browser.find_element(By.ID, "ul-login")
# Send username and password
username.send_keys("username")
password.send_keys("password")

# Wait for accept cookies button to appear and click it
WebDriverWait(browser, 20).until(EC.element_to_be_clickable((By.CLASS_NAME, "accept-cookies-button"))).click()
# Click the login button
send.click()

# Find button to go to saved words
WebDriverWait(browser, 20).until(EC.element_to_be_clickable((By.CLASS_NAME, "ul-favorites"))).click()


words = {}
# Now logged in
# Loop over pages of saved words
for i in range(2):
    print("Now on page " + str(i+1))
    # Find next page button
    nextpage = browser.find_element(By.CLASS_NAME, "ul-page-next")
    # Wait for the next page button to be clickable
    WebDriverWait(browser, 20).until(EC.element_to_be_clickable((By.CLASS_NAME, "ul-page-next")))

    # Find all the words on the page
    for word in browser.find_elements(By.CLASS_NAME, "item-headword"):
        # Add the href to the dictionary
        words[word.get_attribute("innerHTML")] = word.get_attribute("href")
    # Navigate to the next page
    nextpage.click()

browser.close()

# Write the words to a JSON file
with open("output.json", "w", encoding="utf-8") as file:
    file.write(json.dumps(words, indent=4))

If you want to use the requests module, you need to use a session.

To initialise a session you do:

import requests

session_requests = requests.Session()

Then you need a payload with the username and password:

payload = {
    "username":<USERNAME>,
    "password":<PASSWORD>}

Then to log in you do:

result = session_requests.post(
    login_url, 
    data = payload, 
    headers = dict(referer=login_url)
)

Now your session should be logged in, so to access any other password-protected page you use the same session:

result = session_requests.get(
    url, 
    headers = dict(referer = url)
)

Then you can use result.content (or result.text for decoded text) to view the content of that page.
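Putting the steps above together, here is a minimal sketch of the whole flow. The form field names "username"/"password" and the URLs are assumptions — inspect the site's actual login form to find the real action URL and input names:

```python
import requests

def fetch_protected_page(login_url, target_url, username, password):
    """Log in once, then reuse the session's cookies for protected pages.

    The field names "username"/"password" are assumptions; check the
    site's login form for the real input names.
    """
    session = requests.Session()
    payload = {"username": username, "password": password}
    login = session.post(login_url, data=payload, headers={"referer": login_url})
    login.raise_for_status()
    # The session now carries any auth cookies the server set,
    # so subsequent requests made through it are "logged in".
    return session.get(target_url, headers={"referer": target_url})
```

The key design point is that a Session persists cookies between requests; a plain requests.get() would start from scratch every time and never appear logged in.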

EDIT: if your site uses a CSRF token, you will need to include it in the payload. To get the token, first GET the login page with the session (so that result.text below contains its HTML), then replace the "payload" section with:

from lxml import html

tree = html.fromstring(result.text)
# You may need to inspect the page manually to find how your CSRF token is specified.
authenticity_token = tree.xpath("//input[@name='csrfmiddlewaretoken']/@value")[0]

payload = {
    "username":<USERNAME>,
    "password":<PASSWORD>,
    "csrfmiddlewaretoken":authenticity_token
    }
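Since the question asked for as few third-party libraries as possible, the same token extraction can also be done without lxml, using the standard library's html.parser. This is a sketch assuming the token sits in a hidden &lt;input&gt; on the login page (the field name csrfmiddlewaretoken is site-specific):

```python
from html.parser import HTMLParser

class CSRFTokenFinder(HTMLParser):
    """Collects the value attribute of the first <input> with a given name."""

    def __init__(self, field_name):
        super().__init__()
        self.field_name = field_name
        self.token = None

    def handle_starttag(self, tag, attrs):
        if tag == "input" and self.token is None:
            attrs = dict(attrs)
            if attrs.get("name") == self.field_name:
                self.token = attrs.get("value")

def extract_csrf_token(page_html, field_name="csrfmiddlewaretoken"):
    # Feed the login page's HTML through the parser and return the token.
    finder = CSRFTokenFinder(field_name)
    finder.feed(page_html)
    return finder.token

# Offline demo on a minimal form snippet:
sample = '<form><input type="hidden" name="csrfmiddlewaretoken" value="abc123"></form>'
print(extract_csrf_token(sample))  # abc123
```

You would call extract_csrf_token(result.text) on the login page fetched with the session, then add the returned value to the payload as shown above.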
