Unable to scrape names from the second page of a webpage when the URL remains unchanged

I'm trying to scrape different agency names from the second page of a webpage using the requests module. I can parse the names from its landing page by sending a GET request to that URL.

However, when it comes to accessing the names from its second page and onward, I need to send a POST request along with the appropriate parameters. I tried to mimic the POST request exactly the way I see it in dev tools, but all I get in return is the following:

<?xml version='1.0' encoding='UTF-8'?>
<partial-response id="j_id1"><redirect url="/ptn/exceptionhandler/sessionExpired.xhtml"></redirect></partial-response>

This is what I've tried:

import requests
from bs4 import BeautifulSoup
from pprint import pprint

link = 'https://www.gebiz.gov.sg/ptn/opportunity/BOListing.xhtml?origin=menu'
url = 'https://www.gebiz.gov.sg/ptn/opportunity/BOListing.xhtml'

with requests.Session() as s:
    s.headers['User-Agent'] = 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.104 Safari/537.36'
    r = s.get(link)
    soup = BeautifulSoup(r.text,"lxml")

    payload = {
        'contentForm': 'contentForm',
        'contentForm:j_idt171_windowName': '',
        'contentForm:j_idt187_listButton2_HIDDEN-INPUT': '',
        'contentForm:j_idt192_searchBar_INPUT-SEARCH': '',
        'contentForm:j_idt192_searchBarList_HIDDEN-SUBMITTED-VALUE': '',
        'contentForm:j_id135_0': 'Title',
        'contentForm:j_id135_1': 'Document No.',
        'contentForm:j_id136': 'Match All',
        'contentForm:j_idt853_select': 'ON',
        'contentForm:j_idt859_select': '0',
        'javax.faces.ViewState': soup.select_one('input[name="javax.faces.ViewState"]')['value'],
        'javax.faces.source': 'contentForm:j_idt902:j_idt955_2_2',
        'javax.faces.partial.event': 'click',
        'javax.faces.partial.execute': 'contentForm:j_idt902:j_idt955_2_2 contentForm:j_idt902',
        'javax.faces.partial.render': 'contentForm:j_idt902:j_idt955 contentForm dialogForm',
        'javax.faces.behavior.event': 'action',
        'javax.faces.partial.ajax': 'true'
    }

    s.headers['Referer'] = 'https://www.gebiz.gov.sg/ptn/opportunity/BOListing.xhtml?origin=menu'
    s.headers['Faces-Request'] = 'partial/ajax'
    s.headers['Origin'] = 'https://www.gebiz.gov.sg'
    s.headers['Host'] = 'www.gebiz.gov.sg'
    s.headers['Accept-Encoding'] = 'gzip, deflate, br'

    res = s.post(url,data=payload,allow_redirects=False)
    # soup = BeautifulSoup(res.text,"lxml")
    # for item in soup.select(".commandLink_TITLE-BLUE"):
    #     print(item.get_text(strip=True))
    print(res.text)

How can I parse names from the second page and onward of a webpage when the URL remains unchanged?

You can use Selenium to traverse between pages. The following code will allow you to do this.

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
import time


chrome_options = Options()
#chrome_options.add_argument("--headless")
#chrome_options.add_argument("user-agent=Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.104 Safari/537.36")


driver = webdriver.Chrome(executable_path="./chromedriver", options=chrome_options)
driver.get("https://www.gebiz.gov.sg/ptn/opportunity/BOListing.xhtml?origin=menu")

#check if a "Next" button exists; find_elements returns an empty list instead of raising
next_page = driver.find_elements_by_xpath("//input[starts-with(@value, 'Next')]")

#click the next button until it no longer appears (i.e. the last page is reached)
while next_page:
    time.sleep(5)
    click_btn = driver.find_elements_by_xpath("//input[starts-with(@value, 'Next')]")
    if not click_btn:
        break
    click_btn[0].click()
    time.sleep(5)
    next_page = driver.find_elements_by_xpath("//input[starts-with(@value, 'Next')]")

I have not added the code for extracting the Agency names. I presume it will not be difficult for you.
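That said, a minimal sketch of the extraction step might look like the following. It reuses the .commandLink_TITLE-BLUE selector from the commented-out BeautifulSoup code in the question, so treat that selector as an assumption; the snippet would go inside the loop above, after the first time.sleep(5) and before the click.

#sketch only: collect the agency/title links rendered on the current page
#the CSS class is taken from the question's commented-out code and is an assumption here
for link in driver.find_elements_by_css_selector(".commandLink_TITLE-BLUE"):
    print(link.text.strip())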

Make sure to install Selenium and download ChromeDriver. Also make sure to download the correct version of the driver; you can confirm your browser's version by viewing the 'About' section of your Chrome browser.
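If you want to double-check the versions programmatically, you can print the capabilities reported by the running session. This is only a small sketch; the exact key names vary between Selenium/ChromeDriver combinations, hence the defensive .get() calls.

#sketch: print the browser and driver versions reported by the session
print(driver.capabilities.get("browserVersion") or driver.capabilities.get("version"))
print(driver.capabilities.get("chrome", {}).get("chromedriverVersion"))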
