
Unable to scrape the name from the inner page of each result using requests

I've created a script in Python that uses POST HTTP requests to get the search results from a webpage. To populate the results, it is necessary to click on the fields sequentially, as shown here. A new page then opens, and this is how the result is populated there.

There are ten results on the first page, and the following script can parse them flawlessly.

What I wish to do now is use those results to reach their inner pages in order to parse the Sole Proprietorship Name (English) from there.

website address

I've tried so far with:

import re
import requests
from bs4 import BeautifulSoup

url = "https://www.businessregistration.moc.gov.kh/cambodia-master/service/create.html?targetAppCode=cambodia-master&targetRegisterAppCode=cambodia-br-soleproprietorships&service=registerItemSearch"

payload = {
    'QueryString': '0',
    'SourceAppCode': 'cambodia-br-soleproprietorships',
    'OriginalVersionIdentifier': '',
    '_CBASYNCUPDATE_': 'true',
    '_CBHTMLFRAG_': 'true',
    '_CBNAME_': 'buttonPush'
}

with requests.Session() as s:
    s.headers['User-Agent'] = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:74.0) Gecko/20100101 Firefox/74.0'
    res = s.get(url)
    # The GET redirects to a "view" URL; the async callbacks must be posted to the matching "update" endpoint
    target_url = res.url.split("&")[0].replace("view.", "update.")
    # Hidden identifiers required by the callback are embedded in the page's inline JavaScript
    node = re.findall(r"nodeW\d.+?-Advanced",res.text)[0].strip()
    payload['_VIKEY_'] = re.findall(r"viewInstanceKey:'(.*?)',", res.text)[0].strip()
    payload['_CBHTMLFRAGID_'] = re.findall(r"guid:(.*?),", res.text)[0].strip()
    payload[node] = 'N'
    payload['_CBNODE_'] = re.findall(r"Callback\('(.*?)','buttonPush", res.text)[2]
    payload['_CBHTMLFRAGNODEID_'] = re.findall(r"AsyncWrapper(W\d.+?)'",res.text)[0].strip()

    res = s.post(target_url,data=payload)
    soup = BeautifulSoup(res.content, 'html.parser')
    for item in soup.find_all("span", class_="appReceiveFocus")[3:]:
        print(item.text)

How can I parse the Name (English) from the inner page of each result using requests?

This is one of the ways you can parse the name from the site's inner page and then the email address from the Addresses tab. I added the function .get_email() only because I wanted to show how you can parse content from the different tabs.

import re
import requests
from bs4 import BeautifulSoup

url = "https://www.businessregistration.moc.gov.kh/cambodia-master/service/create.html?targetAppCode=cambodia-master&targetRegisterAppCode=cambodia-br-soleproprietorships&service=registerItemSearch"
result_url = "https://www.businessregistration.moc.gov.kh/cambodia-master/viewInstance/update.html?id={}"
base_url = "https://www.businessregistration.moc.gov.kh/cambodia-br-soleproprietorships/viewInstance/update.html?id={}"

def get_names(s):
    s.headers['User-Agent'] = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:74.0) Gecko/20100101 Firefox/74.0'
    res = s.get(url)
    # Build the "update" endpoint from the instance id in the redirected URL
    target_url = result_url.format(res.url.split("id=")[1])
    soup = BeautifulSoup(res.text,"lxml")
    # Seed the payload with every hidden input already present on the search page
    payload = {i['name']:i.get('value','') for i in soup.select('input[name]')}

    payload['QueryString'] = 'a'
    payload['SourceAppCode'] = 'cambodia-br-soleproprietorships'
    payload['_CBNAME_'] = 'buttonPush'
    payload['_CBHTMLFRAG_'] = 'true'
    payload['_VIKEY_'] = re.findall(r"viewInstanceKey:'(.*?)',", res.text)[0].strip()
    payload['_CBHTMLFRAGID_'] = re.findall(r"guid:(.*?),", res.text)[0].strip()
    payload['_CBNODE_'] = re.findall(r"Callback\('(.*?)','buttonPush", res.text)[-1]
    payload['_CBHTMLFRAGNODEID_'] = re.findall(r"AsyncWrapper(W\d.+?)'",res.text)[0].strip()

    res = s.post(target_url,data=payload)
    soup = BeautifulSoup(res.text,"lxml")
    payload.pop('_CBHTMLFRAGNODEID_')
    payload.pop('_CBHTMLFRAG_')
    payload.pop('_CBHTMLFRAGID_')

    for item in soup.select("a[class*='ItemBox-resultLeft-viewMenu']"):
        # Each result link's node id drives an invokeMenuCb callback that opens that result's inner page
        payload['_CBNAME_'] = 'invokeMenuCb'
        payload['_CBVALUE_'] = ''
        payload['_CBNODE_'] = item['id'].replace('node','')

        res = s.post(target_url,data=payload)
        soup = BeautifulSoup(res.text,'lxml')
        address_url = base_url.format(res.url.split("id=")[1])
        # Node id of the "Addresses" tab, needed for the tabSelect callback issued in get_email()
        node_id = re.findall(r"taba(.*)_",soup.select_one("a[aria-label='Addresses']")['id'])[0]
        payload['_CBNODE_'] = node_id
        payload['_CBHTMLFRAGID_'] = re.findall(r"guid:(.*?),", res.text)[0].strip()
        payload['_CBNAME_'] = 'tabSelect'
        payload['_CBVALUE_'] = '1'
        # The English name is the value that follows the .appCompanyName label on the inner page
        eng_name = soup.select_one(".appCompanyName + .appAttrValue").get_text()
        yield from get_email(s,eng_name,address_url,payload)

def get_email(s,eng_name,url,payload):
    # Post the tabSelect callback to load the Addresses tab, then pull the email address from it
    res = s.post(url,data=payload)
    soup = BeautifulSoup(res.text,'lxml')
    email = soup.select_one(".EntityEmailAddresses:contains('Email') .appAttrValue").get_text()
    yield eng_name,email

if __name__ == '__main__':
    with requests.Session() as s:
        for item in get_names(s):
            print(item)

The output looks like:

('AMY GEMS', 'amy.n.company@gmail.com')
('AHARATHAN LIN LIANJIN FOOD FLAVOR', 'skykoko344@gmail.com')
('AMETHYST DIAMOND KTV', 'twobrotherktv@gmail.com')

To get the Name (English) in your own script, you can simply replace print(item.text) with print(item.text.split('/')[1].split('(')[0].strip()), which prints AMY GEMS.
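For illustration, here is a minimal sketch of what that split does, assuming item.text for a result looks roughly like the made-up string below (the exact formatting on the site may differ):

sample = "SOME KHMER NAME / AMY GEMS (SOLE PROPRIETORSHIP)"   # hypothetical item.text value

# keep the part after the first "/", then drop everything from "(" onward
name_english = sample.split('/')[1].split('(')[0].strip()
print(name_english)   # AMY GEMS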
