Is there any way to capture selenium request headers with python?
I want to capture the Authorization header from outgoing requests, either directly with Selenium or through a proxy.
What I have tried:
driver.get_log('performance')
=> the performance log only seems to index some of the requests, and none of them contains the Authorization header (headers == [] even though headersSize == 814).
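For reference, the performance log has to be enabled through a Chrome capability before driver.get_log('performance') returns anything, and each entry wraps a JSON-encoded DevTools message. A minimal sketch of the parsing step, using a hand-made entry in place of a live driver (the helper name and sample data are my own; the capability setup is shown only in comments because it needs a running Chrome):

```python
import json

# With a live driver you would enable the log first, e.g.:
#   options = webdriver.ChromeOptions()
#   options.set_capability('goog:loggingPrefs', {'performance': 'ALL'})
# and then iterate over driver.get_log('performance').

# Hand-made entry mimicking the shape returned by driver.get_log('performance')
sample_entry = {
    'message': json.dumps({
        'message': {
            'method': 'Network.requestWillBeSent',
            'params': {'request': {'url': 'https://example.com/api',
                                   'headers': {'Accept': '*/*'}}}
        }
    })
}

def request_headers(entry):
    """Return (url, headers) if the entry is a request event, else None."""
    message = json.loads(entry['message'])['message']
    if message['method'] == 'Network.requestWillBeSent':
        request = message['params']['request']
        return request['url'], request['headers']
    return None

print(request_headers(sample_entry))
```

Note that even with this in place, the headers reported by Network.requestWillBeSent are the ones Chrome knew at request creation, which is consistent with the Authorization header being missing from the log.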
Here is the current code:
from time import sleep
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from browsermobproxy import Server
# Set configuration variables
browsermob_binary_path = r"path\to\browsermob-proxy"
facebook_credentials = {'email': 'my_email', 'password': 'my_password'}
# Configure proxy server
server = Server(browsermob_binary_path)
server.start()
proxy = server.create_proxy()
# Configure chrome to use proxy
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--proxy-server=%s' % proxy.proxy)
chrome_options.add_argument('--ignore-certificate-errors')
# Start chrome
driver = webdriver.Chrome(chrome_options=chrome_options)
# Start network capture
proxy.new_har('capture')
# Login to facebook
driver.get('https://apps.facebook.com/coin-master/?pid=socialfacebook')
driver.find_element_by_id("email").send_keys(facebook_credentials['email'])
driver.find_element_by_id("pass").send_keys(facebook_credentials['password'] + Keys.ENTER)
# Wait until game fully loads to make sure login request has taken place
sleep(100)
# Return all headers from captured requests
for entry in proxy.har['log']['entries']:
    print(entry['request']['headers'])  # Always returns "[]"
# Close all dependencies
server.stop()
driver.quit()
To capture the headers of every request, I had to replace
proxy.new_har('capture')
with
proxy.new_har('capture', options={'captureHeaders': True})
Headers were previously being discarded; the captureHeaders flag forces browsermob-proxy to record them in the HAR.
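Once captureHeaders is enabled, each HAR entry carries its request headers as name/value pairs, so the Authorization header can be filtered out of proxy.har. A minimal sketch (the helper name and the hand-made HAR fragment are my own; with a live proxy you would pass proxy.har instead):

```python
def find_authorization_headers(har):
    """Collect (url, value) pairs for every Authorization request header in a HAR dict."""
    results = []
    for entry in har['log']['entries']:
        for header in entry['request']['headers']:
            if header['name'].lower() == 'authorization':
                results.append((entry['request']['url'], header['value']))
    return results

# Hand-made HAR fragment with the same shape browsermob-proxy produces
sample_har = {
    'log': {
        'entries': [
            {'request': {'url': 'https://example.com/api',
                         'headers': [{'name': 'Authorization', 'value': 'Bearer abc123'},
                                     {'name': 'Accept', 'value': '*/*'}]}},
            {'request': {'url': 'https://example.com/img.png', 'headers': []}},
        ]
    }
}

print(find_authorization_headers(sample_har))  # → [('https://example.com/api', 'Bearer abc123')]
```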