Is there any way to capture selenium request headers with python?
I want to capture the Authorization header from outgoing requests, either directly with Selenium or through a proxy.

What I've tried:
`driver.get_log('performance')`
=> Fetching the performance log seems to index only some of the requests, and none of them contain the Authorization header (`headers == []` even though `headersSize == 814`).
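For context, each performance-log entry Selenium returns wraps a JSON string, and request headers show up in `Network.requestWillBeSent` messages. Below is a minimal sketch of how they could be extracted; the sample entry is hand-made for illustration, not real traffic, and in practice Chrome's DevTools log can omit sensitive headers at this stage, which may explain why the Authorization header never appeared:

```python
import json

def extract_request_headers(perf_log):
    """Map request URL -> headers from Chrome performance-log entries.

    Each entry's 'message' field is a JSON string; request headers
    appear inside Network.requestWillBeSent messages.
    """
    headers_by_url = {}
    for entry in perf_log:
        message = json.loads(entry['message'])['message']
        if message.get('method') == 'Network.requestWillBeSent':
            request = message['params']['request']
            headers_by_url[request['url']] = request['headers']
    return headers_by_url

# Hypothetical sample entry mimicking the shape of driver.get_log('performance')
sample_log = [{
    'message': json.dumps({'message': {
        'method': 'Network.requestWillBeSent',
        'params': {'request': {
            'url': 'https://example.com/api',
            'headers': {'Authorization': 'Bearer abc123'},
        }},
    }})
}]

print(extract_request_headers(sample_log))
```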
Here is the current code:
```python
from time import sleep

from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from browsermobproxy import Server

# Set configuration variables
browsermob_binary_path = r"path\to\browsermob-proxy"
facebook_credentials = {'email': 'my_email', 'password': 'my_password'}

# Configure and start the proxy server
server = Server(browsermob_binary_path)
server.start()
proxy = server.create_proxy()

# Configure Chrome to route traffic through the proxy
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--proxy-server=%s' % proxy.proxy)
chrome_options.add_argument('--ignore-certificate-errors')

# Start Chrome
driver = webdriver.Chrome(chrome_options=chrome_options)

# Start network capture
proxy.new_har('capture')

# Log in to Facebook
driver.get('https://apps.facebook.com/coin-master/?pid=socialfacebook')
driver.find_element_by_id("email").send_keys(facebook_credentials['email'])
driver.find_element_by_id("pass").send_keys(facebook_credentials['password'] + Keys.ENTER)

# Wait until the game fully loads to make sure the login request has taken place
sleep(100)

# Print the headers of every captured request
for entry in proxy.har['log']['entries']:
    print(entry['request']['headers'])  # Always prints "[]"

# Close all dependencies
server.stop()
driver.quit()
```
To capture the headers of each request, I had to replace `proxy.new_har('capture')` with `proxy.new_har('capture', options={'captureHeaders': True})`. The headers were previously being dropped, but the `captureHeaders` flag forces BrowserMob Proxy to record them.
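Once `captureHeaders` is enabled, each HAR entry's `request['headers']` is a list of `{'name': ..., 'value': ...}` dicts. A small sketch of scanning that structure for the Authorization header; the sample HAR fragment below is hand-made to mirror the shape of `proxy.har`, not real captured traffic:

```python
def find_authorization_headers(har):
    """Return (url, value) pairs for every Authorization header in a HAR dict."""
    found = []
    for entry in har['log']['entries']:
        for header in entry['request']['headers']:
            if header['name'].lower() == 'authorization':
                found.append((entry['request']['url'], header['value']))
    return found

# Hypothetical HAR fragment shaped like proxy.har with captureHeaders enabled
sample_har = {'log': {'entries': [{
    'request': {
        'url': 'https://example.com/api',
        'headers': [{'name': 'Authorization', 'value': 'Bearer abc123'}],
    }
}]}}

print(find_authorization_headers(sample_har))  # [('https://example.com/api', 'Bearer abc123')]
```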