
Client Digest Authentication Python with URLLIB2 will not remember Authorization Header Information

I am trying to use Python to write a client that connects to a custom HTTP server that uses digest authentication. I can connect and pull the first page without problem. Using tcpdump (I am on Mac OS X; I am both a Mac and a Python noob) I can see that the first request is actually two HTTP requests, as you would expect if you are familiar with RFC 2617. The first results in a 401 UNAUTHORIZED. The header information sent back from the server is correctly used to generate headers for a second request with some custom Authorization header values, which yields a 200 OK response and the payload.

Everything is great. My HTTPDigestAuthHandler opener is working, thanks to urllib2.
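
For reference, here is a minimal sketch of that kind of setup (the URL and credentials are placeholders, not from the original post); note that each fetch goes through the full challenge/response handshake:

    import urllib2

    # Placeholder server and credentials -- substitute your own.
    password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
    password_mgr.add_password(None, 'http://example.com/', 'user', 'secret')

    opener = urllib2.build_opener(urllib2.HTTPDigestAuthHandler(password_mgr))

    # Each open() triggers its own 401-then-200 round trip
    # (two HTTP requests per page).
    page1 = opener.open('http://example.com/page1').read()
    page2 = opener.open('http://example.com/page2').read()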

In the same program I attempt to request a second, different page from the same server. I expect, per the RFC, that tcpdump will show only one request this time, using almost all of the same Authorization header information (nc should increment).

Instead it starts from scratch: it first gets the 401 and then regenerates the information needed for a 200.

Is it possible with urllib2 to have subsequent requests with digest authentication reuse the known Authorization header values and make only one request?

[Re-read that a couple of times until it makes sense; I am not sure how to make it any plainer.]

Google has yielded surprisingly little, so I guess not. I looked at the code for urllib2.py and it is really messy (with comments like "This isn't a fabulous effort"), so I wouldn't be shocked if this were a bug. I noticed that my Connection header is set to close, and even if I set it to keep-alive, it gets overwritten. That led me to keepalive.py, but that didn't work for me either.

Pycurl won't work either.

I could hand-code the entire interaction, but I would like to piggyback on existing libraries where possible.

In summary: is it possible, with urllib2 and digest authentication, to get two pages from the same server with only three HTTP requests executed (two for the first page, one for the second)?

If you happen to have tried this before and already know it's not possible, please let me know. If you have an alternative, I am all ears.

Thanks in advance.

Although it's not available out of the box, urllib2 is flexible enough to let you add it yourself. Subclass HTTPDigestAuthHandler, hack it (the retry_http_digest_auth method, I think) to remember the authentication information, and define an http_request(self, request) method that uses it for all subsequent requests (adding the Authorization header).
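
A sketch of that idea, assuming the internals of Python 2's urllib2.py (auth_header, get_authorization, retry_http_digest_auth, and the module-level challenge parsers); treat it as a starting point rather than a tested drop-in:

    import urllib2

    class PersistentDigestAuthHandler(urllib2.HTTPDigestAuthHandler):
        """Remembers the server's digest challenge and pre-signs every
        later request, so only the first request costs an extra 401."""

        def __init__(self, *args, **kwargs):
            urllib2.HTTPDigestAuthHandler.__init__(self, *args, **kwargs)
            self.saved_challenge = None

        def retry_http_digest_auth(self, req, auth):
            # Normal 401 handshake, but keep the parsed challenge around.
            token, challenge = auth.split(' ', 1)
            self.saved_challenge = urllib2.parse_keqv_list(
                urllib2.parse_http_list(challenge))
            return urllib2.HTTPDigestAuthHandler.retry_http_digest_auth(
                self, req, auth)

        def http_request(self, req):
            # Pre-sign requests that are not already carrying credentials;
            # get_authorization() recomputes the digest response and
            # increments nc on each call.
            if (self.saved_challenge is not None
                    and req.get_header(self.auth_header) is None):
                auth = self.get_authorization(req, self.saved_challenge)
                if auth:
                    req.add_unredirected_header(self.auth_header,
                                                'Digest %s' % auth)
            return req

        https_request = http_request

Used in place of the stock handler (opener = urllib2.build_opener(PersistentDigestAuthHandler(password_mgr))), the first page still costs two requests, but later pages should go out pre-authorized in a single request. If the server has expired the nonce, it will answer 401 again and the normal handshake simply reruns.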
