
How do I enable cookies with Python requests?

I am trying to sign into Facebook with Python requests. When I run the following code:

import requests

def get_facebook_cookie():
    sign_in_url = "https://www.facebook.com/login.php?login_attempt=1"
    #need to post: email , pass
    payload = {"email":"xxxx@xxxx.com", "pass":"xxxxx"}
    headers = {"accept":"text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8", "user-agent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.132 Safari/537.36"}
    s   = requests.Session()
    r1  = s.get(sign_in_url, headers = headers, timeout = 2)
    r   = s.post(sign_in_url,  data = payload, headers = headers, timeout = 2, cookies = r1.cookies)
    print r.url
    text = r.text.encode("ascii", "ignore")
    fbhtml = open("fb.html", "w")
    fbhtml.write(text)
    fbhtml.close()
    return r.headers


print get_facebook_cookie()

(Note: the URL is supposed to redirect to facebook.com, but it still does not do that.)

Facebook returns the following error: "Facebook Error"

(The email is actually populated in the email box, so I know it is at least passing that in.)

According to the requests Session documentation, the session handles all of the cookies, so I do not think I even need to pass them in. However, I have seen others do it elsewhere in order to seed the second request with the cookies from the first response, so I gave it a shot.
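
A minimal sketch of that point, using httpbin.org as a stand-in endpoint (not part of the original code), showing that a requests.Session carries cookies from one request to the next on its own, without passing cookies= explicitly:

import requests

s = requests.Session()
# The first response sets a cookie; the Session stores it in s.cookies.
s.get("https://httpbin.org/cookies/set?sessioncookie=123", timeout=5)
print(s.cookies.get_dict())   # {'sessioncookie': '123'}
# A later request through the same Session sends that cookie automatically.
r = s.get("https://httpbin.org/cookies", timeout=5)
print(r.json())               # {'cookies': {'sessioncookie': '123'}}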

The question is: why is Facebook telling me I do not have cookies enabled? Is there some extra request header I need to pass in? Would urllib2 be a better choice for something like this?

It looks like, according to this answer (Login to Facebook using python requests), there is more data that needs to be sent in order to successfully sign into Facebook.
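
A minimal sketch of that approach, assuming beautifulsoup4 is installed: scrape the hidden <input> fields from the login page and send them back along with the credentials. The exact hidden field names are whatever the login page embeds at the time, so this is a sketch rather than a guaranteed working login:

import requests
from bs4 import BeautifulSoup  # pip install beautifulsoup4

login_url = "https://www.facebook.com/login.php?login_attempt=1"
headers = {"user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.132 Safari/537.36"}

s = requests.Session()
login_page = s.get(login_url, headers=headers, timeout=5)

# Collect every hidden input on the login page (anti-CSRF tokens and the like)
# and include them in the POST alongside the email and password fields.
soup = BeautifulSoup(login_page.text, "html.parser")
payload = {inp["name"]: inp.get("value", "")
           for inp in soup.find_all("input", type="hidden")
           if inp.get("name")}
payload["email"] = "xxxx@xxxx.com"
payload["pass"] = "xxxxx"

r = s.post(login_url, data=payload, headers=headers, timeout=5)
print(r.url)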
