
Unable to retrieve search results from server side : Facebook Graph API using Python

I'm doing some simple Python + Facebook Graph API practice on my own, and I've run into a strange problem:

import time
import sys
import urllib2
import urllib
from json import loads

base_url = "https://graph.facebook.com/search?q="
post_id = None
post_type = None
user_id = None 
message = None
created_time = None

def doit(hour):
    page = 1
    search_term = "\"Plastic Planet\""
    encoded_search_term = urllib.quote(search_term)
    print encoded_search_term
    type="&type=post"
    url = "%s%s%s" % (base_url,encoded_search_term,type)
    print url
    while(1):

        try:
            response = urllib2.urlopen(url)
        except urllib2.HTTPError, e:
            print e
        finally:
            pass   

        content = response.read()
        content = loads(content)

        print "=================================="
        for c in content["data"]:
            print c
            print "****************************************"

        try:
            content["paging"]
            print "current URL"
            print url
            print "next page!------------"
            url = content["paging"]["next"]
            print url
        except:
            pass
        finally:
            pass

        """
        print "new URL is ======================="
        print url
        print "==================================" 
        """
        print url

What I'm trying to do here is page through the search results automatically by following content["paging"]["next"].

But the strange thing is that no data comes back. I get the following:

{"data":[]}

even on the very first loop.

However, when I copy the URL into a browser, plenty of results are returned.

I also tried a version that uses an access token, and the same thing happened.
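
For reference, a non-empty reply from this search endpoint should look roughly like the structure below (the field names follow the Graph API post format; the values are made up for illustration). The paging loop above relies on the "paging"/"next" entry, which the empty response I actually get never contains:

# Roughly what a successful response should contain (illustrative values only):
expected = {
    "data": [
        {"id": "12345_67890",
         "from": {"name": "Some User", "id": "12345"},
         "message": "Watching Plastic Planet tonight",
         "type": "status",
         "created_time": "2010-12-29T19:54:56+0000"},
    ],
    "paging": {
        # The loop follows this ready-made URL to fetch the next page.
        "next": "https://graph.facebook.com/search?q=%22Plastic+Planet%22&type=post&until=..."
    }
}

# What the script actually receives: no posts and no "paging" key to follow.
actual = {"data": []}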

+++++++++++++++++++ EDIT and simplification ++++++++++++++++++++

OK, thanks to TryPyPy, here is the simplified and edited version of my previous question:

Why does this:

import urllib2

url = "https://graph.facebook.com/search?q=%22Plastic+Planet%22&type=post&limit=25&until=2010-12-29T19%3A54%3A56%2B0000"
response = urllib2.urlopen(url)
print response.read()

result in {"data":[]},

while the same URL produces plenty of data in a browser?

Trial and error with Chrome (where I got lots of data) and Firefox (where I got the empty response) led me to zero in on the 'Accept-Language' header. The other modifications are supposedly only cosmetic, but I'm not sure about the CookieJar.

import time
import sys
import urllib2
import urllib
from json import loads
import cookielib

base_url = "https://graph.facebook.com/search?q="
post_id = None
post_type = None
user_id = None 
message = None
created_time = None

# The CookieJar may well be unnecessary; the 'Accept-Language' header is the
# change that actually made the server return data.
jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))
opener.addheaders = [
    ('Accept-Language', 'en-US,en;q=0.8'),]

def doit(hour):
    page = 1
    search_term = "\"Plastic Planet\""
    encoded_search_term = urllib.quote(search_term)
    print encoded_search_term
    type="&type=post"
    url = "%s%s%s" % (base_url,encoded_search_term,type)

    print url

    data = True
    while data:
        response = opener.open(url)
        opener.addheaders += [
            ('Referer', url) ]

        content = response.read()
        content = loads(content)

        print "=================================="
        for c in content["data"]:
            print c.keys()
        print "****************************************"

        if "paging" in content:
            print "current URL"
            print url
            print "next page!------------"
            url = content["paging"]["next"]
            print url
        else:
            print content
            print url
            data = False

doit(1)

Here is the cleaned-up, minimal working version:

import urllib2
import urllib
from json import loads
import cookielib

def doit(search_term, base_url = "https://graph.facebook.com/search?q="):
    # 'Accept-Language' is the header that made the difference: without it
    # the search endpoint kept answering with an empty {"data":[]}.
    opener = urllib2.build_opener()
    opener.addheaders = [('Accept-Language', 'en-US,en;q=0.8')]

    encoded_search_term = urllib.quote(search_term)
    type="&type=post"
    url = "%s%s%s" % (base_url,encoded_search_term,type)

    print encoded_search_term
    print url

    data = True
    while data:
        response = opener.open(url)

        content = loads(response.read())

        print "=================================="
        for c in content["data"]:
            print c.keys()
        print "****************************************"

        if "paging" in content:
            url = content["paging"]["next"]
        else:
            print "Empty response"
            print content
            data = False

doit('"Plastic Planet"')
