

Speeding up an app that makes many Facebook API calls

I've got a simple app that fetches a user's complete feed from the Facebook API in order to tally the total number of words he or she has written on the site.

After he or she authenticates, the page makes a Graph API call to /me/feed?limit=100 and counts the responses and their dates. If there is a "next" cursor in the response, it then requests that next URL, which looks something like this:

https://graph.facebook.com/[UID]/feed?limit=100&until=1386553333

And so on, recursively, until we reach the time that the user joined Facebook. The function looks like this:

var words = 0;

// Walks the feed one page at a time; each FB.api response kicks off the
// request for the next page until there is no "next" cursor left.
var posts = function(callback, url) {
    url = url || '/me/posts?limit=100';

    FB.api(url, function(response) {
        if (response.data) {
            response.data.forEach(function(status) {
                if (status.message) {
                    words += status.message.split(/ /g).length;
                }
            });
        }

        if (response.paging && response.paging.next) {
            posts(callback, response.paging.next);
        } else {
            alert("You wrote " + words + " words on Facebook!");
        }
    });
};

This works just fine for people who have posted up to about 4,000 statuses in total, but it really starts to crawl for power users with 10,000 or more lifetime updates. Each response from the API is only about 25 KB, but I can't figure out what's straining the most.

After I've added the number of words in each status to my running total, do I need to explicitly destroy the response object so as not to overload memory?

Alternatively, is the recursion depth a problem? We're realistically talking about a total of around 100 calls to the API for power users. I've experimented with raising the limit on each call to fetch larger chunks, but it doesn't seem to make much difference.
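One way to check whether the tallying itself (rather than the network) is what's slow is to run the same pagination logic against mock data. In this sketch, `fetchPage` is a hypothetical injected stand-in for `FB.api`, and the mock pages are made up for illustration:

```javascript
// Same control flow as the posts() function above, but with the
// page-fetching function injected so it can run without the FB SDK.
function countWords(fetchPage, startUrl, done) {
  var words = 0;
  function step(url) {
    fetchPage(url, function (response) {
      (response.data || []).forEach(function (status) {
        if (status.message) {
          words += status.message.split(/\s+/).length;
        }
      });
      if (response.paging && response.paging.next) {
        step(response.paging.next);
      } else {
        done(words);
      }
    });
  }
  step(startUrl);
}

// Two mock "pages" of feed data to exercise the walker.
var pages = {
  '/me/posts?limit=100': {
    data: [{ message: 'hello world' }],
    paging: { next: '/page2' }
  },
  '/page2': { data: [{ message: 'one two three' }] }
};

countWords(function (url, cb) { cb(pages[url]); }, '/me/posts?limit=100',
  function (total) { console.log(total); }); // logs 5
```

Replacing `fetchPage` with a wrapper around `FB.api` that logs timestamps would show how much time is spent waiting on the network versus counting.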

Thanks.

So, you're doing this with the JS SDK I guess, which means this runs in the browser... Did you try running it in Chrome and watching the network monitor to see the response times etc.?

With 100 requests, this also means the combined data object/JSON must be around 2.5 MB in size, which for some browsers/machines could be quite challenging, I guess. It must also take quite a while to fetch all that data from FB. What does the user see in the meantime?

Did you think of implementing this on the server side, and then just passing the results to the frontend?

For example, use NodeJS together with Socket.IO to do the fetching on the server side and dynamically update the word count?
