
Optimization of Google App Engine Code

Google App Engine tells me to optimize this code. Does anybody have any ideas what I could do?

def index(request):
    user = users.get_current_user()
    return base.views.render('XXX.html', 
                 dict(profiles=Profile.gql("").fetch(limit=100), user=user))

And later in the template I do:

{% for profile in profiles %}
  <a href="/profile/{{profile.user.email}}/"><img src="{{profile.gravatarUrl}}"></a>
  <a href="/profile/{{profile.user.email}}/">{{ profile.user.nickname }}</a>
  <br/>{{ profile.shortDisplay }}
{% endfor %}

Where the methods used are:

def shortDisplay(self):
    return "%s/day; %s/week; %s days" % (self.maxPerDay, self.maxPerWeek, self.days)

def gravatarUrl(self):
    email = self.user.email().lower()
    default = "..."
    gravatar_url = "http://www.gravatar.com/avatar.php?"
    gravatar_url += urllib.urlencode({'gravatar_id':hashlib.md5(email).hexdigest(), 
        'default':default, 'size':"64"})
    return gravatar_url

The high CPU usage will be due to fetching 100 entities per request. You have several options here:

  • Using Profile.all().fetch(100) will be ever so slightly faster, and easier to read besides.
  • Remove any extraneous properties from the Profile model. There's significant per-property overhead when deserializing entities.
  • Display fewer users per page.
  • Store the output of this page in memcache, and render from memcache whenever you can (see the sketch after this list). That way, you don't need to generate the page often, so it doesn't matter so much if it's high CPU.
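
A minimal sketch of the caching idea, assuming the same index view from the question. To keep it simple and safe for per-user content, this variant caches the fetched entities rather than the fully rendered page; the key name and 60-second expiry are arbitrary:

from google.appengine.api import memcache

def index(request):
    user = users.get_current_user()
    # Reuse the fetched profiles for a while instead of querying on every request.
    profiles = memcache.get('profile_index_entities')
    if profiles is None:
        profiles = Profile.all().fetch(100)
        memcache.set('profile_index_entities', profiles, 60)  # arbitrary 60 s expiry
    return base.views.render('XXX.html', dict(profiles=profiles, user=user))

Caching the rendered page itself saves even more CPU, but then the cache key has to account for anything that varies per user.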

I would guess that performing an md5 hash on every item every time is pretty costly. Better to store the gravatar email hash somewhere.
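
For example, a rough sketch of that idea, computing the hash once per write and storing it in a hypothetical gravatar_hash property (the property name and put() override are assumptions, not from the original code):

import hashlib
import urllib

from google.appengine.ext import db

class Profile(db.Model):
    # ...existing properties...
    gravatar_hash = db.StringProperty()  # hypothetical cached md5 of the email

    def put(self, **kwargs):
        # Hash once per write instead of on every page view.
        self.gravatar_hash = hashlib.md5(self.user.email().lower()).hexdigest()
        return super(Profile, self).put(**kwargs)

    def gravatarUrl(self):
        default = "..."
        return "http://www.gravatar.com/avatar.php?" + urllib.urlencode(
            {'gravatar_id': self.gravatar_hash, 'default': default, 'size': "64"})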

I had an issue with a lot of CPU being used for seemingly little work, which turned out to be queries running multiple times. E.g. in my Django template, I did post.comments.count and then looped through post.comments. This resulted in two executions - one getting the count, and one getting the entities. Oops!
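
A hedged sketch of the fix, assuming a hypothetical Post model with a comments back-reference query: fetch once in the view, then reuse the list for both the loop and the count, so the template never triggers its own queries:

def show_post(request, key):  # hypothetical view
    post = Post.get(key)
    # Single datastore query; the fetch limit is an arbitrary example value.
    comments = post.comments.fetch(1000)
    return base.views.render('post.html',
        dict(post=post, comments=comments, comment_count=len(comments)))

In the template, loop over comments and display comment_count instead of calling post.comments.count.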

I'd also say grab a copy of Guido's Appstats. It won't help with the Python, but it's very useful to see the time spent in API calls (and the time between them - which often gives an indication of where you've got slow Python).

You can get the library here: https://sites.google.com/site/appengineappstats/

I wrote an article about it on my blog (with some screenshots): http://blog.dantup.com/2010/01/profiling-google-app-engine-with-appstats

Appstats screenshot: http://blog.dantup.com/pi/appstats_4_thumb.png

It depends on where you get the warning about too much CPU.

If it is in the dashboard, it is probably mostly datastore CPU, and there is no need for optimization.

If the request takes more than 10 seconds, you need to optimize.

If you get regular log warnings that a certain request is x.xx over the CPU limit, it means your application code is taking too long and needs optimization.

I have found that a lot of Django template work does not take much application CPU (50-100 Mcycles), provided all the fields for the template are precomputed.
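
As an illustration of "precomputed" (a sketch against the question's view; the dictionary keys are made up), the view can do all the method calls and formatting up front so the template only does plain lookups:

def index(request):
    user = users.get_current_user()
    rows = []
    for profile in Profile.all().fetch(100):
        # Do the hashing and string formatting in Python, not in the template.
        rows.append({'email': profile.user.email(),
                     'nickname': profile.user.nickname(),
                     'gravatar_url': profile.gravatarUrl(),
                     'short_display': profile.shortDisplay()})
    return base.views.render('XXX.html', dict(profiles=rows, user=user))

The template would then refer to {{ profile.gravatar_url }} and the other precomputed keys instead of calling the model methods.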
