
Best practices for developing a feed aggregator / proxy?

I'm looking into writing a proxy aggregator for feeds, where a handful of users specify a feed URL and some set of conditions, and the system outputs a continually-updated RSS/Atom feed of entries that match those conditions.
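For concreteness, here is a minimal sketch of the filtering core I have in mind, assuming the Python feedparser library; the FEEDS table and the matches() predicate are illustrative placeholders, not a design I'm committed to:

    import feedparser

    # Hypothetical configuration: feed URL -> keywords an entry must mention.
    FEEDS = {
        "https://example.com/feed.xml": ["python", "rss"],
    }

    def matches(entry, keywords):
        # Keep an entry if any keyword appears in its title or summary.
        text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
        return any(kw.lower() in text for kw in keywords)

    def filtered_entries(url, keywords):
        parsed = feedparser.parse(url)
        return [e for e in parsed.entries if matches(e, keywords)]

    for url, keywords in FEEDS.items():
        for entry in filtered_entries(url, keywords):
            print(entry.get("title"), entry.get("link"))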

Are there best practices for feed aggregators? (Or filtering feed proxies?)

For example:

  • Are there certain feed elements that should or shouldn't be modified by proxies?
  • How should a feed proxy/parser indicate that it's not passing along a pristine copy of the original feed? (The one mechanism I've found so far is sketched after this list.)
  • Does it make sense to delegate the work of downloading/updating to a third-party aggregator platform, e.g. the Google Feed API? I presume that would save a lot of work versus handling updates, 301 redirects, etc. myself.
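On the second bullet: the one mechanism I've found so far is Atom's atom:source element (RFC 4287, section 4.2.11), which lets an entry copied from another feed carry the original feed's metadata, so downstream readers can tell the entry was republished and where it came from. A minimal sketch using only the standard library; all the literal values are illustrative:

    import xml.etree.ElementTree as ET

    ATOM = "http://www.w3.org/2005/Atom"
    ET.register_namespace("", ATOM)

    def tag(name):
        return f"{{{ATOM}}}{name}"

    entry = ET.Element(tag("entry"))
    ET.SubElement(entry, tag("title")).text = "Matched entry title"
    ET.SubElement(entry, tag("id")).text = "tag:example.com,2009:original-entry-id"

    # atom:source preserves metadata of the feed this entry was copied from,
    # signalling that the containing feed is not the original.
    source = ET.SubElement(entry, tag("source"))
    ET.SubElement(source, tag("id")).text = "tag:example.com,2009:original-feed-id"
    ET.SubElement(source, tag("title")).text = "Original Feed Title"
    ET.SubElement(source, tag("updated")).text = "2009-01-01T00:00:00Z"

    print(ET.tostring(entry, encoding="unicode"))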

Thanks for your help.

Do not query any feed more frequently than once every 30 minutes. Use caching.

-Adam
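A minimal sketch of that advice in Python, assuming the feedparser library (whose parse() accepts etag and modified arguments and sends them as HTTP conditional-GET headers); the in-memory cache dict is an illustrative stand-in for real persistence:

    import time
    import feedparser

    MIN_INTERVAL = 30 * 60  # never poll a feed more often than every 30 minutes
    cache = {}  # url -> {"etag": ..., "modified": ..., "last_poll": ...}

    def poll(url):
        state = cache.setdefault(url, {})
        now = time.time()
        if now - state.get("last_poll", 0) < MIN_INTERVAL:
            return None  # too soon; serve what we already have
        state["last_poll"] = now

        # ETag / Last-Modified from the previous fetch become
        # If-None-Match / If-Modified-Since on this one.
        parsed = feedparser.parse(
            url,
            etag=state.get("etag"),
            modified=state.get("modified"),
        )
        if getattr(parsed, "status", None) == 304:
            return None  # unchanged since last fetch; no body was transferred
        state["etag"] = getattr(parsed, "etag", None)
        state["modified"] = getattr(parsed, "modified", None)
        return parsed.entries

When the server answers 304 Not Modified, the previously stored entries can be served as-is, which keeps both sides' bandwidth costs low.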

You could also use Yahoo Pipes, I guess... or this: planetplanet.org
