Better Way to Pull Data Through an API?
So I'm trying to design a web app that requires a good amount of data (10,000 rows). This amount, however, will grow exponentially over time, maybe into hundreds of thousands of rows. I want to describe two methodologies for pulling in and presenting the data, and get recommendations on which method would be faster overall. Also, the people using the web application are local to the servers (very low latency, if any).
The first method:
The second method:
My hang-up is this: the first method requires only one API call, but it does all the counting in JavaScript and loads everything at once, meaning all of the data is transferred even if some of it is never needed. The second method requires multiple API calls, but it does the counting on SQL Server and returns only the data you need, on demand. Will making many separate API calls instead of one large API call slow things down?
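To make the second method concrete, here is a minimal sketch of how each on-demand call could be backed by a paged SQL Server query, so the server only ever returns one slice of the table. The table and column names (`Orders`, `OrderDate`) are placeholders, not anything from the original post.

```javascript
// Method 2 sketch: build a paged T-SQL query so each API call returns
// only the rows the client asked for, instead of the whole table.
// Table/column names are hypothetical examples.
function pagedQuery(table, orderBy, page, pageSize) {
  const offset = (page - 1) * pageSize; // rows to skip before this page
  return (
    `SELECT * FROM ${table} ` +
    `ORDER BY ${orderBy} ` +            // OFFSET/FETCH requires ORDER BY
    `OFFSET ${offset} ROWS FETCH NEXT ${pageSize} ROWS ONLY;`
  );
}

// Page 3 at 50 rows per page skips the first 100 rows:
console.log(pagedQuery('Orders', 'OrderDate', 3, 50));
```

In a real app you would pass `offset` and `pageSize` as bound parameters rather than interpolating them into the SQL string.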
IMHO you should go with the second option; I don't see a problem with regard to performance. Counting by category will be fast, and the subsequent calls will be smaller in terms of the amount returned and processed by the server.
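A rough sketch of this answer's idea: the count runs once as a single `GROUP BY` on SQL Server, and later calls fetch only the (much smaller) subset for one category. The names `Items` and `Category` are hypothetical; the plain-JS `tally` below just illustrates the computation the database would do.

```javascript
// What the server-side count could look like (hypothetical table name):
const countByCategory =
  'SELECT Category, COUNT(*) AS Total FROM Items GROUP BY Category;';

// The same computation written out in plain JS, for illustration only --
// in method 2 this work happens in SQL Server, not in the browser.
function tally(rows) {
  const counts = {};
  for (const r of rows) counts[r.category] = (counts[r.category] || 0) + 1;
  return counts;
}

console.log(tally([{ category: 'a' }, { category: 'b' }, { category: 'a' }]));
// { a: 2, b: 1 }
```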
It's important to consider how many times you would need to retrieve this data in a day. If it's a handful (say, 500 times max), option 1 is fine. If the number of calls is significantly higher than that, option 2 would be better: you would just retrieve a subset as needed.
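The trade-off in this answer can be put as back-of-envelope arithmetic on total bytes moved per day. All the numbers below (200 bytes per row, 100 rows per subset call) are illustrative assumptions, not measurements from the original post.

```javascript
// Rough daily-transfer comparison: pulling the whole table every time
// vs. pulling only a needed subset. Inputs are assumed example values.
function dailyTransfer(rows, bytesPerRow, callsPerDay, rowsPerCall) {
  const optionAll = callsPerDay * rows * bytesPerRow;          // option 1
  const optionSubset = callsPerDay * rowsPerCall * bytesPerRow; // option 2
  return { optionAll, optionSubset };
}

// 10,000 rows * 200 B * 500 calls = 1 GB/day vs 10 MB/day for subsets:
console.log(dailyTransfer(10000, 200, 500, 100));
```

The gap widens as the table grows toward hundreds of thousands of rows, which is why higher call volumes favor option 2.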