
Optimizing File Caching and HTTP/2

Our site is considering making the switch to HTTP/2.

My understanding is that HTTP/2 renders optimization techniques like file concatenation obsolete, since a server using HTTP/2 just sends one request.

Instead, the advice I am seeing is that it's better to keep file sizes smaller so that they are more likely to be cached by a browser.

It probably depends on the size of a website, but how small should a website's files be if it's using HTTP/2 and wants to focus on caching?

In our case, our many individual JS and CSS files fall in the 1 KB to 180 KB range. jQuery and Bootstrap might be more. Cumulatively, a fresh download of a page on our site is usually less than 900 KB.

So I have two questions:

Are these file sizes small enough to be cached by browsers?

If they are small enough to be cached, is it good to concatenate files anyway for users whose browsers don't support HTTP/2?

Would it hurt to have larger file sizes in this case AND use HTTP/2? This way, users running either protocol would benefit, because the site could be optimized for both HTTP/1.1 and HTTP/2.

Let's clarify a few things:

My understanding is that HTTP/2 renders optimization techniques like file concatenation obsolete, since a server using HTTP/2 just sends one request.

HTTP/2 renders optimisation techniques like file concatenation somewhat obsolete, since HTTP/2 allows many files to download in parallel across the same connection. Previously, in HTTP/1.1, the browser could request a file and then had to wait until that file was fully downloaded before it could request the next file. This led to workarounds like file concatenation (to reduce the number of files required) and multiple connections (a hack to allow downloads in parallel).
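To see why multiplexing matters, here is a deliberately crude back-of-the-envelope model (my own illustration, not from the answer above): assume a 100 ms round-trip time, that an HTTP/1.1 browser opens 6 parallel connections and requests files one at a time on each, and ignore bandwidth and connection setup entirely.

```python
import math

RTT = 0.1    # seconds per round trip (assumed)
FILES = 30   # number of resources on the page (assumed)

# HTTP/1.1: 6 parallel connections, one request in flight per
# connection, so files queue up in ceil(30 / 6) = 5 round trips.
http1_time = math.ceil(FILES / 6) * RTT

# HTTP/2: all 30 requests are multiplexed onto one connection and
# issued together, so (bandwidth aside) one round trip covers them.
http2_time = 1 * RTT

print(http1_time, http2_time)
```

Under these toy assumptions HTTP/1.1 spends 0.5 s queueing requests versus 0.1 s for HTTP/2; real pages are messier, but the direction of the effect is the same.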

However, there's a counter-argument that there are still overheads with multiple files - requesting them, caching them, reading them from cache, and so on. These are much reduced in HTTP/2 but not gone completely. Additionally, gzipping one large text file works better than gzipping lots of smaller files separately. Personally, though, I think the downsides outweigh these concerns, and I expect concatenation to die out once HTTP/2 is ubiquitous.
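The gzip point is easy to demonstrate. The sketch below (my own illustration, using made-up CSS-like snippets) compresses fifty similar fragments separately and then as one concatenated blob: the shared patterns deduplicate far better in a single stream, and each separately compressed member also pays gzip's fixed header overhead.

```python
import gzip

# Fifty similar small "files" (hypothetical CSS-like rules).
files = [f".rule-{i} {{ color: #333; margin: 0 auto; padding: 1em; }}".encode()
         for i in range(50)]

# Total size if each file is gzipped on its own...
separate_total = sum(len(gzip.compress(f)) for f in files)
# ...versus gzipping the concatenation as one stream.
combined_total = len(gzip.compress(b"\n".join(files)))

print(separate_total, combined_total)
```

On this input the combined stream is several times smaller; the gap narrows for genuinely unrelated files, but rarely disappears.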

Instead, the advice I am seeing is that it's better to keep file sizes smaller so that they are more likely to be cached by a browser.

It probably depends on the size of a website, but how small should a website's files be if it's using HTTP/2 and wants to focus on caching?

The file size has no bearing on whether it will be cached or not (unless we are talking about truly massive files bigger than the cache itself). The reason splitting files into smaller chunks is better for caching is that if you make any changes, any file which has not been touched can still be served from the cache. If you have all your JavaScript (for example) in one big .js file and you change one line of code, then the whole file needs to be downloaded again - even if it was already in the cache.
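This per-file invalidation is usually implemented by embedding a content hash in each filename, so an unchanged file keeps its name (and its cache entry) across deploys while a changed file gets a fresh name. A minimal sketch, with a hypothetical helper name of my own:

```python
import hashlib

def hashed_name(name: str, content: bytes) -> str:
    """Return e.g. 'app.3f2a9c01.js': identical content always maps
    to the same name, so browsers keep using the cached copy after
    a redeploy; changed content forces a new name and a fresh fetch."""
    digest = hashlib.sha256(content).hexdigest()[:8]
    stem, dot, ext = name.rpartition(".")
    return f"{stem}.{digest}.{ext}" if dot else f"{name}.{digest}"

print(hashed_name("app.js", b"console.log('v1');"))
```

Pair this with far-future `Cache-Control` headers: the filename, not the header, becomes the cache invalidator.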

Similarly, if you have an image sprite map, that's great for reducing separate image downloads in HTTP/1.1, but it requires the whole sprite file to be downloaded again if you ever need to edit it - to add one extra image, for example. Not to mention that the whole thing is downloaded even for pages which use just one of those sprites.

However, saying all that, there is a train of thought that says the benefit of long-term caching is overstated. See this article, and in particular the section on HTTP caching, which shows that most people's browser cache is smaller than you think, so it's unlikely your resources will be cached for very long. That's not to say caching is not important - but it's more useful for browsing around within a session than over the long term. So each visit to your site will likely download all your files again anyway - unless the visitor comes very frequently, has a very big cache, or doesn't surf the web much.

is it good to concatenate files anyway for users whose browsers don't support HTTP/2?

Possibly. However, other than on Android, HTTP/2 browser support is actually very good, so it's likely most of your visitors are already HTTP/2 enabled.

Saying that, there are no extra downsides to concatenating files under HTTP/2 that weren't there already under HTTP/1.1. OK, it could be argued that a number of small files could be downloaded in parallel over HTTP/2, whereas a larger file has to be downloaded as one request, but I don't buy that this slows things down much, if at all. I have no proof of this, but gut feel suggests the data still needs to be sent either way, so you have a bandwidth problem or you don't. Additionally, the overhead of requesting many resources, although much reduced in HTTP/2, is still there. Latency is still the biggest problem for most users and sites - not bandwidth. Unless your resources are truly huge, I doubt you'd notice the difference between downloading one big resource in one go and the same data split into 10 little files downloaded in parallel over HTTP/2 (though you would in HTTP/1.1). Not to mention the gzipping issues discussed above.

So, in my opinion, there's no harm in keeping concatenation for a little while longer. At some point you'll need to make the call on whether the downsides outweigh the benefits, given your user profile.

Would it hurt to have larger file sizes in this case AND use HTTP/2? This way, users running either protocol would benefit, because the site could be optimized for both HTTP/1.1 and HTTP/2.

It absolutely wouldn't hurt at all. As mentioned above, there are (basically) no extra downsides to concatenating files under HTTP/2 that weren't there already under HTTP/1.1. It's just not that necessary under HTTP/2 anymore, and it has downsides (it potentially reduces caching effectiveness, requires a build step, makes debugging more difficult since deployed code isn't the same as the source code, etc.).
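For what it's worth, the build step itself can be tiny. A hypothetical concatenation step for HTTP/1.1 clients might look like this (the file names are made up for illustration):

```python
from pathlib import Path

def concatenate(paths: list[str], out: str) -> str:
    """Join source files into one bundle, separated by a defensive
    ';' so a file missing its trailing semicolon cannot swallow the
    start of the next one when the bundle is parsed as one script."""
    bundle = "\n;\n".join(Path(p).read_text() for p in paths)
    Path(out).write_text(bundle)
    return out

# e.g. concatenate(["jquery.js", "bootstrap.js", "app.js"], "bundle.js")
```

The real cost is not writing this script but everything around it: keeping source maps, cache-busting the bundle name, and remembering that the deployed file no longer matches any single source file.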

Use HTTP/2 and you'll still see big benefits for any site - except the simplest sites, which will likely see no improvement but also no negatives. And, as older browsers can stick with HTTP/1.1, there are no downsides for them. When, or if, you decide to stop implementing HTTP/1.1 performance tweaks like concatenation is a separate decision.

In fact, the only reason not to use HTTP/2 is that implementations are still fairly bleeding edge, so you might not be comfortable running your production website on it just yet.

**** Edit August 2016 ****

This post from an image-heavy, bandwidth-bound site has recently caused some interest in the HTTP/2 community as one of the first documented examples of HTTP/2 actually being slower than HTTP/1.1. This highlights the fact that HTTP/2 technology and our understanding of it are still new, and that some sites will require tweaking. There is no such thing as a free lunch, it seems! Well worth a read, though bear in mind that this is an extreme example, and most sites are far more impacted, performance-wise, by latency issues and connection limitations under HTTP/1.1 than by bandwidth issues.
