Reorganizing website architecture for HTTP/2 and beyond
New performance bottlenecks become apparent as web browsers and servers start using HTTP/2. Kazuho Oku explains the issues, their mitigations, and how the developers of the HTTP protocol are trying to make the Web even faster, covering the reality of HTTP/2 prioritization, cache-aware server push, the impact of load balancers on HTTP/2, mobile optimization, and HTTP caching.
| Talk Title | Reorganizing website architecture for HTTP/2 and beyond |
|---|---|
| Conf Tag | Build resilient systems at scale |
| Location | Amsterdam, The Netherlands |
| Date | November 7-9, 2016 |
The Web is becoming faster as more and more web browsers and servers adopt HTTP/2. But support for the new protocol is still rough around the edges, and new performance bottlenecks are becoming apparent as people start using HTTP/2. To give one example, server push in HTTP/2 is a promising feature but is hard to use in practice without the server knowing what the client already has in its cache. To give another, due to the latency-sensitive nature of the protocol, a misconfigured load balancer becomes a major performance bottleneck.

Kazuho Oku, the author of the H2O HTTP/2 server (often referred to as the most sophisticated implementation of the protocol) as well as the author of the cache digests for HTTP/2 Internet-Draft, explains the various issues discovered by server-side and client-side developers of the protocol and the solutions invented to address them. Kazuho also covers upcoming standards such as TLS 1.3 and QUIC and their impact.
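To make the cache-digest idea concrete, here is a minimal, hypothetical Python sketch of the concept behind the Internet-Draft: the client hashes the URLs it has cached into a compact set of positions and sends that to the server, which consults it before pushing an asset. This deliberately skips the draft's actual Golomb-Rice coded wire format and its exact hashing scheme; all names and parameters below are illustrative assumptions, not the real protocol.

```python
import hashlib

def digest_positions(urls, log2_p=7):
    """Client side: hash each cached URL into a small integer position.

    The real draft Golomb-Rice-codes the sorted positions into a compact
    bit string; here we simply keep them as a Python set. A larger
    log2_p lowers the false-positive probability (roughly 2**-log2_p).
    """
    space = max(1, len(urls)) * (1 << log2_p)
    positions = set()
    for url in urls:
        h = int.from_bytes(hashlib.sha256(url.encode()).digest()[:8], "big")
        positions.add(h % space)
    return positions, space

def probably_cached(positions, space, url):
    """Server side: check the digest before deciding to push an asset.

    Like a Bloom filter, the answer is probabilistic: "maybe cached"
    or "definitely not cached" (no false negatives).
    """
    h = int.from_bytes(hashlib.sha256(url.encode()).digest()[:8], "big")
    return (h % space) in positions

# The client computes a digest over its cached URLs and sends it once
# per connection; the server then skips redundant pushes.
cached = ["https://example.com/style.css", "https://example.com/app.js"]
positions, space = digest_positions(cached)

for asset in cached:
    # Assets in the digest are not worth pushing again.
    assert probably_cached(positions, space, asset)
```

The key property mirrored here is the one the talk highlights: without some digest like this, the server has no way to know what the client holds, so naive push wastes bandwidth re-sending already-cached assets.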