The document discusses the performance of HTTP/2 compared to HTTP/1.1 across different network conditions. It summarizes results from testing 8 real websites under 16 bandwidth and latency combinations with varying packet loss rates. Overall, HTTP/2 performs better for document complete time and speed index, especially on slower connections, though results vary depending on the specific site and metrics measured.
Reorganizing Website Architecture for HTTP/2 and Beyond (Kazuho Oku)
This document discusses reorganizing website architecture for HTTP/2 and beyond. It summarizes issues with HTTP/2, including prioritization errors in which some browsers fail to specify resource priorities properly, and TCP head-of-line blocking, in which data pending in TCP buffers can delay higher-priority resources. The document proposes solutions such as prioritizing resources on the server side and writing only what can be sent immediately to avoid buffer blocking. It also examines the mixed success of HTTP/2 push and argues that the server should not push resources the client has already cached.
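To make server-side prioritization concrete, here is a minimal Python sketch, not H2O's actual scheduler: the server ranks buffered response data by its own content-type table and drains render-blocking assets before images, regardless of what the browser signalled. The priority values and the ResponseScheduler class are illustrative assumptions.

```python
import heapq
import itertools

# Hypothetical server-side priority table (lower rank = drained first).
SERVER_PRIORITY = {
    "text/css": 0,
    "application/javascript": 0,
    "text/html": 1,
    "image/jpeg": 2,
    "image/png": 2,
}

class ResponseScheduler:
    """Pops the buffered response chunk the server considers most urgent."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker keeps FIFO order within a rank

    def enqueue(self, content_type, chunk):
        rank = SERVER_PRIORITY.get(content_type, 3)  # unknown types go last
        heapq.heappush(self._heap, (rank, next(self._seq), chunk))

    def next_chunk(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

sched = ResponseScheduler()
sched.enqueue("image/jpeg", b"...hero image bytes...")
sched.enqueue("text/css", b"...stylesheet bytes...")
assert sched.next_chunk() == b"...stylesheet bytes..."  # CSS wins despite arriving later
```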
This document discusses programming TCP for responsiveness when sending HTTP/2 responses. It describes how to reduce head-of-line blocking by writing only as much data as the TCP congestion window can send immediately, rather than filling the kernel send buffer. The key points are reading TCP state via getsockopt to determine how much data can be sent at once, and applying the optimization only to high-latency connections or small congestion windows so that it does not add delay elsewhere. Benchmarks show this approach can reduce response times from multiple round-trip times to a single RTT.
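A minimal sketch of the getsockopt idea, assuming the Linux struct tcp_info layout; the 8B24I format string, the field indices, and the fallback option number 11 are assumptions about that layout, not code taken from H2O:

```python
import socket
import struct

TCP_INFO = getattr(socket, "TCP_INFO", 11)  # 11 is the Linux option number

# Linux struct tcp_info starts with eight one-byte fields followed by a run of
# 32-bit counters (rto, ato, snd_mss, rcv_mss, unacked, ..., snd_cwnd, ...).
_TCP_INFO_FMT = "8B24I"

def writable_without_blocking(sock):
    """Rough estimate of how many bytes fit in the congestion window right now."""
    raw = sock.getsockopt(socket.IPPROTO_TCP, TCP_INFO, struct.calcsize(_TCP_INFO_FMT))
    counters = struct.unpack(_TCP_INFO_FMT, raw)[8:]   # skip the one-byte fields
    snd_mss, unacked, snd_cwnd = counters[2], counters[4], counters[18]
    return max(snd_cwnd - unacked, 0) * snd_mss

# Usage sketch: cap each write to what TCP can push out immediately, so a
# higher-priority response queued later is not stuck behind buffered bytes.
# budget = writable_without_blocking(conn)
# conn.sendall(body[:budget])
```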
The document discusses optimizations to TCP and HTTP/2 that improve web responsiveness. It describes how TCP slow start works and the delays that TCP and TLS handshakes add to typical HTTP/2 usage. The author proposes adjusting the TCP send buffer's polling threshold so the server can switch between responses more quickly, based on the state of the TCP congestion window. Benchmark results show this can cut response times by eliminating an extra round-trip delay.
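On Linux, the send-buffer polling threshold described here can be set with the TCP_NOTSENT_LOWAT socket option; the sketch below assumes that is the knob in question, and the 16 KB threshold and helper names are illustrative:

```python
import select
import socket

TCP_NOTSENT_LOWAT = getattr(socket, "TCP_NOTSENT_LOWAT", 25)  # 25 is the Linux option number

def tune_for_fast_switching(sock, threshold=16 * 1024):
    """Report the socket as writable only while fewer than `threshold` unsent
    bytes sit in the send buffer, so a poll() wake-up arrives roughly when the
    congestion window has room and the server can pick whichever response is
    currently the highest priority instead of blindly queueing more data."""
    sock.setsockopt(socket.IPPROTO_TCP, TCP_NOTSENT_LOWAT, threshold)

# Usage sketch with a plain poll loop (next_highest_priority_chunk is hypothetical):
# tune_for_fast_switching(conn)
# poller = select.poll()
# poller.register(conn, select.POLLOUT)
# for fd, events in poller.poll():
#     conn.send(next_highest_priority_chunk())
```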
Presentation material for TokyoRubyKaigi11.
Describes techniques used by H2O, including TCP optimizations for responsiveness, server push, and cache digests.
Cache-aware server-push in H2O version 1.5 (Kazuho Oku)
This document discusses cache-aware server push in H2O version 1.5. It describes calculating a fingerprint of the client's cached assets using a Golomb-compressed set, so the server can tell which assets actually need to be pushed. It also discusses carrying the fingerprint in a cookie or maintaining it with a service worker; a hybrid approach stores responses in the service worker cache and keeps the cookie fingerprint up to date. H2O 1.5 implements cookie-based fingerprints to cancel pushes of assets the client has already cached, potentially improving page load speeds.
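The fingerprint can be sketched as a Golomb-Rice coded set of hashed URLs. The sketch below only illustrates the idea and does not follow H2O's or the cache-digest draft's exact parameters or wire format:

```python
import hashlib

def cache_fingerprint(urls, probability_bits=7):
    """Hash each cached URL into a range of size N * 2**probability_bits, sort the
    keys, and Golomb-Rice encode the deltas (unary quotient + fixed-width remainder)."""
    n, p = len(urls), 1 << probability_bits
    keys = sorted(
        int.from_bytes(hashlib.sha256(u.encode()).digest()[:8], "big") % (n * p)
        for u in urls
    )
    bits, prev = [], 0
    for key in keys:
        delta, prev = key - prev, key
        bits.extend([1] * (delta // p) + [0])          # quotient, unary-coded
        bits.extend((delta >> b) & 1 for b in reversed(range(probability_bits)))  # remainder
    return bits

digest = cache_fingerprint(["/style.css", "/app.js", "/logo.png"])
```

In the scheme the document describes, the server would consult such a digest (carried in a cookie) and cancel pushes of assets whose hashes are already present in the set.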
JSON SQL Injection and the Lessons Learned (Kazuho Oku)
This document discusses JSON SQL injection and lessons learned from vulnerabilities in SQL query builders. It describes how user-supplied JSON input containing operators instead of scalar values could manipulate queries by injecting conditions like id!='-1' instead of a specific id value. This allows accessing unintended data. The document examines how SQL::QueryMaker and a strict mode in SQL::Maker address this by restricting query parameters to special operator objects or raising errors on non-scalar values. While helpful, strict mode may break existing code, requiring changes to parameter handling. The vulnerability also applies to other languages' frameworks that similarly convert arrays to SQL IN clauses.
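A hypothetical Python query builder (not the Perl modules named above) makes the failure mode concrete: when a dict is treated as an operator, an attacker-supplied `{"id": {"!=": "-1"}}` becomes a condition that matches nearly every row.

```python
import json

def naive_where(params):
    """Hypothetical query builder in the style the document criticizes: a scalar
    becomes col = '...', a dict is treated as {operator: value}, a list becomes IN."""
    clauses = []
    for col, val in params.items():
        if isinstance(val, dict):
            op, operand = next(iter(val.items()))
            clauses.append(f"{col} {op} '{operand}'")
        elif isinstance(val, list):
            clauses.append(f"{col} IN ({', '.join(repr(v) for v in val)})")
        else:
            clauses.append(f"{col} = '{val}'")
    return " AND ".join(clauses)

# Expected request:  {"id": "123"}  ->  id = '123'
# Malicious request: an operator instead of a scalar slips through unchanged.
attack = json.loads('{"id": {"!=": "-1"}}')
print(naive_where(attack))  # id != '-1'
```

Restricting parameters to dedicated operator objects (as SQL::QueryMaker does) or rejecting non-scalar values outright (as SQL::Maker's strict mode does) closes this hole.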