From: Stefan Eissing
Subject: state of h2 (long)
Date: Fri, 26 Feb 2016 18:06:24 +0100
To: dev@httpd.apache.org

Things are winding down here a bit before the weekend (at least I try), and I thought
I'd summarize the state of HTTP/2 in our little project, because... well, some might
be interested, and certainly no one has the time to follow all my crazy submits.

* trunk <-> 2.4.x

  The version in 2.4.x has gathered a bit of dust, as we made several tweaks in trunk
  regarding async connection handling and scoreboard updates. These have all been
  backported, except one. Once that is through, I'll make a backport of mod_http2,
  so that 2.4.x gets all the nice new things.
* nice new things in trunk

  We have the following additions:

  - http/2 connections get queued properly when they become idle on the event MPM.
    That should be nice for people with many connections or long keepalives
    configured.

  - Timeouts and keepalive timeouts are respected as for http/1 connections, with no
    extra configuration.

  - Stability: with the interim releases on github and the help of nice people,
    several improvements have been made here, and the 1.2.5 github release has no
    reported open blockers, hanging connections or segfaults. All those changes are
    in trunk.

  - Server push: the module now remembers, in each open connection, which resources
    have already been pushed, using hash digests. This also implements the extension
    sketched out in https://datatracker.ietf.org/doc/draft-kazuho-h2-cache-digest/,
    whereby clients can send a highly compressed digest of the resources they
    already have. This is very early and experimental, and we'll see how/if browsers
    adopt it and how it changes over time.

  - To offload worker threads, the module allows a number of file handles to be kept
    open. So, ideally, when serving static content, workers just look up the file
    and return, and the master connection streams it out. This number existed before
    as a per-master-connection setting. Now it is multiplied by the number of
    workers and turned into a process-wide pool from which h2 connections can borrow
    amounts. Still, I am not totally satisfied with this. It should not be
    configurable; httpd itself should check the ulimits of the process and configure
    itself, I think.

  - The scoreboard shows more information for h2 connections, such as the connection
    state and some stream statistics. Maybe the h2 workers should show up in a
    separate section one day...

      127.0.0.1 http/1.1 test2.example.org:12345 wait, streams: 1/100/100/0/0 (open/recv/resp/push/rst)

  - request engines!
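  To make the timeout and file-handle points above concrete, here is a minimal
  configuration sketch. Directive names are as shipped with mod_http2 at this time;
  in particular, H2SessionExtraFiles is the per-master-connection file handle
  setting mentioned above and may well change or go away as the process-wide pool
  approach lands:

      LoadModule http2_module modules/mod_http2.so

      # enable h2 (over TLS) and h2c (cleartext) alongside http/1.1
      Protocols h2 h2c http/1.1

      # h2 sessions now honor the same timeouts as http/1 connections;
      # no h2-specific timeout directives are needed
      Timeout 60
      KeepAliveTimeout 5

      # file handles an h2 session may keep open to offload worker threads
      H2SessionExtraFiles 5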
Which leads us to:

* mod_proxy_http2

  It is configured just like the other proxy modules, by using 'h2' or 'h2c' as the
  url prefix in the configuration directives:

      BalancerMember "h2://test2.example.org:12346"
      ProxyPass "/h2proxy" "balancer://h2-local"
      ProxyPassReverse "/h2proxy" "balancer://h2-local"

  Initially, it used one h2 connection per request. The connection, and the http2
  session associated with it, was reused via the nice proxy infrastructure. This is
  how things still are when the original connection is http/1.1.

  When the original connection is http/2 itself, however, the first such request
  will register a "request engine" that will accept more requests while the initial
  one is still being served, and use the same backend connection for them. When the
  last assigned request is done, it unregisters and dissolves into fine mist. The
  connection and h2 session stay around as before, so a new request can reuse the
  connection in a new engine.

  This works quite (but not 100%) reliably at the moment. There are still some races
  when requests are sent out while the backend is already shutting down, and the
  retry does not catch all cases.

  Important here is that requests for engines pass through all the usual hooks and
  filters and yaddayadda of our infrastructure, just like with http/1.1. This works
  as follows:

  - the incoming request is handed to a worker thread by mod_http2, as is done for
    all requests
  - httpd/proxy identifies the handler of mod_proxy_http2 as the responsible one
  - mod_proxy_http2 finds out what backend it shall talk to and asks mod_http2 (via
    the usual optional functions, if present) whether there is already an engine for
    this backend, declaring that it is willing to host one if there is not.
  - mod_http2, if it has an engine, *freezes* the task for this request (which holds
    the replacements for the core input/output filters on this slave connection) and
    answers that it will take care of the request once the handler is done. The
    handler then just returns as if it had processed the request.
    Upon return of the worker, mod_http2 sees the frozen task and makes it ready for
    processing in an engine. The next time the engine polls for more requests, it is
    forwarded.

  - What is this freezing? Basically, an additional output filter that saves all
    incoming buckets for later. The EOR bucket is set aside here, for example.

  - Is it fast? No, not yet. Flow control handling is not optimal, and I am sure
    there are lots of other places that can be improved.

  - Then why? This way of handling proxy requests saves connections to the backend
    and threads in the server process. That should be motivation enough. If httpd is
    a reverse proxy, then doing the same work with 1-2 orders of magnitude fewer
    file handles and threads should be interesting.

  - And: it could be done for mod_proxy_http, too! I see no reason why a single
    thread cannot use pollsets to juggle a couple of http/1.1 backend connections on
    top of an http/2 master connection.

Anyways, let me hear what you think. Anyone want to help?

-Stefan