Received: by taz.hyperreal.com (8.8.4/V2.0) id PAA19918; Sun, 26 Jan 1997 15:11:33 -0800 (PST)
Received: from futurefx.com by taz.hyperreal.com (8.8.4/V2.0) with ESMTP id PAA19913; Sun, 26 Jan 1997 15:11:30 -0800 (PST)
Received: (from root@localhost) by futurefx.com (8.8.3/8.8.3) id RAA01606 for new-httpd@hyperreal.com; Sun, 26 Jan 1997 17:11:27 -0600
From: "Jason S. Clary"
Message-Id: <199701262311.RAA01606@futurefx.com>
Subject: HTTP/1.0 to HTTP/1.1 irony
To: new-httpd@hyperreal.com
Date: Sun, 26 Jan 1997 17:11:22 -0600 (CST)
In-Reply-To: <199701262228.RAA19193@shado.jaguNET.com> from "Jim Jagielski" at Jan 26, 97 05:28:18 pm
X-Mailer: ELM [version 2.4 PL25 PGP3 *ALPHA*]
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Sender: new-httpd-owner@apache.org
Precedence: bulk
Reply-To: new-httpd@hyperreal.com

Has anyone else noticed the irony of a supposedly minor protocol change that requires a draft spec three times longer than its predecessor? I just printed both HTTP/1.0 and HTTP/1.1 along with the other related RFCs, and HTTP/1.1 is HUGE... a 1 1/2 inch thick BOOK... HTTP/1.0 was less than half an inch. Doesn't seem "minor" to me.. ;P

BTW, I've been reading this thing pretty closely.. what is this need for threading that everyone is talking about? It seems like it could all be done the same way it's done now, with the possible exception of requiring output processing even on CGIs (chunking and encoding and whatnot), which would best be accomplished through threading but could be done just as easily with fork() and a select loop on the output of the forked process. I haven't read the whole thing yet.. maybe I've not gotten to that bit yet... but I've skimmed most of it and am now settling in to an in-depth read.

BTW, does anyone know if the SHTTP spec complies with HTTP/1.1? It seems like you could use the content-encoding headers to specify, say, a PEM format or something... Of course, it could be a problem to use CGI-style URI extensions, since many CGIs pass sensitive data via the URI, which wouldn't be encoded. Course, a CGI wanting to be secure could ask the client to encode the parameters as a POST or other data and include it as an entity in the request. It would require an entity on transactions anyway, to include a public key for the session; making it a multipart entity shouldn't hurt. But it won't be backward compatible.. :( Course, the security won't be backward compatible either, so that shouldn't make a big difference.

Just looking for alternatives to SSL. I'd prefer low-level encryption on the IP stack, with optional authentication by other means, like an HTTP auth sequence of sorts. Currently HTTPS can't support renegotiating the key exchange in the middle of a transaction, so there's no simple way to implement multiple keys for various parts of the document space on a server. Which, until smartcards and whatnot are common, is unfortunately what you HAVE to do, since you can have all sorts of keys for various things.

Oh well..

Jason S. Clary
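
[A minimal sketch of the fork() + select() idea mentioned above, under stated assumptions: the function and parameter names (relay_cgi_chunked, client_fd, cgi_path) are hypothetical and not from Apache or the HTTP/1.1 draft, and a real server would parse the CGI's own header lines before starting the chunked body, which this skips for brevity. It just relays each read from the child's pipe to the client as one HTTP/1.1 chunk.]

/*
 * Sketch: run a CGI child via fork(), watch its stdout with select(),
 * and relay each read to the client socket as an HTTP/1.1 chunk.
 * Chunked coding is "<hex length>\r\n<data>\r\n" per piece, ended by
 * the last-chunk "0\r\n\r\n".
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/select.h>
#include <sys/wait.h>

static void relay_cgi_chunked(int client_fd, const char *cgi_path)
{
    int pipefd[2];
    pid_t pid;

    if (pipe(pipefd) < 0) { perror("pipe"); return; }

    pid = fork();
    if (pid == 0) {                       /* child: become the CGI */
        close(pipefd[0]);
        dup2(pipefd[1], STDOUT_FILENO);   /* CGI writes into the pipe */
        close(pipefd[1]);
        execl(cgi_path, cgi_path, (char *)NULL);
        _exit(127);                       /* exec failed */
    }
    close(pipefd[1]);                     /* parent keeps the read end */

    for (;;) {
        fd_set rfds;
        char buf[4096], hdr[32];
        ssize_t n;

        FD_ZERO(&rfds);
        FD_SET(pipefd[0], &rfds);
        if (select(pipefd[0] + 1, &rfds, NULL, NULL, NULL) < 0)
            break;                        /* interrupted or failed */

        n = read(pipefd[0], buf, sizeof(buf));
        if (n <= 0)
            break;                        /* EOF: the CGI is done */

        /* one chunk: hex length, CRLF, the bytes, CRLF */
        int hlen = snprintf(hdr, sizeof(hdr), "%zx\r\n", (size_t)n);
        write(client_fd, hdr, hlen);
        write(client_fd, buf, n);
        write(client_fd, "\r\n", 2);
    }

    write(client_fd, "0\r\n\r\n", 5);      /* last-chunk, no trailers */
    close(pipefd[0]);
    waitpid(pid, NULL, 0);
}

[With a single pipe a blocking read would do; select() earns its keep when the same loop also has to watch the client socket or several children at once, which is the situation the message is describing.]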