httpd-dev mailing list archives

From Luke Kenneth Casson Leighton <>
Subject Re: More migration of code from httpd to apr-util
Date Fri, 08 Jun 2001 12:45:09 GMT
On Thu, Jun 07, 2001 at 12:10:15PM -0700, Justin Erenkrantz wrote:
> On Thu, Jun 07, 2001 at 01:33:15PM +0200, Luke Kenneth Casson Leighton wrote:
> > ack.  if i make a [rather short] list of functions i am literally
> > duplicating line-for-line, would that suffice as motivation to,
> > say, add them to aprutil?
> Your list of potential functions looks mostly good to me (haven't sat
> down and thought too hard about it though).
:)  btw, i deleted apr_to_strlower() because there is an apr_tolower()
so that's off the list.

what i perhaps should recommend is that a series of independent
libraries be created.

e.g. libaprhttputil (i know it's a bit long).

just for xvl, i need the following bits of code / functionality,
and i _know_ they're already partially in httpd, because i ripped
entire sections out for use in xvl once before:

- string/path/cgi manipulation.

this i need to be able to parse HTTP requests, to parse
directory components.  i know httpd does, too.

- date/time manipulation.

at present, all i need to be able to do is get a
time_t and display it in a nice localtime format.
ripping code from httpd to do this is... uh.. silly?

- an HTTP client.

i need to be able to make http GETs / POSTs, and so
originally i ripped entire bits of mod_proxy out into
a separate file.

i can believe that these two programs, xvl and mod_proxy,
are not the _only_ places where HTTP client GETs/POSTs
are needed.  for example, what about martin pool's
work on mod_rproxy?  [that's a bit like mod_proxy, except
it uses the rsync algorithm to sync up the file being
requested with the last version obtained, etc.  very cool
work :)]


- an HTTP server-parser

yes, that's right: i need to be able to parse HTTP requests,
and the prospect of ripping out and having to maintain entire
sections of HTTP parsing code is not one i will take on.


- a programmatic means to plug-in PROGRAMS - not cgi-bin
executables - DIRECTLY into httpd.

now, on unix, the obvious way to do this is unix domain
sockets, ala mysql, nsswitch, winbindd, the tng domain architecture,
the X Window System, and probably a whole boat-load of others.

on NT, it spells 'Named Pipes'.

dunno about BeOS and OS/2; OS/2 probably supports both,
i suspect (?)

basically, the mod_xxx approach is all very well and good, until
you start wanting to plug in things like... oh, i dunno,
8 MILLION lines of existing code [yes, there is such
a project].

... and you want to turn all these services, some of
which are #ifdef KERNEL based, into mod_xxxes?  ha!
don't make me laugh :)

so, having an inter-process communication mechanism there
becomes essential, allowing separate programs to plug
their data directly in as HTTP.

i.e. httpd can become, effectively, a 'transport' layer.

[for those of you familiar with dce/rpc, look up
ncacn_http :) :) ]

> I think the question is whether or not there is a predefined limit to
> what goes in apr-util.  We've hit this recently with the crypto/ stuff.
so, create some new libraries.  maybe apr-util isn't the
best place for some of this stuff, but an apr-http-util is.
or whatever.  i don't mind, or really care [well, i do,
but not enough to lose sleep over it if it doesn't happen :)]

[and then think carefully about where and how to use them:
you can't distribute anything that uses openssl / any crypto
library, everywhere]

> I think it probably means that some of the committers need to work on
> things outside of httpd itself.  When I wrote the mod_mbox stuff and 
> had to write a standalone indexer, I needed the date and uri functions 
> from httpd.  I initially copied them over, but then, since I now have 
> commit access and no one screamed too loudly, I moved them into 
> apr-util (with some help on the httpd side) and deleted my private 
> copy.  But, I think this will have to be done on a case-by-case basis 
> with lots of thought for each one.  

> There also needs to be a guiding vision for apr-util as well.  I'm not 
> sure what that is exactly.  =)  I think we might be able to define that
> though...
> I'd be curious to look at subversion to see if it is duplicating any 
> code, but the domains are so different, I doubt there is a lot of
> overlap.  

well, surely they have string / file / directory manipulation.

surely they do timestamps!

> The more users of apr-util, the more we can identify things
> that might be worth migrating from httpd into apr-util.  


> But, you
> are always free to copy the code if we don't move it into apr-util.

i am trying to keep xvl's codebase down to an absolute, absolute
minimum.  the more LOCs i can take _out_, the happier i will be.
duplicating or copying effort is a no-no in my book.

btw, who was it who recommended mod_cgid?  [was it you,

i took a look at it, and... well, okay, what i _meant_ was,
as described above, i need a platform-independent
inter-process-communication mechanism, where process includes separate

and i think that doing Named Pipes, ala NT, is the best way
to go about this, emulating the same functionality under
Unix, not the other way round.  and i'm saying that from
the viewpoint of having already developed Named Pipes in
Samba TNG to emulate the NP functionality for unix.

sander and i discussed this once before [3 months ago] in
relation to transport APIs, on APR.  the topic was inter-related
with buckets and filters etc, and was... well, a bit too
abstract, too ambitious, and not well understood or explained.
[best, therefore, i think, to go in small stages, using
xvl to work towards that end.]

iow, i need to decide whether to remain with the hook into
httpd via a ux-dom-sock-XML-wrapped implementation of mod_proxy,
to help you guys write an apr_named_pipe API, or to cut out
the middle man and shove in entire sections of httpd so
xvl can be a self-serving http server.

my preference is to do the unix-implementation of the
apr_named_pipe API, if there's anyone else willing to
help parallel-develop the unix implementation, using
xvl as a test to prove the case.

takers, anyone?

