httpd-dev mailing list archives

From r..@covalent.net
Subject Re: what are the issues? (was: Re: Patch review: ...)
Date Tue, 27 Jun 2000 15:19:16 GMT

> > I am beginning to think though
> > that we aren't making much more progress.  I need to step back from this
> > for a while.  I am -1 for putting ANY patch in at all right now.
> 
> I can wait, but I don't have a clear understanding on your issues with my
> patch. I also don't want to see us try to solve ALL the filtering problems
> in one fell swoop. We should take a step at a time.

The issues are intertwined.  You can't just lop off part of the problem
and try to solve it that way.  We need a design that incorporates
everything.  Part of what scares me about your design is that you keep
leaving the small stuff out.  This makes the patch incomplete.

> > also beginning to think that we may need to re-implement some pool
> > functions in order to do this at all properly.
> 
> *blink*  Eh? I don't see this, but I'm sure you will demonstrate :-)
> 
> I'll also say that I believe my patches empirically show that pool changes
> aren't needed to move us that first step.

It's not the first step that's important here.  If we take the first
step and tie ourselves to pools, then the second step is not possible.

One of the things you've consistently said about my patch is that
everything is allocated on the heap, which means things stick around
until the end of the request.  That's true.  The fact is, though, that
the same is true of your patch once we create complicated filters.

Yes, this behavior is not seen in mod_gargle or any of the examples we
have coded so far.  It is there in some of the examples that both of us
have ignored.  For example, take real-time color quantization
(Kevin Kiley's example) whose output is then gzip-compressed.

In the link-based scheme, because lower filters have no way of knowing
where the data they are given comes from, they have to memcpy
it.  Without changes to the pool structure (or ignoring pools entirely),
that means allocating a buffer and sticking data into it until all of the
data has been soaked up.  Then the data can be passed to the gzip filter,
which also has to memcpy it off to the side (it could be an mmap'ed
file).

Now you have two full copies of the data sitting on the heap, neither of
which will be freed until the request is done.  And there is no way to
free them earlier, because the pool used is the same pool the request_rec
comes from.
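
To make that concrete, here is a minimal sketch of what a lower filter
is forced into under the link-based scheme.  The names (quantize_ctx,
quantize_filter) are made up and the pool calls are 1.3-style; this is
an illustration of the copying, not code from either patch:

    #include <string.h>
    #include "httpd.h"          /* request_rec, ap_palloc() */

    typedef struct {
        char   *buf;            /* accumulated copy of all data seen so far */
        size_t  len;
        size_t  alloc;
    } quantize_ctx;

    static void quantize_filter(quantize_ctx *ctx, request_rec *r,
                                const char *data, size_t len)
    {
        if (ctx->len + len > ctx->alloc) {
            /* Pool allocations can't be resized or freed early, so growing
             * means grabbing a bigger block and copying the old one too;
             * the old block stays allocated until the request ends. */
            size_t newalloc = ctx->alloc ? ctx->alloc * 2 : 8192;
            char *newbuf;
            while (newalloc < ctx->len + len) {
                newalloc *= 2;
            }
            newbuf = ap_palloc(r->pool, newalloc);
            if (ctx->len) {
                memcpy(newbuf, ctx->buf, ctx->len);
            }
            ctx->buf = newbuf;
            ctx->alloc = newalloc;
        }
        memcpy(ctx->buf + ctx->len, data, len);  /* the memcpy described above */
        ctx->len += len;

        /* Once end-of-data is seen, ctx->buf is quantized and the result
         * (yet another full-sized copy) is handed to the gzip filter. */
    }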

Now, make the image that we are doing this to REALLY big, like 30
MB.  You have the original 30 MB that was allocated (most likely
mmap'ed), plus the 30 MB that was allocated for the quantizing layer, plus
the 30 MB that was allocated for the gzip layer: roughly 90 MB resident
for a single request.

What we need is exactly what Roy and Dean have been proposing, which is
buckets.  Not the buckets you have implemented or the buckets I have
implemented, because they don't go far enough.

> > I am -1 for using Greg's patch for all the reasons I dislike my patch.
> 
> I have three review points from you so far, each of which is solvable. I would like
> to hear from you whether these don't seem solved, or whether you see other
> problems:

> 2) Should not use char* callbacks.
> 
>    SOLUTION: I disagree this is a problem. It is a stylistic issue, but has
>              no technical problems that are not present with buckets or
>              ioblock -based callbacks.

It does have drawbacks and problems that aren't there in the other
cases.  The other cases allow for single-copy writes; this doesn't,
because the other designs allow people to associate everything about the
data with the bucket that carries it.

The problem is that we are hiding what we are doing from the programmer,
so they can't make good decisions.  By telling the programmer "operate on
this string, and I'll give you more later," you are crippling what they
can do.  (Example in the next section, using mod_autoindex.)

>    RATIONALE: If a filter callback is passed a bucket/ioblock and it needs a
> 	      char* to examine the actual content, then it must call a
> 	      utility function right away to get that char*:
> 	      
> 	      my_callback(bucket):
> 	          str = give_me_string(bucket)
> 		  process_string(str)
> 
>               The simple callback form alters this to:
> 	      
> 	      str = give_me_string(bucket)
> 	      my_callback(str):
> 	          process_string(str)

But the design encourages people not to do this well.  Let me give you an
example, mod_autoindex.c:

Currently, mod_autoindex calls ap_rputs with a bunch of very small
strings.  If we pass down strings, then we are essentially telling module
writers that this is a valid way to write data.  This will increase our
memory footprint.  For example:  mod_autoindex -> mod_digest.

Mod_autoindex will write little strings a bunch of times, and mod_digest
will cache them all (on the heap, from the pool).  When mod_digest has
seen all of the data, it will calculate the md5sum, and then pass it on to
the next layer.
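
Roughly, a string-based mod_digest ends up looking like this (the names
are hypothetical and the pool calls are 1.3-style; the point is the
per-string copy into memory that can never be freed early):

    #include <string.h>
    #include "httpd.h"                  /* request_rec, ap_palloc() */

    typedef struct str_node {
        char            *data;
        size_t           len;
        struct str_node *next;
    } str_node;

    typedef struct {
        str_node *head, *tail;          /* every little chunk seen so far */
    } digest_ctx;

    static void digest_filter(digest_ctx *ctx, request_rec *r,
                              const char *str, size_t len)
    {
        str_node *n = ap_palloc(r->pool, sizeof(*n));
        n->data = ap_palloc(r->pool, len);   /* one tiny pool allocation */
        memcpy(n->data, str, len);           /* ...per little string     */
        n->len  = len;
        n->next = NULL;
        if (ctx->tail) ctx->tail->next = n; else ctx->head = n;
        ctx->tail = n;
        /* The md5sum can only be computed, and the data passed on, once
         * the last string has arrived, so all of these copies sit on the
         * heap for the life of the request. */
    }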

A much better way to write this is to have mod_autoindex allocate a big
block of data (on the heap, not using the pool, using malloc), which
stores all of the data that mod_autoindex can grab.  This requires that
mod_autoindex is re-written to do some of its own buffering.  Then, that
data is passed as one big chunk to mod_digest, which doesn't need to copy
it at all.  Mod_digest knows the data is modifiable, because the bucket
tells it so.  Mod_digest also knows that if it gets too much data, it can
write the data out to disk and just start freeing memory, because the
memory is associated with the bucket, not the pool.  Later, when
mod_digest has received all the data, it knows if anything has been
written to disk, and can calculate the md5sum.
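
A minimal sketch of the shape this could take.  The struct and its
fields are invented for illustration, not the bucket API from any
posted patch; the point is only that storage owned by the bucket
(malloc'd, with its own free routine) can be handed back as soon as a
later filter has spilled it to disk, which pool memory never allows:

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct my_bucket {
        char  *data;                       /* malloc'd, not pool-allocated  */
        size_t len;
        int    modifiable;                 /* downstream may alter in place */
        void (*free_data)(struct my_bucket *b);
    } my_bucket;

    /* mod_digest-style consumer: when too much data is resident, push it
     * to a spool file and genuinely free the memory, because the bucket,
     * not the request pool, owns it. */
    static void digest_consume(my_bucket *b, FILE *spool)
    {
        /* ...fold b->data into the running digest state here... */

        fwrite(b->data, 1, b->len, spool); /* keep the bytes on disk        */
        b->free_data(b);                   /* and release them from memory, */
        b->data = NULL;                    /* which a pool allocation never */
        b->len  = 0;                       /* permits before request end    */
    }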

Also, having both buckets and char *'s opens us up to bugs.  One of the
incredibly nice things about buckets is that they allow programmers to
group the data.  This means having lists of buckets.  In the code you
have posted, if I create a list of buckets and then try to pass that to
a char * filter, I will only get the first bucket.  The rest are
forgotten.  (I noticed this yesterday, but I was busy hacking my patch,
and then I was honestly pissed off, and I forgot to mention it.)  The
ability to string buckets together is where this feature really pays off.
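
Schematically (made-up names, not the code from your patch), the
difference looks like this:

    #include <stddef.h>

    static void process(const char *data, size_t len);  /* the filter's real work */

    typedef struct bucket {
        const char    *data;
        size_t         len;
        struct bucket *next;          /* buckets can be grouped into a list */
    } bucket;

    /* A bucket-aware filter naturally walks the whole chain. */
    static void bucket_filter(bucket *b)
    {
        for (; b != NULL; b = b->next) {
            process(b->data, b->len);
        }
    }

    /* But if the framework flattens only the head bucket into a string
     * before invoking a char *-style callback, everything after the first
     * bucket in the list is silently dropped. */
    static void string_filter(const char *str, size_t len)
    {
        process(str, len);            /* the rest of the list never arrives */
    }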

>               By doing the "give me a string" in the framework, we have more
> 	      opportunity to optimize the string's construction and
> 	      handling. For example, create and destroy a pool to hold the
> 	      string while the callback is invoked. In the "build the string
> 	      inside" approach, the callback would look like:

If we associate the memory with a pool, we not only have to keep track of
that pool, but also the pool's parent pool, and its parent's parent pool,
etc.  Once a pool is destroyed, all of its child pools are destroyed with
it.  This means that filters will either have to have A LOT of
information about the lifetime of pools, or all of the data lives as long
as the request.

> 	      my_callback(bucket):
> 	          pool = create_pool(bucket->r->pool)
> 		  str = give_me_string(bucket)
> 		  process_string(str)
> 		  destroy_pool(pool)

I thought of this, and it is what I was getting at when I was talking
about sibling pools, because it allows two filters to operate on two
different pools.  It also requires that each filter copy all of the data
into its own pool before it can do anything.  Otherwise, it doesn't know
what the previous layers will do with their pools.

Look at it this way:

     my_callback(bucket)
          p = create_pool(bucket->r->pool)
          str = give_me_string(bucket)
          process_string(str)
                 [which calls the next layer, which doesn't have enough
                  data, and it moves the string off to the side.]
          destroy_pool(p)
                 [The string is gone now, so the next layer had better
                  have made a copy.  If each layer does this, we end up
                  with a lot more data on the heap than we want. ]

>               For the simple callback, the pool handling occurs *outside*
> 	      the callback. It remains just as simple as above. The point
> 	      here is that we have optimization choices that will not affect
> 	      the simple_cb style of filter.

The point is that this looks like an optimization, but it is really not
one.  In reality, this design (as well as any other that requires
pools) requires more memory and pool memcpy's than are necessary.

>    SOLUTION: Per a previous email, I added a condition to the scan loop in
> 	     ap_add_filter(). I had guessed that would eventually be added,
> 	     but wanted to defer it until it was found to be really
> 	     necessary.

I need to look at that still.

> Other problems?

The content-length thing is a real issue.  You can say the spec allows us
to drop the content-length, but two people have said that isn't an
option: Kevin Kiley and Roy.  I'll be number three.  If we are not just
going to drop the content-length, then we need to allow filters to keep
the whole request around (but not in memory).  This means being able to
flush the data to disk and free up the memory.  With all of the current
patches, we are using pools.  This means that we have one of two options:

1)  Each layer creates its own sub-pool.  In this case, dumping to the
disk doesn't really do us any good, because all of the previous layers
have copies of the data lying around.

2)  We use one pool.  In this case, we can't free the pool, because we
don't know who else is using it.  This means that while all of the data
has been flushed to disk, we can't reuse any of the memory it occupied (a
previous layer could have gotten there before us).
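
A sketch of why option 2 doesn't help, using a hypothetical filter and
1.3-style pool calls.  Even after every byte has been written to a
spool file, the copies stay resident, because pools have no
per-allocation free and this one pool is shared by every layer:

    #include <stdio.h>
    #include <string.h>
    #include "httpd.h"                      /* request_rec, ap_palloc() */

    static void spool_filter(request_rec *r, const char *data, size_t len,
                             FILE *spool)
    {
        char *copy = ap_palloc(r->pool, len);  /* the shared request pool */
        memcpy(copy, data, len);

        fwrite(copy, 1, len, spool);           /* the bytes are now safely */
                                               /* on disk...               */

        /* ...but there is no way to hand 'copy' back: a pool is only
         * freed as a whole, and another layer may still be holding
         * allocations in this same pool, so the memory stays resident
         * until the request is torn down. */
    }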

I really hope that outlines and explains my issues with this patch.  I
couldn't sleep last night, so I started fiddling with some of Dean's
ap_buf stuff.  I may have a brand new patch in the next few days, or I
may not.  Please do not wait for me.  I will review anything that comes
along, but I am more and more convinced that we need to re-vamp our
memory model to make filters work.  In the meantime, I have some
configuration issues that have been bugging me.  :-)

BTW, I tend to believe (although I have no hard proof, it is a
very strong feeling) that sub-requests won't and can't work with any of
the current models.  Any sub-request that generates data (which is all of
them) needs to pass that data back to the original request to go through
the rest of the filters.  This finally struck me this morning.  I'm
trying to figure out how to express why this is so, but the words aren't
coming right now.  I'll try again later.

Ryan

_______________________________________________________________________________
Ryan Bloom                        	rbb@apache.org
406 29th St.
San Francisco, CA 94131
-------------------------------------------------------------------------------
