apr-dev mailing list archives

From Alex Dubov <oa...@yahoo.com>
Subject Re: Binary data in apr dbd - where should buckets come from
Date Thu, 29 Jun 2006 04:37:05 GMT
Ok, after reading it another time: your mail contains
no useful information. It does not matter to me whether
there is a duplicate API for binary data or it is
layered over the existing one. Moreover, as I have
some work to do right now and I don't feel like
changing things in APR (see below), I will layer it
over the existing API. This is not a problem.

The problem: the bucket idea from Nick was good. I get
a bucket from the server and pass it to the database. On
the way back, I get a bucket from the database and pass
it to the server. To achieve this, I'm currently forced
to retain a reference to the pool and to create my own
bucket_alloc for each row. This makes the server crash
occasionally, because of some order-of-destruction
problems with the buckets (the crashes are consistently
in pool_bucket_cleanup). This is fixable.

However, I don't have a good understanding of how the
buckets work. Therefore, I depend on somebody with a
clear grasp of the memory issues in APR to define the
API. Some points:
1. What should happen if pselect is passed a different
pool than get_row and they get destroyed at different
times? (I'm not sure this is a valid point, but it
looks confusing.)
2. Which bucket_alloc should be used: one the user
supplies to get_entry, or a separate one created by
pselect/get_row? And which pool should it use?
3. Is it possible to implement realloc with pools, or,
at least, have some sort of explicit free?

The last point arises from the fact that it's
impossible to know the real maximal length of a field
in the record set without a large performance hit (this
calculation is disabled by default in mysql, and I've
made the mysql_stmt_store_result invocation depend on
"random"). So I'm bound to fetch the field, check for
truncation, then reallocate and refetch. In the worst
case a lot of memory gets consumed for nothing. The
alternative is to refetch only the remainder of the
data into a separate bucket and pass down a brigade,
but that complicates my code.

In general, it's also very beneficial to keep down the
number of parameters passed to each function,
especially if they may cause confusion (see points 1
and 2).
