httpd-dev mailing list archives

From Dean Gaudet <>
Subject general/736: pointers cast as ints of different size - potential memory problems (fwd)
Date Fri, 13 Jun 1997 16:50:35 GMT
Hey Sameer did you delete your 64-bit patch?  I have one floating around
too.  We need to Known-Bugs this and put a patch into contrib.

I'd like to revisit this problem.  When I posted my patch I proposed a new
type "ptr_int" which is large enough to hold a pointer or an int.  But it
was vetoed by Roy because it wasn't "clean enough".  I'd like to submit
that the type "int" which we presently use is not an abstraction at all,
and is just a convenience and really isn't clean either.

Neither int nor ptr_int is guaranteed by ANSI to work for this.  ANSI
makes no guarantee that any integer type is large enough to hold a
pointer.  They provide ptrdiff_t which is a type large enough to hold the
difference between two pointers *to the same object* (i.e. a result of
malloc, an auto or a static declaration).  But on 386 "large data models" 
pointers are actually 6 byte quantities.  Two pointers to the same object
always have the same segment bytes (2 bytes) and ptrdiff_t is a 4 byte
quantity.  Nobody uses large data models though (except kernel hackers). 

A union would be strictly correct for ANSI compliance.  However a compiler
is not required to pass such a union by value, and in fact gcc will pass
it by copy-and-reference on i386 (and probably most other archs).  The
caller makes a temporary space with a copy of the union in it and passes a
reference to that space.  This bothers me from a performance point of
view.

Given the very large number of programs that already require an integral
type which holds a pointer, vendors always choose models which allow such
a type to exist.  A vendor that chose to build a unix on i386 which used
large data models in user space would shoot themselves in the foot both
for compatibility, and for speed (far pointer arithmetic is a pain). 
They'd gain >4Gb user processes, but at an expense.  I doubt anyone is
going to do it, and besides, gcc supports long long which fits a far
pointer and is passed by value.  The i286 architecture uses far pointers
frequently, but they're only 4 bytes and a long is 4 bytes.  That exhausts
the "modern" segmented architectures.  The other modern architectures are
all clean 32 or 64 bit without segmentation.

So, given all the above, I propose the following:

#if defined( ARCH_IS_32_BITS )
typedef int generic_data;
#define GENERIC_INT(x)	(x)
#define GENERIC_PTR(x)	(x)

#elif defined( ARCH_IS_64_BITS )
typedef long generic_data;
#define GENERIC_INT(x)	(x)
#define GENERIC_PTR(x)	(x)

#else
typedef union {
    int __i;
    void *__p;
} generic_data;
#define GENERIC_INT(x)	((x).__i)
#define GENERIC_PTR(x)	((x).__p)
#endif

Those macros are lvalues as well as rvalues.  And, for example, we'd change

void table_do (int (*comp)(void *, const char *, const char *), void *rec,
               const table *t, ...);
to:


void table_do (int (*comp)(generic_data, const char *, const char *),
		generic_data rec, const table *t, ...);

And to call it:

    table_do((int (*)(void *, const char *, const char *))send_header_field,
                 (void *)r, r->headers_out, NULL);

becomes:


    generic_data gd;
    GENERIC_PTR(gd) = r;
    table_do((int (*)(generic_data, const char *, const char *))
		send_header_field, gd, r->headers_out, NULL);

This gives us performance where we can use it (32 and 64 bit clean archs)
and ANSI compliance elsewhere.

The remaining alternative that I can think of is to have a table_do_int and
a table_do_ptr.  Then we'd need a register_cleanup_int and a
register_cleanup_ptr.  And more crap like that.

See <> for the
example patch using ptr_int that I posted a while back.  That will give you
an example of what needs to be changed.


---------- Forwarded message ----------
Date: Fri, 13 Jun 1997 08:50:01 -0700 (PDT)
From: Ondrej Pribyl <>
Subject: general/736: pointers cast as ints of different size - potential memory problems

>Number:         736
>Category:       general
>Synopsis:       pointers cast as ints of different size - potential memory problems
>Confidential:   no
>Severity:       serious
>Priority:       medium
>Responsible:    apache (Apache HTTP Project)
>State:          open
>Class:          sw-bug
>Submitter-Id:   apache
>Arrival-Date:   Fri Jun 13 08:50:01 1997
>Release:        1.2.0
compiler: gcc
system:   OSF1 V3.2 41 alpha
During compilation by gcc (not DEC's cxx) you get complaints about
pointers being cast as integers of a different size in alloc.c,
http_config.c, mod_alias.c, mod_dir.c, alloc.h, http_core.c,
mod_browser.c, mod_rewrite.c.  This is no wonder as any reasonable
compiler on any 64-bit machine should complain if you cast a pointer
to a 32-bit int.  I suspect memory problems might occur due to this
handling of pointers.
Compile Apache 1.2.0 (the older versions had the same problem, I
think) with a good compiler on a 64-bit machine.
The fix is quite simple, just change the relevant
occurrences of `int' into `long.'  You can pick up my ideas of how
to change things at
(or, or  Unfortunately,
I am not absolutely sure I have done everything consistently, after
all I have not written Apache.  But the server seems to be running
rather happily.  On 32-bit machines, longs are often the same length
as ints (32 bits), so changing the important `ints' into `longs'
should be OK.  Alternatively, you can introduce preprocessor
directives of the sort:

#if defined( __alpha) || defined(__mips64)
long stuff
#else
int stuff
#endif

etc, and put the `long' alternatives inside those.
