apr-dev mailing list archives

From Rainer Jung <rainer.j...@kippdata.de>
Subject Re: [Bug 45615] "Large Files not supported" with 64-bit build
Date Tue, 10 Jan 2017 13:12:16 GMT
Hi Dennis,

Am 10.01.2017 um 13:36 schrieb Dennis Clarke:
> re: https://bz.apache.org/bugzilla/show_bug.cgi?id=45615
> I think it best to follow up here, as per the suggestions by Yann and
> Rainer, so that I can run further tests and experiments to determine
> what is happening on these Niagara-class systems.
> Firstly, sorry for reawakening what seems like a long-dead cold bug, but
> it really isn't a "Large Files not supported" bug so much as a message
> that needs to be tweaked. Indeed, yes, this is a 64-bit build, and so
> off_t and size_t are going to be 64-bit sized:

I will have a look at the message "Line 345: Large Files not supported" 
coming from "make check" and see whether I can tweak it ("not supported 
or not needed" or similar). Due to the API used by APR, I don't think we 
can easily be more precise (i.e. actually distinguish between the two cases).


> As for the blocking rand, well hrmmm, not unless I was trying to read a
> ton of data from /dev/random, where:
> Devices                                                random(7D)
>      random, urandom - Strong random number generator device
>      /dev/random
>      /dev/urandom
>      The /dev/random and /dev/urandom  files  are  special  files
>      that  are  a source for random bytes generated by the kernel
>      random  number  generator  device.   The   /dev/random   and
>      /dev/urandom  files  are suitable for applications requiring
>      high quality random numbers for cryptographic purposes.
>      The generator device produces random numbers from  data  and
>      devices available to the kernel and estimates  the amount of
>      randomness (or entropy) collected from  these  sources.  The
>      entropy  level  determines the amount of high quality random
>      numbers that are produced at a given time.
>      Applications retrieve random bytes by reading /dev/random or
>      /dev/urandom. The /dev/random interface returns random bytes
>      only when sufficient amount of entropy has  been  collected.
>      If  there  is  no entropy to produce the requested number of
>      bytes,  /dev/random  blocks  until  more  entropy   can   be
>      obtained.  Non-blocking  I/O mode can be used to disable the
>      blocking behavior. The /dev/random interface  also  supports
>      poll(2). Note that using poll(2) will not increase the speed
>      at which random numbers can be read.
> etc etc ...
> So I would expect that /dev/random may slow down, but not by too much,
> while many other processes are running on the system; in this case
> the build system is a single virtual zone inside a machine with a
> number of zones, so there should be plenty of data. However, I have
> never run a benchmark. I can certainly run a test for a blocking rand()
> as per http://pubs.opengroup.org/onlinepubs/9699919799/functions/rand.html
> However, I will say that the Apache 2.4.25 server seems to be running
> very well with this new apr and apr-util, but I am sure we can sort out
> this weird test behavior. I certainly have the Sparc hardware sitting
> here and can even provide Oracle Sparc M7 tests if needed.

/dev/random can block because many server systems quickly run out of 
entropy.
If you could follow my suggestions for further debugging your hanging or 
extremely long-running apr "make check" (using pstack, prstat and truss 
as described in the ticket), I'm sure we can find the culprit. If it is 
/dev/random, that will show up in this type of debug info; if it is 
something else, we can likely determine that from those three tools as well.
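Concretely, capturing that debug info could look something like the sketch below. Note that "testall" is only my guess at the name of the hanging test binary, and pstack, prstat and truss are Solaris utilities, so adjust as needed:

```shell
# Find the hanging test process ("testall" is an assumed name).
PID=$(pgrep -n testall || true)

if [ -n "$PID" ]; then
    pstack "$PID"        > pstack.out   # per-thread stack traces
    prstat -p "$PID" 1 5 > prstat.out   # on-CPU (spinning) or sleeping?
    truss -p "$PID" 2> truss.out &      # live syscall trace
    TRUSS_PID=$!
    sleep 10
    kill "$TRUSS_PID"
else
    echo "no testall process found"
fi
```

A read() blocked on /dev/random would appear at the top of the pstack output and as a stalled read call in the truss trace.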


