hadoop-common-user mailing list archives

From Brian Bockelman <bbock...@cse.unl.edu>
Subject Re: Running Hadoop on cluster with NFS booted systems
Date Wed, 30 Sep 2009 12:06:18 GMT

On Sep 30, 2009, at 4:24 AM, Steve Loughran wrote:

> Todd Lipcon wrote:
>> Yep, this is a common problem. The fix that Brian outlined helps a lot,
>> but if you are *really* strapped for random bits, you'll still block.
>> This is because even if you've set the random source, it still uses the
>> real /dev/random to grab a seed for the PRNG, at least on my system.
> Is there any way to test/timeout for this on startup and respond?
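The excerpt doesn't spell out the fix Todd refers to; the workaround usually cited for this on Sun/Linux JVMs is the `java.security.egd` property, where the extra `/./` defeats the JVM's special-casing of the plain `/dev/urandom` path. A minimal sketch, assuming that property is honored before the security classes initialize (in practice you'd pass it as a `-D` flag at JVM startup rather than set it in code):

```java
import java.security.SecureRandom;

public class NonBlockingSeed {
    public static void main(String[] args) {
        // Assumption: setting this before any SecureRandom use has the same
        // effect as the usual -Djava.security.egd=file:/dev/./urandom flag.
        // The "/./" keeps the JVM from mapping the path back to the blocking
        // seed source.
        System.setProperty("java.security.egd", "file:/dev/./urandom");

        SecureRandom sr = new SecureRandom();
        // generateSeed() is the call that blocks on /dev/random when the
        // entropy pool is starved; with the urandom source it returns
        // immediately.
        byte[] seed = sr.generateSeed(16);
        System.out.println("got " + seed.length + " seed bytes");
    }
}
```

On a starved machine the difference is dramatic: the same `generateSeed(16)` call can hang for minutes against the default source.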

The amount of available entropy is recorded in this file:

/proc/sys/kernel/random/entropy_avail

That's the number of bits of entropy available in the pool. From what I
can see, 200 is considered a low number. It appears that the issue is
deep within Java's security stack. I'm not sure how easy it is to turn
it into non-blocking I/O. If you've got a nice fat paid contract with
Sun, you might have a chance...
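As a startup check along the lines Steve asked about, a small sketch (assuming Linux and the `/proc` counter above, and the 200 rule of thumb from this thread) that reads the kernel's entropy estimate and warns before anything touches /dev/random:

```java
import java.nio.file.Files;
import java.nio.file.Paths;

public class EntropyAvail {
    public static void main(String[] args) throws Exception {
        // Linux-only: the kernel's current entropy estimate, in bits.
        String raw = new String(Files.readAllBytes(
                Paths.get("/proc/sys/kernel/random/entropy_avail"))).trim();
        int entropy = Integer.parseInt(raw);
        System.out.println("entropy_avail = " + entropy);

        // 200 is the low-water mark mentioned in this thread, not a
        // kernel-defined constant.
        if (entropy < 200) {
            System.err.println(
                "warning: entropy pool low; /dev/random reads may block");
        }
    }
}
```

A daemon could run this check at startup and either log a warning or fall back to a non-blocking source, rather than hanging silently inside the JVM's seed generator.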

> At the very least, a new JIRA issue should be opened for this with the
> stack trace and workaround, so that people have something to search on

Nick - want to contribute back?
