cassandra-user mailing list archives

From Héctor Izquierdo <izquie...@strands.com>
Subject Re: timeout while running simple hadoop job
Date Wed, 12 May 2010 15:25:28 GMT
Have you checked your open file handle limit? You can do that with 
"ulimit" in the shell. If it's too low, you will encounter the "too many 
open files" error. You can also see how many handles an application 
currently has open with "lsof".
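
For example (standard shell commands, nothing Cassandra-specific; the
CassandraDaemon pattern and the 32768 figure are just typical values,
adjust them to your own setup):

  # Show the per-process open file limit for the current shell/user
  ulimit -n

  # Count the file handles the running Cassandra process currently holds
  # (PID looked up by matching the daemon's main class on the command line)
  lsof -p "$(pgrep -f CassandraDaemon)" | wc -l

  # If the limit is the common default of 1024, raise it before starting
  # Cassandra, e.g. `ulimit -n 32768` or an entry in /etc/security/limits.conf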

Héctor Izquierdo

On 12/05/10 17:00, gabriele renzi wrote:
> On Wed, May 12, 2010 at 4:43 PM, Jonathan Ellis <jbellis@gmail.com> wrote:
>> On Wed, May 12, 2010 at 5:11 AM, gabriele renzi <rff.rff@gmail.com> wrote:
>>> - is it possible that such errors show up on the client side as
>>> timeoutErrors when they could be reported better?
>>
>> No, if the node the client is talking to doesn't get a reply from the
>> data node, there is no way for it to magically find out what happened
>> since ipso facto it got no reply.
>
> Sorry I was not clear: I meant the first error (where we get a
> RuntimeException in reading the file, not in the socket.accept()).
> There we have a reasonable error message (either "too many open files"
> or "corrupt sstable") that does not appear client side.
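
Since only the timeout makes it back to the client, the quickest way to
tell which of those two errors it was is to grep the affected node's own
log. Assuming the stock log4j location (adjust the path if your install
logs elsewhere):

  # The real cause only shows up server-side, in the node's log
  grep -iE "too many open files|corrupt" /var/log/cassandra/system.log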

