incubator-cassandra-user mailing list archives

From gabriele renzi <rff....@gmail.com>
Subject Re: timeout while running simple hadoop job
Date Wed, 12 May 2010 15:00:34 GMT
On Wed, May 12, 2010 at 4:43 PM, Jonathan Ellis <jbellis@gmail.com> wrote:
> On Wed, May 12, 2010 at 5:11 AM, gabriele renzi <rff.rff@gmail.com> wrote:
>> - is it possible that such errors show up on the client side as
>> timeoutErrors when they could be reported better?
>
> No, if the node the client is talking to doesn't get a reply from the
> data node, there is no way for it to magically find out what happened
> since ipso facto it got no reply.

Sorry, I was not clear: I meant the first error (the RuntimeException
raised while reading the file, not the one in socket.accept()).
There we have a reasonable error message (either "too many open files"
or "corrupt sstable") that never appears client-side.
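The failure mode being discussed can be illustrated with a small sketch (not Cassandra code, just a toy simulation of the coordinator/data-node exchange): the data node hits a specific error before it ever sends a reply, so the coordinator, which only waits on that reply, can report nothing more precise than a timeout. All names here are hypothetical.

```python
import queue
import threading

def data_node(reply_q):
    # Hypothetical data node: it fails while reading a file
    # ("too many open files" / "corrupt sstable") and never replies.
    try:
        raise OSError("too many open files")
    except OSError:
        # The specific error stays on this node (it would only be
        # logged locally); nothing is ever put on the reply queue.
        pass

def coordinator(timeout=0.1):
    reply_q = queue.Queue()
    threading.Thread(target=data_node, args=(reply_q,)).start()
    try:
        return reply_q.get(timeout=timeout)
    except queue.Empty:
        # All the coordinator knows is that no reply arrived in time,
        # so a timeout is the only error it can surface to the client.
        return "TimedOutException"

print(coordinator())  # -> TimedOutException
```

This matches Jonathan's point about the second error: with no reply, the coordinator cannot know why. The first error is different only because the specific message does exist server-side before any reply is attempted.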



-- 
blog en: http://www.riffraff.info
blog it: http://riffraff.blogsome.com
