hadoop-common-user mailing list archives

From Bryan Duxbury <br...@rapleaf.com>
Subject Re: "Could not get block locations. Aborting..." exception
Date Tue, 30 Sep 2008 01:47:32 GMT
Ok, so what might I do next to try to diagnose this? Does it sound
like an HDFS/MapReduce bug, or should I pore over my own
code first?

Also, did any of the other exceptions look interesting?

-Bryan

On Sep 29, 2008, at 10:40 AM, Raghu Angadi wrote:

> Raghu Angadi wrote:
>> Doug Cutting wrote:
>>> Raghu Angadi wrote:
>>>> For the current implementation, you need around 3x fds per
>>>> concurrently open file. 1024 is too low for Hadoop. The Hadoop
>>>> requirement will come down, but 1024 would be too low anyway.
>>>
>>> 1024 is the default on many systems.  Shouldn't we try to make  
>>> the default configuration work well there?
>> How can 1024 work well for different kinds of loads?
>
> oops! 1024 should work for anyone "working with just one file" for  
> any load. I didn't notice that. My comment can be ignored.
>
> Raghu.
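As a side note on the fd arithmetic quoted above: below is a minimal sketch (not Hadoop code) of checking whether a JVM's descriptor limit covers the rough 3-fds-per-open-file rule of thumb. It assumes a Sun/Oracle JVM on Unix, where the OperatingSystemMXBean can be cast to com.sun.management.UnixOperatingSystemMXBean; the class name FdLimitCheck and the expectedOpenFiles argument are illustrative, not part of any Hadoop API.

import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdLimitCheck {
    // Rough rule of thumb from the thread above: ~3 fds per concurrently open DFS file.
    private static final int FDS_PER_OPEN_FILE = 3;

    public static void main(String[] args) {
        // Hypothetical input: how many files the client expects to hold open at once.
        int expectedOpenFiles = args.length > 0 ? Integer.parseInt(args[0]) : 100;

        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (!(os instanceof UnixOperatingSystemMXBean)) {
            System.err.println("fd counts are not exposed by this JVM/OS");
            return;
        }
        UnixOperatingSystemMXBean unixOs = (UnixOperatingSystemMXBean) os;

        long maxFds = unixOs.getMaxFileDescriptorCount();   // e.g. 1024 on many default installs
        long openFds = unixOs.getOpenFileDescriptorCount();
        long roughNeed = (long) expectedOpenFiles * FDS_PER_OPEN_FILE;

        System.out.printf("open fds: %d, max fds: %d, rough requirement: %d%n",
            openFds, maxFds, roughNeed);
        if (maxFds < roughNeed) {
            System.out.println("fd limit looks too low; consider raising \"ulimit -n\" "
                + "for the account running the daemons before digging further.");
        }
    }
}

Run it as, say, "java FdLimitCheck 500" under the same account that runs the DataNode/TaskTracker to see whether the 1024 default is actually the ceiling in play.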

