hadoop-hdfs-dev mailing list archives

From Brian Bockelman <bbock...@cse.unl.edu>
Subject Re: Problem reading HDFS block size > 1GB
Date Tue, 29 Sep 2009 21:58:03 GMT
It breaks at 2^31 bytes = 2 GB.  Any size smaller than that should work.
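
A minimal Java sketch of the arithmetic behind that boundary (this assumes the limit comes from signed 32-bit int handling of block sizes and offsets, which the 2^31 figure suggests; the class name is illustrative):

    public class BlockSizeOverflow {
        public static void main(String[] args) {
            long fiveGb = 5L * 1024 * 1024 * 1024; // 5368709120 bytes, the 5 GB mentioned below
            long twoGb  = 2L * 1024 * 1024 * 1024; // 2147483648 bytes = 2^31
            // A signed 32-bit int tops out at 2^31 - 1 = 2147483647, so any
            // size of 2^31 bytes or more wraps around when cast down:
            System.out.println((int) (twoGb - 1)); // 2147483647: still fits
            System.out.println((int) twoGb);       // -2147483648: overflows at exactly 2^31
            System.out.println((int) fiveGb);      // 1073741824: silently truncated
        }
    }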

Brian

On Sep 29, 2009, at 4:14 PM, Vinay Setty wrote:

> Hi Owen,
> Thank you for the quick reply. Can you please tell me the exact
> limit up to which HDFS is known to work?
>
> Vinay
>
> On Tue, Sep 29, 2009 at 8:20 PM, Owen O'Malley <omalley@apache.org> wrote:
>
>>
>> On Sep 29, 2009, at 10:59 AM, Vinay Setty wrote:
>>
>>> We are running the Yahoo! distribution of Hadoop, based on Hadoop
>>> 0.20.0-2787265, on a 10-node cluster with the openSUSE Linux
>>> operating system. We have HDFS configured with a block size of 5 GB
>>> (this is for our experiments).
>>>
>>
>> There is a known limitation in HDFS that restricts blocks to less
>> than 2^31 bytes. Fixing it would be tedious, and no one has signed
>> up to take a pass at it.
>>
>> -- Owen
>>
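
For reference, a minimal sketch of setting the block size Vinay describes with the 0.20-era API (dfs.block.size was the configuration key at the time; the path, replication factor, and class name here are illustrative, and per the above, any value at or above 2^31 bytes falls in the broken range):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockSizeExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Cluster-wide default from the experiment: 5 GB, which exceeds
            // the 2^31 - 1 byte limit and triggers the read problems above.
            conf.setLong("dfs.block.size", 5L * 1024 * 1024 * 1024);

            // A per-file override just under 2 GB stays in the known-good range.
            long safeBlockSize = 2L * 1024 * 1024 * 1024 - 1024 * 1024; // 2 GB - 1 MB
            FileSystem fs = FileSystem.get(conf);
            FSDataOutputStream out = fs.create(
                new Path("/experiments/data.bin"),        // illustrative path
                true,                                     // overwrite
                conf.getInt("io.file.buffer.size", 4096), // buffer size
                (short) 3,                                // replication
                safeBlockSize);                           // per-file block size
            out.close();
        }
    }

The per-file create() overload is the useful part here: it lets an experiment keep a large default while capping individual files below the 2^31 boundary.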

