hadoop-hdfs-dev mailing list archives

From Dhruba Borthakur <dhr...@gmail.com>
Subject Re: Problem reading HDFS block size > 1GB
Date Tue, 29 Sep 2009 22:16:41 GMT
http://issues.apache.org/jira/browse/HDFS-96


On Tue, Sep 29, 2009 at 2:58 PM, Brian Bockelman <bbockelm@cse.unl.edu> wrote:

> It breaks at 2^31 bytes = 2 GB.  Any size smaller than that should work.
>
> Brian
>
>
> On Sep 29, 2009, at 4:14 PM, Vinay Setty wrote:
>
>> Hi Owen,
>> Thank you for the quick reply. Could you please tell me the exact
>> limit with which HDFS is known to work?
>>
>> Vinay
>>
>> On Tue, Sep 29, 2009 at 8:20 PM, Owen O'Malley <omalley@apache.org>
>> wrote:
>>
>>
>>> On Sep 29, 2009, at 10:59 AM, Vinay Setty wrote:
>>>
>>>> We are running the Yahoo! distribution of Hadoop, based on Hadoop
>>>> 0.20.0-2787265, on a 10-node cluster running the openSUSE Linux
>>>> operating system. We have HDFS configured with a block size of 5 GB
>>>> (this is for our experiments).
>>>>
>>>>
>>> There is a known limitation in HDFS that restricts blocks to less than
>>> 2^31 bytes. Fixing it would be tedious, and no one has signed up to take
>>> a pass at it.
>>>
>>> -- Owen
>>>
>>>
>


-- 
Connect to me at http://www.facebook.com/dhruba
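
A minimal sketch, not from the thread, of the arithmetic behind the answers
above. It assumes the failure mode is the usual long-to-int truncation
(HDFS-96, linked at the top, tracks the actual fix); the class name is made
up for illustration.

public class BlockSizeOverflow {
    public static void main(String[] args) {
        long fiveGB = 5L * 1024 * 1024 * 1024; // 5368709120, the block size from the thread
        long twoGB  = 2L * 1024 * 1024 * 1024; // 2147483648 = 2^31, where it breaks

        // Both values exceed Integer.MAX_VALUE (2^31 - 1), so a cast to int
        // silently wraps instead of preserving the size.
        System.out.println((int) fiveGB); // prints 1073741824, not 5368709120
        System.out.println((int) twoGB);  // prints -2147483648

        // Anything strictly below 2^31 bytes survives the cast intact,
        // matching "any size smaller than that should work".
        long largestWorking = twoGB - 1;          // 2147483647
        System.out.println((int) largestWorking); // prints 2147483647
    }
}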
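
And a sketch of how a 5 GB block size like the one in the thread would be set
programmatically, assuming a reachable 0.20-era HDFS; dfs.block.size (in
bytes) is the block-size key of that era, and the path and class name are
made up.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockSizeDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Cluster-wide default, in bytes: 5 GB reproduces the failure.
        conf.setLong("dfs.block.size", 5L * 1024 * 1024 * 1024);

        // The block size can also be chosen per file at create() time;
        // 2^31 - 1 bytes is the largest value that works per the thread.
        FileSystem fs = FileSystem.get(conf);
        fs.create(new Path("/experiments/big-blocks"),
                  true,                                      // overwrite
                  conf.getInt("io.file.buffer.size", 4096),  // buffer size
                  (short) 3,                                 // replication
                  2147483647L)                               // block size
          .close();
    }
}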
