hadoop-hdfs-dev mailing list archives

From Vinay Setty <vinay.se...@gmail.com>
Subject Re: Problem reading HDFS block size > 1GB
Date Tue, 29 Sep 2009 21:14:55 GMT
Hi Owen,
Thank you for the quick reply. Could you please tell me the exact block-size
limit with which HDFS is known to work?

Vinay

On Tue, Sep 29, 2009 at 8:20 PM, Owen O'Malley <omalley@apache.org> wrote:

>
> On Sep 29, 2009, at 10:59 AM, Vinay Setty wrote:
>
>  We are running the Yahoo distribution of Hadoop, based on Hadoop
>> 0.20.0-2787265, on a 10-node cluster running the OpenSUSE Linux operating
>> system. We have HDFS configured with a block size of 5 GB (this is for our
>> experiments).
>>
>
> There is a known limitation in HDFS: blocks must be smaller than 2^31 bytes
> (2 GiB). Fixing it would be tedious, and no one has signed up to take a pass
> at it.
>
> -- Owen
>
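For context, the 2^31-byte limit Owen mentions works out to 2 GiB, so the 5 GB
block size in the experiment exceeds it while a 1 GB block does not. A minimal
Python sketch of that arithmetic (the helper name and the hard-cap check are
illustrative, taken from this thread rather than from the HDFS source):

```python
# The limit discussed in this thread: HDFS blocks must be smaller than 2^31 bytes.
HDFS_MAX_BLOCK_BYTES = 2**31  # 2147483648 bytes = 2 GiB

def block_size_is_safe(block_bytes: int) -> bool:
    """Return True if the block size stays under the known 2^31-byte limit.

    Hypothetical helper for illustration; HDFS itself performs no such check
    in this version, which is why the oversized setting fails at read time.
    """
    return block_bytes < HDFS_MAX_BLOCK_BYTES

five_gb = 5 * 1024**3  # the 5 GB block size from the experiment

print(block_size_is_safe(five_gb))      # False: 5368709120 > 2147483648
print(block_size_is_safe(1 * 1024**3))  # True: a 1 GiB block stays under the limit
```

In Hadoop 0.20 the block size is set via the `dfs.block.size` property in
`hdfs-site.xml`, so keeping that value below 2147483648 should avoid the
limitation described above.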
