hadoop-user mailing list archives

From Mohammad Tariq <donta...@gmail.com>
Subject Re: Metadata size for 1 TB HDFS data?
Date Thu, 20 Dec 2012 15:12:04 GMT
Thank you so much for the valuable response, Stephen. I have a few
questions, though. Could you please elaborate a bit, if possible?

Each of the specified objects is totally different from the others. A file
will be smaller than a directory in size, and a directory might be smaller
than a block. They might have totally different attributes as well. But
still, the space required by each object is the same as the others. How is
that possible? Is there any formula or rule of thumb to calculate this?
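For reference, this is the rough arithmetic I had in mind, written out as a
small sketch (the 64 MB block size and the ~200 bytes per object are just my
assumptions from the original mail, and it only counts block objects for a
single large file):

    # Back-of-the-envelope NameNode heap estimate for 1 TB of HDFS data.
    # Assumptions (mine, for illustration only): a single large file split
    # into 64 MB blocks, roughly 200 bytes of NameNode heap per object, and
    # file/directory objects ignored apart from the blocks.

    DATA_SIZE_MB = 1 * 1024 * 1024   # 1 TB expressed in MB
    BLOCK_SIZE_MB = 64               # HDFS block size
    BYTES_PER_OBJECT = 200           # rough per-object heap cost

    num_blocks = DATA_SIZE_MB // BLOCK_SIZE_MB      # 16384 blocks
    metadata_bytes = num_blocks * BYTES_PER_OBJECT  # 3,276,800 bytes

    print("blocks:", num_blocks)
    print("estimated metadata: %.1f MB" % (metadata_bytes / (1024.0 * 1024)))

That works out to roughly 3 MB of heap, which is the figure I wanted to sanity-check.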

Many thanks.

Best Regards,

On Thu, Dec 20, 2012 at 8:10 PM, Stephen Fritz <stephenf@cloudera.com> wrote:

> Each block, file, and directory is an object in the NameNode's heap, so it
> depends on how you're storing your data. You may need to account for those
> in your calculations.
> On Thu, Dec 20, 2012 at 7:01 AM, Mohammad Tariq <dontariq@gmail.com> wrote:
>> Hello group,
>>         What could be the approx. size of the metadata if I have 1 TB of
>> data in my HDFS? I am not doing anything additional, just a simple put.
>> Will it be ((1*1024*1024)/64)*200 bytes?
>> *Keeping 64M as the block size.
>> Is my understanding right? Please correct me if I'm wrong.
>> Many thanks.
>> Best Regards,
>> Tariq
>> +91-9741563634
>> https://mtariq.jux.com/
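
P.S. Just to make the "it depends on how you're storing your data" point
concrete for myself, here is a rough sketch comparing two hypothetical
layouts of the same 1 TB. The ~200 bytes/object figure and both layouts are
only assumptions, and directory objects are ignored:

    # Rough comparison of NameNode object counts for the same 1 TB of data
    # stored two different ways (assumed figures, for illustration only).

    BLOCK_SIZE_MB = 64
    BYTES_PER_OBJECT = 200

    def objects_for(num_files, file_size_mb):
        # one inode object per file plus one object per block of that file
        blocks_per_file = -(-file_size_mb // BLOCK_SIZE_MB)  # ceiling division
        return num_files * (1 + blocks_per_file)

    one_big_file = objects_for(1, 1024 * 1024)  # 1 TB as a single file
    many_small = objects_for(1000000, 1)        # 1 TB as a million 1 MB files

    for label, objs in (("one big file", one_big_file),
                        ("1M small files", many_small)):
        print("%s: %d objects, ~%.1f MB of heap"
              % (label, objs, objs * BYTES_PER_OBJECT / (1024.0 * 1024)))

If I am reading this right, the same 1 TB could need anywhere from a few MB
to a few hundred MB of NameNode heap, depending purely on how many files it
is split across.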
