hadoop-user mailing list archives

From Steve Loughran <ste...@hortonworks.com>
Subject Re: One petabyte of data loading into HDFS with in 10 min.
Date Mon, 10 Sep 2012 09:40:55 GMT
On 10 September 2012 08:40, prabhu K <prabhu.hadoop@gmail.com> wrote:

> Hi Users,
>
> Thanks for the response.
>
>
> We have loaded 100 GB of data into HDFS; it took 1 hour with the
> configuration below.
>
> Each node (1 master machine, 2 slave machines):
>
> 1.    500 GB hard disk.
>
> 2.    4 GB RAM
>
> 3.    3 quad-core CPUs.
>
> 4.    Speed 1333 MHz
>
>
>
> Now, we are planning to load 1 petabyte of data (a single file) into
> Hadoop HDFS and a Hive table within 10-20 minutes. For this we need
> clarification on the points below.
>
> 1. What system configuration is required for all 3 machines?
>

> 2. Hard disk size.
>

At least a petabyte of raw disk, more likely three, since HDFS keeps three
copies of every block by default.

Less if you were planning to do some pre-storage processing, such as
filtering or compressing the data before the upload.
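
As a back-of-envelope sketch of that sizing (a quick Python calculation; it
assumes the stock dfs.replication of 3 and ignores temp space and any other
overhead):

    # rough raw-disk estimate, assuming the HDFS default of 3 replicas per block
    data_pb = 1.0                        # payload to store, in PB
    replication = 3                      # dfs.replication default
    raw_disk_pb = data_pb * replication  # -> 3.0 PB across the cluster
    print("raw disk needed: %.0f PB" % raw_disk_pb)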


> 3. RAM size.
>
> 4. Motherboard
>
> 5. Network cable
>


> 6. How many Gbps of InfiniBand are required?
>
>
Yes.



>  Do we need a cloud computing environment for the same setup too?
>
> Please suggest and help me on this.
>
>  Thanks,
>

Prabhu, I don't think you've been reading the replies fully.

The data rate coming off the filtered CERN LHC experiments is 1.6 PB/month.
Your "10 minute" upload is trying to handle more than two weeks' worth of
CERN data in a tiny fraction of that time.
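
To put a rough number on that (a quick Python sketch using the 1.6 PB/month
figure above):

    # how many days of filtered LHC output add up to 1 PB?
    cern_pb_per_month = 1.6
    cern_pb_per_day = cern_pb_per_month / 30.0   # ~0.053 PB/day
    days_for_one_pb = 1.0 / cern_pb_per_day      # ~19 days
    print("1 PB is roughly %.0f days of filtered LHC output" % days_for_one_pb)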

Nobody can seriously point to your questions and say "this is the
motherboard you need", as your project seems to have some unrealistic goals.
If you do want to do a 1 PB upload in 10 minutes -or even, say, in 30-60
minutes- the first actions in your project should be:


   1. Come up with some realistic deliverables rather than a vague "1
   PB/10 minute" requirement.
   2. Include a realistic timetable as part of those deliverables.
   3. Look at the data source(s) and work out how fast they can actually
   generate data off their hard disks, out of their database, or whatever
   (a rough way to measure this is sketched after this list). That's your
   maximum bandwidth irrespective of what you do with the data afterwards.
   4. Hire someone who knows about these problems and how to solve them -or
   who at least is respected enough that when they say "you need realistic
   goals" they'd be believed.

Someone could set up a network to transfer 1 PB of data into a Hadoop
cluster in 10 minutes, but it would be a bleeding-edge exercise you'd end
up writing papers about in VLDB or similar conferences.
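
For a sense of scale, here is a back-of-envelope sketch (the 100 GB/hour
figure is the rate you measured on your current three nodes; this ignores
the extra in-cluster traffic from 3x replication):

    # what "1 PB in 10 minutes" implies, decimal units throughout
    goal_bytes_per_s = 1e15 / 600                    # ~1.7 TB/s sustained ingest
    goal_gbit_per_s = goal_bytes_per_s * 8 / 1e9     # ~13,300 Gbit/s aggregate
    ten_gbe_links = goal_gbit_per_s / 10             # ~1,300 saturated 10GbE links

    measured_bytes_per_s = 100e9 / 3600.0            # your 100 GB/hour ~ 28 MB/s
    speedup_needed = goal_bytes_per_s / measured_bytes_per_s  # ~60,000x

    print("%.1f TB/s aggregate, about %.0f saturated 10GbE links, roughly "
          "%.0fx your measured ingest rate"
          % (goal_bytes_per_s / 1e12, ten_gbe_links, speedup_needed))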

The cost of doing so would be utterly excessive unless you were planning to
load (and then, hopefully, discard) another PB in the next 10 minutes -and
again, repeatedly. Otherwise you would be paying massive amounts for
network bandwidth that would only ever be used for ten minutes.

Asking for help on the -user list isn't going to solve your problems, as
the "1 PB in 10 minutes" goal is the problem. Do you really need all that
data? In 10 minutes? If so, then you're going to have to find someone who
really, really knows about networking, disk I/O bandwidth, cluster
commissioning, etc. I'm not volunteering. I may have some colleagues you
could talk to, but that -as with other people on this list- would be in the
category of action 5, "pay for consultancy".

Sorry.
