hadoop-mapreduce-issues mailing list archives

From "Ralph H Castain (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (MAPREDUCE-2911) Hamster: Hadoop And Mpi on the same cluSTER
Date Wed, 11 Apr 2012 10:06:20 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-2911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13251452#comment-13251452 ]

Ralph H Castain commented on MAPREDUCE-2911:
--------------------------------------------

I'm afraid our optimism about a near-term patch proved a little premature. Milind's initial
prototype (based on advice from me, before I really understood the situation) won't work on
multi-tenant systems, so another method had to be developed. After spending a couple of months
beating on this, I've pretty much put it on hold for now while I pursue an alternative approach.

Adding MPI support to Yarn has proven very difficult, and may not be worth the pain. The
problems stem from some basic Yarn architectural decisions that run counter to HPC norms.
For example, Yarn's linear launch pattern produces scaling behavior that HPC users find
objectionable; they generally consider anything slower than logarithmic scaling unacceptable.
All HPC RMs meet that requirement, and many scale better than logarithmically. The result is
striking: for a 64-node launch, Yarn takes several seconds to start the job, whereas an HPC
RM starts the same job in milliseconds.
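
To make the launch-pattern difference concrete, here is a minimal back-of-the-envelope sketch
in Python. It is not taken from Yarn or any HPC RM; the function names are made up, and it
simply counts steps for a purely sequential (linear) launch versus a hypothetical tree-based
launch with a configurable fanout:

    def linear_launch_steps(nodes):
        # Purely sequential launch: one node is started per step, so the step
        # count grows linearly with the number of nodes.
        return nodes

    def tree_launch_steps(nodes, fanout=2):
        # Tree-based launch: every already-running daemon starts `fanout` more
        # per round, so the running count multiplies by (1 + fanout) each round
        # and the round count grows logarithmically with the number of nodes.
        started, rounds = 1, 0
        while started < nodes:
            started += started * fanout
            rounds += 1
        return rounds

    for n in (64, 1024, 4096):
        print(n, "nodes:", linear_launch_steps(n), "sequential steps vs",
              tree_launch_steps(n), "tree rounds")

The absolute numbers are invented; the point is only the shape of the two curves as the node
count grows.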

Similarly, the lack of collective communication support in Yarn means that MPI wireup scales
quadratically with the number of processes. Contrast this with a typical HPC installation,
where wireup scales logarithmically, and remember that wireup is the largest time consumer
during MPI startup, and you can understand the concern. As a benchmark, we routinely start
a 3k-node, 12k-process job (including MPI wireup) in about 5 seconds using Moab/Torque.
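
For a rough sense of why quadratic wireup hurts, here is another small illustrative sketch
(again not taken from Yarn or Open MPI code; the helpers are hypothetical). It compares the
message count of a naive pairwise endpoint exchange, where every process contacts every other
process, against the round count of a recursive-doubling allgather, the kind of collective an
HPC RM can provide:

    import math

    def pairwise_exchange_messages(procs):
        # Naive wireup: every process sends its endpoint information directly to
        # every other process, so the message count grows quadratically.
        return procs * (procs - 1)

    def recursive_doubling_rounds(procs):
        # Allgather via recursive doubling: the set of known endpoints doubles
        # each round, so the round count grows logarithmically.
        return math.ceil(math.log2(procs)) if procs > 1 else 0

    for p in (1000, 12000):
        print(p, "processes:", pairwise_exchange_messages(p), "pairwise messages vs",
              recursive_doubling_rounds(p), "recursive-doubling rounds")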

Finally, when we compared fault tolerance performance, we didn't see a significant difference
between Yarn and the latest HPC RM releases. Both exhibited similar recovery behaviors, and
had similar multi-failure race condition issues.

Just to be clear, I'm not criticizing Yarn; the other RMs showed similar behavior at a corresponding
point in their development (including my own past efforts in that arena!). Remember, today's
HPC RMs can each boast roughly 50-100 man-years of development behind them, and have undergone
several cycles of architectural change to improve scalability, so one would naturally expect
them to outperform Yarn at this point.

There are other issues, including the difficulty of getting a Yarn AM to actually work. However,
the impetus behind the "hold" was really the above observations, combined with an overwhelmingly
negative reaction from the HPC community when I asked about using Yarn on general-purpose
(Hadoop + non-Hadoop apps) clusters. In contrast, I received an equally positive reaction
to the idea of running Hadoop MR on a general-purpose cluster, separating that code (plus
HDFS) from Yarn.

I have therefore started pursuing this option, which proved to be much easier to do. I expect
to have an "early adopter" version of MR/HDFS on an HPC cluster sometime in the next week
or two, with a general release this summer (aided by other members of the HPC community who
have volunteered their help).

Of course, I realize that there will be people out there who decide to run Yarn on their
Hadoop systems (as opposed to worrying about general-purpose clusters) and might also be
interested in using MPI. So I'll return to this after I get the HPC problem solved, with the
caveat that such users should understand that the scaling and performance will not be what
they are used to seeing on non-Yarn systems.

HTH
Ralph

                
> Hamster: Hadoop And Mpi on the same cluSTER
> -------------------------------------------
>
>                 Key: MAPREDUCE-2911
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2911
>             Project: Hadoop Map/Reduce
>          Issue Type: New Feature
>          Components: mrv2
>    Affects Versions: 0.23.0
>         Environment: All Unix-Environments
>            Reporter: Milind Bhandarkar
>            Assignee: Ralph H Castain
>             Fix For: 0.24.0
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> MPI is commonly used for many machine-learning applications. OpenMPI (http://www.open-mpi.org/)
> is a popular BSD-licensed version of MPI. In the past, running MPI applications on a Hadoop
> cluster was achieved using Hadoop Streaming (http://videolectures.net/nipsworkshops2010_ye_gbd/),
> but it was kludgy. After the resource-manager separation from JobTracker in Hadoop, we have
> all the tools needed to make MPI a first-class citizen on a Hadoop cluster. I am currently
> working on a patch to make MPI an application master. An initial version of this patch will
> be available soon (hopefully before September 10). This JIRA will track the development of
> Hamster: the application master for MPI.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
