hadoop-common-issues mailing list archives

From "Yang Zhou (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-6483) Provide Hadoop as a Service based on standards
Date Mon, 18 Jan 2010 00:38:54 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-6483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yang Zhou updated HADOOP-6483:

    Attachment: OGF27-HPCBPforHadoop.ppt

The HPC-BP implementation for Hadoop we presented at SC'08 is the one without resource provisioning.
The HPC-BP implementation for Hadoop we presented at OGF27 is the one with resource provisioning.
Of course, there are some other differences between these two implementations, e.g. Hadoop job
I attach both of them for comparison.

> Provide Hadoop as a Service based on standards
> ----------------------------------------------
>                 Key: HADOOP-6483
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6483
>             Project: Hadoop Common
>          Issue Type: New Feature
>            Reporter: Yang Zhou
>         Attachments: OGF27-HPCBPforHadoop.ppt, SC08-HPCBPforHadoop.ppt
> Hadoop as a Service provides a standards-based web services interface that layers on
> top of Hadoop on Demand and allows Hadoop jobs to be submitted via popular schedulers, such
> as Sun Grid Engine (SGE), Platform LSF, Microsoft HPC Server 2008, etc., to local or remote
> Hadoop clusters.  This allows multiple Hadoop clusters within an organization to be efficiently
> shared and provides flexibility, allowing remote Hadoop clusters, offered as Cloud services,
> to be used for experimentation and burst capacity. HaaS hides complexity, allowing users to
> submit many types of compute- or data-intensive work via a single scheduler without actually
> knowing where it will be done. Additionally, providing a standards-based front-end to Hadoop
> means that users would be able to easily choose HaaS providers without being locked in
> via proprietary interfaces such as Amazon's map/reduce service.
> Our HaaS implementation uses the OGF High Performance Computing Basic Profile (HPC-BP) standard
> to define interoperable job submission descriptions and management interfaces to Hadoop. It
> uses Hadoop on Demand to provision capacity. Our HaaS implementation also supports file stage-in/out
> with protocols like FTP, SCP and GridFTP.
> Our HaaS implementation also provides a suite of RESTful interfaces which are compliant with
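As a rough sketch of what the standards-based submission described above might look like, the HPC Basic Profile uses JSDL (with the HPC Profile Application extension) to describe a job; staging data in via GridFTP would use a JSDL DataStaging element. All element values below (the job name, jar file, paths, and GridFTP URI) are illustrative assumptions, not details taken from this issue:

```xml
<!-- Hypothetical JSDL document for submitting a Hadoop job via HPC-BP.
     Job name, jar, arguments, and URI are illustrative only. -->
<jsdl:JobDefinition
    xmlns:jsdl="http://schemas.ggf.org/jsdl/2005/11/jsdl"
    xmlns:jsdl-hpcpa="http://schemas.ggf.org/jsdl/2006/07/jsdl-hpcpa">
  <jsdl:JobDescription>
    <jsdl:JobIdentification>
      <jsdl:JobName>hadoop-wordcount</jsdl:JobName>
    </jsdl:JobIdentification>
    <jsdl:Application>
      <jsdl-hpcpa:HPCProfileApplication>
        <jsdl-hpcpa:Executable>hadoop</jsdl-hpcpa:Executable>
        <jsdl-hpcpa:Argument>jar</jsdl-hpcpa:Argument>
        <jsdl-hpcpa:Argument>wordcount.jar</jsdl-hpcpa:Argument>
        <jsdl-hpcpa:Argument>input</jsdl-hpcpa:Argument>
        <jsdl-hpcpa:Argument>output</jsdl-hpcpa:Argument>
      </jsdl-hpcpa:HPCProfileApplication>
    </jsdl:Application>
    <!-- Stage input data in over GridFTP before the job runs -->
    <jsdl:DataStaging>
      <jsdl:FileName>input</jsdl:FileName>
      <jsdl:CreationFlag>overwrite</jsdl:CreationFlag>
      <jsdl:Source>
        <jsdl:URI>gsiftp://example.org/data/input.txt</jsdl:URI>
      </jsdl:Source>
    </jsdl:DataStaging>
  </jsdl:JobDescription>
</jsdl:JobDefinition>
```

A scheduler front-end (SGE, LSF, HPC Server 2008) would hand such a document to the HaaS endpoint, which maps it onto a Hadoop on Demand provisioned cluster.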

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
