hadoop-common-issues mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-6483) Provide Hadoop as a Service based on standards
Date Sat, 30 Jan 2010 15:24:34 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-6483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12806684#action_12806684 ]

Steve Loughran commented on HADOOP-6483:
----------------------------------------

Yes, I would argue for opening this up to all interested parties. Could we talk next week,
Tuesday or Wednesday?

> Provide Hadoop as a Service based on standards
> ----------------------------------------------
>
>                 Key: HADOOP-6483
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6483
>             Project: Hadoop Common
>          Issue Type: New Feature
>            Reporter: Yang Zhou
>         Attachments: OGF27-HPCBPforHadoop.ppt, SC08-HPCBPforHadoop.ppt
>
>
> Hadoop as a Service (HaaS) provides a standards-based web services interface that layers
> on top of Hadoop on Demand and allows Hadoop jobs to be submitted to local or remote Hadoop
> clusters via popular schedulers such as Sun Grid Engine (SGE), Platform LSF, and Microsoft
> HPC Server 2008. This allows multiple Hadoop clusters within an organization to be shared
> efficiently and provides flexibility, allowing remote Hadoop clusters, offered as Cloud
> services, to be used for experimentation and burst capacity. HaaS hides complexity, letting
> users submit many types of compute- or data-intensive work via a single scheduler without
> knowing where it will actually run. Additionally, a standards-based front end to Hadoop
> means that users can switch HaaS providers freely rather than being locked in via
> proprietary interfaces such as Amazon's map/reduce service.
> Our HaaS implementation uses the OGF High Performance Computing Basic Profile (HPC-BP)
> standard to define interoperable job submission descriptions and management interfaces to
> Hadoop. It uses Hadoop on Demand to provision capacity. Our implementation also supports
> file stage-in/stage-out with protocols such as FTP, SCP and GridFTP.
> It also provides a suite of RESTful interfaces compliant with HPC-BP.
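
To make the submission path concrete: HPC-BP services consume JSDL (Job Submission
Description Language) documents. The sketch below builds a minimal JSDL job definition
that would run a Hadoop job via the JSDL POSIX application extension. This is purely
illustrative and not from the attached implementation; the jar name, arguments, and the
`hadoop_jsdl` helper are made up for the example, though the element names and namespaces
follow the JSDL 1.0 schema.

```python
# Illustrative sketch (not the HADOOP-6483 code): construct a minimal JSDL
# JobDefinition of the kind an HPC Basic Profile endpoint would accept.
import xml.etree.ElementTree as ET

# Namespaces defined by the JSDL 1.0 specification (GFD.56).
JSDL_NS = "http://schemas.ggf.org/jsdl/2005/11/jsdl"
POSIX_NS = "http://schemas.ggf.org/jsdl/2005/11/jsdl-posix"
ET.register_namespace("jsdl", JSDL_NS)
ET.register_namespace("jsdl-posix", POSIX_NS)

def hadoop_jsdl(jar, args):
    """Return a JSDL document (as a string) that runs `hadoop jar <jar> <args...>`."""
    job_def = ET.Element(f"{{{JSDL_NS}}}JobDefinition")
    job_desc = ET.SubElement(job_def, f"{{{JSDL_NS}}}JobDescription")

    ident = ET.SubElement(job_desc, f"{{{JSDL_NS}}}JobIdentification")
    ET.SubElement(ident, f"{{{JSDL_NS}}}JobName").text = "hadoop-job"

    # The POSIXApplication extension describes the executable and its arguments.
    app = ET.SubElement(job_desc, f"{{{JSDL_NS}}}Application")
    posix = ET.SubElement(app, f"{{{POSIX_NS}}}POSIXApplication")
    ET.SubElement(posix, f"{{{POSIX_NS}}}Executable").text = "hadoop"
    for a in ["jar", jar] + list(args):
        ET.SubElement(posix, f"{{{POSIX_NS}}}Argument").text = a

    return ET.tostring(job_def, encoding="unicode")

# Example: a hypothetical word-count jar reading from in/ and writing to out/.
print(hadoop_jsdl("wordcount.jar", ["in/", "out/"]))
```

A client would POST such a document to the service's job-submission resource; the service
then translates it into a Hadoop on Demand provisioning request and job launch.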

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

