airavata-dev mailing list archives

From Lahiru Gunathilake <>
Subject Re: XBaya/Hadoop Integration - Concern
Date Sat, 22 Jun 2013 03:40:58 GMT
Hi Danushka,

I am +1 for this approach, but please make sure you patch gfac-core
without breaking the default GFac functionality.
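
To illustrate, here is a rough, hypothetical sketch (the interface and class
names are illustrative only, not the actual gfac-core API) of how a Hadoop
provider could be selected without touching the default execution path:

// Hypothetical sketch only; these names are illustrative and not the
// actual gfac-core API. A Hadoop provider is selected only when the job
// explicitly asks for it, so the existing default behaviour is untouched.
interface Provider {
    void execute(JobContext context) throws Exception;
}

final class JobContext {
    private final boolean hadoopJob;
    JobContext(boolean hadoopJob) { this.hadoopJob = hadoopJob; }
    boolean isHadoopJob() { return hadoopJob; }
}

final class ProviderSelector {
    private final Provider defaultProvider;
    private final Provider hadoopProvider;

    ProviderSelector(Provider defaultProvider, Provider hadoopProvider) {
        this.defaultProvider = defaultProvider;
        this.hadoopProvider = hadoopProvider;
    }

    // Non-Hadoop jobs fall straight through to the default provider.
    Provider select(JobContext context) {
        return context.isHadoopJob() ? hadoopProvider : defaultProvider;
    }
}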


On Fri, Jun 21, 2013 at 7:44 PM, Danushka Menikkumbura <> wrote:

> The Hadoop deployment model (single node, local cluster, EMR, etc.) is not
> exactly a host as in Airavata, but it is along the lines of a host, IMO.
> Therefore we can still stick to a similar model, but we need a different
> UI to configure these deployments. Hadoop jobs would still be treated
> differently and be configured in the workflow itself (i.e. the current
> implementation), as opposed to being predefined like GFac applications.
> Please let me know if you think otherwise.
> Cheers,
> Danushka
> On Wed, Jun 19, 2013 at 12:57 AM, Danushka Menikkumbura <
>> wrote:
>> Hi All,
>> The current UI implementation does not take application/host descriptions
>> into account, simply because I believe they have little or no meaning in
>> the Hadoop world. The current implementation lets you configure each
>> individual job through the UI (please see the attached xbaya-hadoop.png).
>> The upside of this approach is that new jobs can be added and configured
>> dynamically, without adding application descriptions, generating code,
>> compiling, re-deploying, etc. The downside is that it differs from general
>> GFac application invocation, where each application has an associated
>> application/host description. Nevertheless, we are trying to incorporate
>> something that does not quite fit into the application/host domain.
>> Thoughts appreciated.
>> Thanks,
>> Danushka
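
To make the deployment-model idea above a bit more concrete, here is a
hypothetical sketch of a host-like Hadoop deployment description; the names
below are not part of the current Airavata descriptor schema:

// Hypothetical sketch, not part of the current Airavata host/application
// descriptor schema. It models the "host-like" deployment modes mentioned
// above (single node, local cluster, EMR) as one configurable description.
enum HadoopDeploymentMode { SINGLE_NODE, LOCAL_CLUSTER, EMR }

final class HadoopDeploymentDescription {
    private final HadoopDeploymentMode mode;
    private final String jobTrackerAddress; // e.g. host:port of a local cluster
    private final String fileSystemUri;     // e.g. hdfs://... or s3://... for EMR

    HadoopDeploymentDescription(HadoopDeploymentMode mode,
                                String jobTrackerAddress,
                                String fileSystemUri) {
        this.mode = mode;
        this.jobTrackerAddress = jobTrackerAddress;
        this.fileSystemUri = fileSystemUri;
    }

    HadoopDeploymentMode getMode() { return mode; }
    String getJobTrackerAddress() { return jobTrackerAddress; }
    String getFileSystemUri() { return fileSystemUri; }
}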

System Analyst Programmer
Indiana University
