airavata-dev mailing list archives

From Emre Brookes <e...@biochem.uthscsa.edu>
Subject Re: genapp airavata integration
Date Tue, 29 Apr 2014 13:06:13 GMT
Hi Suresh,

For expediency (given the 1 June timeframe), I was thinking it would 
be simpler to run the Airavata gateway on a VM
within the security environment of the localhost, talk only to the 
localhost's compute nodes, and have everything
pre-staged and directly available for GenApp to return to the client.

I had some discussions yesterday with Joseph Curtis (who is running the 1 
June workshop, is part of CCP-SAS,
is currently using GenApp to wrap his modules, and is running 
tests directly on quarry).
We can replicate our current GenApp-calls-local-executable scenario on 
the new compute resource
(entropy.chem.utk.edu), and he will do "manual" queuing for the 1 June 
workshop
(i.e. let only a few attendees submit at a time).  Not ideal, but 
workable, and it buys us a bit of time.

His modules can execute directly without going through the batch system 
this way, but this *needs*
to be migrated to the full queue-submission mechanism before general 
alpha deployment to
the various beamlines. That is when we need to bring in Airavata, 
which hopefully can be ready
in the June/July timeframe (for which your points below are valid).

Summarizing a plan forward:

Within the next week or two:
run GenApp directly on a VM on entropy - no queue submission - 
acceptable for 1 June workshop.

As soon as Airavata is ready (app-catalog etc):
Bring in an Airavata quarry VM dedicated to "GenApp" and begin testing 
integration and submission to entropy's SGE batch.
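The second step above (Airavata on its VM driving entropy's SGE batch over ssh, as in Suresh's outline) could be sketched roughly as below. This is only an illustration under stated assumptions: the hostname, the helper names (submit_job, parse_job_id, job_is_running), and the reliance on pre-configured ssh keys are mine, not part of any existing GenApp or Airavata code.

```python
import re
import subprocess

# Assumed target cluster head node, reachable via pre-registered ssh keys.
ENTROPY = "entropy.chem.utk.edu"

def parse_job_id(qsub_output):
    """Pull the numeric job id out of a standard SGE qsub reply, e.g.
    'Your job 12345 ("job.sh") has been submitted'."""
    m = re.search(r"Your job (\d+)", qsub_output)
    if m is None:
        raise RuntimeError("unexpected qsub reply: %r" % qsub_output)
    return m.group(1)

def submit_job(script_path):
    """Submit a pre-staged job script to SGE over ssh; return the job id."""
    out = subprocess.run(
        ["ssh", ENTROPY, "qsub", script_path],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_job_id(out)

def job_is_queued_or_running(job_id):
    """Poll qstat over ssh; once the job id no longer appears in the
    listing, the job has left the queue (finished or failed)."""
    out = subprocess.run(
        ["ssh", ENTROPY, "qstat"],
        capture_output=True, text=True, check=True,
    ).stdout
    return any(line.split()[:1] == [job_id] for line in out.splitlines())
```

Since work directories are pre-staged on the local file system, no scp step appears here; that would only be needed once file staging enters the picture.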

Thanks,
Emre.



Suresh Marru wrote:
> Hi Lahiru,
>
> I do not think Emre is trying to run Airavata on the resource. If I 
> understood Emre’s scenario, here are the steps:
>
> * Install Airavata on a VM on a Quarry gateway hosting machine
> * Setup SSH keys for the target CCP-SAS cluster and register them with 
> credential store
> * Register applications automatically during gen-app code generation 
> (this will rely on app catalog)
> * Bundle thrift clients to gen-app interfaces
> * thrift clients talk to airavata on GHM which ssh to CCP-SAS cluster 
> and do a qsub submission
> * monitor jobs using qstat
> * If file staging is needed, use scp
>
> I only see app catalog as a blocker, do you see any?
>
> Emre please correct any of the steps above.
>
> Cheers,
> Suresh
>
> On Apr 28, 2014, at 8:47 PM, Lahiru Gunathilake <glahiru@gmail.com 
> <mailto:glahiru@gmail.com>> wrote:
>
>> Hi Emre,
>>
>> We haven't tested the scenario where we run airavata on the resource 
>> and talk to localhost, but if you specify some dummy cert files 
>> or an empty/random password it might work.
>>
>> Can you please let us know if you get any errors....
>>
>> Thanks
>> Lahiru
>>
>>
>> On Mon, Apr 28, 2014 at 4:09 PM, Emre Brookes 
>> <emre@biochem.uthscsa.edu <mailto:emre@biochem.uthscsa.edu>> wrote:
>>
>>     Lahiru Gunathilake wrote:
>>
>>         What do you mean by DIRECT job submission to SGE? I know
>>         we can submit to SGE and monitor the jobs.
>>
>>         Regards
>>         Lahiru
>>
>>     I mean Airavata would be running in a VM on the cluster and would
>>     be able to submit directly (via ssh) to the
>>     queue without certificates etc.
>>
>>
>>     Raminder wrote:
>>
>>         What language client are you looking for? Airavata can
>>         integrate with a cluster even now, but it would be good to learn
>>         the security requirements. Do you need to transfer data?
>>
>>         Thanks
>>         Raminder
>>
>>     For now, no transfer of data would be required: simple use of the
>>     local file system with work directories pre-staged.
>>
>>     Thanks for the quick feedback.
>>     Emre.
>>
>>
>>
>>
>>
>> -- 
>> System Analyst Programmer
>> PTI Lab
>> Indiana University
>

