airavata-dev mailing list archives

From Saminda Wijeratne <samin...@gmail.com>
Subject Re: Orchestrator Overview Meeting Summary
Date Mon, 20 Jan 2014 06:45:34 GMT
On Sun, Jan 19, 2014 at 4:03 PM, Lahiru Gunathilake <glahiru@gmail.com> wrote:

> Hi saminda,
>
> I am writing this to clarify the CIPRES scenario, please correct me if I
> am wrong.
>
> CIPRES users create experiments with all the parameters.
>
> The easy case is when they simply give the input values and run jobs (because
> they store job-related configuration in the application descriptor, and don't
> have to send job configuration data).
>
> The second scenario is when they want to change the job configuration data.
>
For CIPRES, the way they manage the second scenario when needed is by defining
two tools for different deployments of the same application (the name of the
tool somewhat reflects the deployment location).
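
To make that concrete, their workaround roughly amounts to something like the
sketch below (the class/field names and hosts are purely illustrative, not the
actual CIPRES tool schema):

// Hypothetical sketch of the CIPRES workaround: two "tools" wrap the same
// application and differ only in where the job is deployed.
class Tool {
    final String name;        // the tool name hints at the deployment location
    final String application; // the underlying application is shared
    final String host;        // deployment-specific job configuration lives here

    Tool(String name, String application, String host) {
        this.name = name;
        this.application = application;
        this.host = host;
    }

    public static void main(String[] args) {
        // Two tools, one application, two deployments:
        Tool a = new Tool("RAxML_ClusterA", "RAxML", "clusterA.example.org");
        Tool b = new Tool("RAxML_ClusterB", "RAxML", "clusterB.example.org");
        System.out.println(a.name + " and " + b.name + " both run " + a.application);
    }
}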

>
> To handle this case we are trying to think of a template approach?
>
We are just considering templates as another way of looking at solving it.

>
> If my understanding above is correct, we need to save the job
> configuration data each experiment has used, if it differs from the
> original. Or we need to create a separate App descriptor each time a
> user changes some parameter in the AD (this is not a good approach).
>
> How about we create a base Application Descriptor and associate it with the
> runtime job data used for each experiment invocation? In that case we have
> to save the finally used job configuration, and users can view this
> information to analyse the experiment results. Users can send this data
> along with the request (this works fine with the Orchestrator now if the
> user sends the Application Descriptor along with the request).
>
+1
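
To sketch what I'm +1'ing (class and field names below are only illustrative,
not the current Airavata API):

import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: the base descriptor stays shared and immutable;
// each invocation persists only the job configuration it actually used.
class BaseApplicationDescriptor {
    final Map<String, String> defaultJobConfiguration = new HashMap<>();
}

class ExperimentInvocation {
    final BaseApplicationDescriptor base;                   // shared, never mutated
    final Map<String, String> overrides = new HashMap<>();  // saved per run

    ExperimentInvocation(BaseApplicationDescriptor base) {
        this.base = base;
    }

    // The finally used job configuration: base defaults with the
    // per-experiment overrides applied on top. This is what gets persisted
    // so users can later analyse the experiment results against it.
    Map<String, String> effectiveJobConfiguration() {
        Map<String, String> effective = new HashMap<>(base.defaultJobConfiguration);
        effective.putAll(overrides);
        return effective;
    }
}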

>
> WDYT ?
>
> Lahiru
>
>
> On Mon, Jan 20, 2014 at 12:40 AM, Suresh Marru <smarru@apache.org> wrote:
>
>>
>> On Jan 19, 2014, at 12:38 PM, Saminda Wijeratne <samindaw@gmail.com>
>> wrote:
>>
>> > My initial idea is to have an experiment template saved; later, users
>> would launch the experiment template as many times as they want, each time
>> creating an experiment only at launch. If users want to make small
>> changes, they could take the template, change it, and save it again, either
>> as a new template or over the same one. But I was wondering how intuitive
>> such an approach would be for the user.
>>
>> I like the template approach as one implementation option, but
>> I wonder whether it is applicable to the current discussion of cloning. Let
>> me explain my thoughts more clearly.
>>
>> For eScience use cases, the workflow (or application in this case) is the
>> recipe and the experiment is an instance of executing the recipe. So
>> naturally workflow and application descriptions are templates, instantiated
>> for each execution. But here I see the use case is cloning the experiment
>> (an instance of the end result) and not the application/workflow template
>> (which is what Amila alluded to earlier on this thread). By the exploratory
>> nature of science, experiments are trial and error, so it may not be
>> possible a priori to determine re-usable experiments and template them.
>> Rather, users roll the dice, and when they start seeing expected results,
>> they would like to clone the experiments and fine-tune them, or repeat them
>> over finer data, and so forth. So in summary, I think applications/workflows
>> are good candidates for the template approach and experiments are good for
>> after-the-fact cloning.
>>
> I think what you mentioned about the idea of a template for a gateway makes
a lot of sense.

>
>> I say cloning experiments can be implemented as templates because, if a
>> user has a huge list of executed experiments, it will be tough to
>> navigate through the workspace to find the ones they want to clone. So an
>> option can be provided to mark the ones they think are worth cloning in
>> the future and make the shorter list available. This arguably mimics
>> templates.
>>
> This sounds like a new use case for Airavata?

Just a thought,

Experiment = Experiment ID + Experiment Metadata (e.g. name, user,
date/time, status...) + Experiment Configuration (e.g. inputs, descriptors to
use...)

IMO cloning an experiment is just duplicating the "Experiment Configuration"
and creating a new "Experiment ID" + "Experiment Metadata".
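
In code, that could look roughly like this (the types are purely illustrative,
not the actual Airavata model):

import java.time.Instant;
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Hypothetical sketch: cloning copies only the configuration;
// identity and metadata are created fresh for the new experiment.
class Experiment {
    String id;                         // Experiment ID
    String name;                       // \
    String user;                       //  } Experiment Metadata
    Instant created;                   // /
    Map<String, String> configuration; // Experiment Configuration

    static Experiment cloneOf(Experiment original, String user) {
        Experiment copy = new Experiment();
        copy.id = UUID.randomUUID().toString();
        copy.name = original.name + " (clone)";
        copy.user = user;
        copy.created = Instant.now();
        copy.configuration = new HashMap<>(original.configuration);
        return copy;
    }
}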


>> Suresh
>>
>> >
>> >
>> > On Sun, Jan 19, 2014 at 7:58 AM, Suresh Marru <smarru@apache.org>
>> wrote:
>> > I see Amila's point, and it can be argued that the Airavata Client can
>> fetch an experiment, modify what is needed, and re-submit it as a new
>> experiment.
>> >
>> > But I agree with Saminda: if an experiment has dozens of inputs and if,
>> say, only a parameter or scheduling info needs to be changed, cloning makes
>> it useful. The challenge, though, is how to communicate what needs to be
>> changed. Should we assume anything not explicitly passed remains as in the
>> original experiment, and the ones passed are overridden?
>> >
>> > I think the word clone seems fine and also aligns with the Java Clone
>> interpretation [1].
>> >
>> > This brings up another question: should there be only create, launch,
>> clone, and terminate for experiments, or should we also have a configure
>> experiment? The purpose of configure is to let the client incrementally
>> load up the object as it gets the information, and only launch it when it
>> is ready. That way portals need not have an intermediate persistence for
>> these objects, and users can build an experiment over long sessions.
>> Thoughts?
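
On the configure idea above: the operations being discussed could then look
roughly like this (a sketch only; the names are not an agreed API):

import java.util.Map;

// Hypothetical sketch of the experiment operations being discussed;
// not the current Orchestrator interface.
interface ExperimentService {
    String createExperiment(String name, String user);  // returns a new experiment ID
    // configure may be called repeatedly as the client gathers information,
    // so portals need no intermediate persistence of their own
    void configureExperiment(String experimentId, Map<String, String> partialConfig);
    String cloneExperiment(String experimentId);        // duplicate config, new ID
    void launchExperiment(String experimentId);         // only when fully configured
    void terminateExperiment(String experimentId);
}
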
>> >
>> > Suresh
>> > [1] -
>> http://docs.oracle.com/javase/7/docs/api/java/lang/Object.html#clone()
>> >
>> > On Jan 17, 2014, at 2:05 PM, Saminda Wijeratne <samindaw@gmail.com>
>> wrote:
>> >
>> > > An experiment will not define new descriptors, but rather point to
>> existing descriptor(s). IMO (correct me if I'm wrong):
>> > >
>> > > Experiment = Application + Input value(s) for application +
>> Configuration data for managing job
>> > >
>> > > Application = Service Descriptor + Host Descriptor + Application
>> Descriptor
>> > >
>> > > Thus an experiment involves quite a lot of data that
>> needs to be specified, so it is easier to make a copy of it rather than
>> asking the user to specify all of the data again when there are only very
>> few changes compared to the original experiment. Perhaps the confusion here
>> is the word "clone"?
>> > >
>> > >
>> > > On Fri, Jan 17, 2014 at 10:20 AM, Amila Jayasekara <
>> thejaka.amila@gmail.com> wrote:
>> > > This seems like adding a new experiment definition (i.e. new
>> descriptors).
>> > > As far as I understood, this should be handled at the UI layer (?). For
>> the backend it will just be new descriptor definitions (?).
>> > > Maybe I am missing something.
>> > >
>> > > - AJ
>> > >
>> > >
>> > > On Fri, Jan 17, 2014 at 1:15 PM, Saminda Wijeratne <
>> samindaw@gmail.com> wrote:
>> > > This was in accordance with the CIPRES use-case scenario, where users
>> would want to rerun their tasks but with a subset of slightly different
>> parameters/inputs. This is particularly useful for them because their tasks
>> can include more than 20-30 parameters most of the time.
>> > >
>> > >
>> > > On Fri, Jan 17, 2014 at 6:49 AM, Sachith Withana <swsachith@gmail.com>
>> wrote:
>> > > Hi Amila,
>> > >
>> > > The use of the word "cloning" is misleading.
>> > >
>> > > Saminda suggested that we would need to run the application on a
>> different host (based on the user's intuition of host availability/
>> efficiency), keeping all the other variables constant (input changes are
>> also allowed). For example, if a job keeps failing on one host, the user
>> should be allowed to submit the job to another host.
>> > >
>> > > We should come up with a different name for the scenario.
>> > >
>> > >
>> > > On Thu, Jan 16, 2014 at 11:36 PM, Amila Jayasekara <
>> thejaka.amila@gmail.com> wrote:
>> > >
>> > >
>> > >
>> > > On Thu, Jan 16, 2014 at 10:58 AM, Sachith Withana <
>> swsachith@gmail.com> wrote:
>> > > Hi All,
>> > >
>> > > This is the summary of the meeting we had on Wednesday (01/16/14) about
>> the Orchestrator.
>> > >
>> > > Orchestrator Overview
>> > > I introduced the Orchestrator; the presentation is attached
>> herewith.
>> > >
>> > > Adding Job Cloning capability to the Orchestrator API
>> > > Saminda suggested that we should have a way to clone an existing job
>> and run it with different inputs, on a different host, or both. Here's the
>> JIRA for that [1].
>> > >
>> > > I didn't quite understand what cloning does. Once descriptors are
>> set up, we can run an experiment with different inputs as many times as we
>> want. So what is the actual need for cloning?
>> > >
>> > > Thanks
>> > > Thejaka Amila
>> > >
>> > >
>> > > Gfac embedded vs Gfac as a service
>> > > We have implemented the embedded Gfac and decided to use it for now.
>> > > Gfac as a service is a long-term goal. Until we get the
>> Orchestrator complete, we will use the embedded Gfac.
>> > >
>> > > Job statuses for the Orchestrator and the Gfac
>> > > We need to come up with multi-level job statuses: user-level,
>> Orchestrator-level, and Gfac-level. The mapping between them is also open
>> for discussion. We didn't come to a conclusion on the matter; we will
>> discuss this topic in an upcoming meeting.
>> > >
>> > >
>> > > [1] https://issues.apache.org/jira/browse/AIRAVATA-989
>> > >
>> > > --
>> > > Thanks,
>> > > Sachith Withana
>> > >
>> > >
>> > >
>> > >
>> > >
>> > > --
>> > > Thanks,
>> > > Sachith Withana
>> > >
>> > >
>> > >
>> > >
>> >
>> >
>>
>>
>
>
> --
> System Analyst Programmer
> PTI Lab
> Indiana University
>
