gump-general mailing list archives

From "Adam R. B. Jack" <>
Subject Re: [RT] Gump GUI development parallel to Gump Remote Agents (Sites)
Date Mon, 29 Mar 2004 20:26:08 GMT
> At the present time, it shows a run that started just after noon on
> Sunday, and completed at 9 am the following day.  This happens to show a
> complete run.  Check back in a few hours, and depending on when you
> check, you will find partial results, either on the checkout side or on
> the build side.
> I believe that this is a very valuable thing to have.

Ok, I finally hear you. Since the "remote agent" runs are so long, you'd
like folks to be able to see progress. Understood, and yes -- I've used

> Note that when you run classic Gump from the command line, much of this
> information goes to stdout instead.  So, I say "build gump" and what I
> get is pretty much identical to what I would get if I were to type in
> "ant" from the proper subdirectory, and with all the right things in my

With the Python Gump scripts (running locally) we clearly have the same
activity, though the outputs go to memory and/or files. There's no reason we
can't display them (to stdout) as they are produced, and no reason they need
to go to a file if the screen is fine. I typically run with --debug to watch
the progress of local scripts.
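The "display as produced" idea above can be sketched as a simple tee: a minimal illustration (not the actual Gump code) of writing build output to both stdout and a log file, flushing as each line arrives so watchers see live progress:

```python
import sys

class Tee:
    """Write build output to several streams at once (e.g. stdout and a
    log file), so progress is visible as it is produced."""
    def __init__(self, *streams):
        self.streams = streams

    def write(self, text):
        for s in self.streams:
            s.write(text)
            s.flush()  # flush immediately so watchers see live progress

# Mirror everything to the screen and to a log file.
with open("build.log", "w") as logfile:
    out = Tee(sys.stdout, logfile)
    out.write("update: checking out module foo...\n")
    out.write("build: running ant on project bar...\n")
```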

With the Python Gump script (running remote) we can publish them upon
production, and have some 'status' page (perhaps the buildLog.html) that we
(re)write with what we've done so far.
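The "(re)write a status page as we go" idea might look something like the sketch below; the function name and the page's fields are hypothetical, not what buildLog.html actually contains:

```python
import datetime

def write_status_page(path, completed, total, current):
    """(Re)write a simple status page after each work item, so a long
    remote run shows partial results while it is still going."""
    html = (
        "<html><body>\n"
        "<h1>Gump run status</h1>\n"
        f"<p>Updated: {datetime.datetime.now().isoformat()}</p>\n"
        f"<p>Completed {completed} of {total} projects; "
        f"currently building: {current}</p>\n"
        "</body></html>\n"
    )
    with open(path, "w") as f:
        f.write(html)  # overwrite the page in place each time

write_status_page("buildLog.html", 12, 40, "jakarta-commons")
```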

> This is also very valuable when inviting others into helping debug an
> integration problem.  Simply give them a login on the machine, tell them
> to make whatever changes they like (it will get wiped out next cycle
> anyway), and then type "build <project>" to see if it helps.

We do have these scripts, albeit not with an identical calling interface.

Note: most of this is working; some of it (like --optimize, which would only
compile if sources were updated) is still wishful thinking/planning.
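As a rough illustration of the calling interface, here is a sketch of a command-line front end in the spirit of the local scripts. The flag names (--debug, --quick, --optimize) are the ones mentioned in this thread, but the parser itself is illustrative, not Gump's code:

```python
import argparse

def parse_options(argv):
    """Hypothetical option parser for a local 'build <project>' script."""
    p = argparse.ArgumentParser(prog="build")
    p.add_argument("project", help="project to build")
    p.add_argument("--debug", action="store_true",
                   help="verbose progress output")
    p.add_argument("--quick", action="store_true",
                   help="build the listed project only, not its full "
                        "dependency stack")
    p.add_argument("--optimize", action="store_true",
                   help="only compile if sources were updated")
    return p.parse_args(argv)

opts = parse_options(["--quick", "myproject"])
```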

> I was able to identify exactly where breakages was introduced on
> projects I didn't know by repeatedly issuing "cvs update -D <date>" and
> "build <project>", doing a binary search over time space until I had
> identified the exact commit that caused the problem.

Yup, this is what Stefan does a lot. I completed this use case recently
(with --quick, which builds just the listed projects rather than the full
dependency stack). Much of GumpRunOptions is what drives these paths through
GumpEngine.
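The "binary search over time space" described above can be sketched as follows; `builds_ok` stands in for "cvs update -D &lt;date&gt; &amp;&amp; build &lt;project&gt;", and the date list is illustrative:

```python
def bisect_dates(dates, builds_ok):
    """Binary search over a sorted list of checkout dates: find the
    first date at which the build breaks.  builds_ok(date) plays the
    role of 'cvs update -D <date>' followed by 'build <project>'."""
    lo, hi = 0, len(dates) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if builds_ok(dates[mid]):
            lo = mid + 1   # still good: the breakage is later
        else:
            hi = mid       # broken: the breakage is here or earlier
    return dates[lo]

# Simulate a breakage introduced on 2004-03-17.
dates = ["2004-03-%02d" % d for d in range(1, 30)]
first_bad = bisect_dates(dates, lambda d: d < "2004-03-17")
# first_bad == "2004-03-17"
```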

> Yes, we could split the gui into a client and server and solve this (or
> simply rely on XWindows), but there is value in being able to do this
> from the command line.

Ok, this is a miscommunication -- which I now understand. I was talking
about personal (local) builds here, not remote ones. I was saying that the
local use case is clearly visible with a GUI paradigm, and I know you had
the GUI with a 'build' button. I was saying that the classes are there for
both paths, and that the GUI is a good environment for exploring what we can
do with listeners (write HTML at the server, have the GUI mark a colour on
an icon and update a progress bar, etc.). [I also think the GUI brings out
issues that scripts don't, such as being able to re-use loaded metadata for
more than one run, without polluting it, etc.]

[I wasn't trying to say have the GUI talk to the server, although that is an
interesting side conversation for another time.]

BTW: see, we already have Gump talking to other Gumps via XML. At the end of
a run (right now) a server.xml is written containing all states; other Gumps
download it, parse it, and keep a lookup table in memory.
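Parsing a downloaded status file into a lookup table might look like the sketch below. The element and attribute names here are hypothetical; the real server.xml written by Gump may be shaped differently:

```python
import xml.etree.ElementTree as ET

# Hypothetical server.xml shape, for illustration only.
SAMPLE = """<workspace>
  <module name="jakarta-ant" state="success"/>
  <module name="xml-xerces" state="failed"/>
</workspace>"""

def load_states(xml_text):
    """Parse a downloaded server.xml into an in-memory lookup table of
    module name -> state, as one Gump would do with another's results."""
    root = ET.fromstring(xml_text)
    return {m.get("name"): m.get("state") for m in root.findall("module")}

states = load_states(SAMPLE)
```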

Nick suggested we use the Atom API to a central server, which is intriguing.
It could be overkill, but it could be a very nice distributed solution for
what you are thinking, i.e. offloading the documentation as microcontent to
a remote server to allow notification/timely reading/etc. A bit too 'out
there' for now, and it demands that we solve this first, but (IMHO) it has

> > The reason I care about this distinction is that I feel we don't need a
> > major re-work in order to satisfy targeted runs. I think this is most
> > clearly seen if we bring the GUI that Sam/Nicola worked on out of
> > mothballs. I'd love to see that done, 'cos I think it brings good issues
> > to the surface. With the GUI I think we can allow a user to pick N
> > and perform an update or a run, or whatever.
> A GUI is a nice thing to have, but not as a replacement for a command
> line interface.

We agree, hence my focus has been the command line. I focused on the main
run, then on the local scripts (which are like the GUI in being able to
select choices other than 'all').

> While I like the GUI and can invest some cycles there, I will remain
> fundamentally a command line person for 95% of my usage.

Understood. Again, my purpose was to have a use case where Forrest was
clearly not an option (graphical, not HTML) and hence get a good balance in
the inner workings, so we flesh out/mature such code paths.

> So, my RT
> would be for us to flush out what would be required in order to
> implement a dynamic forrest approach.

I like this (and the rest of your approach). I am trying to complete this to
give us a level set of where we stand, so we can see how we migrate it:


