gump-general mailing list archives

From Sam Ruby <ru...@apache.org>
Subject Re: [RT] Gump Architecture
Date Sat, 27 Mar 2004 11:18:59 GMT
One thing that is possible in Python, but would have been difficult in 
a shell script, is to parallelize some of the processing.

In a daily build, there are three major components: cvs/svn update, 
copy/rsync, and build.  My recommendation would be not to overdesign 
this (too many processes or threads is not a good thing for performance 
either), but perhaps having exactly three of them, operating as a 
pipeline, would be a good thing.
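
To make that concrete, here is a rough, untested sketch of what I have 
in mind: three threads chained by queues, with do_update/do_copy/do_build 
standing in for the real work (the module names are made up too):

import threading
from queue import Queue

def do_update(module):
    print("cvs/svn update:", module)   # placeholder for the real update

def do_copy(module):
    print("copy/rsync:", module)       # placeholder for the real copy/rsync

def do_build(module):
    print("build:", module)            # placeholder for the real build

def stage(work, in_q, out_q):
    # Pull modules from in_q, process them, and hand them downstream.
    while True:
        module = in_q.get()
        if module is None:             # sentinel: shut this stage down
            if out_q is not None:
                out_q.put(None)        # pass the shutdown signal along
            break
        work(module)
        if out_q is not None:
            out_q.put(module)

modules = ["ant", "xml-xerces", "xml-xalan"]  # whatever the workspace defines

update_q, copy_q, build_q = Queue(), Queue(), Queue()
threads = [
    threading.Thread(target=stage, args=(do_update, update_q, copy_q)),
    threading.Thread(target=stage, args=(do_copy,   copy_q,   build_q)),
    threading.Thread(target=stage, args=(do_build,  build_q,  None)),
]

for t in threads:
    t.start()
for m in modules:
    update_q.put(m)
update_q.put(None)                     # end of work
for t in threads:
    t.join()

Note that each stage is a single thread, so builds still happen one at 
a time and in the same order as before; the only new overlap is between 
the network, the disk, and the compiler.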

Let's look at two extremes:

1) Fast network, slow CPU.  In this case, after the first few cvs/svn 
updates, a steady state is reached where one component is building, a 
move/copy for the next component is in progress, and a cvs/svn update 
for a third is running.  The net effect would be that the overall time 
approximates the time of the builds, plus the small increment necessary 
to "prime the pump", and whatever penalty overlapping the I/O with the 
builds adds.

2) Slow network, fast CPU.  Essentially the reverse of the above: the 
overall time is the time of the checkouts plus a small delta.
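
To put rough, made-up numbers on case 1: if the builds total 60 minutes 
while all of the updates and copies together take 10, the pipelined run 
should come in at around 60 minutes plus the first update and copy, 
where doing the three steps one after another would take the full 70.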

- Sam Ruby
