river-dev mailing list archives

From: Gregg Wonderly <gr...@wonderly.org>
Subject: Re: Benchmark organization
Date: Tue, 01 Mar 2011 05:56:32 GMT
On 2/28/2011 11:47 AM, Dan Creswell wrote:
> I feel you're on a completely different thought path - we're not (IMHO)
> talking about two versions of the same thing from one build.

That was one of the choices I mentioned, and I felt it probably wasn't the case.
If you decide to create a few test classes and a trial reimplementation of
something that already exists, people do sometimes just wire things together in
the same build.  I'm not so fond of doing that, because I usually forget to do
something important and waste my time chasing down something I didn't need to.

> - we're talking
> about comparing two different implementations alongside each other of some
> small sub-element.

So do we want to have little directories of "experiments" hanging out under the 
production branch/tree, or do we want to put them off to the side somewhere or 
what?  I think the stuff needs to be fairly easy for people to pull and test if 
we want community participation, and that's why I'm going on about branching 
given the current state of the repository.
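
Just to make that concrete, I'm picturing one of two shapes, something like this
(names purely illustrative, not a proposal for actual paths):

    trunk/
       src/...                            <- production code
       experiments/
          fastlist-concurrent/            <- trial code plus its benchmark

    -- versus --

    branches/experiments/fastlist-concurrent/   <- full branch, builds like trunk

The first keeps experiments visible to anyone who checks out trunk; the second
keeps trunk clean, but means telling people where to look.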

If we modularize, then there still needs to be a way to take something that 
bridges the module boundaries and be able to build it and run the appropriate 
benchmarks.

For example, something that tests the speed of a new transaction implementation 
needs to build leasing and transaction stuff.
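
As a sketch of the shape of harness I have in mind (every name here is made up
for illustration; the real thing would compile against the actual lease and
transaction modules, wherever they live):

    // Illustrative only: "TxnOps" stands in for whatever operations
    // the old and the new transaction implementations both support.
    public class TxnBenchmark {
        interface TxnOps {
            void createAndCommit();
        }

        // Time 'iterations' calls, after a warm-up pass so the JIT
        // has settled before we start measuring.
        static long time(TxnOps impl, int iterations) {
            for (int i = 0; i < 10000; i++) impl.createAndCommit();
            long start = System.nanoTime();
            for (int i = 0; i < iterations; i++) impl.createAndCommit();
            return System.nanoTime() - start;
        }

        public static void main(String[] args) {
            // Dummy stand-ins; in practice these would wrap the existing
            // and the experimental transaction code respectively.
            TxnOps oldImpl = new TxnOps() { public void createAndCommit() { } };
            TxnOps newImpl = new TxnOps() { public void createAndCommit() { } };
            System.out.println("old: " + time(oldImpl, 1000000) + " ns");
            System.out.println("new: " + time(newImpl, 1000000) + " ns");
        }
    }

The point is that this one class has to see both implementations at build time,
wherever the module boundaries end up.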

I'm just having a hard time imagining a small thing when, in the end, we have a
monolithic codebase without a great deal of prune-able bits at this point.

After some level of modularization, there is still the need for multi-module
benchmarks, and so, for me, that still indicates a need to version modules with
something similar to a "branch".

I just find it a lot easier to create reproducible builds and trackable change 
sets when I branch so that everything is clearly delimited and I don't have to 
remember what to rename or exchange in-place etc.

> SCM-level tools and the kinds of hoops you're talking about feel like an
> awful big and costly hammer for such work.

I guess I am used to Perforce.  Branches are free; they only reference files you
haven't modified.  So, the depot doesn't get out of hand, and you can always use
the obliterate command to take out the stuff that is no longer relevant.
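
Roughly this (paths hypothetical):

    p4 integrate //depot/river/trunk/... //depot/river/exp/fastlist/...
    p4 submit -d "Branch for FastList experiment"
       ... hack, benchmark, record results ...
    p4 obliterate -y //depot/river/exp/fastlist/...

with the last step only if the experiment turns out to be a dead end; otherwise
you integrate back the other way.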

I'm not suggesting that branches end up being "easy" or "fun", because in the 
end, it is another "thing" to track.  But, I'm just trying to suggest something 
that I find to be easy to use when I need a workspace that encompasses more than 
a few files in a single module.

Gregg

> On 28 February 2011 16:07, Gregg Wonderly <gregg@wonderly.org> wrote:
>
>> On 2/28/2011 9:33 AM, Dan Creswell wrote:
>>
>>> Think the nub of the issue is two-fold:
>>>
>>> (1) Dealing with namespace clashes (see Patricia's FastList conflict
>>> discussion).
>>>
>>
>> Okay, but, when I am doing something to replace code/classes with the same
>> name, I most often do this in a branch that I will later reintegrate into
>> the main branch.  Sometimes, I will do it in place in an active change-list
>> that I may not submit for some time.  There have been times that I've just
>> used the SCM to preserve the changes, in place, by submitting them and then
>> synching to the old version and then submitting that older version over the
>> top of my investigation.  Then, I go back later and revisit that idea
>> without losing it, and provide something that others could check out to look
>> at.
>>
>> I think there are lots of SCM based mechanisms that can be used to "hide"
>> or "manage" these name space based conflicts.  Which of these things are
>> people most often using as their reason for being frustrated by testing
>> capabilities?
>>
>> 1) We build this one thing and want to test two versions from that single
>> build.
>> 2) Don't have space to have two copies.
>> 3) Can't test locally, so have to submit something to get it into the
>> offline test facilities.
>> 4) Something else I've missed.
>>
>>
>>> (2) Partitioning of tests and fitting them efficiently into development or
>>> build/release cycles.
>>>
>>
>> This points back to 3) above I think.  Does that mean that we are just not
>> "testing" the right branch?  Shouldn't contributors largely be testing in
>> development branches and then the project integrating and accepting
>> "approved" changes into the production branch that are then tested one more
>> time with a broader integration suite?
>>
>> In the discussions on the use of branches I didn't really note whether this
>> was something people liked or had experience with.  Is there something about
>> this that seems not so attractive?  Many disparate changes can make
>> integration a little more taxing, but that's just a fact of life in some
>> cases where you can't work together, or don't know about another party
>> working in parallel with you.  I do like Perforce's centralized
>> store for this reason, because it does make it possible to see where your
>> coworkers are tweaking things so you can expect conflicts or ask them about
>> what things you need to be aware of.
>>
>> Gregg
>>
>>   On 28 February 2011 15:29, Gregg Wonderly <gregg@wonderly.org> wrote:
>>>
>>>     So, maybe I am not understanding the real issue.  When I run testing on
>>>     some new development, I do it in the branch or changelist that I am
>>>     working on, and record the results.  If I feel like I need to adjust the
>>>     test strategy, I do that as a separate change on a separate
>>>     branch/changelist that I can use to run against the existing code.
>>>
>>>     I can checkout/copy (rsync is my friend) stuff to an appropriate place
>>> to do
>>>     longer term testing.
>>>
>>>     Is the real issue that communicating the "test stuff" (source etc.)
>>>     really requires a "submission" so that it can go over to the test
>>>     servers, because the type of change that Patricia wants to test can't be
>>>     tested in her local environment?  I'd guess it is, because of the
>>>     parallel discussion about modularity.
>>>
>>>     It seems like there is a mismatch between the development model the
>>>     source tree was designed to support and the one our test environment
>>>     supports?
>>>
>>>     Gregg Wonderly
>>>
>>>
>>>     On 2/28/2011 9:08 AM, Dan Creswell wrote:
>>>
>>>         On 28 February 2011 14:50, Patricia Shanahan <pats@acm.org> wrote:
>>>
>>>
>>>             Dennis Reedy wrote:
>>>
>>>                 On Feb 28, 2011, at 12:47 AM, Patricia Shanahan wrote:
>>>
>>>                     How would you propose handling a case like
>>>                     outrigger.FastList?
>>>
>>>                     It is package access only, so changing its interface to
>>>                     the rest of outrigger did not affect any public API.
>>>                     Several classes needed to be changed to handle the
>>>                     interface change.
>>>
>>>
>>>                 If I understand your question correctly, I think it should
>>>                 be fairly straightforward. Following module conventions, we
>>>                 would have a structure that would look (something) like:
>>>
>>>                 outrigger/src/main/java/org/apache/river/outrigger
>>>                 outrigger/src/test/java/org/apache/river/outrigger
>>>
>>>                 The test (or benchmark) code would be in the same package,
>>>                 just in a different directory. You would be able to
>>>                 accommodate your package-access-only requirement.
>>>
>>>
>>>
>>>             I don't see how that answers the problem of a possible
>>>             intra-package interface change that needs to be benchmarked
>>>             *before* the changes to the rest of the package that would be
>>>             needed to integrate the class under test with the rest of what
>>>             would be its package if it wins the benchmark.
>>>
>>>             If I had initially named my new FastList implementation
>>>             "com.sun.jini.outrigger.FastList" I could not have compiled
>>>             outrigger in its presence. It is not a drop-in replacement for
>>>             the old FastList.
>>>
>>>             If it had turned out to be slower than the existing FastList I
>>>             would still have wanted to preserve it, and the relevant
>>>             benchmark, because of the possibility that future
>>>             java.util.concurrent changes would make it better. On the other
>>>             hand, I would not have done the changes to the rest of
>>>             outrigger.
>>>
>>>
>>>
>>>         So I think we're coming down to the new FastList implementation
>>>         having to be called something else for benchmarking purposes, to
>>>         avoid conflict with the old FastList. Or the new implementation
>>>         needs to be an inner class of the benchmark, and that could live in
>>>         the same package as the original FastList. Of course, there are
>>>         still packaging and source organisation concerns to conquer.

