river-dev mailing list archives

From: Dan Creswell <dan.cresw...@gmail.com>
Subject: Re: Benchmark organization
Date: Mon, 28 Feb 2011 17:47:43 GMT
I feel you're on a completely different thought path - we're not (IMHO)
talking about two versions of the same thing from one build - we're talking
about comparing two different implementations of some small sub-element
alongside each other.

SCM-level tools and the kinds of hoops you're talking about feel like an
awfully big and costly hammer for such work.

On 28 February 2011 16:07, Gregg Wonderly <gregg@wonderly.org> wrote:

> On 2/28/2011 9:33 AM, Dan Creswell wrote:
>
>> Think the nub of the issue is two-fold:
>>
>> (1) Dealing with namespace clashes (see Patricia's FastList conflict
>> discussion).
>>
>
> Okay, but when I am doing something to replace code/classes with the same
> name, I most often do it in a branch that I will later reintegrate into the
> main branch.  Sometimes I will do it in place, in an active change-list that
> I may not submit for some time.  There have been times when I've simply used
> the SCM to preserve the changes in place: submitting them, syncing back to
> the old version, and then submitting that older version over the top of my
> investigation.  Then I can go back later, revisit that idea without losing
> it, and provide something that others could check out to look at.
>
> I think there are lots of SCM-based mechanisms that can be used to "hide"
> or "manage" these namespace-based conflicts.  Which of the following are
> people most often citing as their reason for being frustrated with our
> testing capabilities?
>
> 1) We build this one thing and want to test two versions from that single
> build.
> 2) We don't have space for two copies.
> 3) We can't test locally, so we have to submit something to get it into the
> offline test facilities.
> 4) Something else I've missed.
>
>
>> (2) Partitioning of tests and fitting them efficiently into development or
>> build/release cycles.
>>
>
> This points back to 3) above, I think.  Does that mean that we are just not
> "testing" the right branch?  Shouldn't contributors largely be testing in
> development branches, with the project then integrating "approved" changes
> into the production branch, where they are tested one more time with a
> broader integration suite?
>
> In the discussions on the use of branches, I didn't really note whether this
> was something people liked or had experience with.  Is there something about
> this that seems unattractive?  Many disparate changes can make integration a
> little more taxing, but that's just a fact of life in cases where you can't
> work together, or don't know about another party working in parallel with
> you.  I do like Perforce's centralized store for this reason: it makes it
> possible to see where your coworkers are tweaking things, so you can expect
> conflicts or ask them about what you need to be aware of.
>
> Gregg
>
>> On 28 February 2011 15:29, Gregg Wonderly <gregg@wonderly.org> wrote:
>>
>>    So, maybe I am not understanding the real issue.  When I run testing
>>    on some new development, I do it in the branch or changelist that I am
>>    working on, and record the results.  If I feel like I need to adjust
>>    the test strategy, I do that as a separate change on a separate
>>    branch/changelist that I can use to run against the existing code.
>>
>>    I can checkout/copy (rsync is my friend) stuff to an appropriate place
>>    to do longer term testing.
>>
>>    Is the real issue that communicating the "test stuff" (source etc.)
>>    really requires a "submission" so that it can go over to the test
>>    servers, because the type of change that Patricia wants to test can't
>>    be tested in her local environment?  I'd guess it is, because of the
>>    parallel discussion about modularity.
>>
>>    It seems like the source tree was designed to support a "different"
>>    type of development model than the one our test environment supports?
>>
>>    Gregg Wonderly
>>
>>
>>    On 2/28/2011 9:08 AM, Dan Creswell wrote:
>>
>>        On 28 February 2011 14:50, Patricia Shanahan <pats@acm.org> wrote:
>>
>>
>>            Dennis Reedy wrote:
>>
>>                On Feb 28, 2011, at 12:47 AM, Patricia Shanahan wrote:
>>
>>                    How would you propose handling a case like
>>                    outrigger.FastList?
>>
>>                    It is package access only, so changing its interface to
>>                    the rest of outrigger did not affect any public API.
>>                    Several classes needed to be changed to handle the
>>                    interface change.
>>
>>
>>                If I understand your question correctly, I think it should
>>                be fairly straightforward. Following module conventions, we
>>                would have a structure that would look (something) like:
>>
>>                outrigger/src/main/java/org/apache/river/outrigger
>>                outrigger/src/test/java/org/apache/river/outrigger
>>
>>                The test (or benchmark) code would be in the same package,
>>                just in a different directory. You would be able to
>>                accommodate your package access only requirement.
>>
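>>                A minimal sketch of that layout (hypothetical class and
>>                benchmark names with stub bodies, nothing from the actual
>>                codebase):
>>
>>                // src/main/java/org/apache/river/outrigger/SomeList.java
>>                package org.apache.river.outrigger;
>>
>>                class SomeList {                  // package access only
>>                    private int size;
>>                    void add(Object o) { size++; }
>>                    int size() { return size; }
>>                }
>>
>>                // src/test/java/org/apache/river/outrigger/SomeListBenchmark.java
>>                package org.apache.river.outrigger;
>>
>>                public class SomeListBenchmark {
>>                    public static void main(String[] args) {
>>                        // Same package, so the package-private class is
>>                        // visible to the benchmark without widening any API.
>>                        SomeList list = new SomeList();
>>                        long start = System.nanoTime();
>>                        for (int i = 0; i < 1000000; i++) {
>>                            list.add(new Object());
>>                        }
>>                        System.out.println("add x 1e6: "
>>                                + (System.nanoTime() - start) + " ns, size="
>>                                + list.size());
>>                    }
>>                }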
>>
>>
>>            I don't see how that answers the problem of a possible
>>            intra-package interface change that needs to be benchmarked
>>            *before* the changes to the rest of the package that would be
>>            needed to integrate the class under test with the rest of what
>>            would be its package if it wins the benchmark.
>>
>>            If I had initially named my new FastList implementation
>>            "com.sun.jini.outrigger.FastList" I could not have compiled
>>            outrigger in its presence. It is not a drop-in replacement for
>>            the old FastList.
>>
>>            If it had turned out to be slower than the existing FastList I
>>            would still have wanted to preserve it, and the relevant
>>            benchmark, because of the possibility that future
>>            java.util.concurrent changes would make it better. On the other
>>            hand, I would not have done the changes to the rest of
>>            outrigger.
>>
>>
>>
>>        So I think we're coming down to the new FastList implementation
>>        having to be called something else for benchmarking purposes, to
>>        avoid conflict with the old FastList. Or the new implementation
>>        needs to be an inner class of the benchmark, and that could live
>>        in the same package as the original FastList. Of course, there are
>>        still packaging and source organisation concerns to conquer.
>>
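>>        A rough sketch of the inner-class option (hypothetical names; the
>>        candidate's internals are only a placeholder):
>>
>>        // src/test/java/com/sun/jini/outrigger/FastListBenchmark.java
>>        package com.sun.jini.outrigger;
>>
>>        import java.util.concurrent.ConcurrentLinkedQueue;
>>
>>        public class FastListBenchmark {
>>
>>            // Candidate implementation nested inside the benchmark, so it
>>            // never clashes with the existing FastList in this package.
>>            static class CandidateFastList<T> {
>>                private final ConcurrentLinkedQueue<T> queue =
>>                        new ConcurrentLinkedQueue<T>();
>>                void add(T item) { queue.add(item); }
>>            }
>>
>>            public static void main(String[] args) {
>>                CandidateFastList<Object> list = new CandidateFastList<Object>();
>>                long start = System.nanoTime();
>>                for (int i = 0; i < 1000000; i++) {
>>                    list.add(new Object());
>>                }
>>                System.out.println("candidate add x 1e6: "
>>                        + (System.nanoTime() - start) + " ns");
>>            }
>>        }
>>
>>        If the candidate loses the benchmark, both it and the benchmark can
>>        stay in the test tree without touching the rest of outrigger.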
>>
>>
>>
>
