ant-ivy-user mailing list archives

From Tim Brown <tpbr...@gmail.com>
Subject Re: instrumented vs non-instrumented artifacts with shared dependencies
Date Tue, 30 Nov 2010 06:28:43 GMT
Hi Phil,

I started down a similar path a while back, but decided against it...

> For each module:
>        * a non-instrumented runtime jar is published under the “deploy”
>          configuration, and
>        * an instrumented runtime jar is published under the “instrumented”
>          configuration.
>
> Dev uses the instrumented jars for unit test code coverage, and QA uses
> them for their testing code coverage.  The non-instrumented jars are used
> in production.
>
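
[For concreteness, I read that as an ivy.xml along these lines -- just a
sketch, with made-up module/artifact names:]

    <configurations>
      <conf name="deploy"       description="non-instrumented runtime jar"/>
      <conf name="instrumented" description="instrumented runtime jar"/>
    </configurations>
    <publications>
      <artifact name="mymodule"              type="jar" conf="deploy"/>
      <artifact name="mymodule-instrumented" type="jar" conf="instrumented"/>
    </publications>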
A couple of quick thoughts on that setup [I've had a bit o' scotch tonight,
so pardon the ramblings]:

If developers are writing unit tests, those tests are for the particular
module they're bundled with.  Since it's the module they're working on, it's
available as source in the IDE [for reviewing coverage as they work], and
later as part of your build process [for aggregating the metric 'formally'].

Wouldn't coverage of the instrumented dependency jars fall under incidental
coverage, and as such not be counted?

As for QA / functional test code coverage: we opted to optionally instrument
the deployable [WAR, whatever] as part of our deployment process, for a
couple of reasons:
1) It "feels" more like we're producing one binary for all environments.
Yes, that's not strictly true, since we're technically changing the binary
via instrumentation, but we use a multi-stage functional test approach.
Coverage is only collected in the first stage [non-integrated; integration
points are stubbed dynamically].
1.1) Side note - don't collect functional test code coverage in a shared
environment. Exploratory and manual testing will munge your coverage data.
Ditto if you're doing concurrent test runs from parallel suites.
2) Instrumentation, at least EMMA's, conflicted with other instrumentation
that was necessary -- namely Oracle/Tangosol Coherence. [We also had issues
with it and AspectJ, but we dropped AJ for other reasons.]

Point #2 was really the inflection point for us.  The app was instrumented
for Coherence in Prod, but we didn't want (and shouldn't have had) it
enabled in the first functional test stage.  Since the app at that stage was
already technically different from prod, we chose to keep the stages where
the app differed to a minimum, and near the start of the pipeline.  [And
FWIW, we actually "de-Tangosol" the binary in this stage, then
EMMA-instrument it.]

If it's helpful, the configurations we ended up with are:
- config - externalized configuration
- runtime - things bundled in the deployable and available at runtime
(extends config)
- compile - things necessary at compile time [but usually container-provided,
thus not inside the deployable] (extends runtime)
- test-public - a "special" config for when we actually need the test
artifacts of another module in our 'test' config [factories, etc.]. Yes,
this one is a big smell. (extends runtime)
- test - things necessary for test compilation or execution, but not in
compile/runtime (private; extends compile, test-public)
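
In ivy.xml terms that works out to something like [descriptions trimmed]:

    <configurations>
      <conf name="config"      description="externalized configuration"/>
      <conf name="runtime"     extends="config"  description="bundled in the deployable"/>
      <conf name="compile"     extends="runtime" description="compile-time, usually container-provided"/>
      <conf name="test-public" extends="runtime" description="test artifacts needed by other modules"/>
      <conf name="test"        visibility="private" extends="compile,test-public"
            description="test compilation and execution"/>
    </configurations>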

Configuration mapping goes something like:
runtime->runtime(*);config->config(*);test->runtime(*);compile->runtime(*)
[test-public would require an explicitly mapped dependency]
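
So a typical dependency, plus one explicitly pulling test-public, would look
something like [org/name/rev made up]:

    <dependency org="com.example" name="common" rev="1.2"
                conf="runtime->runtime(*);config->config(*);test->runtime(*);compile->runtime(*)"/>
    <dependency org="com.example" name="common-testkit" rev="1.2"
                conf="test->test-public"/>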

hth,

~Tim
