jackrabbit-dev mailing list archives

Site index · List index
Message view « Date » · « Thread »
Top « Date » · « Thread »
From Jukka Zitting <jukka.zitt...@gmail.com>
Subject Re: JSR-283 official TCK and the 'jackrabbit-jcr-tests'
Date Fri, 03 Aug 2012 16:51:33 GMT
Hi,

On Fri, Aug 3, 2012 at 5:21 PM, Randall Hauch <rhauch@gmail.com> wrote:
> The Jackrabbit TCK page [2] states that Jackrabbit's JCR Tests module is
> used within the official JSR-283 TCK [3] but is not the official TCK test.
> Obviously the former have been maintained and continue to improve, but the
> official JSR-283 TCK tests appear to have been released last on August 19,
> 2009. Is it possible that the official JSR-283 TCK tests can be updated, and
> would having a separate release schedule help with this in any way?

The only way to update the official JSR-283 TCK is through the JCP
maintenance process [1].

> (If the official TCK tests cannot be updated, then I presume any project wanting
> to claim JSR-283 compliance would have to run the official TCKs and appeal each
> of the older incorrect tests by stating the issue and perhaps using the
> updated/corrected tests Jackrabbit JCR Tests module as "corrections".

Yes, the appeal process is documented in [2] and relies on an exclude
list of buggy tests maintained by the spec lead. In practice, if you
have pointers to relevant jackrabbit-jcr-tests bug reports and can
demonstrate that your implementation passes the test after that issue
is fixed, the spec lead will be happy to put the test case on the
exclude list for you.

> Is there anything that makes this easier with a non-updated TCK?

It's a more light-weight process than doing a maintenance release through JCP.

> How does Jackrabbit show spec and TCK compliance?

We rely on the appeals process and exclude list as described above. In
practice that means that we're good as long as Jackrabbit passes the
latest version of the tests in jackrabbit-jcr-tests.

> The official TCK and Jackrabbit's JCR tests unsurprisingly do not completely
> cover the specification, and doing so would require a significant amount of
> effort. However, there may be opportunity over time for projects to propose
> adding new tests.

Agreed. Especially with multiple active JCR implementations I think
there's a shared incentive to maintain compatibility beyond that of
what the official TCK tests. There's still plenty of place for healthy
competition on performance, scalability, maintainability and various
other features and "-ilities" that fall outside the scope of JCR.

Thinking further, a standalone test codebase would also be a great
place to cooperate on things like performance benchmarks or other
tests that go beyond the scope of the JCR spec but that would still
add value to implementors of content repositories.

> If the JCR Tests' release cycle were independent from the Jackrabbit's
> release cycle, then the process of adding new tests and releasing updated
> JCR tests might be easier. On the other hand, verification that the tests are
> valid may become a bit harder.

Not necessarily. If the tests were maintained outside the main
Jackrabbit trunk, it would be easier set up a CI build that runs all
updates to the TCK codebase against the official JCR RI and latest
stable versions of Jackrabbit, ModeShape and other JCR implementations
(with excludes for tests that are known issues for each particular
implementation). That should make it much easier than now to catch
problems where the TCK codebase makes assumptions based on just a
single implementation.

> For example, the ModeShape project has around 70 additional tests [4] that
> we'd be willing to donate. Most of these are around verifying administration
> privileges (e.g., registering namespaces, node types, etc., especially for
> anonymous users), versioning, and locking.

That would be awesome!

We also have some extra generic JCR tests, written for the jcr2spi
component, that could and ideally should be used as a part of the TCK.

> Additionally, the Oak effort is already running the JCR tests and has
> recently found/fixed issues, too. Can those participating in Oak offer how
> much they might expect to simply reuse vs. add new tests?

For now our goal is just to pass the already existing TCK tests, but
I'm sure that over time we'll encounter cases where existing clients
that assume only standard JCR functionality fail on Oak because of
some problems in our code. Capturing such cases as TCK tests would be
quite useful.

[1] http://jcp.org/en/procedures/jcp2#5
[2] http://www.day.com/content/day/en/products/jcr/jsr-283/_jcr_content/par/download_1/file.res/jsr-283-tck-appeal.pdf

BR,

Jukka Zitting

Mime
View raw message