cassandra-commits mailing list archives

From "Sylvain Lebresne (Commented) (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-3531) Fix crack-smoking in ConsistencyLevelTest
Date Tue, 03 Jan 2012 14:21:41 GMT


Sylvain Lebresne commented on CASSANDRA-3531:

The deeper change I have in mind consists roughly in removing that test. It tries to test
the result of WriteHandler.assureSufficientLiveNodes(), but that method depends on the result
of the FailureDetector. The problem is that I don't think we really have a good way to create
a real multi-node cluster in the unit tests. Maybe we can "fake" live nodes, but I'm not sure
how, and in the end it makes me wonder what the test is really testing if we start faking
too much stuff. It seems to me that the distributed tests are probably a better place
for that kind of thing.
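For what "faking" live nodes could look like, here is a minimal sketch. It assumes a hypothetical stubbable detector interface (LiveNodeCheck, FakeDetector, and String endpoints are illustrative names, not Cassandra's actual IFailureDetector API) and mirrors the idea behind assureSufficientLiveNodes():

```java
import java.util.List;
import java.util.Set;

// Hypothetical stand-in for a failure detector that a unit test could stub.
interface LiveNodeCheck {
    boolean isAlive(String endpoint);
}

// Test double reporting a fixed set of endpoints as alive.
class FakeDetector implements LiveNodeCheck {
    private final Set<String> live;
    FakeDetector(Set<String> live) { this.live = live; }
    public boolean isAlive(String endpoint) { return live.contains(endpoint); }
}

class SufficiencyCheck {
    // Mirrors the idea of assureSufficientLiveNodes(): count the replicas
    // the detector considers alive and compare against the required count.
    static boolean sufficientLiveNodes(List<String> replicas,
                                       LiveNodeCheck detector,
                                       int required) {
        long alive = replicas.stream().filter(detector::isAlive).count();
        return alive >= required;
    }
}
```

The point of the sketch is also its weakness: once the detector is a stub, the test exercises the stub's answers as much as the production logic, which is the concern above.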

In any case, it's really annoying to have unit test failures, especially in the 1.0 branch.
And as said in the description, that test never really worked anyway, so any opposition to
at least commenting it out for now?
> Fix crack-smoking in ConsistencyLevelTest 
> ------------------------------------------
>                 Key: CASSANDRA-3531
>                 URL:
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Tests
>    Affects Versions: 1.0.4
>            Reporter: Sylvain Lebresne
>            Priority: Minor
>             Fix For: 1.0.7
> First, let's note that this test fails in the current 1.0 branch. It was "broken" (emphasis
on the quotes) by CASSANDRA-3529. But it's not CASSANDRA-3529's fault; it's only that the use
of NonBlockingHashMap changed the order of the tables returned by Schema.instance.getNonSystemTables().
*And*, it turns out that ConsistencyLevelTest bails out as soon as it has found one keyspace
with rf >= 2, due to a misplaced return. So it used to be that ConsistencyLevelTest was only
run for Keyspace5 (whose RF is 2), for which the test worked. But for any RF > 2, the test fails.
> The reason for this failure is that the test creates a 3-node cluster of which only 1
node is alive as far as the failure detector is concerned. So for RF=3 and CL=QUORUM, the
writes are unavailable (the failure detector is queried), while for reads we "pretend" two
nodes are alive, so we end up with a case where isWriteUnavailable != isReadUnavailable.
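The mismatch in the quoted description comes down to quorum arithmetic. A minimal sketch (hypothetical helper names, not Cassandra's actual code) showing why one live node fails QUORUM at RF=3 while two "pretended" live nodes pass it:

```java
public class QuorumAvailability {
    // Quorum for replication factor rf: a strict majority of replicas.
    static int quorum(int rf) {
        return rf / 2 + 1;
    }

    // A QUORUM operation is unavailable when fewer replicas are live
    // than the quorum size.
    static boolean isUnavailable(int rf, int liveReplicas) {
        return liveReplicas < quorum(rf);
    }

    public static void main(String[] args) {
        int rf = 3; // quorum(3) == 2
        // Failure detector sees only 1 live node: writes are unavailable.
        boolean writeUnavailable = isUnavailable(rf, 1); // true
        // The test "pretends" 2 nodes are alive for reads: reads succeed.
        boolean readUnavailable = isUnavailable(rf, 2);  // false
        System.out.println(writeUnavailable + " != " + readUnavailable);
    }
}
```

With RF=3 the quorum is 2, so the test's asymmetric view of liveness (1 node for writes, 2 for reads) guarantees isWriteUnavailable != isReadUnavailable.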

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.

