cassandra-commits mailing list archives

From "Philip Thompson (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-10858) test_copy_to_with_more_failures_than_max_attempts is failing
Date Mon, 14 Dec 2015 20:18:46 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-10858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056647#comment-15056647 ]

Philip Thompson commented on CASSANDRA-10858:
---------------------------------------------

It appears to be happening to {{cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_copy_to_with_child_process_crashing}}
on 2.2+ as well.

> test_copy_to_with_more_failures_than_max_attempts is failing
> ------------------------------------------------------------
>
>                 Key: CASSANDRA-10858
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-10858
>             Project: Cassandra
>          Issue Type: Sub-task
>          Components: Testing, Tools
>            Reporter: Philip Thompson
>             Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> {{cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_copy_to_with_more_failures_than_max_attempts}},
> which was introduced for CASSANDRA-9306, is failing when run on clusters without vnodes. To
> reproduce, simply run the test with the environment variable {{DISABLE_VNODES=true}}.
> Here is the entire debug output:
> {code}
> dtest: DEBUG: cluster ccm directory: /var/folders/v3/z4wf_34n1q506_xjdy49gb780000gn/T/dtest-iJQECY
> dtest: DEBUG: removing ccm cluster test at: /var/folders/v3/z4wf_34n1q506_xjdy49gb780000gn/T/dtest-iJQECY
> dtest: DEBUG: clearing ssl stores from [/var/folders/v3/z4wf_34n1q506_xjdy49gb780000gn/T/dtest-iJQECY]
> directory
> dtest: DEBUG: cluster ccm directory: /var/folders/v3/z4wf_34n1q506_xjdy49gb780000gn/T/dtest-CnOgA8
> dtest: DEBUG: Running stress
> dtest: DEBUG: Exporting to csv file: /var/folders/v3/z4wf_34n1q506_xjdy49gb780000gn/T/tmpZYqB01
> with {"failing_range": {"start": 0, "end": 5000000000000000000, "num_failures": 5}} and 3
> max attempts
> dtest: DEBUG:
> Starting copy of keyspace1.standard1 with columns ['key', 'C0', 'C1', 'C2', 'C3', 'C4'].
> Processed 10000 rows; Written: 10503.508303 rows/s
> Processed 20000 rows; Written: 11860.954046 rows/s
> Processed 30000 rows; Written: 13068.388704 rows/s
> Processed 40000 rows; Written: 16941.628006 rows/s
> Processed 50000 rows; Written: 17609.109488 rows/s
> Processed 60000 rows; Written: 19475.156238 rows/s
> Processed 70000 rows; Written: 19976.978154 rows/s
> Processed 80000 rows; Written: 19992.329090 rows/s
> Processed 90000 rows; Written: 20623.291907 rows/s
> Processed 100000 rows; Written: 21644.815363 rows/s
> Processed 100000 rows; Written: 10822.407682 rows/s
> 100000 rows exported in 5.816 seconds.
> {code}
> I assume this is related to the failure injection code in cqlsh not handling fewer token
> ranges.
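The hypothesis above can be sketched in a few lines. This is a speculative illustration only, not the actual cqlsh or dtest code: the function names, the overlap rule, and the token-range layouts are invented. The point is that a fixed failing window such as {{{"start": 0, "end": 5000000000000000000}}} can intersect many of the small ranges a vnode cluster produces, while a non-vnode cluster with one large range per node yields far fewer intersections, so far fewer failures are injected.

```python
# Hypothetical sketch of failure injection keyed on token ranges.
# All names and range layouts here are assumptions for illustration;
# they are not taken from cqlsh_copy_tests.

def overlaps(range_start, range_end, fail_start, fail_end):
    """True if the export token range intersects the failing window."""
    return range_start < fail_end and fail_start < range_end

def count_failing_ranges(token_ranges, failing_range):
    """How many export ranges would have failures injected."""
    return sum(
        1
        for start, end in token_ranges
        if overlaps(start, end, failing_range["start"], failing_range["end"])
    )

# The failing window from the debug output above.
failing = {"start": 0, "end": 5000000000000000000, "num_failures": 5}

# With vnodes: many small ranges, many of which fall in the window.
vnode_ranges = [(i * 10**17, (i + 1) * 10**17) for i in range(-20, 20)]

# Without vnodes: a single node may own one large range, so only one
# (or zero) ranges intersect the window and injection rarely triggers.
single_range = [(-2**63, 2**63 - 1)]

print(count_failing_ranges(vnode_ranges, failing))   # many matches
print(count_failing_ranges(single_range, failing))   # a single match
```

If the injection logic counts failures per matching range, the non-vnode run would never reach {{num_failures}}, which would explain the test passing or failing differently depending on {{DISABLE_VNODES}}.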



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
