cassandra-commits mailing list archives

From "Brandon Williams (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-7567) when the commit_log disk for a single node is overwhelmed the entire cluster slows down
Date Fri, 18 Jul 2014 16:40:04 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-7567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14066495#comment-14066495 ]

Brandon Williams commented on CASSANDRA-7567:
---------------------------------------------

I wasn't able to reproduce this, even using ionice to lower the I/O priority of the java process and raise dd's (the setup is sketched at the end of this comment). I also ran a 'dummy' insert in cqlsh with tracing on while this was running, and if we examine a trace:

{noformat}
 activity                                                          | timestamp    | source        | source_elapsed
-------------------------------------------------------------------+--------------+---------------+----------------
                                                execute_cql3_query | 16:26:14,860 |  10.208.8.123 |              0
 Parsing INSERT INTO "Standard1" (key, "C0") VALUES ( 0xf0, 0xf0); | 16:26:14,860 |  10.208.8.123 |            100
                               Message received from /10.208.8.123 | 16:26:14,860 | 10.208.35.225 |             54
                                               Preparing statement | 16:26:14,860 |  10.208.8.123 |            326
                                    Acquiring switchLock read lock | 16:26:14,860 | 10.208.35.225 |            170
                                 Determining replicas for mutation | 16:26:14,860 |  10.208.8.123 |            491
                                            Appending to commitlog | 16:26:14,860 | 10.208.35.225 |            199
                                    Acquiring switchLock read lock | 16:26:14,860 |  10.208.8.123 |            674
                                            Appending to commitlog | 16:26:14,860 |  10.208.8.123 |            701
                                 Sending message to /10.208.35.225 | 16:26:14,860 |  10.208.8.123 |            705
                                      Adding to Standard1 memtable | 16:26:14,860 |  10.208.8.123 |            752
                                      Adding to Standard1 memtable | 16:26:14,861 | 10.208.35.225 |            275
                               Enqueuing response to /10.208.8.123 | 16:26:14,861 | 10.208.35.225 |            373
                              Sending message to bw-1/10.208.8.123 | 16:26:14,861 | 10.208.35.225 |            538
                              Message received from /10.208.35.225 | 16:26:14,862 |  10.208.8.123 |           null
                           Processing response from /10.208.35.225 | 16:26:14,862 |  10.208.8.123 |           null
                                   Sending message to /10.208.8.63 | 16:26:14,863 |  10.208.8.123 |           null
                               Message received from /10.208.8.123 | 16:26:14,867 |   10.208.8.63 |             51
                                    Acquiring switchLock read lock | 16:26:14,867 |   10.208.8.63 |            169
                                Message received from /10.208.8.63 | 16:26:15,345 |  10.208.8.123 |           null
                                            Appending to commitlog | 16:26:15,345 |   10.208.8.63 |         478510
                             Processing response from /10.208.8.63 | 16:26:15,345 |  10.208.8.123 |           null
                                      Adding to Standard1 memtable | 16:26:15,345 |   10.208.8.63 |         478552
                               Enqueuing response to /10.208.8.123 | 16:26:15,345 |   10.208.8.63 |         478755
                              Sending message to bw-1/10.208.8.123 | 16:26:15,345 |   10.208.8.63 |         478967
                                                  Request complete | 16:26:14,860 |  10.208.8.123 |            907
{noformat}

But if we isolate it to just the coordinator and strip out the unnecessary network messages:

{noformat}
 activity                                                          | timestamp    | source        | source_elapsed
-------------------------------------------------------------------+--------------+---------------+----------------
                                                execute_cql3_query | 16:26:14,860 |  10.208.8.123 |              0
 Parsing INSERT INTO "Standard1" (key, "C0") VALUES ( 0xf0, 0xf0); | 16:26:14,860 |  10.208.8.123 |            100
                                               Preparing statement | 16:26:14,860 |  10.208.8.123 |            326
                                 Determining replicas for mutation | 16:26:14,860 |  10.208.8.123 |            491
                                    Acquiring switchLock read lock | 16:26:14,860 |  10.208.8.123 |            674
                                            Appending to commitlog | 16:26:14,860 |  10.208.8.123 |            701
                                      Adding to Standard1 memtable | 16:26:14,860 |  10.208.8.123 |            752
                                                  Request complete | 16:26:14,860 |  10.208.8.123 |            907
{noformat}

We can see it did exactly the right thing: it appended to its own commitlog and memtable, then returned the response to the client, with the rest happening asynchronously.  The node running dd dropped mutations as I would expect, too, so I'm at a loss as to what to try next here.
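
For reference, this is roughly the setup used above; the pgrep pattern, output path, and keyspace/table qualification are illustrative rather than the exact commands:

{noformat}
# On the node being overwhelmed: drop the Cassandra JVM's disk I/O to the
# idle class and give dd the highest best-effort priority, then hammer the
# commitlog disk (path is an example).
ionice -c 3 -p $(pgrep -f CassandraDaemon)
ionice -c 2 -n 0 dd if=/dev/zero of=/var/lib/cassandra/commitlog/foo bs=1024 &

# From cqlsh against another node, run a traced 'dummy' insert while dd runs
# (assumes the stress keyspace/table already exist; keyspace name is assumed).
cqlsh> TRACING ON;
cqlsh> INSERT INTO "Keyspace1"."Standard1" (key, "C0") VALUES ( 0xf0, 0xf0 );
{noformat}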

> when the commit_log disk for a single node is overwhelmed the entire cluster slows down
> ---------------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-7567
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-7567
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>         Environment: debian 7.5, bare metal, 14 nodes, 64 CPUs, 64 GB RAM, commit_log disk SATA, data disk SSD, vnodes, leveled compaction strategy
>            Reporter: David O'Dell
>            Assignee: Brandon Williams
>         Attachments: 7567.logs.bz2, write_request_latency.png
>
>
> We've run into a situation where a single node out of 14 is experiencing high disk I/O. This can happen when a node is being decommissioned, or after it joins the ring and runs into the bug CASSANDRA-6621.
> When this occurs, the write latency for the entire cluster spikes, from 0.3ms to 170ms.
> To simulate this, simply run dd on the commit_log disk (dd if=/dev/zero of=/tmp/foo bs=1024) and you will see that all nodes in the cluster instantly slow down.
> BTW overwhelming the data disk does not have this same effect.
> Also, I've tried this where the overwhelmed node isn't connected to directly by the client, and it still has the same effect.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
