cassandra-commits mailing list archives

From "Ryan Svihla (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-11170) Uneven load can be created by cross DC mutation propagations, as remote coordinator is not randomly picked
Date Tue, 16 Feb 2016 15:23:18 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-11170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15148798#comment-15148798 ]

Ryan Svihla commented on CASSANDRA-11170:
-----------------------------------------

Fat partitions and bad data models exist; there's no need to make them worse by pinning all write load to one unlucky node until it dies. Going to a single node just lowers the bar for the data model falling apart. I get that the RF writes will happen anyway, but I'm assuming the coordinator work is non-trivial (especially at higher consistency levels), and I know from observation that hint handling and replay are non-trivial at certain points (certainly improved with file-based hints, but certainly not free). Final point: with the stereotypical time-series bucket data model, the "stick to the primary token owner" approach will generate more hints than some strategy that balances the load.
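
A minimal Java sketch of the kind of balancing described above, assuming a plain list of remote-DC endpoints. pickRemoteCoordinator is a hypothetical helper name, not a Cassandra API; the point is only that a random pick spreads the forwarding work across the remote DC instead of pinning it to the first element of the target list:

{code}
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

public class RemoteCoordinatorPick
{
    // Hypothetical helper, not a Cassandra API: choose which remote-DC replica
    // should act as the forwarding coordinator for a cross-DC mutation.
    // A random index spreads that work across the DC instead of always
    // hitting targets.get(0).
    static InetAddress pickRemoteCoordinator(List<InetAddress> targets)
    {
        return targets.get(ThreadLocalRandom.current().nextInt(targets.size()));
    }

    public static void main(String[] args) throws UnknownHostException
    {
        List<InetAddress> remoteDcTargets = Arrays.asList(
                InetAddress.getByName("52.53.215.74"),
                InetAddress.getByName("54.183.23.201"),
                InetAddress.getByName("54.183.209.219"));

        // Over many mutations each replica is picked roughly equally often,
        // so no single node carries all the coordinator/forwarding work.
        for (int i = 0; i < 6; i++)
            System.out.println("Sending message to " + pickRemoteCoordinator(remoteDcTargets));
    }
}
{code}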

> Uneven load can be created by cross DC mutation propagations, as remote coordinator is not randomly picked
> ----------------------------------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-11170
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11170
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Coordination
>            Reporter: Wei Deng
>
> I was looking at the o.a.c.service.StorageProxy code and realized that it seems to always pick the first IP in the remote DC target list as the destination whenever it needs to send the mutation to a remote DC. See these lines in the code:
> https://github.com/apache/cassandra/blob/1944bf507d66b5c103c136319caeb4a9e3767a69/src/java/org/apache/cassandra/service/StorageProxy.java#L1280-L1301
> This could cause one node in the remote DC to receive more mutation messages than the other nodes, and hence an uneven workload distribution.
> A trivial test (with TRACE logging enabled) on a 3+3 node cluster demonstrated the problem; see the system.log entries below:
> {code}
> INFO  [RMI TCP Connection(18)-54.173.227.52] 2016-02-13 09:54:55,948  StorageService.java:3353 - set log level to TRACE for classes under 'org.apache.cassandra.service.StorageProxy' (if the level doesn't look like 'TRACE' then the logger couldn't parse 'TRACE')
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:15,148  StorageProxy.java:1284 - Adding FWD message to 8996@/52.53.215.74
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:15,149  StorageProxy.java:1284 - Adding FWD message to 8997@/54.183.23.201
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:15,149  StorageProxy.java:1289 - Sending message to 8998@/54.183.209.219
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:22,939  StorageProxy.java:1284 - Adding FWD message to 9032@/52.53.215.74
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:22,940  StorageProxy.java:1284 - Adding FWD message to 9033@/54.183.23.201
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:22,941  StorageProxy.java:1289 - Sending message to 9034@/54.183.209.219
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:28,975  StorageProxy.java:1284 - Adding FWD message to 9064@/52.53.215.74
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:28,976  StorageProxy.java:1284 - Adding FWD message to 9065@/54.183.23.201
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:28,977  StorageProxy.java:1289 - Sending message to 9066@/54.183.209.219
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:33,464  StorageProxy.java:1284 - Adding FWD message to 9094@/52.53.215.74
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:33,465  StorageProxy.java:1284 - Adding FWD message to 9095@/54.183.23.201
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:33,478  StorageProxy.java:1289 - Sending message to 9096@/54.183.209.219
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:39,243  StorageProxy.java:1284 - Adding FWD message to 9121@/52.53.215.74
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:39,244  StorageProxy.java:1284 - Adding FWD message to 9122@/54.183.23.201
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:39,244  StorageProxy.java:1289 - Sending message to 9123@/54.183.209.219
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:44,248  StorageProxy.java:1284 - Adding FWD message to 9145@/52.53.215.74
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:44,249  StorageProxy.java:1284 - Adding FWD message to 9146@/54.183.23.201
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:44,249  StorageProxy.java:1289 - Sending message to 9147@/54.183.209.219
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:49,731  StorageProxy.java:1284 - Adding FWD message to 9170@/52.53.215.74
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:49,734  StorageProxy.java:1284 - Adding FWD message to 9171@/54.183.23.201
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:49,735  StorageProxy.java:1289 - Sending message to 9172@/54.183.209.219
> INFO  [RMI TCP Connection(22)-54.173.227.52] 2016-02-13 09:56:19,545  StorageService.java:3353 - set log level to INFO for classes under 'org.apache.cassandra.service.StorageProxy' (if the level doesn't look like 'INFO' then the logger couldn't parse 'INFO')
> {code}
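
Reading the trace above: for each mutation the local coordinator sends the cross-DC message to one remote node ("Sending message to ...") and attaches forwarding entries for the other remote replicas ("Adding FWD message to ..."), and in this run the same node (54.183.209.219) is chosen every time. Below is a simplified, self-contained sketch of that message shape, assuming a plain endpoint list; ForwardedMutation and buildRemoteDcMessage are illustrative names, not the actual StorageProxy types, and the fixed targets.get(0) pick stands in for whatever deterministic choice produces the trace:

{code}
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CrossDcForwardingSketch
{
    // Simplified model of the message that crosses the WAN once per mutation:
    // one remote replica receives the mutation plus the list of additional
    // replicas it should forward to inside its own DC.
    static class ForwardedMutation
    {
        final InetAddress destination;      // gets the "Sending message to ..." line
        final List<InetAddress> forwardTo;  // get the "Adding FWD message to ..." lines

        ForwardedMutation(InetAddress destination, List<InetAddress> forwardTo)
        {
            this.destination = destination;
            this.forwardTo = forwardTo;
        }
    }

    // Illustrative only: always use the same element of the target list as the
    // destination. Because the choice is fixed, that one node does all the
    // cross-DC receiving plus the local forwarding work for its DC.
    static ForwardedMutation buildRemoteDcMessage(List<InetAddress> remoteDcTargets)
    {
        InetAddress destination = remoteDcTargets.get(0);  // fixed pick -> uneven load
        List<InetAddress> forwardTo =
                new ArrayList<>(remoteDcTargets.subList(1, remoteDcTargets.size()));
        return new ForwardedMutation(destination, forwardTo);
    }

    public static void main(String[] args) throws UnknownHostException
    {
        List<InetAddress> remoteDcTargets = Arrays.asList(
                InetAddress.getByName("54.183.209.219"),
                InetAddress.getByName("52.53.215.74"),
                InetAddress.getByName("54.183.23.201"));

        // Every mutation produces the same destination, mirroring the trace above.
        for (int i = 0; i < 3; i++)
        {
            ForwardedMutation m = buildRemoteDcMessage(remoteDcTargets);
            System.out.println("Sending message to " + m.destination + ", FWD to " + m.forwardTo);
        }
    }
}
{code}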



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
