From: Mark Jones <MJones@imagehawk.com>
To: user@cassandra.apache.org
CC: jbellis@gmail.com
Date: Wed, 7 Apr 2010 07:55:53 -0500
Subject: RE: Slow Responses from 2 of 3 nodes in RC1

I've rerun the tests with a few changes.  The memory I ordered came in, so all 3 machines now have 8GB, and I've increased Xmx to 6GB on each node.  (I rarely see memory usage greater than 2GB during the loading process, but during checking/reading it skyrockets, as expected.)

Each client thread has ~1,000,000 (+/-5%) items to check.  The client machine is 79-95% idle (more often 95%).

I'm using the Thrift C++ API:

    org::apache::cassandra::ColumnOrSuperColumn result;
    client.get(result, strID, columnPath, m_nConsistencyLevel);

(A fuller sketch of the per-thread setup is a few paragraphs down, after the ring output.)

The data is one column of ~400-600 bytes.  Later there could be other columns, but for this test there aren't.

    UserID: (SuperColumn)
    {
        Action: (Column)
        {
            ActionData: 400-500 bytes
            Tags: none yet
            Other stuff as needed later
        }
    }

So I'm grouping all the actions of a user under their ID.

I loaded ~38 million records into the system yesterday, and they were distributed like so:

Address        Status  Load      Range                                      Ring
                                 167730615856220406399741259265091647472
192.168.1.116  Up      4.81 GB   54880762918591020775962843965839761529    |<--|  *Seed  Cassdb2
192.168.1.119  Up      12.96 GB  160455137948102479104219052453775170160   |   |         Cassdb3
192.168.1.12   Up      8.98 GB   167730615856220406399741259265091647472   |-->|         Cassdb1

Last night, when I started the checker on cassdb1 (all 40 threads reading from cassdb1), I was getting 1800-1900 gets/second; when I ran it against cassdb2, I was getting < 10.
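In case it helps, here is roughly what each checker thread does around that get() call.  This is a sketch rather than my exact code: the keyspace and column family names ("bumble"/"Contacts") are the ones that show up in the compaction logs quoted below, the raw 0.6 thrift get() also takes the keyspace name as its first argument (the call pasted above doesn't show that), and the exact generated names and header paths depend on the thrift version you build against.

    // Rough sketch of one checker thread's read path (not the exact checker code).
    // Header paths assume -I pointing at the thrift C++ include dir; they move
    // around between thrift releases.
    #include <string>
    #include <vector>
    #include <boost/shared_ptr.hpp>
    #include <protocol/TBinaryProtocol.h>
    #include <transport/TSocket.h>
    #include <transport/TTransportUtils.h>   // TBufferedTransport
    #include "Cassandra.h"                   // generated from cassandra.thrift (0.6)

    using namespace apache::thrift;
    using namespace apache::thrift::transport;
    using namespace apache::thrift::protocol;
    using namespace org::apache::cassandra;

    // ConsistencyLevel is whatever the thrift compiler generated for the 0.6 IDL enum.
    void checkFile(const std::string &host, const std::vector<std::string> &ids,
                   ConsistencyLevel consistencyLevel)
    {
        // One connection per thread, opened at startup and reused for every get.
        // 9160 is the default thrift port.
        boost::shared_ptr<TTransport> socket(new TSocket(host, 9160));
        boost::shared_ptr<TTransport> transport(new TBufferedTransport(socket));
        boost::shared_ptr<TProtocol> protocol(new TBinaryProtocol(transport));
        CassandraClient client(protocol);
        transport->open();

        // Path to the single data column under the per-user SuperColumn
        // (the UserID -> Action -> ActionData layout described above).
        ColumnPath columnPath;
        columnPath.column_family = "Contacts";
        columnPath.super_column = "Action";
        columnPath.__isset.super_column = true;
        columnPath.column = "ActionData";
        columnPath.__isset.column = true;

        for (size_t i = 0; i < ids.size(); ++i) {
            ColumnOrSuperColumn result;
            // Raw 0.6 thrift get(): keyspace, key, column path, consistency level.
            client.get(result, "bumble", ids[i], columnPath, consistencyLevel);
            // ... compare result.column.value against what the file says ...
        }

        transport->close();
    }

Each thread builds the connection and ColumnPath once and then loops over its file, so the only per-item cost on the client side should be the get() round trip.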
I started writing this e-mail and went back to run some short numbers to copy and paste into it.  All of a sudden cassdb2 started producing results in the 1300 gets/second range.  So I left the test running all night, and when I got up this morning and restarted the test, the speed was down to 98/second.  And the speed against cassdb1 is still in the 1700/second range.

The performance for cassdb3 is less than 1/second, with numerous timeouts.  After waiting several minutes, I restarted the test against cassdb3, and it is now returning 985/second.  (Just in the process of writing the next two paragraphs, the rate has dropped to 500/second, and 20 minutes later, to 149/second.)  All of this is without any change to any of the machines (not even a restart of any processes).

What is really puzzling about this is that I'm using a consistency level of QUORUM, which means all three nodes are involved in returning results (supposedly).

Another thing I observed last night: while reading from cassdb2 at 1300 gets/second, I started a second check program against cassdb1.  Cassdb1 immediately began returning 1800+/second, and within seconds cassdb2 began timing out and dropped back to 2-3/second, even though moments earlier it had been running well.

I'm beginning to think that the communication between multiple nodes somehow takes precedence over client requests, although I'm not sure how you could pull that off to this degree even if you wanted to.  And why, if that is the case, does cassdb1 never show this kind of problem?  In fact, when I start reading from it, why does the performance of the other 2 machines suffer at its expense?

What I would really like are some hints about what to look at to see where the inefficiencies lie.  They are all running mmap (auto-detected) for disk access, and storage-conf.xml looks like:

    <RowWarningThresholdInMB>512</RowWarningThresholdInMB>
    <SlicedBufferSizeInKB>64</SlicedBufferSizeInKB>
    <FlushDataBufferSizeInMB>32</FlushDataBufferSizeInMB>
    <FlushIndexBufferSizeInMB>8</FlushIndexBufferSizeInMB>
    <ColumnIndexSizeInKB>64</ColumnIndexSizeInKB>
    <MemtableThroughputInMB>64</MemtableThroughputInMB>
    <BinaryMemtableThroughputInMB>256</BinaryMemtableThroughputInMB>
    <MemtableOperationsInMillions>0.3</MemtableOperationsInMillions>
    <MemtableFlushAfterMinutes>60</MemtableFlushAfterMinutes>
    <ConcurrentReads>8</ConcurrentReads>
    <ConcurrentWrites>32</ConcurrentWrites>
    <CommitLogSync>periodic</CommitLogSync>
    <CommitLogSyncPeriodInMS>10000</CommitLogSyncPeriodInMS>
    <GCGraceSeconds>864000</GCGraceSeconds>

If someone has some details about using JMX to monitor this, could you email them, or post them to the wiki?  (Some rough nodetool notes are in the P.S. at the very bottom, below the quoted thread.)

-----Original Message-----
From: Jonathan Ellis [mailto:jbellis@gmail.com]
Sent: Monday, April 05, 2010 4:22 PM
To: user@cassandra.apache.org
Subject: Re: Slow Responses from 2 of 3 nodes in RC1

When you're saying you can check 50 or 100 per second, how many rows
and columns does a check involve?  What query api are you using?

Your cassandra nodes look mostly idle.  Is each client thread getting
the same amount of work or are some finishing sooner than others?  Is
your client cpu or disk perhaps the bottleneck?

On Fri, Apr 2, 2010 at 2:39 PM, Mark Jones <MJones@imagehawk.com> wrote:
> To further complicate matters,
> when I read only from cassdb1, I can check about 100/second/thread (40 threads)
> when I read only from cassdb2, I can check about 50/second/thread (40 threads)
> when I read only from cassdb3, I can check about 50/second/thread (40 threads)
>
> This is with a consistency level of ONE, ALL, or QUORUM.... All 3 levels return about the same read rate (~5/second), yet 2 nodes return them at 1/2 the speed of the other node.
>
> I don't understand how this could be, since QUORUM or ALL would require 2 of the 3 to respond in ALL cases, so you would expect the read rate to be 50/second/thread or 100/second/thread, regardless of who does the proxy.
>
> -----Original Message-----
> From: Mark Jones [mailto:MJones@imagehawk.com]
> Sent: Friday, April 02, 2010 1:38 PM
> To: user@cassandra.apache.org
> Subject: Slow Responses from 2 of 3 nodes in RC1
>
> I have a 3 node cassandra cluster I'm trying to work with.
>
> All three machines are about the same:
>   6-8GB per machine (fastest machine has 8GB, Java VM limited to 5GB)
>   separate spindle for cassandra data and commit log
>
> I wrote ~7 million items to Cassandra, and now I'm trying to read them back.  The ones that are missing might be troubling, but I'm not worried about that yet.  Part of the reason I only have ~7 million items in is that 2 of the nodes are NOT pulling their weight:
>
> I've used "nodetool loadbalance" on them to get the data evened out; it was terribly imbalanced after ingestion, but it now looks like this:
>
> Address        Status  Load     Range                                      Ring
>                                 169214437894733073017295274330696200891
> 192.168.1.116  Up      1.88 GB  83372832363385696737577075791407985563    |<--|  (cassdb2)
> 192.168.1.119  Up      2.59 GB  167732545381904888270252256443838855184   |   |  (cassdb3)
> 192.168.1.12   Up      2.5 GB   169214437894733073017295274330696200891   |-->|  (cassdb1)
>
> Below is a summary report from my checking program (C++).  It runs one thread per file (the files contain the originally ingested data), checking to see if the data inserted is present and the same as when it was inserted.  Each thread has its own Thrift and Cassandra connection setup.  The connection point is randomly chosen at startup, and that connection is reused by that thread until the end of the test.  All the threads are running simultaneously, and I would expect similar results, but one node is beating the pants off the other two for performance.
>
> In the logs there is nothing but INFO lines like these (there are others that give less info about performance); no exceptions, no warnings:
>
> cassdb1:
> INFO [COMPACTION-POOL:1] 2010-04-02 08:20:35,339 CompactionManager.java (line 326) Compacted to /cassandra/data/bumble/Contacts-15-Data.db.  262279345/243198299 bytes for 324378 keys.  Time: 16488ms.
>
> cassdb2:
> INFO [COMPACTION-POOL:1] 2010-04-02 08:20:16,448 CompactionManager.java (line 326) Compacted to /cassandra/data/bumble/Contacts-5-Data.db.  251086153/234535924 bytes for 284088 keys.  Time: 22805ms.
>
> cassdb3:
> INFO [COMPACTION-POOL:1] 2010-04-02 08:20:24,429 CompactionManager.java (line 326) Compacted to /cassandra/data/bumble/Contacts-20-Data.db.  266451419/248084737 bytes for 347531 keys.  Time: 25094ms.
>
> How do I go about figuring out what is going on in this setup?
>
> Iostat -x data is at the bottom
>
> cassdb1  Checked: 9773   Good: 9770   Missing: 3  Miscompared: 0  '/tmp/QUECD0000000005
> cassdb1  Checked: 9818   Good: 9817   Missing: 1  Miscompared: 0  '/tmp/QUEDE0000000005
> cassdb1  Checked: 9820   Good: 9820   Missing: 0  Miscompared: 0  '/tmp/QUEQ0000000005
> cassdb1  Checked: 9836   Good: 9836   Missing: 0  Miscompared: 0  '/tmp/QUEJ0000000005
> cassdb1  Checked: 9843   Good: 9843   Missing: 0  Miscompared: 0  '/tmp/QUEFG0000000005
> cassdb1  Checked: 9883   Good: 9883   Missing: 0  Miscompared: 0  '/tmp/QUENO0000000005
> cassdb1  Checked: 9884   Good: 9883   Missing: 1  Miscompared: 0  '/tmp/QUEIJ0000000005
> cassdb1  Checked: 9890   Good: 9890   Missing: 0  Miscompared: 0  '/tmp/QUER0000000005
> cassdb1  Checked: 9915   Good: 9913   Missing: 2  Miscompared: 0  '/tmp/QUEMN0000000005
> cassdb1  Checked: 9962   Good: 9962   Missing: 0  Miscompared: 0  '/tmp/QUEF0000000005
> cassdb1  Checked: 10120  Good: 10120  Missing: 0  Miscompared: 0  '/tmp/QUEH0000000005
> cassdb1  Checked: 10123  Good: 10123  Missing: 0  Miscompared: 0  '/tmp/QUEM0000000005
> cassdb1  Checked: 10280  Good: 10280  Missing: 0  Miscompared: 0  '/tmp/QUEP0000000005
> cassdb1  Checked: 10490  Good: 10490  Missing: 0  Miscompared: 0  '/tmp/QUEL0000000005
>
> cassdb2  Checked: 0      Good: 0      Missing: 0  Miscompared: 0  '/tmp/QUEC0000000005
> cassdb2  Checked: 1      Good: 1      Missing: 0  Miscompared: 0  '/tmp/QUEN0000000005
> cassdb2  Checked: 2      Good: 2      Missing: 0  Miscompared: 0  '/tmp/QUEEF0000000005
> cassdb2  Checked: 2      Good: 2      Missing: 0  Miscompared: 0  '/tmp/QUEG0000000005
> cassdb2  Checked: 3      Good: 3      Missing: 0  Miscompared: 0  '/tmp/QUEV0000000005
> cassdb2  Checked: 3      Good: 3      Missing: 0  Miscompared: 0  '/tmp/QUEX0000000005
> cassdb2  Checked: 4      Good: 4      Missing: 0  Miscompared: 0  '/tmp/QUEB0000000005
> cassdb2  Checked: 4      Good: 4      Missing: 0  Miscompared: 0  '/tmp/QUEBC0000000005
> cassdb2  Checked: 5      Good: 5      Missing: 0  Miscompared: 0  '/tmp/QUEAB0000000005
> cassdb2  Checked: 5      Good: 5      Missing: 0  Miscompared: 0  '/tmp/QUET0000000005
> cassdb2  Checked: 6      Good: 6      Missing: 0  Miscompared: 0  '/tmp/QUEJK0000000005
> cassdb2  Checked: 7      Good: 7      Missing: 0  Miscompared: 0  '/tmp/QUEO0000000005
> cassdb2  Checked: 9      Good: 9      Missing: 0  Miscompared: 0  '/tmp/QUED0000000005
> cassdb2  Checked: 10     Good: 10     Missing: 0  Miscompared: 0  '/tmp/QUEK0000000005
>
> cassdb3  Checked: 13     Good: 13     Missing: 0  Miscompared: 0  '/tmp/QUEHI0000000005
> cassdb3  Checked: 17     Good: 17     Missing: 0  Miscompared: 0  '/tmp/QUES0000000005
> cassdb3  Checked: 18     Good: 18     Missing: 0  Miscompared: 0  '/tmp/QUEI0000000005
> cassdb3  Checked: 19     Good: 19     Missing: 0  Miscompared: 0  '/tmp/QUEW0000000005
> cassdb3  Checked: 20     Good: 20     Missing: 0  Miscompared: 0  '/tmp/QUEE0000000005
> cassdb3  Checked: 20     Good: 20     Missing: 0  Miscompared: 0  '/tmp/QUEY0000000005
> cassdb3  Checked: 21     Good: 21     Missing: 0  Miscompared: 0  '/tmp/QUEA0000000005
> cassdb3  Checked: 21     Good: 21     Missing: 0  Miscompared: 0  '/tmp/QUELM0000000005
> cassdb3  Checked: 21     Good: 21     Missing: 0  Miscompared: 0  '/tmp/QUEU0000000005
> cassdb3  Checked: 23     Good: 23     Missing: 0  Miscompared: 0  '/tmp/QUEGH0000000005
> cassdb3  Checked: 23     Good: 23     Missing: 0  Miscompared: 0  '/tmp/QUEKL0000000005
> cassdb3  Checked: 23     Good: 23     Missing: 0  Miscompared: 0  '/tmp/QUEZ0000000005
>
> cassdb1:
> Linux 2.6.31-14-generic (record)   04/02/2010   _x86_64_   (2 CPU)
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            6.81    0.09    2.23    0.36    0.00   90.51
>
> Device:  rrqm/s  wrqm/s    r/s    w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
> sda        0.27   25.61   0.89   0.77    27.80   210.90   143.64     0.05   27.35   4.67   0.78
> sdb        0.02    0.00   0.05   0.00     0.78     0.02    15.29     0.00   10.96  10.95   0.06
> sdc        0.02   35.43   0.01   0.75     0.15   718.84   947.70     0.28  371.59   4.36   0.33
>
> cassdb2:
> Linux 2.6.31-14-generic (ec2)   04/02/2010   _x86_64_   (2 CPU)
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            9.34    0.01    1.50    3.85    0.00   85.31
>
> Device:  rrqm/s  wrqm/s    r/s    w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
> sda        2.51    0.47   3.35   0.69   176.95    11.07    46.50     0.32   79.83   5.22   2.11
> sdb        0.06    0.00   0.02   0.00     0.60     0.00    32.17     0.00    2.83   2.83   0.01
> sdc        0.21    0.00  23.43   0.01  1581.80     1.74    67.55     0.45   19.18   3.67   8.61
>
> cassdb3:
> Linux 2.6.31-14-generic (ec1)   04/02/2010   _x86_64_   (2 CPU)
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>           12.57    0.12    1.91    0.35    0.00   85.06
>
> Device:  rrqm/s  wrqm/s    r/s    w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
> sda        0.45   32.56   0.83   0.89    31.44   267.46   173.90     0.04   20.99   4.45   0.77
> sdb        0.02   37.61   1.35   0.79    26.30   674.95   327.98     0.28  133.03   3.36   0.72
>
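P.S.  On my own JMX question above: as far as I can tell, most of the useful numbers are already exposed through nodetool, which just reads the JMX beans (0.6 puts JMX on port 8080 by default).  Something along these lines, flags from memory:

    nodetool -host cassdb2 -port 8080 tpstats    <- per-stage Active/Pending/Completed; a growing Pending on the read stage means the node can't keep up
    nodetool -host cassdb2 -port 8080 cfstats    <- per-column-family read counts and latencies
    nodetool -host cassdb2 -port 8080 info       <- load, uptime, heap usage

Pointing jconsole at the same port shows the same beans interactively.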