From: Laszlo Szabo <laszlo.viktor.szabo@gmail.com>
Date: Tue, 7 Aug 2018 09:30:24 -0400
Subject: Re: Bootstrap OOM issues with Cassandra 3.11.1
To: user@cassandra.apache.org

Hi,

Thanks for the fast response!

We are not using any materialized views, but there are several indexes.  I don't have a recent heap dump, and it will be about 24 hours before I can generate an interesting one, but most of the memory was allocated to byte buffers, so not entirely helpful.
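When I do capture a fresh one, the plan is roughly the following (a minimal sketch, assuming the standard JDK tools are available on the node; the PID is a placeholder):

    # class histogram of live objects (quick to generate and share; forces a full GC)
    jmap -histo:live <cassandra-pid> > cassandra-histo.txt

    # full heap dump for YourKit / VisualVM / MAT (large file; pauses the JVM while it writes)
    jmap -dump:live,format=b,file=/tmp/cassandra-heap.hprof <cassandra-pid>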
nodetool cfstats is also below.

I also see a lot of flushing happening, but it seems like there are too many small allocations to be effective.  Here are the messages I see:

> DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,459 ColumnFamilyStore.java:1305 - Flushing largest CFS(Keyspace='userinfo', ColumnFamily='gpsmessages') to free up room. Used total: 0.54/0.05, live: 0.00/0.00, flushing: 0.40/0.04, this: 0.00/0.00
> DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,459 ColumnFamilyStore.java:915 - Enqueuing flush of gpsmessages: 0.000KiB (0%) on-heap, 0.014KiB (0%) off-heap
> DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,460 ColumnFamilyStore.java:1305 - Flushing largest CFS(Keyspace='userinfo', ColumnFamily='user_history') to free up room. Used total: 0.54/0.05, live: 0.00/0.00, flushing: 0.40/0.04, this: 0.00/0.00
> DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,461 ColumnFamilyStore.java:915 - Enqueuing flush of user_history: 0.000KiB (0%) on-heap, 0.011KiB (0%) off-heap
> DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,465 ColumnFamilyStore.java:1305 - Flushing largest CFS(Keyspace='userinfo', ColumnFamily='tweets') to free up room. Used total: 0.54/0.05, live: 0.00/0.00, flushing: 0.40/0.04, this: 0.00/0.00
> DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,465 ColumnFamilyStore.java:915 - Enqueuing flush of tweets: 0.000KiB (0%) on-heap, 0.188KiB (0%) off-heap
> DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,470 ColumnFamilyStore.java:1305 - Flushing largest CFS(Keyspace='userinfo', ColumnFamily='user_history') to free up room. Used total: 0.54/0.05, live: 0.00/0.00, flushing: 0.40/0.04, this: 0.00/0.00
> DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,470 ColumnFamilyStore.java:915 - Enqueuing flush of user_history: 0.000KiB (0%) on-heap, 0.024KiB (0%) off-heap
> DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,470 ColumnFamilyStore.java:1305 - Flushing largest CFS(Keyspace='userinfo', ColumnFamily='tweets') to free up room. Used total: 0.54/0.05, live: 0.00/0.00, flushing: 0.40/0.04, this: 0.00/0.00
> DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,470 ColumnFamilyStore.java:915 - Enqueuing flush of tweets: 0.000KiB (0%) on-heap, 0.188KiB (0%) off-heap
> DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,472 ColumnFamilyStore.java:1305 - Flushing largest CFS(Keyspace='userinfo', ColumnFamily='gpsmessages') to free up room. Used total: 0.54/0.05, live: 0.00/0.00, flushing: 0.40/0.04, this: 0.00/0.00
> DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,472 ColumnFamilyStore.java:915 - Enqueuing flush of gpsmessages: 0.000KiB (0%) on-heap, 0.013KiB (0%) off-heap

Stack traces from errors are below.

> java.io.IOException: Broken pipe
>         at sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[na:1.8.0_181]
>         at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) ~[na:1.8.0_181]
>         at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) ~[na:1.8.0_181]
>         at sun.nio.ch.IOUtil.write(IOUtil.java:51) ~[na:1.8.0_181]
>         at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471) ~[na:1.8.0_181]
>         at org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.doFlush(BufferedDataOutputStreamPlus.java:323) ~[apache-cassandra-3.11.1.jar:3.11.1]
>         at org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.flush(BufferedDataOutputStreamPlus.java:331) ~[apache-cassandra-3.11.1.jar:3.11.1]
>         at org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:409) [apache-cassandra-3.11.1.jar:3.11.1]
>         at org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:380) [apache-cassandra-3.11.1.jar:3.11.1]
>         at java.lang.Thread.run(Thread.java:748) [na:1.8.0_181]
>
> ERROR [MutationStage-226] 2018-08-06 07:16:08,236 JVMStabilityInspector.java:142 - JVM state determined to be unstable. Exiting forcefully due to:
> java.lang.OutOfMemoryError: Direct buffer memory
>         at java.nio.Bits.reserveMemory(Bits.java:694) ~[na:1.8.0_181]
>         at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) ~[na:1.8.0_181]
>         at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311) ~[na:1.8.0_181]
>         at org.apache.cassandra.utils.memory.SlabAllocator.getRegion(SlabAllocator.java:139) ~[apache-cassandra-3.11.1.jar:3.11.1]
>         at org.apache.cassandra.utils.memory.SlabAllocator.allocate(SlabAllocator.java:104) ~[apache-cassandra-3.11.1.jar:3.11.1]
>         at org.apache.cassandra.utils.memory.ContextAllocator.allocate(ContextAllocator.java:57) ~[apache-cassandra-3.11.1.jar:3.11.1]
>         at org.apache.cassandra.utils.memory.ContextAllocator.clone(ContextAllocator.java:47) ~[apache-cassandra-3.11.1.jar:3.11.1]
>         at org.apache.cassandra.utils.memory.MemtableBufferAllocator.clone(MemtableBufferAllocator.java:40) ~[apache-cassandra-3.11.1.jar:3.11.1]
>         at org.apache.cassandra.db.Memtable.put(Memtable.java:269) ~[apache-cassandra-3.11.1.jar:3.11.1]
>         at org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1332) ~[apache-cassandra-3.11.1.jar:3.11.1]
>         at org.apache.cassandra.db.Keyspace.applyInternal(Keyspace.java:618) ~[apache-cassandra-3.11.1.jar:3.11.1]
>         at org.apache.cassandra.db.Keyspace.applyFuture(Keyspace.java:425) ~[apache-cassandra-3.11.1.jar:3.11.1]
>         at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:222) ~[apache-cassandra-3.11.1.jar:3.11.1]
>         at org.apache.cassandra.db.MutationVerbHandler.doVerb(MutationVerbHandler.java:68) ~[apache-cassandra-3.11.1.jar:3.11.1]
>         at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:66) ~[apache-cassandra-3.11.1.jar:3.11.1]
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_181]
>         at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162) ~[apache-cassandra-3.11.1.jar:3.11.1]
>         at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:134) [apache-cassandra-3.11.1.jar:3.11.1]
>         at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) [apache-cassandra-3.11.1.jar:3.11.1]
>         at java.lang.Thread.run(Thread.java:748) [na:1.8.0_181]
>
> DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,459 ColumnFamilyStore.java:1305 - Flushing largest CFS(Keyspace='userinfo', ColumnFamily='gpsmessages') to free up room. Used total: 0.54/0.05, live: 0.00/0.00, flushing: 0.40/0.04, this: 0.00/0.00
> DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,459 ColumnFamilyStore.java:915 - Enqueuing flush of gpsmessages: 0.000KiB (0%) on-heap, 0.014KiB (0%) off-heap
> DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,460 ColumnFamilyStore.java:1305 - Flushing largest CFS(Keyspace='userinfo', ColumnFamily='user_history') to free up room. Used total: 0.54/0.05, live: 0.00/0.00, flushing: 0.40/0.04, this: 0.00/0.00
> DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,461 ColumnFamilyStore.java:915 - Enqueuing flush of user_history: 0.000KiB (0%) on-heap, 0.011KiB (0%) off-heap
> DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,465 ColumnFamilyStore.java:1305 - Flushing largest CFS(Keyspace='userinfo', ColumnFamily='tweets') to free up room. Used total: 0.54/0.05, live: 0.00/0.00, flushing: 0.40/0.04, this: 0.00/0.00
> DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,465 ColumnFamilyStore.java:915 - Enqueuing flush of tweets: 0.000KiB (0%) on-heap, 0.188KiB (0%) off-heap
> DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,470 ColumnFamilyStore.java:1305 - Flushing largest CFS(Keyspace='userinfo', ColumnFamily='user_history') to free up room. Used total: 0.54/0.05, live: 0.00/0.00, flushing: 0.40/0.04, this: 0.00/0.00
> DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,470 ColumnFamilyStore.java:915 - Enqueuing flush of user_history: 0.000KiB (0%) on-heap, 0.024KiB (0%) off-heap
> DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,470 ColumnFamilyStore.java:1305 - Flushing largest CFS(Keyspace='userinfo', ColumnFamily='tweets') to free up room. Used total: 0.54/0.05, live: 0.00/0.00, flushing: 0.40/0.04, this: 0.00/0.00
> DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,470 ColumnFamilyStore.java:915 - Enqueuing flush of tweets: 0.000KiB (0%) on-heap, 0.188KiB (0%) off-heap
> DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,472 ColumnFamilyStore.java:1305 - Flushing largest CFS(Keyspace='userinfo', ColumnFamily='gpsmessages') to free up room. Used total: 0.54/0.05, live: 0.00/0.00, flushing: 0.40/0.04, this: 0.00/0.00
> DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,472 ColumnFamilyStore.java:915 - Enqueuing flush of gpsmessages: 0.000KiB (0%) on-heap, 0.013KiB (0%) off-heap
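Since the failure is specifically "Direct buffer memory", one more data point: as far as I can tell we do not cap direct memory explicitly (jvm.options is otherwise stock), and my understanding is that the limit then defaults to roughly the configured max heap. A minimal jvm.options sketch of what setting it explicitly would look like; the cap below is an illustrative value, not something we are running (-XX:MaxDirectMemorySize is a standard HotSpot flag, the heap flags just mirror our current settings):

    # heap, as configured today
    -Xms35G
    -Xmx35G

    # illustrative explicit cap on NIO direct buffers (slab allocator, streaming, etc.)
    -XX:MaxDirectMemorySize=16G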
Total number of tables: 40
----------------
Keyspace : userinfo
    Read Count: 143301
    Read Latency: 14.945587623254548 ms.
    Write Count: 2754603904
    Write Latency: 0.020883145284324698 ms.
    Pending Flushes: 0

        Table (index): gpsmessages.gpsmessages_addresscount_idx
        SSTable count: 9
        Space used (live): 19043463189
        Space used (total): 19043463189
        Space used by snapshots (total): 0
        Off heap memory used (total): 6259448
        SSTable Compression Ratio: 0.3704785164266614
        Number of partitions (estimate): 1025
        Memtable cell count: 309066
        Memtable data size: 13602774
        Memtable off heap memory used: 0
        Memtable switch count: 0
        Local read count: 0
        Local read latency: NaN ms
        Local write count: 46025778
        Local write latency: 0.034 ms
        Pending flushes: 0
        Percent repaired: 0.0
        Bloom filter false positives: 0
        Bloom filter false ratio: 0.00000
        Bloom filter space used: 2504
        Bloom filter off heap memory used: 2432
        Index summary off heap memory used: 320
        Compression metadata off heap memory used: 6256696
        Compacted partition minimum bytes: 43
        Compacted partition maximum bytes: 44285675122
        Compacted partition mean bytes: 30405277
        Average live cells per slice (last five minutes): NaN
        Maximum live cells per slice (last five minutes): 0
        Average tombstones per slice (last five minutes): NaN
        Maximum tombstones per slice (last five minutes): 0
        Dropped Mutations: 0

        Table (index): gpsmessages.gpsmessages_addresses_idx
        SSTable count: 18
        Space used (live): 409514565570
        Space used (total): 409514565570
        Space used by snapshots (total): 0
        Off heap memory used (total): 153405673
        SSTable Compression Ratio: 0.4447731157134059
        Number of partitions (estimate): 6013125
        Memtable cell count: 1110334
        Memtable data size: 67480140
        Memtable off heap memory used: 0
        Memtable switch count: 0
        Local read count: 0
        Local read latency: NaN ms
        Local write count: 147639252
        Local write latency: 0.015 ms
        Pending flushes: 0
        Percent repaired: 0.0
        Bloom filter false positives: 0
        Bloom filter false ratio: 0.00000
        Bloom filter space used: 34175400
        Bloom filter off heap memory used: 34175256
        Index summary off heap memory used: 7432177
        Compression metadata off heap memory used: 111798240
        Compacted partition minimum bytes: 61
        Compacted partition maximum bytes: 322381140
        Compacted partition mean bytes: 36692
        Average live cells per slice (last five minutes): NaN
        Maximum live cells per slice (last five minutes): 0
        Average tombstones per slice (last five minutes): NaN
        Maximum tombstones per slice (last five minutes): 0
        Dropped Mutations: 0

        Table (index): gpsmessages.addressreceivedtime_idx
        SSTable count: 10
        Space used (live): 52738155477
        Space used (total): 52738155477
        Space used by snapshots (total): 0
        Off heap memory used (total): 1909362628
        SSTable Compression Ratio: 0.4106961621795128
        Number of partitions (estimate): 1338730016
        Memtable cell count: 308990
        Memtable data size: 13410867
        Memtable off heap memory used: 0
        Memtable switch count: 0
        Local read count: 0
        Local read latency: NaN ms
        Local write count: 46012614
        Local write latency: 0.012 ms
        Pending flushes: 0
        Percent repaired: 0.0
        Bloom filter false positives: 0
        Bloom filter false ratio: 0.00000
        Bloom filter space used: 1687550888
        Bloom filter off heap memory used: 1687550808
        Index summary off heap memory used: 213249180
        Compression metadata off heap memory used: 8562640
        Compacted partition minimum bytes: 36
        Compacted partition maximum bytes: 2759
        Compacted partition mean bytes: 54
        Average live cells per slice (last five minutes): NaN
        Maximum live cells per slice (last five minutes): 0
        Average tombstones per slice (last five minutes): NaN
        Maximum tombstones per slice (last five minutes): 0
        Dropped Mutations: 0

        Table: gpsmessages
        SSTable count: 13
        Space used (live): 337974446627
        Space used (total): 337974446627
        Space used by snapshots (total): 0
        Off heap memory used (total): 77833540
        SSTable Compression Ratio: 0.5300637241381126
        Number of partitions (estimate): 22034
        Memtable cell count: 308904
        Memtable data size: 72074512
        Memtable off heap memory used: 0
        Memtable switch count: 110
        Local read count: 0
        Local read latency: NaN ms
        Local write count: 45996652
        Local write latency: 0.281 ms
        Pending flushes: 0
        Percent repaired: 0.0
        Bloom filter false positives: 0
        Bloom filter false ratio: 0.00000
        Bloom filter space used: 67904
        Bloom filter off heap memory used: 67800
        Index summary off heap memory used: 11756
        Compression metadata off heap memory used: 77753984
        Compacted partition minimum bytes: 73
        Compacted partition maximum bytes: 1155149911
        Compacted partition mean bytes: 13158224
        Average live cells per slice (last five minutes): NaN
        Maximum live cells per slice (last five minutes): 0
        Average tombstones per slice (last five minutes): NaN
        Maximum tombstones per slice (last five minutes): 0
        Dropped Mutations: 13699

        Table: user_history
        SSTable count: 17
        Space used (live): 116361158882
        Space used (total): 116361158882
        Space used by snapshots (total): 0
        Off heap memory used (total): 29562319
        SSTable Compression Ratio: 0.5683114352331539
        Number of partitions (estimate): 1337206
        Memtable cell count: 773277
        Memtable data size: 40623368
        Memtable off heap memory used: 0
        Memtable switch count: 57
        Local read count: 209
        Local read latency: NaN ms
        Local write count: 145853733
        Local write latency: 0.020 ms
        Pending flushes: 0
        Percent repaired: 0.0
        Bloom filter false positives: 0
        Bloom filter false ratio: 0.00000
        Bloom filter space used: 3844416
        Bloom filter off heap memory used: 3844280
        Index summary off heap memory used: 800991
        Compression metadata off heap memory used: 24917048
        Compacted partition minimum bytes: 61
        Compacted partition maximum bytes: 464228842
        Compacted partition mean bytes: 72182
        Average live cells per slice (last five minutes): NaN
        Maximum live cells per slice (last five minutes): 0
        Average tombstones per slice (last five minutes): NaN
        Maximum tombstones per slice (last five minutes): 0
        Dropped Mutations: 66702

        Table: users
        SSTable count: 3
        Space used (live): 89945186
        Space used (total): 89945186
        Space used by snapshots (total): 0
        Off heap memory used (total): 2092053
        SSTable Compression Ratio: 0.5712127629253333
        Number of partitions (estimate): 1365645
        Memtable cell count: 3556
        Memtable data size: 150903
        Memtable off heap memory used: 0
        Memtable switch count: 42
        Local read count: 143087
        Local read latency: 6.094 ms
        Local write count: 250971
        Local write latency: 0.024 ms
        Pending flushes: 0
        Percent repaired: 0.0
        Bloom filter false positives: 0
        Bloom filter false ratio: 0.00000
        Bloom filter space used: 1709848
        Bloom filter off heap memory used: 1709824
        Index summary off heap memory used: 372125
        Compression metadata off heap memory used: 10104
        Compacted partition minimum bytes: 36
        Compacted partition maximum bytes: 310
        Compacted partition mean bytes: 66
        Average live cells per slice (last five minutes): 1.0
        Maximum live cells per slice (last five minutes): 1
        Average tombstones per slice (last five minutes): 1.0
        Maximum tombstones per slice (last five minutes): 1
        Dropped Mutations: 114
        Table: tweets
        SSTable count: 18
        Space used (live): 1809145656486
        Space used (total): 1809145656486
        Space used by snapshots (total): 0
        Off heap memory used (total): 435915908
        SSTable Compression Ratio: 0.5726200929451171
        Number of partitions (estimate): 26217889
        Memtable cell count: 710146
        Memtable data size: 31793929
        Memtable off heap memory used: 0
        Memtable switch count: 399
        Local read count: 5
        Local read latency: NaN ms
        Local write count: 2322829524
        Local write latency: 0.019 ms
        Pending flushes: 0
        Percent repaired: 0.0
        Bloom filter false positives: 0
        Bloom filter false ratio: 0.00000
        Bloom filter space used: 35019224
        Bloom filter off heap memory used: 35019080
        Index summary off heap memory used: 16454076
        Compression metadata off heap memory used: 384442752
        Compacted partition minimum bytes: 104
        Compacted partition maximum bytes: 3379391
        Compacted partition mean bytes: 124766
        Average live cells per slice (last five minutes): NaN
        Maximum live cells per slice (last five minutes): 0
        Average tombstones per slice (last five minutes): NaN
        Maximum tombstones per slice (last five minutes): 0
        Dropped Mutations: 697696
----------------

On Mon, Aug 6, 2018 at 8:57 PM, Jeff Jirsa <jjirsa@gmail.com> wrote:

> Upgrading to 3.11.3 may fix it (there were some memory recycling bugs fixed recently), but analyzing the heap will be the best option.
>
> If you can print out the heap histogram and stack trace, or open a heap dump in YourKit or VisualVM or MAT and show us what's at the top of the reclaimed objects, we may be able to figure out what's going on.
>
> --
> Jeff Jirsa
>
>
> On Aug 6, 2018, at 5:42 PM, Jeff Jirsa <jjirsa@gmail.com> wrote:
>
> Are you using materialized views or secondary indices?
>
> --
> Jeff Jirsa
>
>
> On Aug 6, 2018, at 3:49 PM, Laszlo Szabo <laszlo.viktor.szabo@gmail.com> wrote:
>
> Hello All,
>
> I'm having JVM unstable / OOM errors when attempting to auto bootstrap a 9th node to an existing 8 node cluster (256 tokens). Each machine has 24 cores, 148GB RAM, and 10TB (2TB used). Under normal operation the 8 nodes have JVM memory configured with Xms35G and Xmx35G, and handle 2-4 billion inserts per day. There are never updates, deletes, or sparsely populated rows.
>
> For the bootstrap node, I've tried memory values from 35GB to 135GB in 10GB increments. I've tried using both memtable_allocation_types (heap_buffers and offheap_buffers). I've not tried modifying the memtable_cleanup_threshold but instead have tried memtable_flush_writers from 2 to 8. I've tried memtable_(off)heap_space_in_mb from 20000 to 60000. I've tried both CMS and G1 garbage collection with various settings.
>
> Typically, after streaming about ~2TB of data, CPU load will hit a maximum, and the "nodetool info" heap memory will, over the course of an hour, approach the maximum. At that point, CPU load will drop to a single thread with minimal activity until the system becomes unstable and eventually the OOM error occurs.
>
> Excerpt of the system log is below, and what I consistently see is that the MemtableFlushWriter and the MemtableReclaimMemory pending queues grow as the memory becomes depleted, but the number of completed tasks seems to stop changing a few minutes after the CPU load spikes.
>
> One other data point is that there seems to be a huge number of mutations that occur after most of the stream has occurred. Concurrent_writes is set at 256, with the queue getting as high as 200K before dropping down.
>
> Any suggestions for yaml changes or jvm changes? JVM.options is currently the default with the memory set to the max; the current YAML file is below.
>
> Thanks!
>
>> INFO  [ScheduledTasks:1] 2018-08-06 17:49:26,329 StatusLogger.java:51 - MutationStage                    1        2    191498052        0        0
>> INFO  [ScheduledTasks:1] 2018-08-06 17:49:26,331 StatusLogger.java:51 - ViewMutationStage                0        0            0        0        0
>> INFO  [Service Thread] 2018-08-06 17:49:26,338 StatusLogger.java:51 - PerDiskMemtableFlushWriter_0       0        0         5865        0        0
>> INFO  [ScheduledTasks:1] 2018-08-06 17:49:26,343 StatusLogger.java:51 - ReadStage                        0        0            0        0        0
>> INFO  [Service Thread] 2018-08-06 17:49:26,347 StatusLogger.java:51 - ValidationExecutor                 0        0            0        0        0
>> INFO  [ScheduledTasks:1] 2018-08-06 17:49:26,360 StatusLogger.java:51 - RequestResponseStage             0        0            8        0        0
>> INFO  [Service Thread] 2018-08-06 17:49:26,380 StatusLogger.java:51 - Sampler                            0        0            0        0        0
>> INFO  [Service Thread] 2018-08-06 17:49:26,382 StatusLogger.java:51 - MemtableFlushWriter                8    74293         4716        0        0
>> INFO  [ScheduledTasks:1] 2018-08-06 17:49:26,388 StatusLogger.java:51 - ReadRepairStage                  0        0            0        0        0
>> INFO  [ScheduledTasks:1] 2018-08-06 17:49:26,389 StatusLogger.java:51 - CounterMutationStage             0        0            0        0        0
>> INFO  [ScheduledTasks:1] 2018-08-06 17:49:26,404 StatusLogger.java:51 - MiscStage                        0        0            0        0        0
>> INFO  [ScheduledTasks:1] 2018-08-06 17:49:26,407 StatusLogger.java:51 - CompactionExecutor               8       13          493        0        0
>> INFO  [Service Thread] 2018-08-06 17:49:26,410 StatusLogger.java:51 - InternalResponseStage              0        0           16        0        0
>> INFO  [ScheduledTasks:1] 2018-08-06 17:49:26,413 StatusLogger.java:51 - MemtableReclaimMemory            1     6066          356        0        0
>> INFO  [Service Thread] 2018-08-06 17:49:26,421 StatusLogger.java:51 - AntiEntropyStage                   0        0            0        0        0
>> INFO  [Service Thread] 2018-08-06 17:49:26,430 StatusLogger.java:51 - CacheCleanupExecutor               0        0            0        0        0
>> INFO  [ScheduledTasks:1] 2018-08-06 17:49:26,431 StatusLogger.java:51 - PendingRangeCalculator           0        0            9        0        0
>> INFO  [Service Thread] 2018-08-06 17:49:26,436 StatusLogger.java:61 - CompactionManager                  8       19
>
> Current Yaml:
>
> num_tokens: 256
> hinted_handoff_enabled: true
> hinted_handoff_throttle_in_kb: 10240
> max_hints_delivery_threads: 8
> hints_flush_period_in_ms: 10000
> max_hints_file_size_in_mb: 128
> batchlog_replay_throttle_in_kb: 10240
> authenticator: AllowAllAuthenticator
> authorizer: AllowAllAuthorizer
> role_manager: CassandraRoleManager
> roles_validity_in_ms: 2000
> permissions_validity_in_ms: 2000
> credentials_validity_in_ms: 2000
> partitioner: org.apache.cassandra.dht.Murmur3Partitioner
> data_file_directories:
>     - /data/cassandra/data
> commitlog_directory: /data/cassandra/commitlog
> cdc_enabled: false
> disk_failure_policy: stop
> commit_failure_policy: stop
> prepared_statements_cache_size_mb:
> thrift_prepared_statements_cache_size_mb:
> key_cache_size_in_mb:
> key_cache_save_period: 14400
> row_cache_size_in_mb: 0
> row_cache_save_period: 0
> counter_cache_size_in_mb:
> counter_cache_save_period: 7200
> saved_caches_directory: /data/cassandra/saved_caches
> commitlog_sync: periodic
> commitlog_sync_period_in_ms: 10000
> commitlog_segment_size_in_mb: 32
> seed_provider:
>     - class_name: org.apache.cassandra.locator.SimpleSeedProvider
>       parameters:
>           - seeds: "10.1.1.11,10.1.1.12,10.1.1.13"
> concurrent_reads: 128
> concurrent_writes: 256
> concurrent_counter_writes: 96
> concurrent_materialized_view_writes: 32
> disk_optimization_strategy: spinning
> memtable_heap_space_in_mb: 61440
> memtable_offheap_space_in_mb: 61440
> memtable_allocation_type: heap_buffers
> commitlog_total_space_in_mb: 81920
> memtable_flush_writers: 8
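One more note on the config quoted above: memtable_cleanup_threshold is not set there, so (as I understand the cassandra.yaml comments) the default of 1 / (memtable_flush_writers + 1) applies. A minimal sketch of that section with the threshold pinned explicitly; the values are illustrative, not what the bootstrap node is currently running:

    memtable_heap_space_in_mb: 61440
    memtable_offheap_space_in_mb: 61440
    memtable_allocation_type: offheap_buffers
    # default cleanup threshold is 1 / (memtable_flush_writers + 1); with 8 writers that is roughly 0.11
    memtable_cleanup_threshold: 0.11
    memtable_flush_writers: 8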