From: Philip Ó Condúin
Date: Thu, 8 Aug 2019 17:31:04 +0100
Subject: Re: Datafile Corruption
To: user@cassandra.apache.org

*@Jeff* - If it were hardware, that would explain it all, but do you think it's possible for every server in the cluster to have a hardware issue? The data is sensitive and the customer would lose their mind if I sent it off-site, which is a pity because I could really do with the help.
The corruption is occurring irregularly on every server, instance, and column family in the cluster. Out of 72 instances, we are getting maybe 10 corrupt files per day. We are using vnodes (256) and it is happening in both DCs.

*@Asad* - internode compression is set to ALL on every server. I have checked the packets for the private interconnect and I can't see any dropped packets; there are dropped packets for other interfaces, but not for the private ones. I will get the network team to double-check this.
The corruption is only on the application schema; we are not getting corruption on any system or cass keyspaces. Corruption is happening in both DCs. We are getting corruption for the 1 application schema we have across all tables in the keyspace; it's not limited to one table.
I'm not sure why the app team decided not to use the default compression, I must ask them.

I have been checking /var/log/messages today going back a few weeks and can see a serious amount of broken pipe errors across all servers and instances.
Here is a snippet from one server, but most pipe errors are similar:

Jul  9 03:00:08  cassandra: INFO  02:00:08 Writing Memtable-sstable_activity@1126262628(43.631KiB serialized bytes, 18072 ops, 0%/0% of on/off-heap limit)
Jul  9 03:00:13  kernel: fnic_handle_fip_timer: 8 callbacks suppressed
Jul  9 03:00:19  kernel: fnic_handle_fip_timer: 8 callbacks suppressed
Jul  9 03:00:22  cassandra: ERROR 02:00:22 Got an IOException during write!
Jul  9 03:00:22  cassandra: java.io.IOException: Broken pipe
Jul  9 03:00:22  cassandra: at sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[na:1.8.0_172]
Jul  9 03:00:22  cassandra: at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) ~[na:1.8.0_172]
Jul  9 03:00:22  cassandra: at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) ~[na:1.8.0_172]
Jul  9 03:00:22  cassandra: at sun.nio.ch.IOUtil.write(IOUtil.java:65) ~[na:1.8.0_172]
Jul  9 03:00:22  cassandra: at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471) ~[na:1.8.0_172]
Jul  9 03:00:22  cassandra: at org.apache.thrift.transport.TNonblockingSocket.write(TNonblockingSocket.java:165) ~[libthrift-0.9.2.jar:0.9.2]
Jul  9 03:00:22  cassandra: at com.thinkaurelius.thrift.util.mem.Buffer.writeTo(Buffer.java:104) ~[thrift-server-0.3.7.jar:na]
Jul  9 03:00:22  cassandra: at com.thinkaurelius.thrift.util.mem.FastMemoryOutputTransport.streamTo(FastMemoryOutputTransport.java:112) ~[thrift-server-0.3.7.jar:na]
Jul  9 03:00:22  cassandra: at com.thinkaurelius.thrift.Message.write(Message.java:222) ~[thrift-server-0.3.7.jar:na]
Jul  9 03:00:22  cassandra: at com.thinkaurelius.thrift.TDisruptorServer$SelectorThread.handleWrite(TDisruptorServer.java:598) [thrift-server-0.3.7.jar:na]
Jul  9 03:00:22  cassandra: at com.thinkaurelius.thrift.TDisruptorServer$SelectorThread.processKey(TDisruptorServer.java:569) [thrift-server-0.3.7.jar:na]
Jul  9 03:00:22  cassandra: at com.thinkaurelius.thrift.TDisruptorServer$AbstractSelectorThread.select(TDisruptorServer.java:423) [thrift-server-0.3.7.jar:na]
Jul  9 03:00:22  cassandra: at com.thinkaurelius.thrift.TDisruptorServer$AbstractSelectorThread.run(TDisruptorServer.java:383) [thrift-server-0.3.7.jar:na]
Jul  9 03:00:25  kernel: fnic_handle_fip_timer: 8 callbacks suppressed
Jul  9 03:00:30  cassandra: ERROR 02:00:30 Got an IOException during write!
Jul  9 03:00:30  cassandra: java.io.IOException: Broken pipe
[same stack trace as at 03:00:22, repeated verbatim]
Jul  9 03:00:31  kernel: fnic_handle_fip_timer: 8 callbacks suppressed
Jul  9 03:00:37  kernel: fnic_handle_fip_timer: 8 callbacks suppressed
Jul  9 03:00:43  kernel: fnic_handle_fip_timer: 8 callbacks suppressed

On Thu, 8 Aug 2019 at 15:42, ZAIDI, ASAD A wrote:

> Did you check whether packets are NOT being dropped on the network interfaces
> the Cassandra instances are using (ifconfig -a)? Internode compression is
> set for all endpoints – maybe the network is playing a role here?
>
> Is this corruption limited to certain keyspaces/tables or DCs, or is it
> widespread? From the log snippet you shared, it looked like only a specific
> keyspace/table is affected – is that correct?
>
> When you remove a corrupted sstable of a certain table, I guess you verify
> all nodes for corrupted sstables of the same table (maybe with nodetool
> scrub) so as to limit the spread of corruption – right?
>
> Just curious – you're not using the lz4/default compressor for all
> tables; there must be some reason for that.
>
> *From:* Philip Ó Condúin [mailto:philipoconduin@gmail.com]
> *Sent:* Thursday, August 08, 2019 6:20 AM
> *To:* user@cassandra.apache.org
> *Subject:* Re: Datafile Corruption
>
> Hi All,
>
> Thank you so much for the replies.
>
> Currently, I have the following list of things that can potentially cause some sort
> of corruption in a Cassandra cluster.
>
> - Sudden power cut - *We have had no power cuts in the datacenters.*
> - Network issues - *no network issues from what I can tell.*
> - Disk full - *I don't think this is an issue for us, see disks below.*
> - An issue in the Cassandra version, like CASSANDRA-13752 - *couldn't find
>   any Jira issues similar to ours.*
> - Bit flips - *we have compression enabled, so I don't think this
>   should be an issue.*
> - Repair during upgrade has caused corruption too - *we have not
>   upgraded.*
> - Dropping and adding columns with the same name but a different type
>   - *I will need to ask the apps team how they are using the database.*
>
> Ok, let me try and explain the issue we are having. I am under a lot of
> pressure from above to get this fixed and I can't figure it out.
>
> This is a PRE-PROD environment.
>
> - 2 datacenters.
> - 9 physical servers in each datacenter.
> - 4 Cassandra instances on each server.
> - 72 Cassandra instances across the 2 datacenters: 36 in site A, 36
>   in site B.
>
> We also have 2 Reaper nodes we use for repair, one in each
> datacenter, each running with its own Cassandra back end, in a cluster
> together.
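To double-check the "no pattern" claim, it can help to tally the corruption errors mechanically across all instances rather than eyeball them. The sketch below is only illustrative: the log line shape and the /data/ssdN/data/<keyspace>/<table>-<id>/ path layout are assumed from the snippets in this thread, and the function name is mine; adjust the regex to your actual system.log format.

```python
import re
from collections import Counter

# Assumed path layout: /data/ssd<N>/data/<keyspace>/<table>-<id>/...
# (matches the df -h output and stack traces quoted in this thread).
CORRUPT_RE = re.compile(
    r"CorruptSSTableException: Corrupted: "
    r"/data/ssd\d+/data/(?P<keyspace>[^/]+)/(?P<table>[^-/]+)-[^/]+"
)

def tally_corruptions(log_lines):
    """Count CorruptSSTableException hits per (keyspace, table).

    Feed it lines from every instance's logs; if one table or one disk
    dominates, the corruption is probably not random after all.
    """
    hits = Counter()
    for line in log_lines:
        m = CORRUPT_RE.search(line)
        if m:
            hits[(m.group("keyspace"), m.group("table"))] += 1
    return hits
```

Run it over the concatenated logs of all 72 instances; a flat distribution supports the random-corruption theory, a skewed one points at specific hardware or a specific table's workload.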
>
> OS Details [Red Hat Linux]
> cass_a@x 0 10:53:01 ~ $ uname -a
> Linux x 3.10.0-957.5.1.el7.x86_64 #1 SMP Wed Dec 19 10:46:58 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
>
> cass_a@x 0 10:57:31 ~ $ cat /etc/*release
> NAME="Red Hat Enterprise Linux Server"
> VERSION="7.6 (Maipo)"
> ID="rhel"
>
> Storage Layout
> cass_a@xx 0 10:46:28 ~ $ df -h
> Filesystem                Size  Used Avail Use% Mounted on
> /dev/mapper/vg01-lv_root   20G  2.2G   18G  11% /
> devtmpfs                   63G     0   63G   0% /dev
> tmpfs                      63G     0   63G   0% /dev/shm
> tmpfs                      63G  4.1G   59G   7% /run
> tmpfs                      63G     0   63G   0% /sys/fs/cgroup
> >> 4 cassandra instances
> /dev/sdd                  1.5T  802G  688G  54% /data/ssd4
> /dev/sda                  1.5T  798G  692G  54% /data/ssd1
> /dev/sdb                  1.5T  681G  810G  46% /data/ssd2
> /dev/sdc                  1.5T  558G  932G  38% /data/ssd3
>
> Cassandra load is about 200GB and the rest of the space is snapshots.
>
> CPU
> cass_a@x 127 10:58:47 ~ $ lscpu | grep -E '^Thread|^Core|^Socket|^CPU\('
> CPU(s):                64
> Thread(s) per core:    2
> Core(s) per socket:    16
> Socket(s):             2
>
> *Description of problem:*
> During repair of the cluster, we are seeing multiple corruptions in the
> log files on a lot of instances. There seems to be no pattern to the
> corruption. It seems that the repair job is finding all the corrupted
> files for us. The repair will hang on the node where the corrupted file is
> found. To fix this we remove/rename the datafile and bounce the Cassandra
> instance. Our hardware/OS team have stated there is no problem on their
> side. I do not believe it is the repair causing the corruption.
>
> We have maintenance scripts that run every night running compactions and
> creating snapshots. I decided to turn these off, fix any corruptions we
> currently had, and ran a major compaction on all nodes; once this was done we
> had a "clean" cluster and we left the cluster for a few days.
>
> After the process we noticed one corruption in the cluster; this datafile was created
> after I turned off the maintenance scripts, so my theory of the scripts
> causing the issue was wrong. We then kicked off another repair and started
> to find more corrupt files created after the maintenance scripts were turned
> off.
>
> So let me give you an example of a corrupted file, and maybe someone might
> be able to work through it with me?
>
> When this corrupted file was reported in the log, it looks like it was the
> repair that found it.
>
> $ journalctl -u cassmeta-cass_b.service --since "2019-08-07 22:25:00" --until "2019-08-07 22:45:00"
>
> Aug 07 22:30:33 cassandra[34611]: INFO 21:30:33 Writing Memtable-compactions_in_progress@830377457(0.008KiB serialized bytes, 1 ops, 0%/0% of on/off-heap limit)
> Aug 07 22:30:33 cassandra[34611]: ERROR 21:30:33 Failed creating a merkle tree for [repair #9587a200-b95a-11e9-8920-9f72868b8375 on KeyspaceMetadata/x, (-1476350953672479093,-1474461
> Aug 07 22:30:33 cassandra[34611]: ERROR 21:30:33 Exception in thread Thread[ValidationExecutor:825,1,main]
> Aug 07 22:30:33 cassandra[34611]: org.apache.cassandra.io.FSReadError: org.apache.cassandra.io.sstable.CorruptSSTableException: Corrupted: /x/ssd2/data/KeyspaceMetadata/x-1e453cb0
> Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:365) ~[apache-cassandra-2.2.13.jar:2.2.13]
> Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:361) ~[apache-cassandra-2.2.13.jar:2.2.13]
> Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:340) ~[apache-cassandra-2.2.13.jar:2.2.13]
> Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:382) ~[apache-cassandra-2.2.13.jar:2.2.13]
> Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:366) ~[apache-cassandra-2.2.13.jar:2.2.13]
> Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:81) ~[apache-cassandra-2.2.13.jar:2.2.13]
> Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52) ~[apache-cassandra-2.2.13.jar:2.2.13]
> Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46) ~[apache-cassandra-2.2.13.jar:2.2.13]
> Aug 07 22:30:33 cassandra[34611]: at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143) ~[guava-16.0.jar:na]
> Aug 07 22:30:33 cassandra[34611]: at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) ~[guava-16.0.jar:na]
> Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.io.sstable.SSTableIdentityIterator.hasNext(SSTableIdentityIterator.java:169) ~[apache-cassandra-2.2.13.jar:2.2.13]
> Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:202) ~[apache-cassandra-2.2.13.jar:2.2.13]
> Aug 07 22:30:33 cassandra[34611]: at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143) ~[guava-16.0.jar:na]
> Aug 07 22:30:33 cassandra[34611]: at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) ~[guava-16.0.jar:na]
> Aug 07 22:30:33 cassandra[34611]: at com.google.common.collect.Iterators$7.computeNext(Iterators.java:645) ~[guava-16.0.jar:na]
> Aug 07 22:30:33 cassandra[34611]: at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143) ~[guava-16.0.jar:na]
> Aug 07 22:30:33 cassandra[34611]: at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) ~[guava-16.0.jar:na]
> Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.db.ColumnIndex$Builder.buildForCompaction(ColumnIndex.java:174) ~[apache-cassandra-2.2.13.jar:2.2.13]
> Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.db.compaction.LazilyCompactedRow.update(LazilyCompactedRow.java:187) ~[apache-cassandra-2.2.13.jar:2.2.13]
> Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.repair.Validator.rowHash(Validator.java:201) ~[apache-cassandra-2.2.13.jar:2.2.13]
> Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.repair.Validator.add(Validator.java:150) ~[apache-cassandra-2.2.13.jar:2.2.13]
> Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:1166) ~[apache-cassandra-2.2.13.jar:2.2.13]
> Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.db.compaction.CompactionManager.access$600(CompactionManager.java:76) ~[apache-cassandra-2.2.13.jar:2.2.13]
> Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.db.compaction.CompactionManager$10.call(CompactionManager.java:736) ~[apache-cassandra-2.2.13.jar:2.2.13]
> Aug 07 22:30:33 cassandra[34611]: at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_172]
> Aug 07 22:30:33 cassandra[34611]: at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[na:1.8.0_172]
> Aug 07 22:30:33 cassandra[34611]: at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_172]
> Aug 07 22:30:33 cassandra[34611]: at java.lang.Thread.run(Thread.java:748) [na:1.8.0_172]
> Aug 07 22:30:33 cassandra[34611]: Caused by: org.apache.cassandra.io.sstable.CorruptSSTableException: Corrupted: /data/ssd2/data/KeyspaceMetadata/x-x/l
> Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBufferMmap(CompressedRandomAccessReader.java:216) ~[apache-cassandra-2.2.13.jar:2.2.13]
> Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:226) ~[apache-cassandra-2.2.13.jar:2.2.13]
> Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.io.compress.CompressedThrottledReader.reBuffer(CompressedThrottledReader.java:42) ~[apache-cassandra-2.2.13.jar:2.2.13]
> Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:352) ~[apache-cassandra-2.2.13.jar:2.2.13]
> Aug 07 22:30:33 cassandra[34611]: ... 27 common frames omitted
> Aug 07 22:30:33 cassandra[34611]: Caused by: org.apache.cassandra.io.compress.CorruptBlockException: (/data/ssd2/data/KeyspaceMetadata/x-x/lb-26203-big
> Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBufferMmap(CompressedRandomAccessReader.java:185) ~[apache-cassandra-2.2.13.jar:2.2.13]
> Aug 07 22:30:33 cassandra[34611]: ... 30 common frames omitted
> Aug 07 22:30:33 cassandra[34611]: INFO 21:30:33 Not a global repair, will not do anticompaction
> Aug 07 22:30:33 cassandra[34611]: ERROR 21:30:33 Stopping gossiper
> Aug 07 22:30:33 cassandra[34611]: WARN 21:30:33 Stopping gossip by operator request
> Aug 07 22:30:33 cassandra[34611]: INFO 21:30:33 Announcing shutdown
> Aug 07 22:30:33 cassandra[34611]: INFO 21:30:33 Node /10.2.57.37 state jump to shutdown
>
> So I went to the file system to see when this corrupt file was created, and
> it was created on July 30th at 15:55.
>
> root@x 0 01:14:03 ~ # ls -l /data/ssd2/data/KeyspaceMetadata/x-x/lb-26203-big-Data.db
> -rw-r--r-- 1 cass_b cass_b 3182243670 Jul 30 15:55 /data/ssd2/data/KeyspaceMetadata/x-x/lb-26203-big-Data.db
>
> So I checked /var/log/messages for errors around that time.
> The only thing that stands out to me is the message "Cannot perform a full
> major compaction as repaired and unrepaired sstables cannot be compacted
> together"; I'm not sure whether that would actually cause corruption, though.
>
> Jul 30 15:55:06 x systemd: Created slice User Slice of root.
> Jul 30 15:55:06 x systemd: Started Session c165280 of user root.
> Jul 30 15:55:06 x audispd: node=x. type=USER_START msg=audit(1564498506.021:457933): pid=17533 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:session_open grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_tty_audit,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
> Jul 30 15:55:06 x systemd: Removed slice User Slice of root.
> Jul 30 15:55:14 x tag_audit_log: type=USER_CMD msg=audit(1564498506.013:457932): pid=17533 uid=509 auid=4294967295 ses=4294967295 msg='cwd="/" cmd=2F7573722F7362696E2F69706D692D73656E736F7273202D2D71756965742D6361636865202D2D7364722D63616368652D7265637265617465202D2D696E746572707265742D6F656D2D64617461202D2D6F75747075742D73656E736F722D7374617465202D2D69676E6F72652D6E6F742D617661696C61626C652D73656E736F7273202D2D6F75747075742D73656E736F722D7468726573686F6C6473 terminal=? res=success'
> Jul 30 15:55:14 x tag_audit_log: type=USER_START msg=audit(1564498506.021:457933): pid=17533 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:session_open grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_tty_audit,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
> Jul 30 15:55:19 x cassandra: INFO 14:55:19 Writing Memtable-compactions_in_progress@1462227999(0.008KiB serialized bytes, 1 ops, 0%/0% of on/off-heap limit)
> Jul 30 15:55:19 x cassandra: INFO 14:55:19 Cannot perform a full major compaction as repaired and unrepaired sstables cannot be compacted together. These two sets of sstables will be compacted separately.
> Jul 30 15:55:19 x cassandra: INFO 14:55:19 Writing Memtable-compactions_in_progress@1198535528(1.002KiB serialized bytes, 57 ops, 0%/0% of on/off-heap limit)
> Jul 30 15:55:20 x cassandra: INFO 14:55:20 Writing Memtable-compactions_in_progress@2039409834(0.008KiB serialized bytes, 1 ops, 0%/0% of on/off-heap limit)
> Jul 30 15:55:24 x audispd: node=x. type=USER_LOGOUT msg=audit(1564498524.409:457934): pid=46620 uid=0 auid=464400029 ses=2747 msg='op=login id=464400029 exe="/usr/sbin/sshd" hostname=? addr=? terminal=/dev/pts/0 res=success'
> Jul 30 15:55:24 x audispd: node=x. type=USER_LOGOUT msg=audit(1564498524.409:457935): pid=4878 uid=0 auid=464400029 ses=2749 msg='op=login id=464400029 exe="/usr/sbin/sshd" hostname=? addr=? terminal=/dev/pts/1 res=success'
>
> Jul 30 15:55:57 x systemd: Created slice User Slice of root.
> Jul 30 15:55:57 x systemd: Started Session c165288 of user root.
> Jul 30 15:55:57 x audispd: node=x. type=USER_START msg=audit(1564498557.294:457958): pid=19687 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:session_open grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_tty_audit,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
> Jul 30 15:55:57 x audispd: node=x. type=USER_START msg=audit(1564498557.298:457959): pid=19690 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:session_open grantors=pam_keyinit,pam_systemd,pam_keyinit,pam_limits,pam_unix acct="cass_b" exe="/usr/sbin/runuser" hostname=? addr=? terminal=? res=success'
> Jul 30 15:55:58 x systemd: Removed slice User Slice of root.
> Jul 30 15:56:02 x cassandra: INFO 14:56:02 Writing Memtable-compactions_in_progress@1532791194(0.008KiB serialized bytes, 1 ops, 0%/0% of on/off-heap limit)
> Jul 30 15:56:02 x cassandra: INFO 14:56:02 Cannot perform a full major compaction as repaired and unrepaired sstables cannot be compacted together. These two sets of sstables will be compacted separately.
> Jul 30 15:56:02 x cassandra: INFO 14:56:02 Writing Memtable-compactions_in_progress@1455399453(0.281KiB serialized bytes, 16 ops, 0%/0% of on/off-heap limit)
> Jul 30 15:56:04 x tag_audit_log: type=USER_CMD msg=audit(1564498555.190:457951): pid=19294 uid=509 auid=4294967295 ses=4294967295 msg='cwd="/" cmd=72756E75736572202D73202F62696E2F62617368202D6C20636173735F62202D632063617373616E6472612D6D6574612F63617373616E6472612F62696E2F6E6F6465746F6F6C2074707374617473 terminal=? res=success'
>
> We have checked a number of other things, like NTP settings etc., but nothing
> is telling us what could cause so many corruptions across the entire
> cluster.
> Things were healthy with this cluster for months; the only thing I can
> think of is that we started loading data from a load of 20GB per instance up
> to 200GB, where it sits now. Maybe this just highlighted the issue.
>
> Compaction and compression on the keyspace's CFs [mixture]
> All CFs are using compression.
>
> AND compaction = {'min_threshold': '4', 'class': 'org.apache.cassandra.db.compaction.*SizeTieredCompactionStrategy*', 'max_threshold': '32'}
> AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.*SnappyCompressor*'}
>
> AND compaction = {'min_threshold': '4', 'class': 'org.apache.cassandra.db.compaction.*SizeTieredCompactionStrategy*', 'max_threshold': '32'}
> AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.*LZ4Compressor*'}
>
> AND compaction = {'class': 'org.apache.cassandra.db.compaction.*LeveledCompactionStrategy*'}
> AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.*SnappyCompressor*'}
>
> --We are also using internode network compression:
> internode_compression: all
>
> Does anyone have any idea what I should check next?
> Our next theory is that there may be an issue with checksums, but I'm not
> sure where to go with this.
>
> Any help would be very much appreciated before I lose the last bit of hair
> I have on my head.
>
> Kind Regards,
> Phil
>
> On Wed, 7 Aug 2019 at 20:51, Nitan Kainth wrote:
>
> Repair during upgrade has caused corruption too.
>
> Also, dropping and adding columns with the same name but a different type.
>
> Regards,
> Nitan
> Cell: 510 449 9629
>
> On Aug 7, 2019, at 2:42 PM, Jeff Jirsa wrote:
>
> Is compression enabled?
>
> If not, bit flips on disk can corrupt data files, and reads + repair may
> send that corruption to other hosts in the cluster.
>
> On Aug 7, 2019, at 3:46 AM, Philip Ó Condúin wrote:
>
> Hi All,
>
> I am currently experiencing multiple datafile corruptions across most
> nodes in my cluster; there seems to be no pattern to the corruption. I'm
> starting to think it might be a bug; we're using Cassandra 2.2.13.
>
> Without going into detail about the issue, I just want to confirm something.
>
> Can someone share with me a list of scenarios that would cause corruption?
>
> 1. OS failure
> 2. Cassandra disturbed during the writing
>
> etc. etc.
>
> I need to investigate each scenario and don't want to leave any out.
>
> --
> Regards,
> Phil

--
Regards,
Phil
@Jeff - If it was hardware that would explain it all, but= do you think it's possible to have every server in the cluster with a = hardware issue?
The data is sensitive and the customer would lose their = mind if I sent it off-site which is a pity cause I could really do with the= help.
The corruption is occurring irregularly on every server and instance = and column family in the cluster.=C2=A0 Out of 72 instances, we are getting= maybe 10 corrupt files per day.
We are using vnodes (256) and it is happeni= ng in both DC's

@Asad - internode compression is set to ALL o= n every server.=C2=A0 I have checked the packets for the private interconne= ct and I can't see any dropped packets, there are dropped packets for o= ther interfaces, but not for the private ones, I will get the network team = to double-check this.=C2=A0
The corruption is only on the application sc= hema, we are not getting corruption on any system or cass keyspaces.=C2=A0 = Corruption is happening in both DC's.=C2=A0 We are getting corruption f= or the 1 application schema we have across all tables in the keyspace, it&#= 39;s not limited to one table.
Im not sure why the app team decided to not u= se default compression, I must ask them.



I have been checking th= e /var/log/messages today going back a few weeks and can see a serious amou= nt of broken pipe errors across all servers and instances.
Here is a snippe= t from one server but most pipe errors are similar:

Jul  9 03:00:08  cassandra: INFO  02:00:08 Writing Memtable-sstable_activity@1126262628(43.631KiB serialized bytes, 18072 ops, 0%/0% of on/off-heap limit)
Jul  9 03:00:13  kernel: fnic_handle_fip_timer: 8 callbacks suppressed
Jul  9 03:00:19  kernel: fnic_handle_fip_timer: 8 callbacks suppressed
Jul  9 03:00:22  cassandra: ERROR 02:00:22 Got an IOException during write!
Jul  9 03:00:22  cassandra: java.io.IOException: Broken pipe
Jul  9 03:00:22  cassandra: at sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[na:1.8.0_172]
Jul  9 03:00:22  cassandra: at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) ~[na:1.8.0_172]
Jul  9 03:00:22  cassandra: at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) ~[na:1.8.0_172]
Jul  9 03:00:22  cassandra: at sun.nio.ch.IOUtil.write(IOUtil.java:65) ~[na:1.8.0_172]
Jul  9 03:00:22  cassandra: at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471) ~[na:1.8.0_172]
Jul  9 03:00:22  cassandra: at org.apache.thrift.transport.TNonblockingSocket.write(TNonblockingSocket.java:165) ~[libthrift-0.9.2.jar:0.9.2]
Jul  9 03:00:22  cassandra: at com.thinkaurelius.thrift.util.mem.Buffer.writeTo(Buffer.java:104) ~[thrift-server-0.3.7.jar:na]
Jul  9 03:00:22  cassandra: at com.thinkaurelius.thrift.util.mem.FastMemoryOutputTransport.streamTo(FastMemoryOutputTransport.java:112) ~[thrift-server-0.3.7.jar:na]
Jul  9 03:00:22  cassandra: at com.thinkaurelius.thrift.Message.write(Message.java:222) ~[thrift-server-0.3.7.jar:na]
Jul  9 03:00:22  cassandra: at com.thinkaurelius.thrift.TDisruptorServer$SelectorThread.handleWrite(TDisruptorServer.java:598) [thrift-server-0.3.7.jar:na]
Jul  9 03:00:22  cassandra: at com.thinkaurelius.thrift.TDisruptorServer$SelectorThread.processKey(TDisruptorServer.java:569) [thrift-server-0.3.7.jar:na]
Jul  9 03:00:22  cassandra: at com.thinkaurelius.thrift.TDisruptorServer$AbstractSelectorThread.select(TDisruptorServer.java:423) [thrift-server-0.3.7.jar:na]
Jul  9 03:00:22  cassandra: at com.thinkaurelius.thrift.TDisruptorServer$AbstractSelectorThread.run(TDisruptorServer.java:383) [thrift-server-0.3.7.jar:na]
Jul  9 03:00:25  kernel: fnic_handle_fip_timer: 8 callbacks suppressed
Jul  9 03:00:30  cassandra: ERROR 02:00:30 Got an IOException during write!
Jul  9 03:00:30  cassandra: java.io.IOException: Broken pipe
Jul  9 03:00:30  cassandra: at sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[na:1.8.0_172]
Jul  9 03:00:30  cassandra: at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) ~[na:1.8.0_172]
Jul  9 03:00:30  cassandra: at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) ~[na:1.8.0_172]
Jul  9 03:00:30  cassandra: at sun.nio.ch.IOUtil.write(IOUtil.java:65) ~[na:1.8.0_172]
Jul  9 03:00:30  cassandra: at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471) ~[na:1.8.0_172]
Jul  9 03:00:30  cassandra: at org.apache.thrift.transport.TNonblockingSocket.write(TNonblockingSocket.java:165) ~[libthrift-0.9.2.jar:0.9.2]
Jul  9 03:00:30  cassandra: at com.thinkaurelius.thrift.util.mem.Buffer.writeTo(Buffer.java:104) ~[thrift-server-0.3.7.jar:na]
Jul  9 03:00:30  cassandra: at com.thinkaurelius.thrift.util.mem.FastMemoryOutputTransport.streamTo(FastMemoryOutputTransport.java:112) ~[thrift-server-0.3.7.jar:na]
Jul  9 03:00:30  cassandra: at com.thinkaurelius.thrift.Message.write(Message.java:222) ~[thrift-server-0.3.7.jar:na]
Jul  9 03:00:30  cassandra: at com.thinkaurelius.thrift.TDisruptorServer$SelectorThread.handleWrite(TDisruptorServer.java:598) [thrift-server-0.3.7.jar:na]
Jul  9 03:00:30  cassandra: at com.thinkaurelius.thrift.TDisruptorServer$SelectorThread.processKey(TDisruptorServer.java:569) [thrift-server-0.3.7.jar:na]
Jul  9 03:00:30  cassandra: at com.thinkaurelius.thrift.TDisruptorServer$AbstractSelectorThread.select(TDisruptorServer.java:423) [thrift-server-0.3.7.jar:na]
Jul  9 03:00:30  cassandra: at com.thinkaurelius.thrift.TDisruptorServer$AbstractSelectorThread.run(TDisruptorServer.java:383) [thrift-server-0.3.7.jar:na]
Jul  9 03:00:31  kernel: fnic_handle_fip_timer: 8 callbacks suppressed
Jul  9 03:00:37  kernel: fnic_handle_fip_timer: 8 callbacks suppressed
Jul  9 03:00:43  kernel: fnic_handle_fip_timer: 8 callbacks suppressed
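A rough way to tally these broken-pipe errors per day across all servers, rather than scanning by eye (the helper name and sample lines below are illustrative, not from any Cassandra tooling):

```python
import re
from collections import Counter

# Rough sketch: tally "Broken pipe" IOExceptions per day from a
# syslog-style file such as /var/log/messages.
def count_broken_pipes(lines):
    counts = Counter()
    for line in lines:
        if "java.io.IOException: Broken pipe" in line:
            m = re.match(r"^(\w{3})\s+(\d+)", line)  # e.g. "Jul  9"
            if m:
                counts[f"{m.group(1)} {m.group(2)}"] += 1
    return counts

sample = [
    "Jul  9 03:00:22  cassandra: java.io.IOException: Broken pipe",
    "Jul  9 03:00:25  kernel: fnic_handle_fip_timer: 8 callbacks suppressed",
    "Jul  9 03:00:30  cassandra: java.io.IOException: Broken pipe",
]
print(count_broken_pipes(sample))  # Counter({'Jul 9': 2})
```

Run against the real file with count_broken_pipes(open("/var/log/messages")), the per-day counts could then be compared against when the corrupt sstables were written.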



On Thu, 8 Aug 2019 at 15:42, ZAIDI, ASAD A <az192g@att.com> wrote:

Did you check if packets are NOT being dropped for the network interfaces the Cassandra instances are consuming (ifconfig -a)? Internode compression is set for all endpoints - maybe the network is playing a role here?

Is this corruption limited to certain keyspaces/tables or DCs, or is it widespread? From the log snippet you shared, it looked like only a specific keyspace/table is affected - is that correct?

When you remove a corrupted sstable of a certain table, I guess you verify all nodes for corrupted sstables for the same table (maybe with nodetool scrub) so as to limit the spread of corruption - right?

Just curious to know - you're not using the lz4/default compressor for all tables; there must be some reason for it.

From: Philip Ó Condúin [mailto:philipoconduin@gmail.com]
Sent: Thursday, August 08, 2019 6:20 AM
To: user@cassandra.apache.org
Subject: Re: Datafile Corruption

Hi All,

Thank you so much for the replies.

Currently, I have the following list that can potentially cause some sort of corruption in a Cassandra cluster.

  • Sudden power cut - we have had no power cuts in the datacenters
  • Network issues - no network issues from what I can tell
  • Disk full - I don't think this is an issue for us, see disks below
  • An issue in a Cassandra version, like CASSANDRA-13752 - couldn't find any Jira issues similar to ours
  • Bit flips - we have compression enabled so I don't think this should be an issue
  • Repair during upgrade has caused corruption too - we have not upgraded
  • Dropping and adding columns with the same name but a different type - I will need to ask the apps team how they are using the database



Ok, let me try and explain the issue we are having. I am under a lot of pressure from above to get this fixed and I can't figure it out.

This is a PRE-PROD environment.

  • 2 datacenters
  • 9 physical servers in each datacenter
  • 4 Cassandra instances on each server
  • 72 Cassandra instances across the 2 data centres, 36 in site A, 36 in site B

    We also have 2 Reaper nodes we use for repair.  One Reaper node in each datacenter, each running with its own Cassandra back end in a cluster together.

    OS Details [Red Hat Linux]
    cass_a@x 0 10:53:01 ~ $ uname -a
    Linux x 3.10.0-957.5.1.el7.x86_64 #1 SMP Wed Dec 19 10:46:58 EST 2018 x86_64 x86_64 x86_64 GNU/Linux

    cass_a@x 0 10:57:31 ~ $ cat /etc/*release
    NAME="Red Hat Enterprise Linux Server"
    VERSION="7.6 (Maipo)"
    ID="rhel"


    Storage Layout
    cass_a@xx 0 10:46:28 ~ $ df -h
    Filesystem                Size  Used Avail Use% Mounted on
    /dev/mapper/vg01-lv_root   20G  2.2G   18G  11% /
    devtmpfs                   63G     0   63G   0% /dev
    tmpfs                      63G     0   63G   0% /dev/shm
    tmpfs                      63G  4.1G   59G   7% /run
    tmpfs                      63G     0   63G   0% /sys/fs/cgroup
    >> 4 cassandra instances
    /dev/sdd                  1.5T  802G  688G  54% /data/ssd4
    /dev/sda                  1.5T  798G  692G  54% /data/ssd1
    /dev/sdb                  1.5T  681G  810G  46% /data/ssd2
    /dev/sdc                  1.5T  558G  932G  38% /data/ssd3

    Cassandra load is about 200GB and the rest of the space is snapshots.

    CPU
    cass_a@x 127 10:58:47 ~ $ lscpu | grep -E '^Thread|^Core|^Socket|^CPU\('
    CPU(s):                64
    Thread(s) per core:    2
    Core(s) per socket:    16
    Socket(s):             2

    Description of problem:
    During repair of the cluster, we are seeing multiple corruptions in the log files on a lot of instances.  There seems to be no pattern to the corruption.  It seems that the repair job is finding all the corrupted files for us.  The repair will hang on the node where the corrupted file is found.  To fix this we remove/rename the datafile and bounce the Cassandra instance.  Our hardware/OS team have stated there is no problem on their side.  I do not believe it is the repair causing the corruption.

    We have maintenance scripts that run every night, running compactions and creating snapshots. I decided to turn these off, fix any corruptions we currently had, and ran a major compaction on all nodes. Once this was done we had a "clean" cluster, and we left the cluster for a few days.  After that we noticed one corruption in the cluster; this datafile was created after I turned off the maintenance scripts, so my theory of the scripts causing the issue was wrong.  We then kicked off another repair and started to find more corrupt files created after the maintenance scripts were turned off.


    So let me give you an example of a corrupted file, and maybe someone might be able to work through it with me?

    When this corrupted file was reported in the log, it looks like it was the repair that found it.
    $ journalctl -u cassmeta-cass_b.service --since "2019-08-07 22:25:00" --until "2019-08-07 22:45:00"

    Aug 07 22:30:33 cassandra[34611]: INFO  21:30:33 Writing Memtable-compactions_in_progress@830377457(0.008KiB serialized bytes, 1 ops, 0%/0% of on/off-heap limit)
    Aug 07 22:30:33 cassandra[34611]: ERROR 21:30:33 Failed creating a merkle tree for [repair #9587a200-b95a-11e9-8920-9f72868b8375 on KeyspaceMetadata/x, (-1476350953672479093,-1474461
    Aug 07 22:30:33 cassandra[34611]: ERROR 21:30:33 Exception in thread Thread[ValidationExecutor:825,1,main]
    Aug 07 22:30:33 cassandra[34611]: org.apache.cassandra.io.FSReadError: org.apache.cassandra.io.sstable.CorruptSSTableException: Corrupted: /x/ssd2/data/KeyspaceMetadata/x-1e453cb0

    Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:365) ~[apache-cassandra-2.2.13.jar:2.2.13]
    Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:361) ~[apache-cassandra-2.2.13.jar:2.2.13]
    Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:340) ~[apache-cassandra-2.2.13.jar:2.2.13]
    Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:382) ~[apache-cassandra-2.2.13.jar:2.2.13]
    Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:366) ~[apache-cassandra-2.2.13.jar:2.2.13]
    Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:81) ~[apache-cassandra-2.2.13.jar:2.2.13]
    Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52) ~[apache-cassandra-2.2.13.jar:2.2.13]
    Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46) ~[apache-cassandra-2.2.13.jar:2.2.13]
    Aug 07 22:30:33 cassandra[34611]: at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143) ~[guava-16.0.jar:na]
    Aug 07 22:30:33 cassandra[34611]: at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) ~[guava-16.0.jar:na]
    Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.io.sstable.SSTableIdentityIterator.hasNext(SSTableIdentityIterator.java:169) ~[apache-cassandra-2.2.13.jar:2.2.13]
    Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:202) ~[apache-cassandra-2.2.13.jar:2.2.13]
    Aug 07 22:30:33 cassandra[34611]: at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143) ~[guava-16.0.jar:na]
    Aug 07 22:30:33 cassandra[34611]: at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) ~[guava-16.0.jar:na]
    Aug 07 22:30:33 cassandra[34611]: at com.google.common.collect.Iterators$7.computeNext(Iterators.java:645) ~[guava-16.0.jar:na]
    Aug 07 22:30:33 cassandra[34611]: at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143) ~[guava-16.0.jar:na]
    Aug 07 22:30:33 cassandra[34611]: at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) ~[guava-16.0.jar:na]
    Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.db.ColumnIndex$Builder.buildForCompaction(ColumnIndex.java:174) ~[apache-cassandra-2.2.13.jar:2.2.13]
    Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.db.compaction.LazilyCompactedRow.update(LazilyCompactedRow.java:187) ~[apache-cassandra-2.2.13.jar:2.2.13]
    Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.repair.Validator.rowHash(Validator.java:201) ~[apache-cassandra-2.2.13.jar:2.2.13]
    Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.repair.Validator.add(Validator.java:150) ~[apache-cassandra-2.2.13.jar:2.2.13]
    Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:1166) ~[apache-cassandra-2.2.13.jar:2.2.13]
    Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.db.compaction.CompactionManager.access$600(CompactionManager.java:76) ~[apache-cassandra-2.2.13.jar:2.2.13]
    Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.db.compaction.CompactionManager$10.call(CompactionManager.java:736) ~[apache-cassandra-2.2.13.jar:2.2.13]
    Aug 07 22:30:33 cassandra[34611]: at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_172]
    Aug 07 22:30:33 cassandra[34611]: at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[na:1.8.0_172]
    Aug 07 22:30:33 cassandra[34611]: at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_172]
    Aug 07 22:30:33 cassandra[34611]: at java.lang.Thread.run(Thread.java:748) [na:1.8.0_172]
    Aug 07 22:30:33 cassandra[34611]: Caused by: org.apache.cassandra.io.sstable.CorruptSSTableException: Corrupted: /data/ssd2/data/KeyspaceMetadata/x-x/l
    Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBufferMmap(CompressedRandomAccessReader.java:216) ~[apache-cassandra-2.2.13.jar:2.2.13]
    Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:226) ~[apache-cassandra-2.2.13.jar:2.2.13]
    Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.io.compress.CompressedThrottledReader.reBuffer(CompressedThrottledReader.java:42) ~[apache-cassandra-2.2.13.jar:2.2.13]
    Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:352) ~[apache-cassandra-2.2.13.jar:2.2.13]
    Aug 07 22:30:33 cassandra[34611]: ... 27 common frames omitted
    Aug 07 22:30:33 cassandra[34611]: Caused by: org.apache.cassandra.io.compress.CorruptBlockException: (/data/ssd2/data/KeyspaceMetadata/x-x/lb-26203-big
    Aug 07 22:30:33 cassandra[34611]: at org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBufferMmap(CompressedRandomAccessReader.java:185) ~[apache-cassandra-2.2.13.jar:2.2.13]
    Aug 07 22:30:33 cassandra[34611]: ... 30 common frames omitted
    Aug 07 22:30:33 cassandra[34611]: INFO  21:30:33 Not a global repair, will not do anticompaction
    Aug 07 22:30:33 cassandra[34611]: ERROR 21:30:33 Stopping gossiper
    Aug 07 22:30:33 cassandra[34611]: WARN  21:30:33 Stopping gossip by operator request
    Aug 07 22:30:33 cassandra[34611]: INFO  21:30:33 Announcing shutdown
    Aug 07 22:30:33 cassandra[34611]: INFO  21:30:33 Node /10.2.57.37 state jump to shutdown
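To collect the affected files across all 72 instances, the corrupted sstable paths could be scraped out of the journal output; a rough sketch (the regex is my guess at the message shapes above, not an official format):

```python
import re

# Rough sketch: scrape corrupted sstable paths out of journalctl output so
# they can be gathered per node and compared across the cluster.
CORRUPT_RE = re.compile(r"Corrupt\w*Exception: (?:Corrupted: )?\(?([^\s)]+)")

def corrupted_paths(log_text):
    return sorted(set(CORRUPT_RE.findall(log_text)))

# Illustrative lines in the shape of the journal output quoted above
log = ("cassandra[34611]: org.apache.cassandra.io.sstable.CorruptSSTableException: "
       "Corrupted: /data/ssd2/data/KeyspaceMetadata/x-x/lb-26203-big\n"
       "cassandra[34611]: Caused by: org.apache.cassandra.io.compress."
       "CorruptBlockException: (/data/ssd2/data/KeyspaceMetadata/x-x/lb-26203-big")
print(corrupted_paths(log))  # ['/data/ssd2/data/KeyspaceMetadata/x-x/lb-26203-big']
```

Feeding it journalctl output from each instance would give a de-duplicated list of corrupt files per node.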



    So I went to the file system to see when this corrupt file was created, and it was created on July 30th at 15:55.

    root@x 0 01:14:03 ~ # ls -l /data/ssd2/data/KeyspaceMetadata/x-x/lb-26203-big-Data.db
    -rw-r--r-- 1 cass_b cass_b 3182243670 Jul 30 15:55 /data/ssd2/data/KeyspaceMetadata/x-x/lb-26203-big-Data.db
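To do this mtime correlation in bulk rather than one ls at a time, the data directories could be walked and each Data.db reported with its timestamp; a sketch with illustrative helper names (not Cassandra tooling):

```python
import os
import time

# Rough sketch: walk a data directory and report each sstable Data.db with
# its mtime, so corrupt files can be lined up against /var/log/messages
# timestamps in bulk.
def is_sstable_data_file(name):
    return name.endswith("-big-Data.db")

def data_files_with_mtimes(root):
    out = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if is_sstable_data_file(name):
                path = os.path.join(dirpath, name)
                stamp = time.strftime("%b %d %H:%M",
                                      time.localtime(os.path.getmtime(path)))
                out.append((stamp, path))
    return sorted(out)

print(is_sstable_data_file("lb-26203-big-Data.db"))  # True
```

For example, data_files_with_mtimes("/data/ssd2/data/KeyspaceMetadata") on one instance would give a time-sorted list to check against the logs.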




    So I checked /var/log/messages for errors around that time.
    The only thing that stands out to me is the message "Cannot perform a full major compaction as repaired and unrepaired sstables cannot be compacted together".  I'm not sure if this would be an issue though and cause corruption.

    Jul 30 15:55:06 x systemd: Created slice User Slice of root.
    Jul 30 15:55:06 x systemd: Started Session c165280 of user root.
    Jul 30 15:55:06 x audispd: node=x. type=USER_START msg=audit(1564498506.021:457933): pid=17533 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:session_open grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_tty_audit,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
    Jul 30 15:55:06 x systemd: Removed slice User Slice of root.
    Jul 30 15:55:14 x tag_audit_log: type=USER_CMD msg=audit(1564498506.013:457932): pid=17533 uid=509 auid=4294967295 ses=4294967295 msg='cwd="/" cmd=2F7573722F7362696E2F69706D692D73656E736F7273202D2D71756965742D6361636865202D2D7364722D63616368652D7265637265617465202D2D696E746572707265742D6F656D2D64617461202D2D6F75747075742D73656E736F722D7374617465202D2D69676E6F72652D6E6F742D617661696C61626C652D73656E736F7273202D2D6F75747075742D73656E736F722D7468726573686F6C6473 terminal=? res=success'
    Jul 30 15:55:14 x tag_audit_log: type=USER_START msg=audit(1564498506.021:457933): pid=17533 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:session_open grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_tty_audit,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
    Jul 30 15:55:19 x cassandra: INFO  14:55:19 Writing Memtable-compactions_in_progress@1462227999(0.008KiB serialized bytes, 1 ops, 0%/0% of on/off-heap limit)
    Jul 30 15:55:19 x cassandra: INFO  14:55:19 Cannot perform a full major compaction as repaired and unrepaired sstables cannot be compacted together. These two set of sstables will be compacted separately.
    Jul 30 15:55:19 x cassandra: INFO  14:55:19 Writing Memtable-compactions_in_progress@1198535528(1.002KiB serialized bytes, 57 ops, 0%/0% of on/off-heap limit)
    Jul 30 15:55:20 x cassandra: INFO  14:55:20 Writing Memtable-compactions_in_progress@2039409834(0.008KiB serialized bytes, 1 ops, 0%/0% of on/off-heap limit)
    Jul 30 15:55:24 x audispd: node=x. type=USER_LOGOUT msg=audit(1564498524.409:457934): pid=46620 uid=0 auid=464400029 ses=2747 msg='op=login id=464400029 exe="/usr/sbin/sshd" hostname=? addr=? terminal=/dev/pts/0 res=success'
    Jul 30 15:55:24 x audispd: node=x. type=USER_LOGOUT msg=audit(1564498524.409:457935): pid=4878 uid=0 auid=464400029 ses=2749 msg='op=login id=464400029 exe="/usr/sbin/sshd" hostname=? addr=? terminal=/dev/pts/1 res=success'

    Jul 30 15:55:57 x systemd: Created slice User Slice of root.
    Jul 30 15:55:57 x systemd: Started Session c165288 of user root.
    Jul 30 15:55:57 x audispd: node=x. type=USER_START msg=audit(1564498557.294:457958): pid=19687 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:session_open grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_tty_audit,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
    Jul 30 15:55:57 x audispd: node=x. type=USER_START msg=audit(1564498557.298:457959): pid=19690 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:session_open grantors=pam_keyinit,pam_systemd,pam_keyinit,pam_limits,pam_unix acct="cass_b" exe="/usr/sbin/runuser" hostname=? addr=? terminal=? res=success'
    Jul 30 15:55:58 x systemd: Removed slice User Slice of root.
    Jul 30 15:56:02 x cassandra: INFO  14:56:02 Writing Memtable-compactions_in_progress@1532791194(0.008KiB serialized bytes, 1 ops, 0%/0% of on/off-heap limit)
    Jul 30 15:56:02 x cassandra: INFO  14:56:02 Cannot perform a full major compaction as repaired and unrepaired sstables cannot be compacted together. These two set of sstables will be compacted separately.
    Jul 30 15:56:02 x cassandra: INFO  14:56:02 Writing Memtable-compactions_in_progress@1455399453(0.281KiB serialized bytes, 16 ops, 0%/0% of on/off-heap limit)
    Jul 30 15:56:04 x tag_audit_log: type=USER_CMD msg=audit(1564498555.190:457951): pid=19294 uid=509 auid=4294967295 ses=4294967295 msg='cwd="/" cmd=72756E75736572202D73202F62696E2F62617368202D6C20636173735F62202D632063617373616E6472612D6D6574612F63617373616E6472612F62696E2F6E6F6465746F6F6C2074707374617473 terminal=? res=success'




    We have checked a number of other things like NTP settings etc., but nothing is telling us what could cause so many corruptions across the entire cluster.
    Things were healthy with this cluster for months.  The only thing I can think of is that we started loading data from a load of 20GB per instance up to 200GB where it sits now; maybe this just highlighted the issue.



    Compaction and Compression on Keyspace CF's [mixture]
    All CF's are using compression.

    AND compaction = {'min_threshold': '4', 'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32'}
    AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.SnappyCompressor'}

    AND compaction = {'min_threshold': '4', 'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32'}
    AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}

    AND compaction = {'class': 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
    AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.SnappyCompressor'}

    --We are also using internode network compression:
    internode_compression: all



    Does anyone have any idea what I should check next?
    Our next theory is that there may be an issue with checksums, but I'm not sure where to go with this.
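On the checksum theory: compressed sstables carry a per-chunk checksum (I believe 2.2-era Cassandra uses Adler32 on each compressed chunk), which is why a bit flip surfaces as a CorruptBlockException on read. A simplified, illustrative stand-in for the idea (NOT the real on-disk format):

```python
import zlib

# Illustrative only: per-chunk checksumming in the spirit of compressed
# sstables. A single flipped bit changes exactly one chunk's checksum,
# pinpointing where the corruption lies.
CHUNK = 64 * 1024

def chunk_checksums(data, chunk_size=CHUNK):
    return [zlib.adler32(data[i:i + chunk_size])
            for i in range(0, len(data), chunk_size)]

def find_corrupt_chunks(data, expected, chunk_size=CHUNK):
    return [i for i, (a, e) in
            enumerate(zip(chunk_checksums(data, chunk_size), expected))
            if a != e]

original = bytes(range(256)) * 1024      # 256 KiB of sample data (4 chunks)
good = chunk_checksums(original)
flipped = bytearray(original)
flipped[70000] ^= 0x01                   # single bit flip lands in chunk 1
print(find_corrupt_chunks(bytes(flipped), good))  # [1]
```

The point being: the checksum machinery detects corruption but cannot say whether it was introduced by disk, memory, or the network before the chunk was written.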

Any help would be very much appreciated before I lose the last bit of hair I have on my head.

Kind Regards,

Phil

On Wed, 7 Aug 2019 at 20:51, Nitan Kainth <nitankainth@gmail.com> wrote:

Repair during upgrade has caused corruption too.

Also, dropping and adding columns with the same name but a different type.


On Aug 7, 2019, at 2:42 PM, Jeff Jirsa <jjirsa@gmail.com> wrote:

Is compression enabled?

If not, bit flips on disk can corrupt data files, and reads + repair may send that corruption to other hosts in the cluster.

On Aug 7, 2019, at 3:46 AM, Philip Ó Condúin <philipoconduin@gmail.com> wrote:

Hi All,

I am currently experiencing multiple datafile corruptions across most nodes in my cluster; there seems to be no pattern to the corruption.  I'm starting to think it might be a bug.  We're using Cassandra 2.2.13.

Without going into detail about the issue, I just want to confirm something.

Can someone share with me a list of scenarios that would cause corruption?

1. OS failure
2. Cassandra disturbed during the writing

etc. etc.

I need to investigate each scenario and don't want to leave any out.

--
Regards,
Phil


--
Regards,
Phil

--
Regards,
Phil