cassandra-user mailing list archives

From "Steinmaurer, Thomas" <thomas.steinmau...@dynatrace.com>
Subject RE: Major compaction ignoring one SSTable? (was Re: Fresh SSTable files (due to repair?) in a static table (was Re: Drop TTLd rows: upgradesstables -a or scrub?))
Date Tue, 18 Sep 2018 08:38:12 GMT
Alex,

any indications in Cassandra log about insufficient disk space during compactions?
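For instance, warnings along the lines of "Not enough space for compaction" in system.log would point to this. A minimal sketch of scanning for such messages (the sample log line below is illustrative, not taken from this cluster):

```python
# Sketch: scan Cassandra log lines for compaction disk-space warnings.
# The sample lines are placeholders for illustration only.
import re

SAMPLE_LOG = [
    "INFO  [CompactionExecutor:2] 2018-09-18 00:13:48 CompactionTask.java - Compacted ...",
    "ERROR [CompactionExecutor:3] 2018-09-17 23:59:59 CompactionTask.java - "
    "Not enough space for compaction, estimated sstables = 1, expected write size = 106000000000",
]

pattern = re.compile(r"not enough space", re.IGNORECASE)
hits = [line for line in SAMPLE_LOG if pattern.search(line)]
print(len(hits))  # number of disk-space warnings found in the sample
```

In a real cluster one would read the actual system.log instead of the sample list above.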

Thomas

From: Oleksandr Shulgin <oleksandr.shulgin@zalando.de>
Sent: Tuesday, 18 September 2018 10:01
To: User <user@cassandra.apache.org>
Subject: Major compaction ignoring one SSTable? (was Re: Fresh SSTable files (due to repair?) in a static table (was Re: Drop TTLd rows: upgradesstables -a or scrub?))

On Mon, Sep 17, 2018 at 4:29 PM Oleksandr Shulgin <oleksandr.shulgin@zalando.de> wrote:

Thanks for your reply!  Indeed it could be coming from a single-SSTable compaction; I hadn't thought about that.  Would looking into the compaction_history table be useful to trace it down?
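The system.compaction_history table can indeed be queried directly; a sketch of the relevant columns (they match the record shown further down in this thread). Since id is the partition key, filtering by table has to happen client-side or with ALLOW FILTERING; the result rows below are placeholder data:

```python
# Sketch of a compaction_history lookup; column names match what
# system.compaction_history exposes. Filtering by table must be done
# client-side (or with ALLOW FILTERING), since id is the partition key.
query = (
    "SELECT id, keyspace_name, columnfamily_name, compacted_at, "
    "bytes_in, bytes_out, rows_merged FROM system.compaction_history;"
)

# Illustrative rows, shaped like what a driver would return (placeholder data):
rows = [
    {"keyspace_name": "YYY", "columnfamily_name": "XXX", "bytes_in": 223804299627},
    {"keyspace_name": "YYY", "columnfamily_name": "other", "bytes_in": 123},
]
mine = [r for r in rows if r["columnfamily_name"] == "XXX"]
print(len(mine))  # compactions recorded for the table of interest
```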

Hello,

Yet another unexpected thing we are seeing: after a major compaction completed on one of the nodes, there are two SSTables instead of only one (times are UTC):

-rw-r--r-- 1 999 root 99G Sep 18 00:13 mc-583-big-Data.db
-rw-r--r-- 1 999 root 70G Mar  8  2018 mc-74-big-Data.db

The more recent one must be the result of the major compaction on this table, but why was the other one, from March, not included?
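One possibility worth checking (not confirmed in this thread): since incremental repair was introduced in Cassandra 2.1, compaction does not mix repaired and unrepaired SSTables, so a differing repairedAt timestamp (visible via the sstablemetadata tool) would explain a major compaction leaving one file untouched. A sketch of that grouping logic, with placeholder metadata rather than real values from these files:

```python
# Hedged sketch: compaction treats repaired and unrepaired SSTables as
# separate sets. The repaired_at values below are illustrative placeholders;
# on a real node one would read them with the sstablemetadata tool.
def is_repaired(sstable):
    return sstable["repaired_at"] != 0

sstables = [
    {"name": "mc-583-big-Data.db", "repaired_at": 0},          # unrepaired
    {"name": "mc-74-big-Data.db", "repaired_at": 1520500000},  # marked repaired
]

# Compaction would operate on each group separately:
groups = {}
for s in sstables:
    groups.setdefault(is_repaired(s), []).append(s["name"])
print(groups)
```

If the two files do fall into different groups like this, a major compaction would produce (at least) one output SSTable per group.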

The logs don't help us understand the reason, and from the compaction history on this node the following record seems to be the only trace:

@ Row 1
-------------------+------------------------------------------------------------------
 id                | b6feb180-bad7-11e8-9f42-f1a67c22839a
 bytes_in          | 223804299627
 bytes_out         | 105322622473
 columnfamily_name | XXX
 compacted_at      | 2018-09-18 00:13:48+0000
 keyspace_name     | YYY
 rows_merged       | {1: 31321943, 2: 11722759, 3: 382232, 4: 23405, 5: 2250, 6: 134}

This also doesn't tell us a lot.

This happened on only one of the 10 nodes, even though the same command was used to start the major compaction on this table on all of them.

Any ideas what could be the reason?

For now we have simply started the major compaction again to ensure these last two SSTables are compacted together, but we would really like to understand the reason for this behavior.

Regards,
--
Alex
