Subject: Re: How much disk is needed to compact Leveled compaction?
From: DuyHai Doan
To: user@cassandra.apache.org
Date: Tue, 7 Apr 2015 00:17:56 +0200

If you have SSDs, you may be able to afford switching to the leveled
compaction strategy, which requires much less free space than the 50% of
the current dataset that compaction can otherwise demand.
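A minimal sketch of that switch in CQL, assuming a hypothetical keyspace
"ks" and table "tbl" (the thread never names the actual schema); 160 is the
default sstable_size_in_mb mentioned further down the thread:

    -- Switch one table to leveled compaction.
    -- 'ks' and 'tbl' are placeholders; substitute the real keyspace/table.
    ALTER TABLE ks.tbl
    WITH compaction = {
        'class': 'LeveledCompactionStrategy',
        'sstable_size_in_mb': 160
    };

Note that after the switch Cassandra starts reorganizing the existing
SSTables into levels, which itself costs I/O and temporary space, so it is
best done while the node still has headroom.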
On 5 Apr 2015, at 19:04, "daemeon reiydelle" <daemeonr@gmail.com> wrote:

> You appear to have multiple Java binaries in your path. That needs to be
> resolved.
>
> sent from my mobile
> Daemeon C.M. Reiydelle
> USA 415.501.0198
> London +44.0.20.8144.9872
>
> On Apr 5, 2015 1:40 AM, "Jean Tremblay"
> <jean.tremblay@zen-innovations.com> wrote:
>
>> Hi,
>> I have a cluster of 5 nodes. We use Cassandra 2.1.3.
>>
>> The 5 nodes use about 50-57% of their 1 TB SSD.
>> One node managed to compact all its data. During one compaction this
>> node used almost 100% of the drive. The other nodes refuse to continue
>> compaction, claiming that there is not enough disk space.
>>
>> From the documentation, LeveledCompactionStrategy should be able to
>> compact my data, at least that is how I understand it:
>>
>> <<Size-tiered compaction requires at least as much free disk space for
>> compaction as the size of the largest column family. Leveled compaction
>> needs much less space for compaction, only 10 * sstable_size_in_mb.
>> However, even if you're using leveled compaction, you should leave much
>> more free disk space available than this to accommodate streaming, repair,
>> and snapshots, which can easily use 10GB or more of disk space.
>> Furthermore, disk performance tends to decline after 80 to 90% of the disk
>> space is used, so don't push the boundaries.>>
>>
>> This is the disk usage. Node 4 is the only one that could compact
>> everything.
>> node0: /dev/disk1  931Gi  534Gi  396Gi  57%  /
>> node1: /dev/disk1  931Gi  513Gi  417Gi  55%  /
>> node2: /dev/disk1  931Gi  526Gi  404Gi  57%  /
>> node3: /dev/disk1  931Gi  507Gi  424Gi  54%  /
>> node4: /dev/disk1  931Gi  475Gi  456Gi  51%  /
>>
>> When I try to compact the other ones I get this:
>>
>> objc[18698]: Class JavaLaunchHelper is implemented in both
>> /Library/Java/JavaVirtualMachines/jdk1.8.0_40.jdk/Contents/Home/bin/java
>> and /Library/Java/JavaVirtualMachines/jdk1.8.0_40.jdk/Contents/Home/jre/lib/libinstrument.dylib.
>> One of the two will be used. Which one is undefined.
>> error: Not enough space for compaction, estimated sstables = 2894,
>> expected write size = 485616651726
>> -- StackTrace --
>> java.lang.RuntimeException: Not enough space for compaction, estimated
>> sstables = 2894, expected write size = 485616651726
>>     at org.apache.cassandra.db.compaction.CompactionTask.checkAvailableDiskSpace(CompactionTask.java:293)
>>     at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:127)
>>     at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>>     at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:76)
>>     at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>>     at org.apache.cassandra.db.compaction.CompactionManager$7.runMayThrow(CompactionManager.java:512)
>>     at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>>
>> I did not set sstable_size_in_mb; I use the 160 MB default.
>>
>> Is it normal that compaction needs so much disk space? What would be
>> the best solution to overcome this problem?
>>
>> Thanks for your help
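To put the numbers above in perspective: with the 160 MB default, the
quoted documentation's figure for leveled compaction works out to roughly
10 * 160 MB = 1.6 GB of headroom per compaction, whereas the failed
compaction was trying to write 485616651726 bytes, about 452 GiB, in a
single pass, close to the ~500 GiB each node holds and thus much closer to
the size-tiered worst case than to the leveled figure. One way to check
which strategy each table is actually configured with on a 2.1 cluster is
the system schema; the table and column names below are from the 2.1-era
system keyspace (this metadata moved to system_schema.tables in 3.x), and
'ks' is a placeholder:

    -- Cassandra 2.1: show the configured compaction strategy per table.
    SELECT columnfamily_name, compaction_strategy_class
    FROM system.schema_columnfamilies
    WHERE keyspace_name = 'ks';

If the result shows SizeTieredCompactionStrategy, the 50% free-space rule
of thumb applies and the error above is the expected safety check rather
than a bug.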