Delivery pressures being what they are, I have moved to 1.0.10, and I was able to run for about 12 hours with no issues (although performance seemed somewhat variable).

 

However, after all that work I issued a truncate on the old column family (the one replaced by this process), and then I hit an out-of-memory condition.

 

Is dropping the column family better here? (I was doing truncate then drop.) If I just drop, does the data go away?
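
In cassandra-cli terms, what I am doing is roughly this (using the pan column family from the schema quoted below):

  truncate pan_d721fd40fd9443aa81cc6f59c8e047c6;
  drop column family pan_d721fd40fd9443aa81cc6f59c8e047c6;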

 

As to your specific requests for information, I will try to get to them. The CPU is high but not saturated; it is a big box and there is still some CPU to spare.

 

Wade

 

From: aaron morton [mailto:aaron@thelastpickle.com]
Sent: Wednesday, June 06, 2012 4:05 AM
To: user@cassandra.apache.org
Subject: Re: memory issue on 1.1.0

 

I looked through the log again. Still looks like it's overloaded and not handling the overload very well. 

 

It looks like a sustained write load of around 280K columns every 5 minutes for about 5 hours. It may be that the CPU is the bottleneck when it comes to GC throughput. You are hitting ParNew issues from the very start, and end up with 20-second CMS pauses. Do you see high CPU load?

 

Can you enable the GC logging options in cassandra-env.sh?
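
The block in cassandra-env.sh is commented out by default and looks something like this; uncomment it and restart to turn GC logging on:

  # GC logging options -- uncomment to enable
  JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails"
  JVM_OPTS="$JVM_OPTS -XX:+PrintGCDateStamps"
  JVM_OPTS="$JVM_OPTS -XX:+PrintHeapAtGC"
  JVM_OPTS="$JVM_OPTS -XX:+PrintTenuringDistribution"
  JVM_OPTS="$JVM_OPTS -XX:+PrintGCApplicationStoppedTime"
  JVM_OPTS="$JVM_OPTS -XX:+PrintPromotionFailure"
  JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"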

 

Can you throttle the test back to a level where the server does not fail?

 

Alternatively, can you dump the heap when it gets full and see what is taking up all the space?
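
For example, with jmap against the Cassandra pid (then open the resulting file in MAT or a similar analyzer):

  jmap -dump:live,format=b,file=/tmp/cassandra-heap.hprof <pid>

Or start the JVM with -XX:+HeapDumpOnOutOfMemoryError so you get a dump automatically at the moment it dies.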

 

Cheers

 

-----------------

Aaron Morton

Freelance Developer

@aaronmorton

 

On 6/06/2012, at 2:12 PM, Poziombka, Wade L wrote:



Ok, so I have completely refactored to remove deletes and it still fails, so it is completely unrelated to deletes.

I guess I need to go back to 1.0.10?  When I originally evaluated, I ran 1.0.8... perhaps I went a bridge too far with 1.1.

I don't think I am doing anything exotic here.

Here is my keyspace and column family definition:

KsDef(name:TB_UNIT, strategy_class:org.apache.cassandra.locator.SimpleStrategy, strategy_options:{replication_factor=3},
cf_defs:[

CfDef(keyspace:TB_UNIT, name:token, column_type:Standard, comparator_type:BytesType, column_metadata:[ColumnDef(name:70 61 6E 45 6E 63, validation_class:BytesType), ColumnDef(name:63 72 65 61 74 65 54 73, validation_class:DateType), ColumnDef(name:63 72 65 61 74 65 44 61 74 65, validation_class:DateType, index_type:KEYS, index_name:TokenCreateDate), ColumnDef(name:65 6E 63 72 79 70 74 69 6F 6E 53 65 74 74 69 6E 67 73 49 44, validation_class:UTF8Type, index_type:KEYS, index_name:EncryptionSettingsID)], caching:keys_only),

CfDef(keyspace:TB_UNIT, name:pan_d721fd40fd9443aa81cc6f59c8e047c6, column_type:Standard, comparator_type:BytesType, caching:keys_only),

CfDef(keyspace:TB_UNIT, name:counters, column_type:Standard, comparator_type:BytesType, column_metadata:[ColumnDef(name:75 73 65 43 6F 75 6E 74, validation_class:CounterColumnType)], default_validation_class:CounterColumnType, caching:keys_only)

])
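
(Decoded, the hex column names above are panEnc, createTs, createDate, encryptionSettingsID and useCount.)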


-----Original Message-----
From: Poziombka, Wade L [mailto:wade.l.poziombka@intel.com]
Sent: Tuesday, June 05, 2012 3:09 PM
To: user@cassandra.apache.org
Subject: RE: memory issue on 1.1.0

Thank you.  I do have some of the same observations.  Do you do deletes?

My observation is that without deletes (or column updates, I guess) I can run forever, happily.  But when I run operations that delete and modify column values (what for me is a batch process), I run into this.

Reading bug https://issues.apache.org/jira/browse/CASSANDRA-3741, the advice is to NOT do deletes individually but to truncate instead.  I am scrambling to do this, but am curious whether it will be worth the effort.
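
Over the raw Thrift API, the truncate-instead-of-delete approach amounts to something like this (a sketch, not my actual code):

import org.apache.cassandra.thrift.Cassandra;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;

public class TruncateInsteadOfDelete {
    public static void main(String[] args) throws Exception {
        TSocket socket = new TSocket("localhost", 9160);
        TFramedTransport transport = new TFramedTransport(socket);
        Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));
        transport.open();
        try {
            client.set_keyspace("TB_UNIT");
            // one truncate instead of millions of individual deletions
            // (and the tombstones those deletions would leave behind)
            client.truncate("pan_d721fd40fd9443aa81cc6f59c8e047c6");
        } finally {
            transport.close();
        }
    }
}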

Wade

-----Original Message-----
From: Mina Naguib [mailto:mina.naguib@bloomdigital.com]
Sent: Tuesday, June 05, 2012 2:24 PM
To: user@cassandra.apache.org
Subject: Re: memory issue on 1.1.0


Hi Wade

I don't know if your scenario matches mine, but I've been struggling with memory pressure in 1.x as well.  I made the jump from 0.7.9 to 1.1.0, along with enabling compression and levelled compactions, so I don't know which specifically is the main culprit.

Specifically, all my nodes seem to "lose" heap memory.  As ParNew and CMS do their job, over any reasonable period of time, the "floor" of memory after a GC keeps rising.  This is quite visible if you leave jconsole connected for a day or so, and manifests itself as a funny-looking cone like so: http://mina.naguib.ca/images/cassandra_jconsole.png

Once memory pressure reaches a point where the heap can't be maintained reliably below 75%, Cassandra goes into survival mode - via a bunch of tunables in cassandra.yaml it'll do things like flush memtables, drop caches, etc. - all of which, in my experience, especially with the recent off-heap data structures, exacerbate the problem.
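
For reference, the tunables in question, with their stock 1.1 cassandra.yaml defaults, are roughly:

  flush_largest_memtables_at: 0.75
  reduce_cache_sizes_at: 0.85
  reduce_cache_capacity_to: 0.6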

I've been meaning, of course, to collect enough technical data to file a bug report, but haven't had the time.  I have not yet tested 1.1.1 to see if it improves the situation.

What I have found, however, is a band-aid, which you can see at the rightmost section of the graph in the screenshot I posted.  It is simply to hit the "Perform GC" button in jconsole.  It seems that a full System.gc() *DOES* reclaim heap memory that ParNew and CMS fail to reclaim.

On my production cluster I have a full-GC via JMX scheduled in a rolling fashion every 4 hours.  It's extremely expensive (20-40 seconds of unresponsiveness) but is a necessary evil in my situation.  Without it, my nodes enter a nasty spiral of constant flushing, constant compactions, high heap usage, instability and high latency.
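
The scheduled full GC itself is nothing fancy; any JMX client that can invoke the gc operation on java.lang:type=Memory will do. A minimal sketch (7199 is the default Cassandra JMX port):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ForceFullGC {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + args[0] + ":7199/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // equivalent to the "Perform GC" button in jconsole:
            // triggers System.gc() inside the Cassandra JVM
            mbs.invoke(new ObjectName("java.lang:type=Memory"), "gc", null, null);
        } finally {
            connector.close();
        }
    }
}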


On 2012-06-05, at 2:56 PM, Poziombka, Wade L wrote:


Alas, upgrading to 1.1.1 did not solve my issue.

 

-----Original Message-----
From: Brandon Williams [mailto:driftx@gmail.com]
Sent: Monday, June 04, 2012 11:24 PM
To: user@cassandra.apache.org
Subject: Re: memory issue on 1.1.0

Perhaps the deletes:
https://issues.apache.org/jira/browse/CASSANDRA-3741

-Brandon

On Sun, Jun 3, 2012 at 6:12 PM, Poziombka, Wade L <wade.l.poziombka@intel.com> wrote:

Running a very write-intensive (new column, delete old column, etc.) process and failing on memory.  Log file attached.

 

Curiously, I have never seen this when adding new data; I have sent hundreds of millions of "new" transactions in the past.  It seems to happen only when I modify.  My process is as follows:

 

I do a key slice to get the rows to modify in batches of 100, then modify those columns in separate threads.  I advance the slice by setting the start key to the last key of the previous batch.  The mutations done are: update a column value in one column family (token), and delete a column and add a new column in another (pan).  A sketch of the paging loop is below.
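
In Thrift terms, the paging loop looks roughly like this (a sketch; the real code hands the mutations off to worker threads):

import java.nio.ByteBuffer;
import java.util.List;
import org.apache.cassandra.thrift.Cassandra;
import org.apache.cassandra.thrift.ColumnParent;
import org.apache.cassandra.thrift.ConsistencyLevel;
import org.apache.cassandra.thrift.KeyRange;
import org.apache.cassandra.thrift.KeySlice;
import org.apache.cassandra.thrift.SlicePredicate;
import org.apache.cassandra.thrift.SliceRange;

public class BatchWalker {
    static void walk(Cassandra.Client client) throws Exception {
        KeyRange range = new KeyRange(100);            // batches of 100 rows
        range.setStart_key(new byte[0]);               // empty key = start of range
        range.setEnd_key(new byte[0]);
        SlicePredicate predicate = new SlicePredicate();
        predicate.setSlice_range(new SliceRange(
                ByteBuffer.wrap(new byte[0]), ByteBuffer.wrap(new byte[0]), false, 100));
        ColumnParent parent = new ColumnParent("token");

        List<KeySlice> page;
        do {
            page = client.get_range_slices(parent, predicate, range, ConsistencyLevel.QUORUM);
            for (KeySlice row : page) {
                // update the column value in token, and delete/add the
                // corresponding column in pan (done on separate threads)
            }
            if (page.isEmpty()) break;
            // advance the slice: the next batch starts at the last key of this one
            range.setStart_key(page.get(page.size() - 1).getKey());
        } while (page.size() > 1);  // the start key repeats, so a 1-row page means done
    }
}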

 

Runs well until after about 5 million rows, then it seems to run out of memory.  Note that these column families are quite small.

 

WARN [ScheduledTasks:1] 2012-06-03 17:49:01,558 GCInspector.java (line 145) Heap is 0.7967470834946492 full.  You may need to reduce memtable and/or cache sizes.  Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically. Cassandra will now flush up to the two largest memtables to free up memory.
INFO [ScheduledTasks:1] 2012-06-03 17:49:01,559 StorageService.java (line 2772) Unable to reduce heap usage since there are no dirty column families
INFO [GossipStage:1] 2012-06-03 17:49:01,999 Gossiper.java (line 797) InetAddress /10.230.34.170 is now UP
INFO [ScheduledTasks:1] 2012-06-03 17:49:10,048 GCInspector.java (line 122) GC for ParNew: 206 ms for 1 collections, 7345969520 used; max is 8506048512
INFO [ScheduledTasks:1] 2012-06-03 17:49:53,187 GCInspector.java (line 122) GC for ConcurrentMarkSweep: 12770 ms for 1 collections, 5714800208 used; max is 8506048512

 

----------------
Keyspace: keyspace
      Read Count: 50042632
      Read Latency: 0.23157864418482224 ms.
      Write Count: 44948323
      Write Latency: 0.019460829472992797 ms.
      Pending Tasks: 0
              Column Family: pan
              SSTable count: 5
              Space used (live): 1977467326
              Space used (total): 1977467326
              Number of Keys (estimate): 16334848
              Memtable Columns Count: 0
              Memtable Data Size: 0
              Memtable Switch Count: 74
              Read Count: 14985122
              Read Latency: 0.408 ms.
              Write Count: 19972441
              Write Latency: 0.022 ms.
              Pending Tasks: 0
              Bloom Filter False Postives: 829
              Bloom Filter False Ratio: 0.00073
              Bloom Filter Space Used: 37048400
              Compacted row minimum size: 125
              Compacted row maximum size: 149
              Compacted row mean size: 149

              Column Family: token
              SSTable count: 4
              Space used (live): 1250973873
              Space used (total): 1250973873
              Number of Keys (estimate): 14217216
              Memtable Columns Count: 0
              Memtable Data Size: 0
              Memtable Switch Count: 49
              Read Count: 30059563
              Read Latency: 0.167 ms.
              Write Count: 14985488
              Write Latency: 0.014 ms.
              Pending Tasks: 0
              Bloom Filter False Postives: 13642
              Bloom Filter False Ratio: 0.00322
              Bloom Filter Space Used: 28002984
              Compacted row minimum size: 150
              Compacted row maximum size: 258
              Compacted row mean size: 224

              Column Family: counters
              SSTable count: 2
              Space used (live): 561549994
              Space used (total): 561549994
              Number of Keys (estimate): 9985024
              Memtable Columns Count: 0
              Memtable Data Size: 0
              Memtable Switch Count: 38
              Read Count: 4997947
              Read Latency: 0.092 ms.
              Write Count: 9990394
              Write Latency: 0.023 ms.
              Pending Tasks: 0
              Bloom Filter False Postives: 191
              Bloom Filter False Ratio: 0.37525
              Bloom Filter Space Used: 18741152
              Compacted row minimum size: 125
              Compacted row maximum size: 179
              Compacted row mean size: 150
----------------