cassandra-commits mailing list archives

From "Joshua McKenzie (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (CASSANDRA-11448) Running OOS should trigger the disk failure policy
Date Fri, 01 Apr 2016 15:59:25 GMT

     [ https://issues.apache.org/jira/browse/CASSANDRA-11448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joshua McKenzie updated CASSANDRA-11448:
----------------------------------------
       Resolution: Fixed
    Fix Version/s:     (was: 3.0.x)
                       (was: 2.2.x)
                       (was: 2.1.x)
                       (was: 3.x)
                   3.5
                   3.0.5
                   2.2.6
                   2.1.14
           Status: Resolved  (was: Ready to Commit)

[Committed|https://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=commit;h=f3b3c410a0d84a4348cf05954b38df6b087762a7] to the various versions.

On a second reading, I agree with you re: needing to catch on the singleton case, as it should just propagate up to {{ColumnFamilyStore.flushMemtable}} and be caught.

> Running OOS should trigger the disk failure policy
> --------------------------------------------------
>
>                 Key: CASSANDRA-11448
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11448
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Brandon Williams
>            Assignee: Branimir Lambov
>             Fix For: 2.1.14, 2.2.6, 3.0.5, 3.5
>
>
> Currently when you run OOS, this happens:
> {noformat}
> ERROR [MemtableFlushWriter:8561] 2016-03-28 01:17:37,047  CassandraDaemon.java:229 - Exception in thread Thread[MemtableFlushWriter:8561,5,main]   java.lang.RuntimeException: Insufficient disk space to write 48 bytes
>     at org.apache.cassandra.io.util.DiskAwareRunnable.getWriteDirectory(DiskAwareRunnable.java:29) ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
>     at org.apache.cassandra.db.Memtable$FlushRunnable.runMayThrow(Memtable.java:332) ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
>     at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
>     at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297) ~[guava-16.0.1.jar:na]
>     at org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1120) ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_66]
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[na:1.8.0_66]
>     at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66]
> {noformat}
> Now your flush writer is dead and postflush tasks build up forever.  Instead we should throw FSWE and trigger the failure policy.
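The idea in the issue (replace the bare {{RuntimeException}} with an {{FSWriteError}} so the disk failure policy can react, instead of the flush-writer thread dying silently) can be illustrated with a minimal standalone sketch. Note this is a simplified stand-in, not Cassandra's actual code: the class {{DiskFailurePolicySketch}}, its {{getWriteDirectory}} and {{flush}} methods, and the boolean policy flag are all hypothetical.

```java
// Simplified, hypothetical sketch of the proposed behavior; not Cassandra's
// real classes. A dedicated FSWriteError lets the flush path distinguish a
// disk-space failure (which should trigger the disk failure policy) from
// arbitrary RuntimeExceptions that would otherwise just kill the thread.
public class DiskFailurePolicySketch {
    // Stand-in for org.apache.cassandra.io.FSWriteError.
    static class FSWriteError extends RuntimeException {
        FSWriteError(String message) { super(message); }
    }

    // Stand-in for whatever the configured disk_failure_policy does
    // (e.g. stop, best_effort, die).
    static boolean diskFailurePolicyTriggered = false;

    // Stand-in for DiskAwareRunnable.getWriteDirectory: on insufficient
    // space, throw FSWriteError instead of a bare RuntimeException.
    static String getWriteDirectory(long writeSize, long availableBytes) {
        if (availableBytes < writeSize)
            throw new FSWriteError("Insufficient disk space to write " + writeSize + " bytes");
        return "/var/lib/cassandra/data";
    }

    // Stand-in for the memtable flush path: FSWriteError now triggers the
    // failure policy before propagating, so postflush tasks don't pile up
    // behind a dead flush writer.
    static void flush(long writeSize, long availableBytes) {
        try {
            getWriteDirectory(writeSize, availableBytes);
        } catch (FSWriteError e) {
            diskFailurePolicyTriggered = true;
            throw e;
        }
    }

    public static void main(String[] args) {
        try {
            flush(48, 0); // mirrors the log: 48 bytes needed, no space left
        } catch (FSWriteError expected) {
            System.out.println("policy triggered: " + diskFailurePolicyTriggered);
        }
    }
}
```

Running the sketch shows the failure policy firing rather than the exception vanishing with the worker thread.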



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
