cassandra-commits mailing list archives

From "Philip Thompson (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-8052) OOMs from allocating large arrays when deserializing (e.g. probably corrupted EstimatedHistogram data)
Date Wed, 01 Apr 2015 15:22:53 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-8052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14390788#comment-14390788 ]

Philip Thompson commented on CASSANDRA-8052:
--------------------------------------------

I believe the correct operational behavior is to run scrub on the afflicted sstable before
attempting to replace the node. Have you reproduced a similar exception in 2.0 or 2.1? 
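
For reference, a rough sketch of that scrub step (keyspace and table names below are placeholders; since the node OOMs during startup, the offline tool is probably the relevant one):

{code}
# offline, with the node stopped:
sstablescrub <keyspace> <table>

# or, on a node that is able to start:
nodetool scrub <keyspace> <table>
{code}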

[~aweisberg], when you get the chance, can you look over the reporter's concerns for 2.1 and
let me know if you think there's a possible risk there?

> OOMs from allocating large arrays when deserializing (e.g. probably corrupted EstimatedHistogram data)
> -----------------------------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-8052
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-8052
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>         Environment: linux
>            Reporter: Matt Byrd
>              Labels: OOM, checksum, corruption, oom, serialization
>
> We've seen nodes with what are presumably corrupted sstables repeatedly OOM on attempted startup with a message such as:
> {code}
> java.lang.OutOfMemoryError: Java heap space
>  at org.apache.cassandra.utils.EstimatedHistogram$EstimatedHistogramSerializer.deserialize(EstimatedHistogram.java:266)
>  at org.apache.cassandra.io.sstable.SSTableMetadata$SSTableMetadataSerializer.deserialize(SSTableMetadata.java:292)
>  at org.apache.cassandra.io.sstable.SSTableMetadata$SSTableMetadataSerializer.deserialize(SSTableMetadata.java:282)
>  at org.apache.cassandra.io.sstable.SSTableReader.openMetadata(SSTableReader.java:234)
>  at org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:194)
>  at org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:157)
>  at org.apache.cassandra.io.sstable.SSTableReader$1.run(SSTableReader.java:273)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>  at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>  at java.lang.Thread.run(Thread.java:722)
> {code}
> It's probably not a coincidence that the exception is thrown here, since this seems to be the first read of the file.
> Presumably the correct operational process is just to replace the node,
> however I was wondering whether, in general, we might want to validate lengths when we deserialise?
> This could avoid allocating large byte buffers that cause unpredictable OOMs, and instead throw an exception that can be handled as appropriate.
> In this particular instance, there is no need for an unduly large size for the estimated histogram.
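> As an illustration, a minimal sketch of the kind of guard meant here (the method name, bound and exception choice are illustrative only, not the actual Cassandra code; DataInput/IOException come from java.io):
> {code}
> // Sketch: reject an implausible size before allocating, so corruption
> // surfaces as a checked exception instead of an OutOfMemoryError.
> private static long[] readBuckets(DataInput in) throws IOException
> {
>     int size = in.readInt();
>     if (size < 1 || size > 100000)
>         throw new IOException("Implausible EstimatedHistogram bucket count: " + size);
>     long[] buckets = new long[size];
>     for (int i = 0; i < size; i++)
>         buckets[i] = in.readLong();
>     return buckets;
> }
> {code}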
> Admittedly things are slightly different in 2.1, though I suspect a similar thing might have happened with:
> {code}
>        int numComponents = in.readInt();
>        // read toc
>        Map<MetadataType, Integer> toc = new HashMap<>(numComponents); 
> {code}
> Doing a "find usages" of DataInputStream.readInt() reveals quite a few places where an int is read in and then an ArrayList, array or map of that size is created.
> In some cases the size might validly range over the whole of a Java int,
> or the read might be in a performance-critical or delicate piece of code where one doesn't want such checks.
> Also, there are other checksums and mechanisms at play which make some input less likely to be corrupted.
> However, is it maybe worth a pass over instances of this type of input, to try to avoid such cases where it makes sense?
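> One shape such a pass could take is a small shared helper (again just a sketch, with an invented name and a caller-supplied bound, not an existing Cassandra API):
> {code}
> // Hypothetical utility: read a collection size and reject anything
> // outside [0, max] before the caller allocates.
> public static int readCheckedSize(DataInput in, int max) throws IOException
> {
>     int size = in.readInt();
>     if (size < 0 || size > max)
>         throw new IOException("Deserialized size " + size + " outside expected range [0, " + max + "]");
>     return size;
> }
>
> // e.g. for the 2.1 metadata toc above, something like:
> // int numComponents = readCheckedSize(in, MetadataType.values().length);
> // Map<MetadataType, Integer> toc = new HashMap<>(numComponents);
> {code}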
> Perhaps there are less likely but worse failure modes present and hidden? 
> E.g. if the deserialisation happens to be for a message sent to some or all nodes, say.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
