flink-dev mailing list archives

From "Stefan Richter (JIRA)" <j...@apache.org>
Subject [jira] [Created] (FLINK-6761) Limitation for maximum state size per key in RocksDB backend
Date Mon, 29 May 2017 12:38:04 GMT
Stefan Richter created FLINK-6761:

             Summary: Limitation for maximum state size per key in RocksDB backend
                 Key: FLINK-6761
                 URL: https://issues.apache.org/jira/browse/FLINK-6761
             Project: Flink
          Issue Type: Bug
          Components: State Backends, Checkpointing
    Affects Versions: 1.2.1, 1.3.0
            Reporter: Stefan Richter
            Priority: Critical

RocksDB's JNI bridge allows putting and getting `byte[]` as keys and values.
States that internally use RocksDB's merge operator, e.g. `ListState`, can currently merge
multiple `byte[]` values under one key; RocksDB concatenates them internally into a single value.
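As a rough illustration (hypothetical names, not Flink's or rocksdbjni's actual code): the native side happily concatenates merged entries past any Java limit, but reading the result back requires a single `byte[]`, whose length is capped at `Integer.MAX_VALUE`:

```java
import java.nio.charset.StandardCharsets;
import java.util.List;

public class MergeSizeSketch {
    // Hypothetical stand-in for RocksDB's merge: the native side just
    // concatenates entries, so the accumulated size is only bounded by
    // native memory/disk. We track it as a long here.
    static long mergedSize(List<byte[]> entries) {
        long total = 0;
        for (byte[] e : entries) {
            total += e.length; // keeps growing past Integer.MAX_VALUE
        }
        return total;
    }

    // Materializing the merged value in Java needs one byte[], which
    // cannot hold more than Integer.MAX_VALUE bytes.
    static boolean fitsInJavaArray(long totalSize) {
        return totalSize <= Integer.MAX_VALUE;
    }

    public static void main(String[] args) {
        List<byte[]> entries = List.of(
                "hello".getBytes(StandardCharsets.UTF_8),
                "world".getBytes(StandardCharsets.UTF_8));
        System.out.println(mergedSize(entries));   // 10

        // Simulate ~3 GiB of merged state without allocating it:
        long huge = 3L * 1024 * 1024 * 1024;
        System.out.println(fitsInJavaArray(huge)); // false
    }
}
```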

This becomes problematic as soon as the accumulated state size under one key exceeds
`Integer.MAX_VALUE` bytes. Whenever Java code tries to access a state that grew beyond
this limit through merging, we encounter an `ArrayIndexOutOfBoundsException` at best
and a segfault at worst.

This behaviour is especially problematic because RocksDB silently stores states that exceed the
limit, and the code only fails unexpectedly on access (e.g. during checkpointing).

I think the only proper solution is for RocksDB's JNI bridge to use `(Direct)ByteBuffer`,
which can work around the size limitation, as input and output types instead of plain `byte[]`.
