cassandra-commits mailing list archives

From "Sylvain Lebresne (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (CASSANDRA-5981) Netty frame length exception when storing data to Cassandra using binary protocol
Date Wed, 02 Oct 2013 09:35:25 GMT

     [ https://issues.apache.org/jira/browse/CASSANDRA-5981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sylvain Lebresne updated CASSANDRA-5981:
----------------------------------------

    Attachment: 5981-v2.txt

Alright, attaching a v2 (that includes the "make the max frame length configurable" patch) that
rewrites the frame decoder to handle the frame slightly more manually, so we can do what
we want. It mostly mimics the code of Netty's LengthFieldBasedFrameDecoder, though simplified
a bit since it's adapted to just what we need. I'll note that this patch is against the 2.0 branch: I've
been able to run the java driver tests with that patch, so we should be good, but this is still
not an entirely trivial change, so I'm starting to wonder if it's worth pushing to 1.2,
especially given that the current behavior (having the error logged server side) is not really
a big deal.
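
For illustration, a minimal sketch of that approach, not the attached 5981-v2.txt patch: it
assumes Netty 3's FrameDecoder, the v1/v2 native-protocol header layout (8-byte header with a
4-byte length at offset 4), and a hypothetical SimpleFrameDecoder class whose maximum is
passed in rather than hard-coded:

{code:java}
import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.handler.codec.frame.FrameDecoder;
import org.jboss.netty.handler.codec.frame.TooLongFrameException;

// Hypothetical sketch, not the 5981-v2.txt patch: decode the frame length by
// hand so an over-long frame can be skipped and failed without dropping the
// connection, in the spirit of Netty's LengthFieldBasedFrameDecoder.
public class SimpleFrameDecoder extends FrameDecoder
{
    private static final int HEADER_LENGTH = 8; // version, flags, stream, opcode, 4-byte length

    private final long maxFrameLength; // assumed to come from configuration and to fit in an int
    private long bytesToDiscard;       // unread remainder of a rejected over-long frame

    public SimpleFrameDecoder(long maxFrameLength)
    {
        this.maxFrameLength = maxFrameLength;
    }

    @Override
    protected Object decode(ChannelHandlerContext ctx, Channel channel, ChannelBuffer buffer) throws Exception
    {
        // Still discarding the body of a previously rejected frame?
        if (bytesToDiscard > 0)
        {
            int skipped = (int) Math.min(bytesToDiscard, buffer.readableBytes());
            buffer.skipBytes(skipped);
            bytesToDiscard -= skipped;
            return null;
        }

        if (buffer.readableBytes() < HEADER_LENGTH)
            return null; // wait for a complete header

        // The body length is a 4-byte unsigned int at offset 4 of the header.
        long bodyLength = buffer.getUnsignedInt(buffer.readerIndex() + 4);
        long frameLength = HEADER_LENGTH + bodyLength;

        if (bodyLength > maxFrameLength)
        {
            // Skip what has already arrived, remember how much is still to
            // come, and fail only this frame: the exception can be turned into
            // an error response while the connection stays usable.
            int available = buffer.readableBytes();
            if (frameLength <= available)
            {
                buffer.skipBytes((int) frameLength);
            }
            else
            {
                buffer.skipBytes(available);
                bytesToDiscard = frameLength - available;
            }
            throw new TooLongFrameException("Frame body of " + bodyLength
                                            + " bytes exceeds maximum of " + maxFrameLength);
        }

        if (buffer.readableBytes() < frameLength)
            return null; // wait for the complete frame

        return buffer.readBytes((int) frameLength); // hand the whole frame downstream
    }
}
{code}

Tracking bytesToDiscard is what lets an over-long frame be skipped and failed without dropping
the connection, so the error can be reported back to the client instead of only logged server side.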

> Netty frame length exception when storing data to Cassandra using binary protocol
> ---------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-5981
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5981
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>         Environment: Linux, Java 7
>            Reporter: Justin Sweeney
>            Assignee: Sylvain Lebresne
>            Priority: Minor
>             Fix For: 1.2.11
>
>         Attachments: 0001-Correctly-catch-frame-too-long-exceptions.txt, 0002-Allow-to-configure-the-max-frame-length.txt, 5981-v2.txt
>
>
> Using Cassandra 1.2.8, I am running into an issue where, when I send a large amount of data using the binary protocol, I get the following netty exception in the Cassandra log file:
> {quote}
> ERROR 09:08:35,845 Unexpected exception during request
> org.jboss.netty.handler.codec.frame.TooLongFrameException: Adjusted frame length exceeds 268435456: 292413714 - discarded
>         at org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder.fail(LengthFieldBasedFrameDecoder.java:441)
>         at org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder.failIfNecessary(LengthFieldBasedFrameDecoder.java:412)
>         at org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder.decode(LengthFieldBasedFrameDecoder.java:372)
>         at org.apache.cassandra.transport.Frame$Decoder.decode(Frame.java:181)
>         at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:422)
>         at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
>         at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
>         at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
>         at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:84)
>         at org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:472)
>         at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:333)
>         at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:35)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>         at java.lang.Thread.run(Thread.java:722)
> {quote}
> I am using the Datastax driver and CQL to execute insert queries. The failing query uses atomic batching to execute a large number of statements (~55).
> Looking into the code a bit, I saw that in the org.apache.cassandra.transport.Frame$Decoder class, MAX_FRAME_LENGTH is hard-coded to 256 MB.
> Is this something that should be configurable, or is this a hard limit that will prevent batch statements of this size from executing for some reason?
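
For reference, the shape a configurable limit could take in cassandra.yaml. This is an
illustrative sketch, not the attached 0002 patch; the option name below matches the setting
later shipped in cassandra.yaml for this limit, but treat it here as an assumption:

{code}
# cassandra.yaml -- illustrative sketch, not the attached patch.
# Maximum allowed size of a native-protocol frame; requests larger than
# this are rejected as invalid. 256 MB mirrors the current hard-coded cap.
native_transport_max_frame_size_in_mb: 256
{code}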



--
This message was sent by Atlassian JIRA
(v6.1#6144)
