Date: Thu, 26 Sep 2013 12:53:03 +0000 (UTC)
From: "Sylvain Lebresne (JIRA)"
To: commits@cassandra.apache.org
Reply-To: dev@cassandra.apache.org
Subject: [jira] [Commented] (CASSANDRA-5981) Netty frame length exception when storing data to Cassandra using binary protocol

    [ https://issues.apache.org/jira/browse/CASSANDRA-5981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13778731#comment-13778731 ]

Sylvain Lebresne commented on CASSANDRA-5981:
---------------------------------------------

bq. I'd be tempted to actually close the connection immediately.

That was the initial intent, but now I feel closing the connection in that case is too harsh.
If we do allow configuring the max frame length (reasonable if only because some may want to lower it from the relatively high default), then client libraries can't validate the frame size on their side and this becomes an end-user error. And closing the connection on an end-user error feels wrong (especially because it potentially cuts other unrelated streams on that connection).

bq. I'd probably go for implementing a custom frame decoder

Agreed, that's probably the simplest. I'll work that out.

> Netty frame length exception when storing data to Cassandra using binary protocol
> ---------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-5981
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5981
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>         Environment: Linux, Java 7
>            Reporter: Justin Sweeney
>            Assignee: Sylvain Lebresne
>            Priority: Minor
>             Fix For: 1.2.11
>
>         Attachments: 0001-Correctly-catch-frame-too-long-exceptions.txt, 0002-Allow-to-configure-the-max-frame-length.txt
>
>
> Using Cassandra 1.2.8, I am running into an issue where, when I send a large amount of data using the binary protocol, I get the following netty exception in the Cassandra log file:
> {quote}
> ERROR 09:08:35,845 Unexpected exception during request
> org.jboss.netty.handler.codec.frame.TooLongFrameException: Adjusted frame length exceeds 268435456: 292413714 - discarded
>         at org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder.fail(LengthFieldBasedFrameDecoder.java:441)
>         at org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder.failIfNecessary(LengthFieldBasedFrameDecoder.java:412)
>         at org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder.decode(LengthFieldBasedFrameDecoder.java:372)
>         at org.apache.cassandra.transport.Frame$Decoder.decode(Frame.java:181)
>         at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:422)
>         at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
>         at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
>         at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
>         at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:84)
>         at org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:472)
>         at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:333)
>         at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:35)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>         at java.lang.Thread.run(Thread.java:722)
> {quote}
> I am using the Datastax driver and using CQL to execute insert queries. The failing query uses atomic batching to execute a large number of statements (~55).
> Looking into the code a bit, I saw that in the org.apache.cassandra.transport.Frame$Decoder class, MAX_FRAME_LENGTH is hard-coded to 256 MB.
> Is this something that should be configurable, or is this a hard limit that will prevent batch statements of this size from executing for some reason?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
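For context on the discussion above: the "Adjusted frame length exceeds 268435456: 292413714" error comes from the 4-byte length field in the native-protocol frame header (in protocol v1/v2, an 8-byte header: version, flags, stream id, opcode, then a 4-byte big-endian body length). A minimal sketch of the kind of per-frame check a custom decoder could do is below; this is not Cassandra's actual Frame.Decoder, and the class name, helper methods, and a configurable (rather than hard-coded) maximum are all assumptions for illustration. Validating the length from the header alone is what would let the server fail only the offending request instead of tearing down the whole connection.

```java
import java.nio.ByteBuffer;

// Hypothetical sketch, not org.apache.cassandra.transport.Frame$Decoder.
// Reads the body length from a native-protocol v1/v2 frame header and
// compares it against a configurable maximum, so an oversized frame can
// be rejected per-request instead of closing the connection.
public class FrameLengthCheck {
    // v1/v2 header: version(1) + flags(1) + stream(1) + opcode(1) + length(4)
    static final int HEADER_SIZE = 8;

    // Extract the 4-byte big-endian body length at offset 4, as unsigned.
    public static long frameBodyLength(byte[] header) {
        if (header.length < HEADER_SIZE)
            throw new IllegalArgumentException("incomplete frame header");
        return ByteBuffer.wrap(header, 4, 4).getInt() & 0xFFFFFFFFL;
    }

    // True if the advertised body length exceeds the configured maximum.
    public static boolean exceedsMax(byte[] header, long maxFrameLength) {
        return frameBodyLength(header) > maxFrameLength;
    }

    public static void main(String[] args) {
        long max = 256L * 1024 * 1024; // 268435456, the hard-coded default
        byte[] header = new byte[HEADER_SIZE];
        // Encode the body length 292413714 from the error report above.
        ByteBuffer.wrap(header, 4, 4).putInt(292413714);
        System.out.println(exceedsMax(header, max)); // prints "true"
    }
}
```

Because the length is known after only 8 bytes, the decoder could skip the oversized body and answer that one stream with a protocol error while leaving other in-flight streams on the connection untouched, which is the behavior the comment argues for.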