Date: Mon, 28 Oct 2013 14:13:32 +0000 (UTC)
From: "Sylvain Lebresne (JIRA)"
To: commits@cassandra.apache.org
Reply-To: dev@cassandra.apache.org
Subject: [jira] [Updated] (CASSANDRA-5981) Netty frame length exception when storing data to Cassandra using binary protocol

     [ https://issues.apache.org/jira/browse/CASSANDRA-5981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sylvain Lebresne updated CASSANDRA-5981:
----------------------------------------

    Attachment: 5981-v3.txt

I believe you're right. I suppose there is no reason to ever get into that case unless you pick an unreasonably low max frame size, but there is no harm in being careful, so I'm attaching v3, which makes sure we don't discard too much. And no, there isn't really a test in Cassandra for dropping large messages, because we don't really have any tests for the native protocol so far. That said, I do have a test for it in the java driver tests (though I'll need to commit it).
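For illustration only (this is not the attached patch, and the class and helper names are hypothetical): the idea in 0001-Correctly-catch-frame-too-long-exceptions.txt can be sketched, against the Netty 3.x types named in the stack trace below, as an upstream handler that recognizes TooLongFrameException and answers with a protocol error instead of surfacing it as an "Unexpected exception during request":

{code:java}
// Sketch only -- not the attached patch. Uses the Netty 3.x types named in the
// stack trace; FrameErrorHandler and protocolError() are hypothetical.
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ExceptionEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;
import org.jboss.netty.handler.codec.frame.TooLongFrameException;

public class FrameErrorHandler extends SimpleChannelUpstreamHandler
{
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) throws Exception
    {
        if (e.getCause() instanceof TooLongFrameException)
        {
            // An oversized frame is an expected client error: reply with an error
            // message on the channel rather than logging it as unexpected.
            ctx.getChannel().write(protocolError(e.getCause().getMessage()));
            return;
        }
        super.exceptionCaught(ctx, e);
    }

    // Hypothetical helper: a real server would build whatever error frame the
    // native protocol defines; here we just pass the message string downstream.
    private Object protocolError(String message)
    {
        return message;
    }
}
{code}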
> Netty frame length exception when storing data to Cassandra using binary protocol
> ---------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-5981
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5981
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>         Environment: Linux, Java 7
>            Reporter: Justin Sweeney
>            Assignee: Sylvain Lebresne
>            Priority: Minor
>             Fix For: 2.0.3
>
>         Attachments: 0001-Correctly-catch-frame-too-long-exceptions.txt, 0002-Allow-to-configure-the-max-frame-length.txt, 5981-v2.txt, 5981-v3.txt
>
>
> Using Cassandra 1.2.8, I am running into an issue where, when I send a large amount of data using the binary protocol, I get the following netty exception in the Cassandra log file:
> {quote}
> ERROR 09:08:35,845 Unexpected exception during request
> org.jboss.netty.handler.codec.frame.TooLongFrameException: Adjusted frame length exceeds 268435456: 292413714 - discarded
>         at org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder.fail(LengthFieldBasedFrameDecoder.java:441)
>         at org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder.failIfNecessary(LengthFieldBasedFrameDecoder.java:412)
>         at org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder.decode(LengthFieldBasedFrameDecoder.java:372)
>         at org.apache.cassandra.transport.Frame$Decoder.decode(Frame.java:181)
>         at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:422)
>         at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
>         at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
>         at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
>         at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:84)
>         at org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:472)
>         at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:333)
>         at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:35)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>         at java.lang.Thread.run(Thread.java:722)
> {quote}
> I am using the DataStax driver and CQL to execute insert queries. The query that is failing uses atomic batching to execute a large number of statements (~55).
> Looking into the code a bit, I saw that in the org.apache.cassandra.transport.Frame$Decoder class, MAX_FRAME_LENGTH is hard-coded to 256 MB.
> Is this something that should be configurable, or is this a hard limit that will prevent batch statements of this size from executing for some reason?
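As an illustration of the configurability question above (again a sketch, not the attached 0002 patch; the option name and frame-layout constants are assumptions): the 256 MB limit is the maxFrameLength argument of Netty's LengthFieldBasedFrameDecoder, so making it configurable essentially means passing a value read from configuration instead of a constant:

{code:java}
// Sketch only -- not the attached patch. The yaml option name and the frame
// layout constants below are assumptions for the example.
import org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder;

public class ConfigurableFrameDecoder extends LengthFieldBasedFrameDecoder
{
    /**
     * @param maxFrameSizeInMb maximum size of a single native-protocol frame,
     *        e.g. read from a (hypothetical) native_transport_max_frame_size_in_mb
     *        setting instead of the hard-coded 256 MB.
     */
    public ConfigurableFrameDecoder(int maxFrameSizeInMb)
    {
        // Assumed v1/v2 frame header: 1 byte version, 1 byte flags, 1 byte stream id,
        // 1 byte opcode, then a 4-byte body length -- hence length field offset 4,
        // field length 4, no adjustment, nothing stripped.
        super(maxFrameSizeInMb * 1024 * 1024, 4, 4, 0, 0);
    }
}
{code}

With a default of 256 this behaves like the current hard-coded limit while still letting an operator raise it for clients that legitimately send very large batches.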