cassandra-commits mailing list archives

From "Sylvain Lebresne (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (CASSANDRA-5981) Netty frame length exception when storing data to Cassandra using binary protocol
Date Mon, 16 Sep 2013 14:18:52 GMT

     [ https://issues.apache.org/jira/browse/CASSANDRA-5981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sylvain Lebresne updated CASSANDRA-5981:
----------------------------------------

    Attachment: 0002-Allow-to-configure-the-max-frame-length.txt
                0001-Correctly-catch-frame-too-long-exceptions.txt

bq. That said, we should turn this into an InvalidRequestException instead of erroring out internally

I agree, and that's why there is a 'catch (TooLongFrameException)' in the Frame decoding code. But it turns out that instead of throwing the exception the normal way (as it does for corrupted frames, for instance), Netty fires the exceptionCaught callback directly, so that catch was bypassed. Anyway, attaching a patch that fixes that. The initial code was throwing a ProtocolException, but I do agree an InvalidRequestException is probably more appropriate: after all, a ProtocolException also closes the connection, which is not necessary here. However, this required a few minor changes in Frame, as we're no longer using Netty's LengthFieldBasedFrameDecoder.
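
For illustration, here's a minimal Netty 3 sketch of the exceptionCaught route (the handler name and the String placeholder response are mine, not the patch's; the actual fix would answer with an ErrorMessage wrapping the InvalidRequestException):

{code:java}
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ExceptionEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;
import org.jboss.netty.handler.codec.frame.TooLongFrameException;

// An oversized frame never surfaces as an exception thrown out of decode():
// Netty reports it through the exceptionCaught callback, so a try/catch
// around the decode call is silently bypassed. Intercept the callback instead.
public class TooLongFrameHandler extends SimpleChannelUpstreamHandler
{
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) throws Exception
    {
        if (e.getCause() instanceof TooLongFrameException)
        {
            // The decoder has already discarded the oversized frame, so the
            // connection is still usable; answer with an error rather than
            // closing the channel. The plain String is a placeholder for
            // whatever error message the protocol defines.
            ctx.getChannel().write("ERROR " + e.getCause().getMessage());
            return;
        }
        ctx.sendUpstream(e); // keep default handling for everything else
    }
}
{code}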

As for making the max frame length configurable, I'm not necessarily against the idea. But as said above, it's more so that people can set it lower than the default if they want stricter protection against badly behaving clients. Attaching a patch for that too.
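
A rough sketch of that second patch's idea, assuming a cassandra.yaml option along the lines of native_transport_max_frame_size_in_mb (the actual option name in the attachment may differ):

{code:java}
// Sketch only: turn a limit configured in megabytes (with the current 256 MB
// default) into the byte count handed to the frame decoder. The option name
// native_transport_max_frame_size_in_mb is an assumption, not the patch's.
public final class MaxFrameLength
{
    private static final int DEFAULT_MB = 256;

    public static int inBytes(Integer nativeTransportMaxFrameSizeInMb)
    {
        int mb = (nativeTransportMaxFrameSizeInMb == null)
                 ? DEFAULT_MB
                 : nativeTransportMaxFrameSizeInMb;
        return mb * 1024 * 1024;
    }
}
{code}

Operators wanting the stricter protection mentioned above would then simply set the option below 256.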

                
> Netty frame length exception when storing data to Cassandra using binary protocol
> ---------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-5981
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5981
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>         Environment: Linux, Java 7
>            Reporter: Justin Sweeney
>            Assignee: Sylvain Lebresne
>            Priority: Minor
>         Attachments: 0001-Correctly-catch-frame-too-long-exceptions.txt, 0002-Allow-to-configure-the-max-frame-length.txt
>
>
> Using Cassandra 1.2.8, I am running into an issue where, when I send a large amount of data using the binary protocol, I get the following Netty exception in the Cassandra log file:
> {quote}
> ERROR 09:08:35,845 Unexpected exception during request
> org.jboss.netty.handler.codec.frame.TooLongFrameException: Adjusted frame length exceeds 268435456: 292413714 - discarded
>         at org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder.fail(LengthFieldBasedFrameDecoder.java:441)
>         at org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder.failIfNecessary(LengthFieldBasedFrameDecoder.java:412)
>         at org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder.decode(LengthFieldBasedFrameDecoder.java:372)
>         at org.apache.cassandra.transport.Frame$Decoder.decode(Frame.java:181)
>         at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:422)
>         at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
>         at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
>         at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
>         at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:84)
>         at org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:472)
>         at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:333)
>         at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:35)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>         at java.lang.Thread.run(Thread.java:722)
> {quote}
> I am using the DataStax driver and executing insert queries via CQL. The failing query uses atomic batching to execute a large number of statements (~55).
> Looking into the code a bit, I saw that in the org.apache.cassandra.transport.Frame$Decoder class, MAX_FRAME_LENGTH is hard-coded to 256 MB.
> Is this something that should be configurable, or is this a hard limit that will prevent batch statements of this size from executing for some reason?
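
For context on the quoted report: 268435456 in the log is exactly 256 * 1024 * 1024, the hard-coded MAX_FRAME_LENGTH the reporter found. Frame$Decoder builds on Netty's LengthFieldBasedFrameDecoder, which rejects any frame whose advertised length exceeds that maximum. A sketch of such a decoder for a v1-style binary protocol header (assuming a 4-byte body length at offset 4 of an 8-byte header; the exact constructor arguments in Frame.java may differ):

{code:java}
import org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder;

public class FrameLimitExample
{
    // The hard-coded limit the report mentions: 256 MB = 268435456 bytes,
    // the figure in the log message above.
    private static final int MAX_FRAME_LENGTH = 256 * 1024 * 1024;

    public static LengthFieldBasedFrameDecoder newDecoder()
    {
        // Args: (maxFrameLength, lengthFieldOffset, lengthFieldLength,
        // lengthAdjustment, initialBytesToStrip). A frame advertising a body
        // longer than MAX_FRAME_LENGTH makes the decoder fire a
        // TooLongFrameException and discard the bytes.
        return new LengthFieldBasedFrameDecoder(MAX_FRAME_LENGTH, 4, 4, 0, 0);
    }
}
{code}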

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
