From: "Stanislav Vishnevskiy (JIRA)"
To: commits@cassandra.apache.org
Date: Tue, 6 Dec 2016 03:01:58 +0000 (UTC)
Subject: [jira] [Comment Edited] (CASSANDRA-13004) Corruption while adding a column to a table

    [ https://issues.apache.org/jira/browse/CASSANDRA-13004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724156#comment-15724156 ]

Stanislav Vishnevskiy edited comment on CASSANDRA-13004 at 12/6/16 3:01 AM:
----------------------------------------------------------------------------

We found this in the logs at the exact time this happened.
{code}
ERROR [SharedPool-Worker-11] 2016-12-06 01:44:16,971 Message.java:617 - Unexpected exception during request; channel = [id: 0xbd9a77e9, /10.10.0.48:38317 => /10.10.0.129:9042]
java.io.IOError: java.io.IOException: Corrupt value length 1485619006 encountered, as it exceeds the maximum of 268435456, which is set via max_value_size_in_mb in cassandra.yaml
	at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:222) ~[apache-cassandra-3.0.9.jar:3.0.9]
	at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:210) ~[apache-cassandra-3.0.9.jar:3.0.9]
	at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[apache-cassandra-3.0.9.jar:3.0.9]
	at org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:129) ~[apache-cassandra-3.0.9.jar:3.0.9]
	at org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:369) ~[apache-cassandra-3.0.9.jar:3.0.9]
	at org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:189) ~[apache-cassandra-3.0.9.jar:3.0.9]
	at org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:158) ~[apache-cassandra-3.0.9.jar:3.0.9]
	at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[apache-cassandra-3.0.9.jar:3.0.9]
	at org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:509) ~[apache-cassandra-3.0.9.jar:3.0.9]
	at org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:369) ~[apache-cassandra-3.0.9.jar:3.0.9]
	at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[apache-cassandra-3.0.9.jar:3.0.9]
	at org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:129) ~[apache-cassandra-3.0.9.jar:3.0.9]
	at org.apache.cassandra.db.transform.FilteredRows.isEmpty(FilteredRows.java:50) ~[apache-cassandra-3.0.9.jar:3.0.9]
	at org.apache.cassandra.db.transform.Filter.closeIfEmpty(Filter.java:73) ~[apache-cassandra-3.0.9.jar:3.0.9]
	at org.apache.cassandra.db.transform.Filter.applyToPartition(Filter.java:43) ~[apache-cassandra-3.0.9.jar:3.0.9]
	at org.apache.cassandra.db.transform.Filter.applyToPartition(Filter.java:26) ~[apache-cassandra-3.0.9.jar:3.0.9]
	at org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:96) ~[apache-cassandra-3.0.9.jar:3.0.9]
	at org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:707) ~[apache-cassandra-3.0.9.jar:3.0.9]
	at org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:400) ~[apache-cassandra-3.0.9.jar:3.0.9]
	at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:353) ~[apache-cassandra-3.0.9.jar:3.0.9]
	at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:227) ~[apache-cassandra-3.0.9.jar:3.0.9]
	at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:76) ~[apache-cassandra-3.0.9.jar:3.0.9]
	at org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:206) ~[apache-cassandra-3.0.9.jar:3.0.9]
	at org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:487) ~[apache-cassandra-3.0.9.jar:3.0.9]
	at org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:464) ~[apache-cassandra-3.0.9.jar:3.0.9]
	at org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:130) ~[apache-cassandra-3.0.9.jar:3.0.9]
	at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:513) [apache-cassandra-3.0.9.jar:3.0.9]
	at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:407) [apache-cassandra-3.0.9.jar:3.0.9]
	at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [netty-all-4.0.23.Final.jar:4.0.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-all-4.0.23.Final.jar:4.0.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32) [netty-all-4.0.23.Final.jar:4.0.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324) [netty-all-4.0.23.Final.jar:4.0.23.Final]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_111]
	at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164) [apache-cassandra-3.0.9.jar:3.0.9]
	at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) [apache-cassandra-3.0.9.jar:3.0.9]
	at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
Caused by: java.io.IOException: Corrupt value length 1485619006 encountered, as it exceeds the maximum of 268435456, which is set via max_value_size_in_mb in cassandra.yaml
	at org.apache.cassandra.db.marshal.AbstractType.readValue(AbstractType.java:402) ~[apache-cassandra-3.0.9.jar:3.0.9]
	at org.apache.cassandra.db.rows.BufferCell$Serializer.deserialize(BufferCell.java:302) ~[apache-cassandra-3.0.9.jar:3.0.9]
	at org.apache.cassandra.db.rows.UnfilteredSerializer.readComplexColumn(UnfilteredSerializer.java:502) ~[apache-cassandra-3.0.9.jar:3.0.9]
	at org.apache.cassandra.db.rows.UnfilteredSerializer.deserializeRowBody(UnfilteredSerializer.java:456) ~[apache-cassandra-3.0.9.jar:3.0.9]
	at org.apache.cassandra.db.rows.UnfilteredSerializer.deserialize(UnfilteredSerializer.java:377) ~[apache-cassandra-3.0.9.jar:3.0.9]
	at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:217) ~[apache-cassandra-3.0.9.jar:3.0.9]
	... 35 common frames omitted
{code}
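For reference, the 268435456-byte cap named in the exception is the value-length sanity check that Cassandra reads from max_value_size_in_mb in cassandra.yaml; 268435456 bytes is 256 MB, the shipped default. A minimal sketch of the setting, assuming that default (the value actually configured on this cluster is not shown in the report):

{code:none}
# cassandra.yaml -- sketch only; assumes the stock default, not this cluster's actual config.
# Cell values whose serialized length exceeds this cap are rejected as corrupt on read.
# 256 MB * 1024 * 1024 = 268435456 bytes, the limit quoted in the IOException above.
max_value_size_in_mb: 256
{code}

Note that the rejected length (1485619006) is far beyond any plausible cell size, so the check is tripping on bytes that no longer parse as a cell, not on a legitimately oversized value.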
> Corruption while adding a column to a table
> -------------------------------------------
>
>                 Key: CASSANDRA-13004
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-13004
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Stanislav Vishnevskiy
>
> We had the following schema in production.
> {code:none}
> CREATE TYPE IF NOT EXISTS discord_channels.channel_recipient (
>     nick text
> );
> CREATE TYPE IF NOT EXISTS discord_channels.channel_permission_overwrite (
>     id bigint,
>     type int,
>     allow_ int,
>     deny int
> );
> CREATE TABLE IF NOT EXISTS discord_channels.channels (
>     id bigint,
>     guild_id bigint,
>     type tinyint,
>     name text,
>     topic text,
>     position int,
>     owner_id bigint,
>     icon_hash text,
>     recipients map<bigint, frozen<channel_recipient>>,
>     permission_overwrites map<bigint, frozen<channel_permission_overwrite>>,
>     bitrate int,
>     user_limit int,
>     last_pin_timestamp timestamp,
>     last_message_id bigint,
>     PRIMARY KEY (id)
> );
> {code}
> And then we executed the following alter.
> {code:none}
> ALTER TABLE discord_channels.channels ADD application_id bigint;
> {code}
> And one row (that we can tell) got corrupted at the same time and could no longer be read from the Python driver.
> {code:none}
> [E 161206 01:56:58 geventreactor:141] Error decoding response from Cassandra. ver(4); flags(0000); stream(27); op(8); offset(9); len(887); buffer: '\x84\x00\x00\x1b\x08\x00\x00\x03w\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x0f\x00\x10discord_channels\x00\x08channels\x00\x02id\x00\x02\x00\x0eapplication_id\x00\x02\x00\x07bitrate\x00\t\x00\x08guild_id\x00\x02\x00\ticon_hash\x00\r\x00\x0flast_message_id\x00\x02\x00\x12last_pin_timestamp\x00\x0b\x00\x04name\x00\r\x00\x08owner_id\x00\x02\x00\x15permission_overwrites\x00!\x00\x02\x000\x00\x10discord_channels\x00\x1cchannel_permission_overwrite\x00\x04\x00\x02id\x00\x02\x00\x04type\x00\t\x00\x06allow_\x00\t\x00\x04deny\x00\t\x00\x08position\x00\t\x00\nrecipients\x00!\x00\x02\x000\x00\x10discord_channels\x00\x11channel_recipient\x00\x01\x00\x04nick\x00\r\x00\x05topic\x00\r\x00\x04type\x00\x14\x00\nuser_limit\x00\t\x00\x00\x00\x01\x00\x00\x00\x08\x03\x8a\x19\x8e\xf8\x82\x00\x01\xff\xff\xff\xff\x00\x00\x00\x04\x00\x00\xfa\x00\x00\x00\x00\x08\x00\x00\xfa\x00\x00\xf8G\xc5\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8b\xc0\xb5nB\x00\x02\x00\x00\x00\x08G\xc5\xffI\x98\xc4\xb4(\x00\x00\x00\x03\x8b\xc0\xa8\xff\xff\xff\xff\x00\x00\x01<\x00\x00\x00\x06\x00\x00\x00\x08\x03\x81L\xea\xfc\x82\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x81L\xea\xfc\x82\x00\n\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x04\x00\x00\x08\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1e\xe6\x8b\x80\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1e\xe6\x8b\x80\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x040\x07\xf8Q\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1f\x1b{\x82\x00\x00\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1f\x1b{\x82\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x07\xf8Q\x00\x00\x00\x04\x10\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1fH6\x82\x00\x01\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1fH6\x82\x00\x01\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x05\xe8A\x00\x00\x00\x04\x10\x02\x00\x00\x00\x00\x00\x08\x03\x8a+=\xca\xc0\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a+=\xca\xc0\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x08\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x8f\x979\x80\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x8f\x979\x80\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00 \x08\x01\x00\x00\x00\x04\xc4\xb4(\x00\xff\xff\xff\xff\x00\x00\x00O[f\x80Q\x07general\x05\xf8G\xc5\xffI\x98\xc4\xb4(\x00\xf8O[f\x80Q\x00\x00\x00\x02\x04\xf8O[f\x80Q\x00\xf8G\xc5\xffI\x98\x01\x00\x00\xf8O[f\x80Q\x00\x00\x00\x00\xf8G\xc5\xffI\x97\xc4\xb4(\x06\x00\xf8O\x7fe\x1fm\x08\x03\x00\x00\x00\x01\x00\x00\x00\x00\x04\x00\x00\x00\x00'
> {code}
> And then in cqlsh when trying to read the row we got this.
> {code:none}
> /usr/bin/cqlsh.py:632: DateOverFlowWarning: Some timestamps are larger than Python datetime can represent. Timestamps are displayed in milliseconds from epoch.
> Traceback (most recent call last):
>   File "/usr/bin/cqlsh.py", line 1301, in perform_simple_statement
>     result = future.result()
>   File "/usr/share/cassandra/lib/cassandra-driver-internal-only-3.5.0.post0-d8d0456.zip/cassandra-driver-3.5.0.post0-d8d0456/cassandra/cluster.py", line 3650, in result
>     raise self._final_exception
> UnicodeDecodeError: 'utf8' codec can't decode byte 0x80 in position 2: invalid start byte
> {code}
> When we tried to read the data, the name column refused to deserialize (the UTF8 error above), and the last_pin_timestamp column had an absurdly large value.
> We ended up rewriting the whole row, since we had the data in another place, and that fixed the problem. However, there is clearly a race condition in the schema-change sub-system.
> Any ideas?

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)