cassandra-commits mailing list archives

From "Yuki Morishita (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-3003) Trunk single-pass streaming doesn't handle large row correctly
Date Sun, 04 Sep 2011 15:38:10 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-3003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13096885#comment-13096885 ]

Yuki Morishita commented on CASSANDRA-3003:
-------------------------------------------

Sylvain,

Thank you for the review.
For now, I have left the max timestamp calculation as it is, done during streaming.

bq. we need to use Integer.MIN_VALUE as the value for expireBefore when deserializing the
columns, otherwise the expired columns will be converted to DeletedColumns, which will change
their serialized size (and thus screw up the data size and column index)

Fixed.
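
For reference, a minimal sketch of why the expireBefore value matters here (the field layout and sizes below are made up for illustration, not the real column serializer): an expired column rewritten as a DeletedColumn does not serialize to the same number of bytes, so a row data size and column index computed before deserialization would no longer match what actually gets written. Passing Integer.MIN_VALUE as expireBefore makes the "expired" conversion impossible during streaming.

{noformat}
// Illustrative only: made-up field layout, not the real column serializer.
public class ExpireBeforeSketch
{
    // <name length><name><flags><ttl><local expiration><timestamp><value length><value>
    static long expiringColumnSize(int nameLen, int valueLen)
    {
        return 2 + nameLen + 1 + 4 + 4 + 8 + 4 + valueLen;
    }

    // a deleted column's value is just a 4-byte local deletion time
    static long deletedColumnSize(int nameLen)
    {
        return 2 + nameLen + 1 + 8 + 4 + 4;
    }

    public static void main(String[] args)
    {
        System.out.println("expiring column: " + expiringColumnSize(3, 100) + " bytes"); // 126
        System.out.println("deleted column:  " + deletedColumnSize(3) + " bytes");       // 22
    }
}
{noformat}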

bq. for markDeltaAsDeleted, we must check if the length is already negative and leave it so
if it is, otherwise if a streamed sstable gets re-streamed to another node before it is compacted,
we could end up not cleaning the delta correctly.
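
A sketch of the kind of idempotence guard being described (the header layout here, a signed length prefix where a negative value means "already marked", is an assumption for illustration, not the actual counter context format):

{noformat}
import java.nio.ByteBuffer;

// Assumed layout for illustration: the context starts with a signed length prefix,
// and a negative prefix means the delta is already marked for clearing.
public class MarkDeltaSketch
{
    static void markForClearingDelta(ByteBuffer context)
    {
        short headerLength = context.getShort(context.position());
        if (headerLength < 0)
            return; // already marked (e.g. the sstable was streamed before); leave it
        context.putShort(context.position(), (short) -headerLength);
    }
}
{noformat}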

bq. it would be nice in SSTW.appendFromStream() to assert the sanity of our little deserialize-reserialize
dance and assert that we did write the number of bytes that we wrote in the header.

Nice point. I added the same assertion as the other append() does.
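
A sketch of the kind of sanity check being discussed (names are illustrative, not the actual SSTableWriter.appendFromStream() code):

{noformat}
// Illustrative names only: after the deserialize-reserialize pass for a row, the bytes
// actually written must equal the data size announced in the row header, otherwise the
// index entries that follow would point at the wrong offsets.
public class RowSizeAssertionSketch
{
    static void assertRowSize(long rowStartPosition, long currentPosition, long expectedDataSize)
    {
        long written = currentPosition - rowStartPosition;
        assert written == expectedDataSize
            : String.format("wrote %d bytes but the row header announced %d", written, expectedDataSize);
    }
}
{noformat}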

bq. the patch changes a clearAllDelta to a markDeltaAsDeleted in CounterColumnTest, which is
bogus (and the test does fail with that change).

I forgot to revert this one. I should have run the tests before submitting...

bq. I would rename markDeltaAsDeleted to markForClearingDelta as this describes what the function
does better

Fixed.

bq. nitpick: there are a few spaces at the end of lines in some comments (I know I know, I'm picky).

Fixed this one too, I guess.

> Trunk single-pass streaming doesn't handle large row correctly
> --------------------------------------------------------------
>
>                 Key: CASSANDRA-3003
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-3003
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 1.0
>            Reporter: Sylvain Lebresne
>            Assignee: Yuki Morishita
>            Priority: Critical
>              Labels: streaming
>             Fix For: 1.0
>
>         Attachments: 3003-v1.txt, 3003-v2.txt, 3003-v3.txt, 3003-v5.txt, v3003-v4.txt
>
>
> For normal column families, trunk streaming always buffers the whole row into memory. It uses
> {noformat}
>   ColumnFamily.serializer().deserializeColumns(in, cf, true, true);
> {noformat}
> on the input bytes.
> We must avoid this for rows that don't fit in the inMemoryLimit.
> Note that for regular column families, for a given row, there is actually no need to
> even recreate the bloom filter or column index, nor to deserialize the columns. It is enough
> to read the key and row size to feed the index writer, but then simply dump the rest on
> disk directly. This would make streaming more efficient, avoid a lot of object creation and
> avoid the pitfall of big rows.
> Counter column families are unfortunately trickier, because each column needs to be deserialized
> (to mark them as 'fromRemote'). However, we don't need to do the double pass of LazilyCompactedRow
> for that. We can simply use a SSTableIdentityIterator and deserialize/reserialize input as
> it comes.
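
As a rough illustration of the single-pass approach the description above outlines for counters (all names and the column format below are simplified stand-ins, not the actual trunk code), each column is read, transformed, and written back immediately, so only one column at a time is held in memory rather than the whole row:

{noformat}
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

// Simplified stand-in column format: <short name length><name><int value length><value>.
public class SinglePassCounterSketch
{
    static void transferColumns(DataInput in, DataOutput out, int columnCount) throws IOException
    {
        for (int i = 0; i < columnCount; i++)
        {
            byte[] name = readShortBytes(in);
            byte[] value = readBytes(in);
            markForClearingDelta(value);          // counter-specific 'fromRemote' handling
            out.writeShort(name.length);
            out.write(name);
            out.writeInt(value.length);
            out.write(value);
        }
    }

    static byte[] readShortBytes(DataInput in) throws IOException
    {
        byte[] b = new byte[in.readUnsignedShort()];
        in.readFully(b);
        return b;
    }

    static byte[] readBytes(DataInput in) throws IOException
    {
        byte[] b = new byte[in.readInt()];
        in.readFully(b);
        return b;
    }

    static void markForClearingDelta(byte[] counterContext)
    {
        // placeholder: see the delta handling discussed in the comments above
    }
}
{noformat}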

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
