cassandra-commits mailing list archives

From "Robert Stupp (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-10520) Compressed writer and reader should support non-compressed data.
Date Fri, 24 Feb 2017 14:10:44 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882754#comment-15882754 ]

Robert Stupp commented on CASSANDRA-10520:
------------------------------------------

The upgrade failures are caused by the old nodes trying to issue a {{MigrationTask}} against
upgraded node(s), which should be prevented by the {{if (!MigrationManager.shouldPullSchemaFrom(endpoint))}}
check in {{MigrationTask.runMayThrow}}. However, due to CASSANDRA-11128, which introduced
{{version = Math.min(version, current_version);}} in {{MessagingService.setVersion()}},
{{shouldPullSchemaFrom}} returns {{true}} even for nodes with a newer messaging version.
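
To illustrate the interaction (a minimal, self-contained sketch with simplified shapes, not the
actual Cassandra classes):

{code:java}
import java.util.HashMap;
import java.util.Map;

public class VersionClampSketch {
    static final int CURRENT_VERSION = 10;           // e.g. a 3.0 node's messaging version
    static final Map<String, Integer> VERSIONS = new HashMap<>();

    // Mirrors the CASSANDRA-11128 change in MessagingService.setVersion():
    // the recorded peer version is capped at our own current version.
    static void setVersion(String endpoint, int version) {
        VERSIONS.put(endpoint, Math.min(version, CURRENT_VERSION));
    }

    // Simplified stand-in for MigrationManager.shouldPullSchemaFrom(endpoint):
    // only pull schema from peers that speak exactly our messaging version.
    static boolean shouldPullSchemaFrom(String endpoint) {
        Integer v = VERSIONS.get(endpoint);
        return v != null && v == CURRENT_VERSION;
    }

    public static void main(String[] args) {
        setVersion("upgraded-node", 12);             // a 4.0 peer announces a newer version
        // Prints true: the clamp recorded 10, so the newer peer looks same-version
        // and the old node goes ahead with the MigrationTask it should have skipped.
        System.out.println(shouldPullSchemaFrom("upgraded-node"));
    }
}
{code}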

CASSANDRA-11128 was introduced in 3.0 and we haven't updated the messaging version since.

I think we can safely resolve this issue as fixed but should reopen CASSANDRA-11128. /cc [~slebresne]

However, this means that for upgrades from 3.0/3.x to 4.0, users must ensure that this issue and
CASSANDRA-11128 are fixed.

> Compressed writer and reader should support non-compressed data.
> ----------------------------------------------------------------
>
>                 Key: CASSANDRA-10520
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-10520
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Local Write-Read Paths
>            Reporter: Branimir Lambov
>            Assignee: Branimir Lambov
>              Labels: messaging-service-bump-required
>             Fix For: 4.x
>
>         Attachments: ReadWriteTestCompression.java
>
>
> Compressing incompressible data, as done, for instance, to write SSTables during stress-tests,
> results in chunks larger than 64k, which are a problem for the buffer pooling mechanisms employed
> by the {{CompressedRandomAccessReader}}. This results in non-negligible performance issues
> due to excessive memory allocation.
> To solve this problem and avoid decompression delays in the cases where it does not provide
> benefits, I think we should allow compressed files to store uncompressed chunks as an alternative
> to compressed data. Such a chunk could be written after compression returns a buffer larger
> than, for example, 90% of the input, and would not result in additional delays in writing.
> On reads it could be recognized by size (using a single global threshold constant in the compression
> metadata), and data could be directly transferred into the decompressed buffer, skipping the
> decompression step and ensuring that a 64k buffer for compressed data always suffices.
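
For reference, here is a minimal, self-contained sketch of the scheme proposed above (class and
method names are hypothetical, the JDK's {{Deflater}}/{{Inflater}} stand in for Cassandra's
compressors, and 90% is the example threshold from the description):

{code:java}
import java.util.Arrays;
import java.util.Random;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class RawChunkFallbackSketch {
    // Single global threshold: a stored chunk at least this fraction of the
    // uncompressed length is known to be raw.
    static final double MAX_COMPRESSED_RATIO = 0.9;

    // On write: keep the chunk raw when compression saves less than ~10%,
    // so the stored chunk never exceeds the uncompressed (64k) size.
    static byte[] writeChunk(byte[] input) {
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        byte[] buf = new byte[input.length + 256];
        int len = 0;
        while (!deflater.finished())
            len += deflater.deflate(buf, len, buf.length - len);
        deflater.end();
        if (len >= input.length * MAX_COMPRESSED_RATIO)
            return input;                   // store uncompressed
        return Arrays.copyOf(buf, len);
    }

    // On read: recognize raw chunks by size alone and skip decompression.
    static byte[] readChunk(byte[] stored, int uncompressedLength) throws DataFormatException {
        if (stored.length >= uncompressedLength * MAX_COMPRESSED_RATIO)
            return stored;                  // raw chunk, direct transfer
        Inflater inflater = new Inflater();
        inflater.setInput(stored);
        byte[] out = new byte[uncompressedLength];
        int len = 0;
        while (!inflater.finished())
            len += inflater.inflate(out, len, out.length - len);
        inflater.end();
        return out;
    }

    public static void main(String[] args) throws DataFormatException {
        byte[] random = new byte[64 * 1024];
        new Random(42).nextBytes(random);   // incompressible: deflate grows it
        System.out.println("random chunk stored raw: " + (writeChunk(random) == random));

        byte[] zeros = new byte[64 * 1024]; // highly compressible
        byte[] storedZeros = writeChunk(zeros);
        System.out.println("zeros stored bytes: " + storedZeros.length);
        System.out.println("round-trip ok: "
            + Arrays.equals(zeros, readChunk(storedZeros, zeros.length)));
    }
}
{code}

With this shape, a chunk is stored compressed only when it is strictly below the threshold size,
so the reader can detect raw chunks from the size alone and copy them straight into the target
buffer.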



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
