cassandra-commits mailing list archives

From "Vladimir Kuzmin (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-9264) Cassandra should not persist files without checksums
Date Wed, 29 Apr 2015 19:02:06 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-9264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14519955#comment-14519955 ]

Vladimir Kuzmin commented on CASSANDRA-9264:
--------------------------------------------

I see two options:
1. A checksum for the whole data file. For a generic solution we should be able to update the
checksum incrementally, to avoid rehashing the entire file when only one block of data changes.
2. A checksum for every record in the data file, probably as part of serialization.

In both cases I'd keep the checksums in a separate file, and provide a tool so that an operator
can check consistency offline.
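A minimal sketch of the per-block variant with a sidecar checksum file, using the JDK's java.util.zip.CRC32 (the class name, block size, and sidecar layout here are illustrative assumptions, not Cassandra code):

```java
import java.util.zip.CRC32;

// Hypothetical sketch: compute one CRC32 per fixed-size block of a data file.
// The resulting array is what a sidecar ("*.crc") file would store, so an
// offline tool can verify it, and an update touching one block only requires
// recomputing that block's checksum.
public class BlockChecksums {
    static final int BLOCK_SIZE = 65536; // 64 KiB blocks, an assumed size

    // Returns one CRC32 value per block of `data`.
    public static long[] checksumBlocks(byte[] data) {
        int blocks = (data.length + BLOCK_SIZE - 1) / BLOCK_SIZE;
        long[] crcs = new long[blocks];
        CRC32 crc = new CRC32();
        for (int i = 0; i < blocks; i++) {
            int off = i * BLOCK_SIZE;
            int len = Math.min(BLOCK_SIZE, data.length - off);
            crc.reset();
            crc.update(data, off, len);
            crcs[i] = crc.getValue();
        }
        return crcs;
    }

    // Offline verification: recompute and compare against the stored sidecar values.
    public static boolean verify(byte[] data, long[] stored) {
        long[] actual = checksumBlocks(data);
        if (actual.length != stored.length) return false;
        for (int i = 0; i < actual.length; i++) {
            if (actual[i] != stored[i]) return false;
        }
        return true;
    }
}
```

The offline operator tool would then just read the data file plus the sidecar and call verify; after an in-place change to one block, only that block's entry in the sidecar needs rewriting.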
 
About CRC32: it is quite good if we assume only natural (accidental) corruption. It is not
good against an intruder who deliberately changes the file and recomputes the checksum to match
(though in our case that threat doesn't really apply, since both the data file and the checksum
file are available to such an attacker anyway).

> Cassandra should not persist files without checksums
> ----------------------------------------------------
>
>                 Key: CASSANDRA-9264
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-9264
>             Project: Cassandra
>          Issue Type: Wish
>            Reporter: Ariel Weisberg
>             Fix For: 3.x
>
>
> Even if checksums aren't validated on the read side every time, it is helpful to have
files persisted with checksums so that if a corrupted file is encountered you can at least
confirm that the issue is corruption and not an application-level error that generated a
corrupt file.
> We should standardize on conventions for how to checksum a file and which checksums to
use so we can ensure we get the best performance possible.
> For a small checksum I think we should use CRC32 because the hardware support appears
quite good.
> For cases where a 4-byte checksum is not enough I think we can look at either xxhash64
or MurmurHash3.
> The problem with xxhash64 is that the output is only 8 bytes. The problem with MurmurHash3
is that the Java implementation is slow. If we can live with 8 bytes and make it easy to switch
hash implementations, I think xxhash64 is a good choice because we already ship a good implementation
with LZ4.
> I would also like to see hashes always prefixed by a type so that we can swap hashes
without running into pain trying to figure out what hash implementation is present. I would
also like to avoid making assumptions about the number of bytes in a hash field where possible
keeping in mind compatibility and space issues.
> Hashing after compression is also desirable over hashing before compression.
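The type-prefix idea above could look something like the following sketch, where a one-byte algorithm tag precedes the digest so readers dispatch on the tag and derive the digest length from it instead of assuming it (the type codes, class name, and layout are assumptions for illustration, not an agreed format; only CRC32 is implemented here since it is in the JDK, while xxhash64 would come from the implementation shipped with LZ4):

```java
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

// Hypothetical on-disk layout for a hash field: [1-byte hash type][digest bytes].
public class TypedChecksum {
    public static final byte TYPE_CRC32 = 0;     // 4-byte digest
    public static final byte TYPE_XXHASH64 = 1;  // 8-byte digest (e.g. lz4-java's xxhash)

    // Digest width follows from the type tag rather than being assumed.
    public static int digestLength(byte type) {
        switch (type) {
            case TYPE_CRC32:    return 4;
            case TYPE_XXHASH64: return 8;
            default: throw new IllegalArgumentException("unknown hash type " + type);
        }
    }

    // Encode a CRC32 of `data` as a type-prefixed field.
    public static byte[] encodeCrc32(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(data, 0, data.length);
        ByteBuffer buf = ByteBuffer.allocate(1 + digestLength(TYPE_CRC32));
        buf.put(TYPE_CRC32);
        buf.putInt((int) crc.getValue());
        return buf.array();
    }

    // Verify a type-prefixed field against `data`; unknown tags fail loudly
    // instead of being misread as digest bytes.
    public static boolean verify(byte[] field, byte[] data) {
        ByteBuffer buf = ByteBuffer.wrap(field);
        byte type = buf.get();
        if (type != TYPE_CRC32) throw new IllegalArgumentException("unsupported type " + type);
        CRC32 crc = new CRC32();
        crc.update(data, 0, data.length);
        return buf.getInt() == (int) crc.getValue();
    }
}
```

Swapping CRC32 for xxhash64 would then change only the leading byte and the digest width, which is the compatibility property the ticket asks for.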



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
