accumulo-notifications mailing list archives

From "Billie Rinaldi (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (ACCUMULO-2204) Add delete client to continuous ingest
Date Thu, 16 Jan 2014 18:09:22 GMT

    [ https://issues.apache.org/jira/browse/ACCUMULO-2204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13873682#comment-13873682 ]

Billie Rinaldi commented on ACCUMULO-2204:
------------------------------------------

I guess the only problem is that if all the deleted nodes come back (or any continuous chain
of them leading to a good node), we can't tell anything is wrong. I was thinking about whether
we could add a node's ID into the checksum of the node it points to: a node would checksum its
key as it does now, including the ID of the node it points to, and then also add in the ID of
the previous node. I haven't thought of a way to make that fault tolerant, though; it would
just be an indicator that something might -- or might not -- be wrong.
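
For illustration only, a minimal sketch of that idea (the class and method names below are
made up, and the real continuous ingest key layout is not reproduced here):

{noformat}
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

// Hypothetical helper, not the actual continuous ingest code.
public class LinkedChecksumSketch {

  // The checksum covers the node's own id and the id it points to, as the current
  // checksum does, plus the id of the previous node (the node that points at it).
  // As noted above, a mismatch would only be an indicator that something might,
  // or might not, be wrong.
  static long checksum(long nodeId, long pointsTo, long previousNodeId) {
    CRC32 crc = new CRC32();
    crc.update(Long.toHexString(nodeId).getBytes(StandardCharsets.UTF_8));
    crc.update(Long.toHexString(pointsTo).getBytes(StandardCharsets.UTF_8));
    // proposed addition: fold in the id of the node that points to this one
    crc.update(Long.toHexString(previousNodeId).getBytes(StandardCharsets.UTF_8));
    return crc.getValue();
  }
}
{noformat}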

> Add delete client to continuous ingest
> --------------------------------------
>
>                 Key: ACCUMULO-2204
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-2204
>             Project: Accumulo
>          Issue Type: Sub-task
>            Reporter: Keith Turner
>
> Adding the linked list operation of deleting nodes would detect deleted data coming back.
> Could create something similar to the walker that does the following.
>  # selects a random node X
>  # follows the linked list for a random number of steps and stops at node Y
>  # makes X point to Y
>  # deletes all nodes that were between X and Y in the list
> For example, given the following linked list
> {noformat}
>    7->5->29->13->19->23->17
> {noformat}
> If 5 were picked as the first node and 23 as the last node, then the following operations
> would be done.
>  # write 5->23
>  # flush
>  # delete 29
>  # flush
>  # delete 13
>  # flush
>  # delete 19
>  # flush
>  # do batch read and/or scan to verify deletes 
> If 29 or 13 should come back, then the nodes they point to would not exist and verification
> would catch this.  I think the operations above are ordered so that the delete client could
> be killed at any time.
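> The re-point-then-delete ordering, with a flush after each step, is what the kill-at-any-time
> property rests on.  For illustration only, a minimal sketch of that ordering against the
> Accumulo client API (the table layout, column names, and the deleteRange/row helpers here are
> made up and do not match the real continuous ingest schema):
> {noformat}
> import java.nio.charset.StandardCharsets;
> import java.util.List;
>
> import org.apache.accumulo.core.client.BatchWriter;
> import org.apache.accumulo.core.client.BatchWriterConfig;
> import org.apache.accumulo.core.client.Connector;
> import org.apache.accumulo.core.client.MutationsRejectedException;
> import org.apache.accumulo.core.client.TableNotFoundException;
> import org.apache.accumulo.core.data.Mutation;
> import org.apache.accumulo.core.data.Value;
> import org.apache.hadoop.io.Text;
>
> public class ContinuousDeleteSketch {
>
>   static Text row(long nodeId) {
>     return new Text(Long.toHexString(nodeId));
>   }
>
>   // 'between' must be in chain order, from the node X originally pointed at down to
>   // the node that pointed at Y, so a kill mid-loop never leaves a surviving node
>   // pointing at a deleted one.
>   static void deleteRange(Connector conn, String table, long x, long y, List<Long> between)
>       throws TableNotFoundException, MutationsRejectedException {
>     BatchWriter bw = conn.createBatchWriter(table, new BatchWriterConfig());
>     try {
>       // 1. re-point X at Y first, so the chain through X stays intact no matter
>       //    where the client dies afterwards
>       Mutation repoint = new Mutation(row(x));
>       repoint.put(new Text("meta"), new Text("link"),
>           new Value(Long.toHexString(y).getBytes(StandardCharsets.UTF_8)));
>       bw.addMutation(repoint);
>       bw.flush();
>
>       // 2. delete the skipped nodes one at a time, flushing between deletes
>       for (long node : between) {
>         Mutation del = new Mutation(row(node));
>         del.putDelete(new Text("meta"), new Text("link"));
>         bw.addMutation(del);
>         bw.flush();
>       }
>       // 3. a batch read and/or scan to verify the deletes would follow here
>     } finally {
>       bw.close();
>     }
>   }
> }
> {noformat}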
> Since continuous ingest works with random numbers, there is a small chance that the delete
> client could delete a node just written by another client.  With 63-bit random numbers this
> chance is exceedingly small.  Should it occur, the person debugging should be able to sort
> it out by looking at the write-ahead logs.  Therefore I do not think it's worthwhile taking
> any action in the test.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
