accumulo-notifications mailing list archives

From "Keith Turner (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (ACCUMULO-2204) Add delete client to continuous ingest
Date Thu, 16 Jan 2014 19:05:20 GMT

    [ https://issues.apache.org/jira/browse/ACCUMULO-2204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13873783#comment-13873783 ]

Keith Turner commented on ACCUMULO-2204:
----------------------------------------

bq. I guess the only problem is that if all the deleted nodes come back 

Continuous ingest is looking for problems that are localized to a tablet/tablet server, not
bugs occurring across the entire cluster.  The problem you raise is not specific to deletes.
If all writes made by an ingest client were lost, that also would not be detected.  If we
wanted to be really rigorous, maybe we could have the verification map reduce job take as
input the logs of the delete clients and the uuids given to the filter.  This data could be
joined in the reduce phase to detect nodes that should not be there.  There are probably
ways to make the verification more rigorous w/ respect to writes also.  I think if we wanted
to make ingest more rigorous, we could make that a separate ticket.  It's good to consider
while doing this work in case it changes what we might want to do.
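
A minimal sketch of that reduce-phase join, assuming the map phase emits each node id keyed
with either the value scanned from the table or a DELETED marker parsed from a delete
client's log (the class name, marker, and key types here are hypothetical, not existing
test code):

{noformat}
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

/**
 * Joins delete client logs with table scan output on node id.  A node
 * that shows up with both a DELETED marker and a scanned value was
 * deleted but came back.
 */
public class DeletedNodeJoinReducer extends Reducer<LongWritable,Text,LongWritable,Text> {
  @Override
  protected void reduce(LongWritable node, Iterable<Text> values, Context context)
      throws IOException, InterruptedException {
    boolean deleted = false; // node id seen in a delete client's log
    boolean present = false; // node id seen in the table scan
    for (Text v : values) {
      if ("DELETED".equals(v.toString()))
        deleted = true;
      else
        present = true;
    }
    if (deleted && present)
      context.write(node, new Text("deleted node came back"));
  }
}
{noformat}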

> Add delete client to continuous ingest
> --------------------------------------
>
>                 Key: ACCUMULO-2204
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-2204
>             Project: Accumulo
>          Issue Type: Sub-task
>            Reporter: Keith Turner
>
> Adding the linked list operation of deleting nodes would detect deleted data coming back.
> We could create something similar to the walker that does the following.
>  # selects a random node X
>  # follows the linked list for a random number of steps and stops at node Y
>  # makes X point to Y
>  # deletes all nodes that were between X and Y in the list
> For example, given the following linked list
> {noformat}
>    7->5->29->13->19->23->17
> {noformat}
> If 5 were picked as the first node and 23 as the last node, then the following operations
> would be done.
>  # write 5->23
>  # flush
>  # delete 29
>  # flush
>  # delete 13
>  # flush
>  # delete 19
>  # flush
>  # do a batch read and/or scan to verify the deletes
> If 29 or 13 should come back, then the nodes they point to would not exist and verification
> would catch this.  I think the operations above are done in such a way that the delete
> client could be killed at any time.  A sketch of a client performing these steps follows.
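> A minimal sketch of that ordering with the public BatchWriter API (the table name, column
> names, and row encoding below are placeholders, not the actual continuous ingest schema):
> {noformat}
> import org.apache.accumulo.core.client.BatchWriter;
> import org.apache.accumulo.core.client.BatchWriterConfig;
> import org.apache.accumulo.core.client.Connector;
> import org.apache.accumulo.core.client.MutationsRejectedException;
> import org.apache.accumulo.core.client.TableNotFoundException;
> import org.apache.accumulo.core.data.Mutation;
> import org.apache.accumulo.core.data.Value;
>
> public class DeleteClientSketch {
>   // Write X->Y first, then delete the skipped nodes one flush at a time,
>   // so killing the client at any point leaves the list verifiable.
>   static void deleteRange(Connector conn, long x, long y, long[] between)
>       throws TableNotFoundException, MutationsRejectedException {
>     BatchWriter bw = conn.createBatchWriter("ci", new BatchWriterConfig());
>     try {
>       Mutation link = new Mutation(row(x));
>       link.put("meta", "next", new Value(row(y).getBytes())); // 1: write 5->23
>       bw.addMutation(link);
>       bw.flush();                                             // 2: flush
>       for (long node : between) {                             // 3..8: delete 29, 13, 19
>         Mutation del = new Mutation(row(node));
>         del.putDelete("meta", "next");
>         bw.addMutation(del);
>         bw.flush();                                           // flush after each delete
>       }
>     } finally {
>       bw.close();
>     }
>   }
>
>   static String row(long node) { // placeholder row encoding
>     return String.format("%016x", node);
>   }
> }
> {noformat}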
> Since continuous ingest works w/ random numbers, there is a small chance that the delete
> client could delete a node just written by another client.  With 63 bit random numbers this
> chance is exceedingly small.  Should it occur, the person debugging should be able to sort
> it out when looking at the write ahead logs.  Therefore I do not think it's worthwhile
> taking any action in the test.
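> To put a rough number on "exceedingly small": a birthday-style estimate, assuming on the
> order of 10^9 nodes per run (an illustrative figure, not taken from the test), gives
> {noformat}
>    E[id collisions] ~= n^2 / 2^64  =  10^18 / 1.8x10^19  ~=  0.05
> {noformat}
> so even a long run should see at most a handful of collisions, each traceable in the write
> ahead logs as noted above.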



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
