hadoop-common-dev mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-2012) Periodic verification at the Datanode
Date Wed, 31 Oct 2007 21:45:51 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-2012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12539194 ]

Raghu Angadi commented on HADOOP-2012:
--------------------------------------


My preference would also be to make the scan period configurable. I can also make the bandwidth
used for scanning adaptive.

In my implementation, there is no 'start' or 'end' of a period. All the blocks are kept
sorted by their last verification time. The loop just looks at the first block, and if its
last verification time is older than the scan period, the block is verified. Each new block
is assigned a (pseudo) last verification time of {{randLong(now - SCAN_PERIOD)}} so that
it gets verified within the scan period.
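
For illustration, here is a minimal sketch of that loop. The names ({{BlockScanSketch}}, {{scanOnce()}}, {{BlockInfo}}) are hypothetical, not the patch's actual API, and the verification itself is elided:

{code:java}
import java.util.PriorityQueue;
import java.util.Random;

// Sketch only: illustrates the ordering and random initial timestamp
// described above, not the actual HADOOP-2012 implementation.
class BlockScanSketch {
  static final long SCAN_PERIOD_MS = 14L * 24 * 60 * 60 * 1000; // e.g. two weeks

  static final class BlockInfo {
    final long blockId;
    long lastVerified; // millis since epoch
    BlockInfo(long blockId, long lastVerified) {
      this.blockId = blockId;
      this.lastVerified = lastVerified;
    }
  }

  // Blocks kept ordered by last verification time; the oldest is at the head.
  private final PriorityQueue<BlockInfo> queue =
      new PriorityQueue<>((a, b) -> Long.compare(a.lastVerified, b.lastVerified));
  private final Random rand = new Random();

  // A new block gets a pseudo last-verification time at a random point within
  // the past scan period, so blocks come due spread across the period rather
  // than all at once.
  void addBlock(long blockId) {
    long now = System.currentTimeMillis();
    long pseudo = now - (long) (rand.nextDouble() * SCAN_PERIOD_MS);
    queue.add(new BlockInfo(blockId, pseudo));
  }

  // One pass of the loop: only the head is examined; it is verified
  // iff its last verification time is older than the scan period.
  void scanOnce() {
    BlockInfo head = queue.peek();
    long now = System.currentTimeMillis();
    if (head != null && now - head.lastVerified > SCAN_PERIOD_MS) {
      queue.poll();
      verify(head.blockId);     // read the block and compare checksums
      head.lastVerified = now;  // re-queue with a fresh timestamp
      queue.add(head);
    }
  }

  private void verify(long blockId) { /* checksum comparison elided */ }
}
{code}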

So if we want to make the scan bandwidth adaptive, it needs to be recomputed every time a
block is added, removed, or verified by a client (verification by a client comes at zero
cost). This is of course doable; I will do it.
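
A sketch of what that recomputation might look like. {{AdaptiveScanRate}} and its method names are illustrative assumptions, and resetting the client-verified count at each period boundary is elided:

{code:java}
// Sketch only: derives the scan rate as the bytes still owed this period
// divided by the period length, recomputed on every add/remove/client-read.
class AdaptiveScanRate {
  private long totalBytes;          // sum of the sizes of all local blocks
  private long clientVerifiedBytes; // bytes already covered by client reads this period
  private final long scanPeriodMs;
  private volatile long bytesPerSec;

  AdaptiveScanRate(long scanPeriodMs) { this.scanPeriodMs = scanPeriodMs; }

  synchronized void blockAdded(long size)   { totalBytes += size; recompute(); }
  synchronized void blockRemoved(long size) { totalBytes -= size; recompute(); }

  // A client read verifies the block at zero scan cost, shrinking the work
  // the scanner still owes within the current period.
  synchronized void blockVerifiedByClient(long size) {
    clientVerifiedBytes += size;
    recompute();
  }

  // Rate needed so the scanner covers the remaining bytes once per period.
  private void recompute() {
    long remaining = Math.max(0, totalBytes - clientVerifiedBytes);
    bytesPerSec = remaining / Math.max(1, scanPeriodMs / 1000);
  }

  long getBytesPerSec() { return bytesPerSec; }
}
{code}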

bq. It would make sense to have a reasonable upper bound on the amount of bandwidth used for
scanning and emit a warning if this is not enough to examine all blocks in a scan period.
So if someone sets a scan period of 1 minute or something else silly, the Datanode doesn't
spend all its time scanning.

Yes. If the datanode is not able to complete verification within the configured period, it
will print a warning (no more than once a day).
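
A sketch of the once-a-day throttle (class and field names here are illustrative):

{code:java}
// Sketch only: emits the "cannot keep up" warning at most once per day.
class ScanLagWarner {
  private static final long ONE_DAY_MS = 24L * 60 * 60 * 1000;
  private long lastWarnTime = 0;

  // Called when the scanner notices it is behind the configured period.
  synchronized void maybeWarn(long behindMs) {
    long now = System.currentTimeMillis();
    if (now - lastWarnTime >= ONE_DAY_MS) {
      lastWarnTime = now;
      System.err.println("WARN: block verification is " + behindMs
          + " ms behind the configured scan period");
    }
  }
}
{code}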


> Periodic verification at the Datanode
> -------------------------------------
>
>                 Key: HADOOP-2012
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2012
>             Project: Hadoop
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>             Fix For: 0.16.0
>
>         Attachments: HADOOP-2012.patch, HADOOP-2012.patch, HADOOP-2012.patch, HADOOP-2012.patch
>
>
> Currently, on-disk corruption of data blocks is detected only when a block is read by
> the client or by another datanode. These errors would be detected much earlier if the
> datanode could periodically verify the data checksums for its local blocks.
> Some of the issues to consider:
> - How often should we check the blocks (no more often than once every couple of weeks?)
> - How do we keep track of when a block was last verified (there is a .meta file associated
> with each block)?
> - What action to take once a corruption is detected?
> - Scanning should be done at a very low priority, with the rest of the datanode disk
> traffic in mind.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

