Date: Wed, 28 Aug 2019 14:01:03 +0000 (UTC)
From: "ASF GitHub Bot (Jira)"
To: hdfs-issues@hadoop.apache.org
Subject: [jira] [Work logged] (HDDS-1843) Undetectable corruption after restart of a datanode

     [ https://issues.apache.org/jira/browse/HDDS-1843?focusedWorklogId=302868&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-302868 ]

ASF GitHub Bot logged work on HDDS-1843:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 28/Aug/19 14:00
            Start Date: 28/Aug/19 14:00
    Worklog Time Spent: 10m
      Work Description: hadoop-yetus commented on pull request #1364: HDDS-1843. Undetectable corruption after restart of a datanode.
URL: https://github.com/apache/hadoop/pull/1364#discussion_r318597501

 ##########
 File path: hadoop-hdds/common/src/main/resources/audit.log
 ##########
 @@ -0,0 +1,25 @@
 +2019-08-28 11:36:31,489 | INFO | OMAudit | user=sbanerjee | ip=127.0.0.1 | op=CREATE_VOLUME {admin=sbanerjee, owner=sbanerjee, volume=testcontainerstatemachinefailures, creationTime=1566972391485, quotaInBytes=1152921504606846976} | ret=SUCCESS |
 +2019-08-28 11:36:31,494 | INFO | OMAudit | user=sbanerjee | ip=127.0.0.1 | op=READ_VOLUME {volume=testcontainerstatemachinefailures} | ret=SUCCESS |
 +2019-08-28 11:36:31,511 | INFO | OMAudit | user=sbanerjee | ip=127.0.0.1 | op=CREATE_BUCKET {volume=testcontainerstatemachinefailures, bucket=testcontainerstatemachinefailures, acls=[user:sbanerjee:a[ACCESS], group:staff:a[ACCESS], group:everyone:a[ACCESS], group:localaccounts:a[ACCESS], group:_appserverusr:a[ACCESS], group:admin:a[ACCESS], group:_appserveradm:a[ACCESS], group:_lpadmin:a[ACCESS], group:com.apple.sharepoint.group.2:a[ACCESS], group:_appstore:a[ACCESS], group:_lpoperator:a[ACCESS], group:_developer:a[ACCESS], group:_analyticsusers:a[ACCESS], group:com.apple.access_ftp:a[ACCESS], group:com.apple.access_screensharing:a[ACCESS], group:com.apple.access_ssh-disabled:a[ACCESS], group:com.apple.sharepoint.group.1:a[ACCESS]], isVersionEnabled=false, storageType=DISK, creationTime=0} | ret=SUCCESS |
 +2019-08-28 11:36:31,515 | INFO | OMAudit | user=sbanerjee | ip=127.0.0.1 | op=READ_VOLUME {volume=testcontainerstatemachinefailures} | ret=SUCCESS |
 +2019-08-28 11:36:31,519 | INFO | OMAudit | user=sbanerjee | ip=127.0.0.1 | op=READ_BUCKET {volume=testcontainerstatemachinefailures, bucket=testcontainerstatemachinefailures} | ret=SUCCESS |
 +2019-08-28 11:36:31,561 | INFO | OMAudit | user=sbanerjee | ip=127.0.0.1 | op=ALLOCATE_KEY {volume=testcontainerstatemachinefailures, bucket=testcontainerstatemachinefailures, key=ratis, dataSize=1024, replicationType=RATIS, replicationFactor=ONE, keyLocationInfo=null} | ret=SUCCESS |
 +2019-08-28 11:36:37,850 | INFO | OMAudit | user=sbanerjee | ip=127.0.0.1 | op=COMMIT_KEY {volume=testcontainerstatemachinefailures, bucket=testcontainerstatemachinefailures, key=ratis, dataSize=10, replicationType=null, replicationFactor=null, keyLocationInfo=[{blockID={containerID=1, localID=102693102652358657}, length=10, offset=0, token=null, pipeline=Pipeline[ Id: 33e5321e-9d61-4d31-94ca-a18a6abc80ba, Nodes: 82c96fdc-06f5-47f4-ab5d-7a2fa599b46e{ip: 192.168.0.64, host: 192.168.0.64, networkLocation: /default-rack, certSerialId: null}, Type:RATIS, Factor:ONE, State:OPEN], createVersion=0}], clientID=102693102651244544} | ret=SUCCESS |
 +2019-08-28 11:36:50,166 | INFO | OMAudit | user=sbanerjee | ip=127.0.0.1 | op=READ_VOLUME {volume=testcontainerstatemachinefailures} | ret=SUCCESS |
 +2019-08-28 11:36:50,168 | INFO | OMAudit | user=sbanerjee | ip=127.0.0.1 | op=READ_BUCKET {volume=testcontainerstatemachinefailures, bucket=testcontainerstatemachinefailures} | ret=SUCCESS |
 +2019-08-28 11:36:50,177 | INFO | OMAudit | user=sbanerjee | ip=127.0.0.1 | op=ALLOCATE_KEY {volume=testcontainerstatemachinefailures, bucket=testcontainerstatemachinefailures, key=ratis, dataSize=1024, replicationType=RATIS, replicationFactor=ONE, keyLocationInfo=null} | ret=SUCCESS |
 +2019-08-28 11:36:56,287 | INFO | OMAudit | user=sbanerjee | ip=127.0.0.1 | op=COMMIT_KEY {volume=testcontainerstatemachinefailures, bucket=testcontainerstatemachinefailures, key=ratis, dataSize=6, replicationType=null, replicationFactor=null, keyLocationInfo=[{blockID={containerID=2, localID=102693103873294339}, length=6, offset=0, token=null, pipeline=Pipeline[ Id: 5c7f9b1c-2fbc-470e-8e56-1e9bbf131bcb, Nodes: 82c96fdc-06f5-47f4-ab5d-7a2fa599b46e{ip: 192.168.0.64, host: 192.168.0.64, networkLocation: /default-rack, certSerialId: null}, Type:RATIS, Factor:ONE, State:OPEN], createVersion=0}], clientID=102693103872901122} | ret=SUCCESS |
 +2019-08-28 11:37:08,500 | INFO | OMAudit | user=sbanerjee | ip=127.0.0.1 | op=READ_VOLUME {volume=testcontainerstatemachinefailures} | ret=SUCCESS |
 +2019-08-28 11:37:08,502 | INFO | OMAudit | user=sbanerjee | ip=127.0.0.1 | op=READ_BUCKET {volume=testcontainerstatemachinefailures, bucket=testcontainerstatemachinefailures} | ret=SUCCESS |
 +2019-08-28 11:37:08,514 | INFO | OMAudit | user=sbanerjee | ip=127.0.0.1 | op=ALLOCATE_KEY {volume=testcontainerstatemachinefailures, bucket=testcontainerstatemachinefailures, key=ratis, dataSize=1024, replicationType=RATIS, replicationFactor=ONE, keyLocationInfo=null} | ret=SUCCESS |
 +2019-08-28 11:37:25,713 | INFO | OMAudit | user=sbanerjee | ip=127.0.0.1 | op=ALLOCATE_BLOCK {volume=testcontainerstatemachinefailures, bucket=testcontainerstatemachinefailures, key=ratis, dataSize=0, replicationType=null, replicationFactor=null, keyLocationInfo=null, clientID=102693105074634756} | ret=SUCCESS |
 +2019-08-28 11:37:31,807 | INFO | OMAudit | user=sbanerjee | ip=127.0.0.1 | op=COMMIT_KEY {volume=testcontainerstatemachinefailures, bucket=testcontainerstatemachinefailures, key=ratis, dataSize=10, replicationType=null, replicationFactor=null, keyLocationInfo=[{blockID={containerID=3, localID=102693105075027973}, length=5, offset=0, token=null, pipeline=Pipeline[ Id: 0aae28a5-8d47-4e6d-8275-0bef573d3730, Nodes: 82c96fdc-06f5-47f4-ab5d-7a2fa599b46e{ip: 192.168.0.64, host: 192.168.0.64, networkLocation: /default-rack, certSerialId: null}, Type:RATIS, Factor:ONE, State:OPEN], createVersion=0}, {blockID={containerID=4, localID=102693106202181638}, length=5, offset=0, token=null, pipeline=Pipeline[ Id: 66409363-72fc-4118-a5c8-5d82a6dcb7a0, Nodes: 82c96fdc-06f5-47f4-ab5d-7a2fa599b46e{ip: 192.168.0.64, host: 192.168.0.64, networkLocation: /default-rack, certSerialId: null}, Type:RATIS, Factor:ONE, State:OPEN], createVersion=0}], clientID=102693105074634756} | ret=SUCCESS |

 Review comment:
   whitespace:end of line

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 302868)
    Time Spent: 1h 20m  (was: 1h 10m)

> Undetectable corruption after restart of a datanode
> ---------------------------------------------------
>
>                 Key: HDDS-1843
>                 URL: https://issues.apache.org/jira/browse/HDDS-1843
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: Ozone Datanode
>    Affects Versions: 0.5.0
>            Reporter: Shashikant Banerjee
>            Assignee: Shashikant Banerjee
>            Priority: Critical
>              Labels: pull-request-available
>             Fix For: 0.5.0
>
>         Attachments: HDDS-1843.000.patch
>
>          Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Right now, all chunk writes use buffered I/O, i.e., the sync flag is disabled by default. Likewise, RocksDB metadata updates on the datanode are applied to the RocksDB cache first.
> If both the buffered chunk data and the corresponding metadata update are lost across a datanode restart, the resulting corruption cannot be detected (not even by the container scanner) within a reasonable time frame, unless a client IO failure surfaces it or the Recon server eventually notices it. To at least make the problem detectable, the Ratis snapshot on the datanode should sync the RocksDB file; the ContainerScanner will then be able to detect corruption of this kind. We can also add a metric around the sync to measure how much throughput loss it incurs.
> Thanks [~msingh] for suggesting this.
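
For illustration only, a minimal RocksJava sketch (not taken from the attached patch; the class name and metric fields are hypothetical stand-ins for Ozone's own metrics classes) of what syncing the container DB during a Ratis snapshot could look like, with a simple latency counter for the throughput-loss question above:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

import org.rocksdb.FlushOptions;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

// Hypothetical helper: forces buffered container metadata in RocksDB onto
// disk when a Ratis snapshot is taken, so a later ContainerScanner run can
// detect chunk/metadata mismatches after a datanode restart.
public class ContainerDbSnapshotSync {

  // Simple stand-ins for real metrics counters.
  private final AtomicLong syncCount = new AtomicLong();
  private final AtomicLong syncNanos = new AtomicLong();

  // Flush the memtable to SST files and fsync the WAL, waiting for both.
  public void syncOnSnapshot(RocksDB db) throws RocksDBException {
    long start = System.nanoTime();
    try (FlushOptions options = new FlushOptions().setWaitForFlush(true)) {
      db.flush(options);  // persist buffered metadata updates
      db.syncWal();       // make the write-ahead log durable as well
    } finally {
      syncCount.incrementAndGet();
      syncNanos.addAndGet(System.nanoTime() - start);
    }
  }

  // Average sync latency in ms, for estimating the throughput cost.
  public double avgSyncMillis() {
    long n = syncCount.get();
    return n == 0 ? 0.0
        : TimeUnit.NANOSECONDS.toMillis(syncNanos.get()) / (double) n;
  }
}

Waiting for the flush keeps the on-disk state consistent with the snapshot point; the counters only record how much each snapshot pays for that durability.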