Date: Fri, 24 Aug 2018 23:28:00 +0000 (UTC)
From: "Shweta (JIRA)"
To: hdfs-issues@hadoop.apache.org
Subject: [jira] [Commented] (HDFS-13837) hdfs.TestDistributedFileSystem.testDFSClient: test is flaky

    [ https://issues.apache.org/jira/browse/HDFS-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16592311#comment-16592311 ]

Shweta commented on HDFS-13837:
-------------------------------

Thanks [~xiaochen] for the prompt review. Yes, the LeaseRenewer logs will be helpful when the test fails.

Also, as per your suggestion I have made the change at class level so that LeaseRenewer logging is present for all the tests in the class. I have added a new patch w.r.t. this change. Please review. Thank you. :)
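For context, a minimal sketch of what such a class-level change can look like (illustrative only, not the actual patch: it assumes JUnit 4 plus the log4j 1.x API used by the Hadoop test tree, and the hook name below is made up):

{code:java}
// Sketch only (not the HDFS-13837 patch itself): raise the LeaseRenewer
// logger to a verbose level once for the whole test class, so its output
// is captured by the test harness whenever any test in the class fails.
import org.apache.hadoop.hdfs.client.impl.LeaseRenewer;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.junit.BeforeClass;

public class TestDistributedFileSystem {

  @BeforeClass  // class level, runs once before all test methods
  public static void enableLeaseRenewerLogging() {  // hypothetical hook name
    // Per-test setup would only cover one method; doing it here makes the
    // LeaseRenewer activity visible for every test in the class.
    Logger.getLogger(LeaseRenewer.class).setLevel(Level.ALL);
  }

  // ... existing test methods such as testDFSClient() ...
}
{code}

The point of hooking this in {{@BeforeClass}} rather than inside {{testDFSClient()}} alone is that the LeaseRenewer output is then available no matter which test in the class trips the flaky assertion.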
:) > hdfs.TestDistributedFileSystem.testDFSClient: test is flaky > ----------------------------------------------------------- > > Key: HDFS-13837 > URL: https://issues.apache.org/jira/browse/HDFS-13837 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs > Reporter: Shweta > Assignee: Shweta > Priority: Major > Attachments: HDFS-13837.001.patch, HDFS-13837.002.patch, TestDist= ributedFileSystem.testDFSClient_Stderr_log > > > Stack Trace : > {noformat}=20 > java.lang.AssertionError > at org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClient(TestDi= stributedFileSystem.java:449) > {noformat} > =C2=A0Stdout: > {noformat} > [truncated]kmanagement.BlockManager (BlockManager.java:processMisReplicat= esAsync(3385)) - Number of blocks being written =3D 0 > 2018-07-31 21:42:46,675 [Reconstruction Queue Initializer] INFO hdfs.Sta= teChange (BlockManager.java:processMisReplicatesAsync(3388)) - STATE* Repli= cation Queue initialization scan for invalid, over- and under-replicated bl= ocks completed in 5 msec > 2018-07-31 21:42:46,676 [IPC Server Responder] INFO ipc.Server (Server.j= ava:run(1307)) - IPC Server Responder: starting > 2018-07-31 21:42:46,676 [IPC Server listener on port1] INFO ipc.Server (= Server.java:run(1146)) - IPC Server listener on port1: starting > 2018-07-31 21:42:46,678 [main] INFO namenode.NameNode (NameNode.java:sta= rtCommonServices(831)) - NameNode RPC up at: localhost/x.x.x.x:port1 > 2018-07-31 21:42:46,678 [main] INFO namenode.FSNamesystem (FSNamesystem.= java:startActiveServices(1230)) - Starting services required for active sta= te > 2018-07-31 21:42:46,678 [main] INFO namenode.FSDirectory (FSDirectory.ja= va:updateCountForQuota(758)) - Initializing quota with 4 thread(s) > 2018-07-31 21:42:46,679 [main] INFO namenode.FSDirectory (FSDirectory.ja= va:updateCountForQuota(767)) - Quota initialization completed in 0 millisec= onds > name space=3D1 > storage space=3D0 > storage types=3DRAM_DISK=3D0, SSD=3D0, DISK=3D0, ARCHIVE=3D0 > 2018-07-31 21:42:46,682 [CacheReplicationMonitor(1111752355)] INFO block= management.CacheReplicationMonitor (CacheReplicationMonitor.java:run(160)) = - Starting CacheReplicationMonitor with interval 30000 milliseconds > 2018-07-31 21:42:46,686 [main] INFO hdfs.MiniDFSCluster (MiniDFSCluster.= java:startDataNodes(1599)) - Starting DataNode 0 with dfs.datanode.data.dir= : [DISK]file:/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-= hdfs/target/test/data/dfs/data/data1,[DISK][file:/tmp/tmp.u8GhlLcdks/src/CD= H/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2|fi= le:///tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/tar= get/test/data/dfs/data/data2] > 2018-07-31 21:42:46,687 [main] INFO checker.ThrottledAsyncChecker (Throt= tledAsyncChecker.java:schedule(122)) - Scheduling a check for [DISK]file:/t= mp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/tes= t/data/dfs/data/data1 > 2018-07-31 21:42:46,687 [main] INFO checker.ThrottledAsyncChecker (Throt= tledAsyncChecker.java:schedule(122)) - Scheduling a check for [DISK]file:/t= mp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/tes= t/data/dfs/data/data2 > 2018-07-31 21:42:46,695 [main] INFO impl.MetricsSystemImpl (MetricsSyste= mImpl.java:init(158)) - DataNode metrics system started (again) > 2018-07-31 21:42:46,695 [main] INFO common.Util (Util.java:isDiskStatsEn= abled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. 
D= isabling file IO profiling > 2018-07-31 21:42:46,695 [main] INFO datanode.BlockScanner (BlockScanner.= java:(184)) - Initialized block scanner with targetBytesPerSec 104857= 6 > 2018-07-31 21:42:46,696 [main] INFO datanode.DataNode (DataNode.java:(496)) - Configured hostname is x.x.x.x > 2018-07-31 21:42:46,696 [main] INFO common.Util (Util.java:isDiskStatsEn= abled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. D= isabling file IO profiling > 2018-07-31 21:42:46,696 [main] INFO datanode.DataNode (DataNode.java:sta= rtDataNode(1385)) - Starting DataNode with maxLockedMemory =3D 0 > 2018-07-31 21:42:46,697 [main] INFO datanode.DataNode (DataNode.java:ini= tDataXceiver(1142)) - Opened streaming server at /x.x.x.x:port2 > 2018-07-31 21:42:46,697 [main] INFO datanode.DataNode (DataXceiverServer= .java:(78)) - Balancing bandwidth is 10485760 bytes/s > 2018-07-31 21:42:46,697 [main] INFO datanode.DataNode (DataXceiverServer= .java:(79)) - Number threads for balancing is 50 > 2018-07-31 21:42:46,699 [main] INFO server.AuthenticationFilter (Authent= icationFilter.java:constructSecretProvider(240)) - Unable to initialize Fil= eSignerSecretProvider, falling back to use random secrets. > 2018-07-31 21:42:46,699 [main] INFO http.HttpRequestLog (HttpRequestLog.= java:getRequestLog(81)) - Http request log for http.requests.datanode is no= t defined > 2018-07-31 21:42:46,700 [main] INFO http.HttpServer2 (HttpServer2.java:a= ddGlobalFilter(923)) - Added global filter 'safety' (class=3Dorg.apache.had= oop.http.HttpServer2$QuotingInputFilter) > 2018-07-31 21:42:46,701 [main] INFO http.HttpServer2 (HttpServer2.java:a= ddFilter(896)) - Added filter static_user_filter (class=3Dorg.apache.hadoop= .http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode > 2018-07-31 21:42:46,701 [main] INFO http.HttpServer2 (HttpServer2.java:a= ddFilter(906)) - Added filter static_user_filter (class=3Dorg.apache.hadoop= .http.lib.StaticUserWebFilter$StaticUserFilter) to context static > 2018-07-31 21:42:46,701 [main] INFO http.HttpServer2 (HttpServer2.java:a= ddFilter(906)) - Added filter static_user_filter (class=3Dorg.apache.hadoop= .http.lib.StaticUserWebFilter$StaticUserFilter) to context logs > 2018-07-31 21:42:46,702 [main] INFO http.HttpServer2 (HttpServer2.java:b= indListener(1123)) - Jetty bound to port 35341 > 2018-07-31 21:42:46,702 [main] INFO server.Server (Server.java:doStart(3= 46)) - jetty-9.3.20.v20170531 > 2018-07-31 21:42:46,704 [main] INFO handler.ContextHandler (ContextHandl= er.java:doStart(781)) - Started o.e.j.s.ServletContextHandler@38b5f25 > {/logs,file:///tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hado= op-hdfs/target/log/,AVAILABLE} > 2018-07-31 21:42:46,705 [main] INFO handler.ContextHandler (ContextHandle= r.java:doStart(781)) - Started o.e.j.s.ServletContextHandler@67594471 > {/static,file:///tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/ha= doop-hdfs/target/test-classes/webapps/static/,AVAILABLE} > 2018-07-31 21:42:46,711 [main] INFO handler.ContextHandler (ContextHandle= r.java:doStart(781)) - Started o.e.j.w.WebAppContext@68fe48d7 > {/,file:///tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-h= dfs/target/test-classes/webapps/datanode/,AVAILABLE} \{/datanode} > 2018-07-31 21:42:46,712 [main] INFO server.AbstractConnector (AbstractCo= nnector.java:doStart(278)) - Started ServerConnector@379ce046\{HTTP/1.1,[ht= tp/1.1]}{localhost:35341} > 2018-07-31 21:42:46,712 [main] INFO server.Server (Server.java:doStart(4= 14)) - Started 
@27029ms > 2018-07-31 21:42:46,715 [main] INFO web.DatanodeHttpServer (DatanodeHttp= Server.java:start(239)) - Listening HTTP traffic on /x.x.x.x:port3 > 2018-07-31 21:42:46,716 [main] INFO datanode.DataNode (DataNode.java:sta= rtDataNode(1412)) - dnUserName =3D jenkins > 2018-07-31 21:42:46,716 [org.apache.hadoop.util.JvmPauseMonitor$Monitor@= 10acd6] INFO util.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Startin= g JVM pause monitor > 2018-07-31 21:42:46,716 [main] INFO datanode.DataNode (DataNode.java:sta= rtDataNode(1413)) - supergroup =3D supergroup > 2018-07-31 21:42:46,717 [main] INFO ipc.CallQueueManager (CallQueueManag= er.java:(84)) - Using callQueue: class java.util.concurrent.LinkedBlo= ckingQueue queueCapacity: 1000 scheduler: class org.apache.hadoop.ipc.Defau= ltRpcScheduler > 2018-07-31 21:42:46,718 [Socket Reader #1 for port port4|#1 for port por= t4] INFO ipc.Server (Server.java:run(1067)) - Starting Socket Reader #1 for= port port4 > 2018-07-31 21:42:46,719 [main] INFO datanode.DataNode (DataNode.java:ini= tIpcServer(1029)) - Opened IPC server at /x.x.x.x:port4 > 2018-07-31 21:42:46,721 [main] INFO datanode.DataNode (BlockPoolManager.= java:refreshNamenodes(149)) - Refresh request received for nameservices: nu= ll > 2018-07-31 21:42:46,721 [main] INFO datanode.DataNode (BlockPoolManager.= java:doRefreshNamenodes(198)) - Starting BPOfferServices for nameservices: = > 2018-07-31 21:42:46,722 [Thread-2206] INFO datanode.DataNode (BPServiceA= ctor.java:run(809)) - Block pool (Datanode Uuid unassigned) s= ervice to localhost/x.x.x.x:port1 starting to offer service > 2018-07-31 21:42:46,724 [IPC Server Responder] INFO ipc.Server (Server.j= ava:run(1307)) - IPC Server Responder: starting > 2018-07-31 21:42:46,726 [IPC Server listener on port4] INFO ipc.Server (= Server.java:run(1146)) - IPC Server listener on port4: starting > 2018-07-31 21:42:46,729 [main] INFO hdfs.MiniDFSCluster (MiniDFSCluster.= java:startDataNodes(1599)) - Starting DataNode 1 with dfs.datanode.data.dir= : [DISK]file:/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-= hdfs/target/test/data/dfs/data/data3,[DISK][file:/tmp/tmp.u8GhlLcdks/src/CD= H/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4|fi= le:///tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/tar= get/test/data/dfs/data/data4] > 2018-07-31 21:42:46,730 [Thread-2206] INFO datanode.DataNode (BPOfferSer= vice.java:verifyAndSetNamespaceInfo(378)) - Acknowledging ACTIVE Namenode d= uring handshakeBlock pool (Datanode Uuid unassigned) service = to localhost/x.x.x.x:port1 > 2018-07-31 21:42:46,730 [main] INFO checker.ThrottledAsyncChecker (Throt= tledAsyncChecker.java:schedule(122)) - Scheduling a check for [DISK]file:/t= mp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/tes= t/data/dfs/data/data3 > 2018-07-31 21:42:46,731 [main] INFO checker.ThrottledAsyncChecker (Throt= tledAsyncChecker.java:schedule(122)) - Scheduling a check for [DISK]file:/t= mp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/tes= t/data/dfs/data/data4 > 2018-07-31 21:42:46,732 [Thread-2206] INFO common.Storage (DataStorage.j= ava:getParallelVolumeLoadThreadsNum(346)) - Using 2 threads to upgrade data= directories (dfs.datanode.parallel.volumes.load.threads.num=3D2, dataDirs= =3D2) > 2018-07-31 21:42:46,735 [Thread-2206] INFO common.Storage (Storage.java:= tryLock(847)) - Lock on /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-proj= 
ect/hadoop-hdfs/target/test/data/dfs/data/data1/in_use.lock acquired by nod= ename 7074@quasar-ztplxb-1.vpc.cloudera.com > 2018-07-31 21:42:46,736 [Thread-2206] INFO common.Storage (DataStorage.j= ava:loadStorageDirectory(273)) - Storage directory with location [DISK]file= :/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/= test/data/dfs/data/data1 is not formatted for namespace 1027244560. Formatt= ing... > 2018-07-31 21:42:46,736 [Thread-2206] INFO common.Storage (DataStorage.j= ava:createStorageID(150)) - Generated new storageID DS-1d19d47a-b472-463a-a= 1ed-43be073676f5 for directory /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hd= fs-project/hadoop-hdfs/target/test/data/dfs/data/data1 > 2018-07-31 21:42:46,740 [main] INFO impl.MetricsSystemImpl (MetricsSyste= mImpl.java:init(158)) - DataNode metrics system started (again) > 2018-07-31 21:42:46,740 [main] INFO common.Util (Util.java:isDiskStatsEn= abled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. D= isabling file IO profiling > 2018-07-31 21:42:46,740 [main] INFO datanode.BlockScanner (BlockScanner.= java:(184)) - Initialized block scanner with targetBytesPerSec 104857= 6 > 2018-07-31 21:42:46,740 [main] INFO datanode.DataNode (DataNode.java:(496)) - Configured hostname is x.x.x.x > 2018-07-31 21:42:46,741 [main] INFO common.Util (Util.java:isDiskStatsEn= abled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. D= isabling file IO profiling > 2018-07-31 21:42:46,741 [main] INFO datanode.DataNode (DataNode.java:sta= rtDataNode(1385)) - Starting DataNode with maxLockedMemory =3D 0 > 2018-07-31 21:42:46,741 [main] INFO datanode.DataNode (DataNode.java:ini= tDataXceiver(1142)) - Opened streaming server at /x.x.x.x:port5 > 2018-07-31 21:42:46,742 [main] INFO datanode.DataNode (DataXceiverServer= .java:(78)) - Balancing bandwidth is 10485760 bytes/s > 2018-07-31 21:42:46,742 [main] INFO datanode.DataNode (DataXceiverServer= .java:(79)) - Number threads for balancing is 50 > 2018-07-31 21:42:46,744 [main] INFO server.AuthenticationFilter (Authent= icationFilter.java:constructSecretProvider(240)) - Unable to initialize Fil= eSignerSecretProvider, falling back to use random secrets. > 2018-07-31 21:42:46,744 [Thread-2206] INFO common.Storage (Storage.java:= tryLock(847)) - Lock on /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-proj= ect/hadoop-hdfs/target/test/data/dfs/data/data2/in_use.lock acquired by nod= ename 7074@quasar-ztplxb-1.vpc.cloudera.com > 2018-07-31 21:42:46,744 [main] INFO http.HttpRequestLog (HttpRequestLog.= java:getRequestLog(81)) - Http request log for http.requests.datanode is no= t defined > 2018-07-31 21:42:46,744 [Thread-2206] INFO common.Storage (DataStorage.j= ava:loadStorageDirectory(273)) - Storage directory with location [DISK]file= :/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/= test/data/dfs/data/data2 is not formatted for namespace 1027244560. Formatt= ing... 
> 2018-07-31 21:42:46,744 [Thread-2206] INFO common.Storage (DataStorage.j= ava:createStorageID(150)) - Generated new storageID DS-c307fe32-1d7b-4c82-9= d8f-6c43bb6ebe3f for directory /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hd= fs-project/hadoop-hdfs/target/test/data/dfs/data/data2 > 2018-07-31 21:42:46,745 [main] INFO http.HttpServer2 (HttpServer2.java:a= ddGlobalFilter(923)) - Added global filter 'safety' (class=3Dorg.apache.had= oop.http.HttpServer2$QuotingInputFilter) > 2018-07-31 21:42:46,746 [main] INFO http.HttpServer2 (HttpServer2.java:a= ddFilter(896)) - Added filter static_user_filter (class=3Dorg.apache.hadoop= .http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode > 2018-07-31 21:42:46,746 [main] INFO http.HttpServer2 (HttpServer2.java:a= ddFilter(906)) - Added filter static_user_filter (class=3Dorg.apache.hadoop= .http.lib.StaticUserWebFilter$StaticUserFilter) to context logs > 2018-07-31 21:42:46,746 [main] INFO http.HttpServer2 (HttpServer2.java:a= ddFilter(906)) - Added filter static_user_filter (class=3Dorg.apache.hadoop= .http.lib.StaticUserWebFilter$StaticUserFilter) to context static > 2018-07-31 21:42:46,746 [main] INFO http.HttpServer2 (HttpServer2.java:b= indListener(1123)) - Jetty bound to port 35966 > 2018-07-31 21:42:46,747 [main] INFO server.Server (Server.java:doStart(3= 46)) - jetty-9.3.20.v20170531 > 2018-07-31 21:42:46,749 [main] INFO handler.ContextHandler (ContextHandl= er.java:doStart(781)) - Started o.e.j.s.ServletContextHandler@2c6aed22\{/lo= gs,file:///tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdf= s/target/log/,AVAILABLE} > 2018-07-31 21:42:46,750 [main] INFO handler.ContextHandler (ContextHandl= er.java:doStart(781)) - Started o.e.j.s.ServletContextHandler@7acfb656\{/st= atic,file:///tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-h= dfs/target/test-classes/webapps/static/,AVAILABLE} > 2018-07-31 21:42:46,755 [main] INFO handler.ContextHandler (ContextHandl= er.java:doStart(781)) - Started o.e.j.w.WebAppContext@41ffaeb8\{/,file:///t= mp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/tes= t-classes/webapps/datanode/,AVAILABLE}{/datanode} > 2018-07-31 21:42:46,756 [main] INFO server.AbstractConnector (AbstractCon= nector.java:doStart(278)) - Started ServerConnector@43f0c2d1 > {HTTP/1.1,[http/1.1]}\{localhost:35966} > 2018-07-31 21:42:46,756 [main] INFO server.Server (Server.java:doStart(4= 14)) - Started @27074ms > 2018-07-31 21:42:46,760 [main] INFO web.DatanodeHttpServer (DatanodeHttp= Server.java:start(239)) - Listening HTTP traffic on /x.x.x.x:port6 > 2018-07-31 21:42:46,761 [main] INFO datanode.DataNode (DataNode.java:sta= rtDataNode(1412)) - dnUserName =3D jenkins > 2018-07-31 21:42:46,761 [main] INFO datanode.DataNode (DataNode.java:sta= rtDataNode(1413)) - supergroup =3D supergroup > 2018-07-31 21:42:46,761 [Thread-2206] INFO common.Storage (BlockPoolSlic= eStorage.java:recoverTransitionRead(252)) - Analyzing storage directories f= or bpid BP-1309332346-172.26.0.186-1533098566550 > 2018-07-31 21:42:46,761 [main] INFO ipc.CallQueueManager (CallQueueManag= er.java:(84)) - Using callQueue: class java.util.concurrent.LinkedBlo= ckingQueue queueCapacity: 1000 scheduler: class org.apache.hadoop.ipc.Defau= ltRpcScheduler > 2018-07-31 21:42:46,761 [org.apache.hadoop.util.JvmPauseMonitor$Monitor@= 4096aa05] INFO util.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Start= ing JVM pause monitor > 2018-07-31 21:42:46,761 [Thread-2206] INFO common.Storage (Storage.java:= lock(806)) - 
Locking is disabled for /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/had= oop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current/BP-130= 9332346-172.26.0.186-1533098566550 > 2018-07-31 21:42:46,761 [Thread-2206] INFO common.Storage (BlockPoolSlic= eStorage.java:loadStorageDirectory(168)) - Block pool storage directory for= location [DISK]file:/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project= /hadoop-hdfs/target/test/data/dfs/data/data1 and block pool id BP-130933234= 6-172.26.0.186-1533098566550 is not formatted for BP-1309332346-172.26.0.18= 6-1533098566550. Formatting ... > 2018-07-31 21:42:46,762 [Thread-2206] INFO common.Storage (BlockPoolSlic= eStorage.java:format(280)) - Formatting block pool BP-1309332346-172.26.0.1= 86-1533098566550 directory /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-p= roject/hadoop-hdfs/target/test/data/dfs/data/data1/current/BP-1309332346-17= 2.26.0.186-1533098566550/current > 2018-07-31 21:42:46,762 [Socket Reader #1 for port port7|#1 for port por= t7] INFO ipc.Server (Server.java:run(1067)) - Starting Socket Reader #1 for= port port7 > 2018-07-31 21:42:46,764 [main] INFO datanode.DataNode (DataNode.java:ini= tIpcServer(1029)) - Opened IPC server at /x.x.x.x:port7 > 2018-07-31 21:42:46,766 [main] INFO datanode.DataNode (BlockPoolManager.= java:refreshNamenodes(149)) - Refresh request received for nameservices: nu= ll > 2018-07-31 21:42:46,766 [main] INFO datanode.DataNode (BlockPoolManager.= java:doRefreshNamenodes(198)) - Starting BPOfferServices for nameservices: = > 2018-07-31 21:42:46,767 [Thread-2237] INFO datanode.DataNode (BPServiceA= ctor.java:run(809)) - Block pool (Datanode Uuid unassigned) s= ervice to localhost/x.x.x.x:port1 starting to offer service > 2018-07-31 21:42:46,768 [IPC Server Responder] INFO ipc.Server (Server.j= ava:run(1307)) - IPC Server Responder: starting > 2018-07-31 21:42:46,769 [IPC Server listener on port7] INFO ipc.Server (= Server.java:run(1146)) - IPC Server listener on port7: starting > 2018-07-31 21:42:46,770 [Thread-2237] INFO datanode.DataNode (BPOfferSer= vice.java:verifyAndSetNamespaceInfo(378)) - Acknowledging ACTIVE Namenode d= uring handshakeBlock pool (Datanode Uuid unassigned) service = to localhost/x.x.x.x:port1 > 2018-07-31 21:42:46,770 [Thread-2237] INFO common.Storage (DataStorage.j= ava:getParallelVolumeLoadThreadsNum(346)) - Using 2 threads to upgrade data= directories (dfs.datanode.parallel.volumes.load.threads.num=3D2, dataDirs= =3D2) > 2018-07-31 21:42:46,772 [main] DEBUG hdfs.DFSClient (DFSClient.java:(311)) - Sets dfs.client.block.write.replace-datanode-on-failure.min-repl= ication to 0 > 2018-07-31 21:42:46,775 [IPC Server handler 2 on port1] INFO FSNamesyste= m.audit (FSNamesystem.java:logAuditMessage(7816)) - allowed=3Dtrue ugi=3Dje= nkins (auth:SIMPLE) ip=3D/x.x.x.x cmd=3DdatanodeReport src=3Dnull dst=3Dnul= l perm=3Dnull proto=3Drpc > 2018-07-31 21:42:46,776 [main] INFO hdfs.MiniDFSCluster (MiniDFSCluster.= java:shouldWait(2653)) - dnInfo.length !=3D numDataNodes > 2018-07-31 21:42:46,776 [main] INFO hdfs.MiniDFSCluster (MiniDFSCluster.= java:waitActive(2574)) - Waiting for cluster to become active > 2018-07-31 21:42:46,780 [Thread-2206] INFO common.Storage (BlockPoolSlic= eStorage.java:recoverTransitionRead(252)) - Analyzing storage directories f= or bpid BP-1309332346-172.26.0.186-1533098566550 > 2018-07-31 21:42:46,780 [Thread-2206] INFO common.Storage (Storage.java:= lock(806)) - Locking is disabled for /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/had= 
oop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2/current/BP-130= 9332346-172.26.0.186-1533098566550 > 2018-07-31 21:42:46,780 [Thread-2237] INFO common.Storage (Storage.java:= tryLock(847)) - Lock on /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-proj= ect/hadoop-hdfs/target/test/data/dfs/data/data3/in_use.lock acquired by nod= ename 7074@quasar-ztplxb-1.vpc.cloudera.com > 2018-07-31 21:42:46,780 [Thread-2206] INFO common.Storage (BlockPoolSlic= eStorage.java:loadStorageDirectory(168)) - Block pool storage directory for= location [DISK]file:/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project= /hadoop-hdfs/target/test/data/dfs/data/data2 and block pool id BP-130933234= 6-172.26.0.186-1533098566550 is not formatted for BP-1309332346-172.26.0.18= 6-1533098566550. Formatting ... > 2018-07-31 21:42:46,780 [Thread-2206] INFO common.Storage (BlockPoolSlic= eStorage.java:format(280)) - Formatting block pool BP-1309332346-172.26.0.1= 86-1533098566550 directory /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-p= roject/hadoop-hdfs/target/test/data/dfs/data/data2/current/BP-1309332346-17= 2.26.0.186-1533098566550/current > 2018-07-31 21:42:46,780 [Thread-2237] INFO common.Storage (DataStorage.j= ava:loadStorageDirectory(273)) - Storage directory with location [DISK]file= :/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/= test/data/dfs/data/data3 is not formatted for namespace 1027244560. Formatt= ing... > 2018-07-31 21:42:46,781 [Thread-2237] INFO common.Storage (DataStorage.j= ava:createStorageID(150)) - Generated new storageID DS-386f4969-f72d-4a3a-b= aad-ef59d4bc3016 for directory /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hd= fs-project/hadoop-hdfs/target/test/data/dfs/data/data3 > 2018-07-31 21:42:46,789 [Thread-2206] INFO datanode.DataNode (DataNode.j= ava:initStorage(1693)) - Setting up storage: nsid=3D1027244560;bpid=3DBP-13= 09332346-172.26.0.186-1533098566550;lv=3D-57;nsInfo=3Dlv=3D-64;cid=3DtestCl= usterID;nsid=3D1027244560;c=3D1533098566550;bpid=3DBP-1309332346-172.26.0.1= 86-1533098566550;dnuuid=3Dnull > 2018-07-31 21:42:46,792 [Thread-2237] INFO common.Storage (Storage.java:= tryLock(847)) - Lock on /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-proj= ect/hadoop-hdfs/target/test/data/dfs/data/data4/in_use.lock acquired by nod= ename 7074@quasar-ztplxb-1.vpc.cloudera.com > 2018-07-31 21:42:46,792 [Thread-2237] INFO common.Storage (DataStorage.j= ava:loadStorageDirectory(273)) - Storage directory with location [DISK]file= :/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/= test/data/dfs/data/data4 is not formatted for namespace 1027244560. Formatt= ing... 
> 2018-07-31 21:42:46,792 [Thread-2237] INFO common.Storage (DataStorage.j= ava:createStorageID(150)) - Generated new storageID DS-cac8e9dc-026d-474d-a= 76b-57d577c44846 for directory /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hd= fs-project/hadoop-hdfs/target/test/data/dfs/data/data4 > 2018-07-31 21:42:46,794 [Thread-2206] INFO datanode.DataNode (DataNode.j= ava:checkDatanodeUuid(1517)) - Generated and persisted new Datanode UUID 78= 5a0dd8-bdae-40c1-94f6-a6b639dafb14 > 2018-07-31 21:42:46,796 [Thread-2206] INFO impl.FsDatasetImpl (FsVolumeL= ist.java:addVolume(306)) - Added new volume: DS-1d19d47a-b472-463a-a1ed-43b= e073676f5 > 2018-07-31 21:42:46,796 [Thread-2206] INFO impl.FsDatasetImpl (FsDataset= Impl.java:addVolume(431)) - Added volume - [DISK]file:/tmp/tmp.u8GhlLcdks/s= rc/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dat= a1, StorageType: DISK > 2018-07-31 21:42:46,797 [Thread-2206] INFO impl.FsDatasetImpl (FsVolumeL= ist.java:addVolume(306)) - Added new volume: DS-c307fe32-1d7b-4c82-9d8f-6c4= 3bb6ebe3f > 2018-07-31 21:42:46,797 [Thread-2206] INFO impl.FsDatasetImpl (FsDataset= Impl.java:addVolume(431)) - Added volume - [DISK]file:/tmp/tmp.u8GhlLcdks/s= rc/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dat= a2, StorageType: DISK > 2018-07-31 21:42:46,797 [Thread-2206] INFO impl.FsDatasetImpl (FsDataset= Impl.java:registerMBean(2169)) - Registered FSDatasetState MBean > 2018-07-31 21:42:46,798 [Thread-2206] INFO checker.ThrottledAsyncChecker= (ThrottledAsyncChecker.java:schedule(122)) - Scheduling a check for /tmp/t= mp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/da= ta/dfs/data/data1 > 2018-07-31 21:42:46,799 [Thread-2206] INFO checker.DatasetVolumeChecker = (DatasetVolumeChecker.java:checkAllVolumes(219)) - Scheduled health check f= or volume /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdf= s/target/test/data/dfs/data/data1 > 2018-07-31 21:42:46,799 [Thread-2206] INFO checker.ThrottledAsyncChecker= (ThrottledAsyncChecker.java:schedule(122)) - Scheduling a check for /tmp/t= mp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/da= ta/dfs/data/data2 > 2018-07-31 21:42:46,799 [Thread-2206] INFO checker.DatasetVolumeChecker = (DatasetVolumeChecker.java:checkAllVolumes(219)) - Scheduled health check f= or volume /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdf= s/target/test/data/dfs/data/data2 > 2018-07-31 21:42:46,799 [Thread-2206] INFO impl.FsDatasetImpl (FsDataset= Impl.java:addBlockPool(2655)) - Adding block pool BP-1309332346-172.26.0.18= 6-1533098566550 > 2018-07-31 21:42:46,800 [Thread-2251] INFO impl.FsDatasetImpl (FsVolumeL= ist.java:run(409)) - Scanning block pool BP-1309332346-172.26.0.186-1533098= 566550 on volume /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/had= oop-hdfs/target/test/data/dfs/data/data1... 
> 2018-07-31 21:42:46,801 [Thread-2237] INFO common.Storage (BlockPoolSlic= eStorage.java:recoverTransitionRead(252)) - Analyzing storage directories f= or bpid BP-1309332346-172.26.0.186-1533098566550 > 2018-07-31 21:42:46,801 [Thread-2237] INFO common.Storage (Storage.java:= lock(806)) - Locking is disabled for /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/had= oop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3/current/BP-130= 9332346-172.26.0.186-1533098566550 > 2018-07-31 21:42:46,801 [Thread-2237] INFO common.Storage (BlockPoolSlic= eStorage.java:loadStorageDirectory(168)) - Block pool storage directory for= location [DISK]file:/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project= /hadoop-hdfs/target/test/data/dfs/data/data3 and block pool id BP-130933234= 6-172.26.0.186-1533098566550 is not formatted for BP-1309332346-172.26.0.18= 6-1533098566550. Formatting ... > 2018-07-31 21:42:46,801 [Thread-2237] INFO common.Storage (BlockPoolSlic= eStorage.java:format(280)) - Formatting block pool BP-1309332346-172.26.0.1= 86-1533098566550 directory /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-p= roject/hadoop-hdfs/target/test/data/dfs/data/data3/current/BP-1309332346-17= 2.26.0.186-1533098566550/current > 2018-07-31 21:42:46,804 [Thread-2252] INFO impl.FsDatasetImpl (FsVolumeL= ist.java:run(409)) - Scanning block pool BP-1309332346-172.26.0.186-1533098= 566550 on volume /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/had= oop-hdfs/target/test/data/dfs/data/data2... > 2018-07-31 21:42:46,811 [Thread-2237] INFO common.Storage (BlockPoolSlic= eStorage.java:recoverTransitionRead(252)) - Analyzing storage directories f= or bpid BP-1309332346-172.26.0.186-1533098566550 > 2018-07-31 21:42:46,811 [Thread-2237] INFO common.Storage (Storage.java:= lock(806)) - Locking is disabled for /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/had= oop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4/current/BP-130= 9332346-172.26.0.186-1533098566550 > 2018-07-31 21:42:46,811 [Thread-2237] INFO common.Storage (BlockPoolSlic= eStorage.java:loadStorageDirectory(168)) - Block pool storage directory for= location [DISK]file:/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project= /hadoop-hdfs/target/test/data/dfs/data/data4 and block pool id BP-130933234= 6-172.26.0.186-1533098566550 is not formatted for BP-1309332346-172.26.0.18= 6-1533098566550. Formatting ... 
> 2018-07-31 21:42:46,812 [Thread-2237] INFO common.Storage (BlockPoolSlic= eStorage.java:format(280)) - Formatting block pool BP-1309332346-172.26.0.1= 86-1533098566550 directory /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-p= roject/hadoop-hdfs/target/test/data/dfs/data/data4/current/BP-1309332346-17= 2.26.0.186-1533098566550/current > 2018-07-31 21:42:46,813 [Thread-2252] INFO impl.FsDatasetImpl (FsVolumeL= ist.java:run(414)) - Time taken to scan block pool BP-1309332346-172.26.0.1= 86-1533098566550 on /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/= hadoop-hdfs/target/test/data/dfs/data/data2: 9ms > 2018-07-31 21:42:46,816 [Thread-2237] INFO datanode.DataNode (DataNode.j= ava:initStorage(1693)) - Setting up storage: nsid=3D1027244560;bpid=3DBP-13= 09332346-172.26.0.186-1533098566550;lv=3D-57;nsInfo=3Dlv=3D-64;cid=3DtestCl= usterID;nsid=3D1027244560;c=3D1533098566550;bpid=3DBP-1309332346-172.26.0.1= 86-1533098566550;dnuuid=3Dnull > 2018-07-31 21:42:46,817 [Thread-2251] INFO impl.FsDatasetImpl (FsVolumeL= ist.java:run(414)) - Time taken to scan block pool BP-1309332346-172.26.0.1= 86-1533098566550 on /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/= hadoop-hdfs/target/test/data/dfs/data/data1: 17ms > 2018-07-31 21:42:46,817 [Thread-2206] INFO impl.FsDatasetImpl (FsVolumeL= ist.java:addBlockPool(440)) - Total time to scan all replicas for block poo= l BP-1309332346-172.26.0.186-1533098566550: 18ms > 2018-07-31 21:42:46,817 [Thread-2255] INFO impl.FsDatasetImpl (FsVolumeL= ist.java:run(198)) - Adding replicas to map for block pool BP-1309332346-17= 2.26.0.186-1533098566550 on volume /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoo= p-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1... > 2018-07-31 21:42:46,818 [Thread-2255] INFO impl.BlockPoolSlice (BlockPoo= lSlice.java:readReplicasFromCache(770)) - Replica Cache file: /tmp/tmp.u8Gh= lLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/= data/data1/current/BP-1309332346-172.26.0.186-1533098566550/current/replica= s doesn't exist=20 > 2018-07-31 21:42:46,818 [Thread-2256] INFO impl.FsDatasetImpl (FsVolumeL= ist.java:run(198)) - Adding replicas to map for block pool BP-1309332346-17= 2.26.0.186-1533098566550 on volume /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoo= p-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2... 
> 2018-07-31 21:42:46,818 [Thread-2255] INFO impl.FsDatasetImpl (FsVolumeL= ist.java:run(203)) - Time to add replicas to map for block pool BP-13093323= 46-172.26.0.186-1533098566550 on volume /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/= hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1: 0ms > 2018-07-31 21:42:46,818 [Thread-2256] INFO impl.BlockPoolSlice (BlockPoo= lSlice.java:readReplicasFromCache(770)) - Replica Cache file: /tmp/tmp.u8Gh= lLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/= data/data2/current/BP-1309332346-172.26.0.186-1533098566550/current/replica= s doesn't exist=20 > 2018-07-31 21:42:46,818 [Thread-2256] INFO impl.FsDatasetImpl (FsVolumeL= ist.java:run(203)) - Time to add replicas to map for block pool BP-13093323= 46-172.26.0.186-1533098566550 on volume /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/= hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2: 0ms > 2018-07-31 21:42:46,819 [Thread-2206] INFO impl.FsDatasetImpl (FsVolumeL= ist.java:getAllVolumesMap(229)) - Total time to add all replicas to map: 2m= s > 2018-07-31 21:42:46,819 [VolumeScannerThread(/tmp/tmp.u8GhlLcdks/src/CDH= /hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1)] I= NFO datanode.VolumeScanner (VolumeScanner.java:findNextUsableBlockIter(386)= ) - Now scanning bpid BP-1309332346-172.26.0.186-1533098566550 on volume /t= mp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/tes= t/data/dfs/data/data1 > 2018-07-31 21:42:46,819 [VolumeScannerThread(/tmp/tmp.u8GhlLcdks/src/CDH= /hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2)] I= NFO datanode.VolumeScanner (VolumeScanner.java:findNextUsableBlockIter(386)= ) - Now scanning bpid BP-1309332346-172.26.0.186-1533098566550 on volume /t= mp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/tes= t/data/dfs/data/data2 > 2018-07-31 21:42:46,819 [Thread-2206] INFO datanode.DirectoryScanner (Di= rectoryScanner.java:start(281)) - Periodic Directory Tree Verification scan= starting at 7/31/18 10:52 PM with interval of 21600000ms > 2018-07-31 21:42:46,819 [VolumeScannerThread(/tmp/tmp.u8GhlLcdks/src/CDH= /hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2)] I= NFO datanode.VolumeScanner (VolumeScanner.java:runLoop(544)) - VolumeScanne= r(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target= /test/data/dfs/data/data2, DS-c307fe32-1d7b-4c82-9d8f-6c43bb6ebe3f): finish= ed scanning block pool BP-1309332346-172.26.0.186-1533098566550 > 2018-07-31 21:42:46,819 [VolumeScannerThread(/tmp/tmp.u8GhlLcdks/src/CDH= /hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1)] I= NFO datanode.VolumeScanner (VolumeScanner.java:runLoop(544)) - VolumeScanne= r(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target= /test/data/dfs/data/data1, DS-1d19d47a-b472-463a-a1ed-43be073676f5): finish= ed scanning block pool BP-1309332346-172.26.0.186-1533098566550 > 2018-07-31 21:42:46,820 [BP-1309332346-172.26.0.186-1533098566550 heartb= eating to localhost/x.x.x.x:port1] INFO datanode.DataNode (BPServiceActor.j= ava:register(763)) - Block pool BP-1309332346-172.26.0.186-1533098566550 (D= atanode Uuid 785a0dd8-bdae-40c1-94f6-a6b639dafb14) service to localhost/x.x= .x.x:port1 beginning handshake with NN > 2018-07-31 21:42:46,821 [IPC Server handler 3 on port1] INFO hdfs.StateC= hange (DatanodeManager.java:registerDatanode(1038)) - BLOCK* registerDatano= de: from DatanodeRegistration(x.x.x.x:port2, 
datanodeUuid=3D785a0dd8-bdae-4= 0c1-94f6-a6b639dafb14, infoPort=3Dport3, infoSecurePort=3D0, ipcPort=3Dport= 4, storageInfo=3Dlv=3D-57;cid=3DtestClusterID;nsid=3D1027244560;c=3D1533098= 566550) storage 785a0dd8-bdae-40c1-94f6-a6b639dafb14 > 2018-07-31 21:42:46,821 [IPC Server handler 3 on port1] INFO net.Network= Topology (NetworkTopology.java:add(145)) - Adding a new node: /default-rack= /x.x.x.x:port2 > 2018-07-31 21:42:46,821 [IPC Server handler 3 on port1] INFO blockmanage= ment.BlockReportLeaseManager (BlockReportLeaseManager.java:registerNode(204= )) - Registered DN 785a0dd8-bdae-40c1-94f6-a6b639dafb14 (x.x.x.x:port2). > 2018-07-31 21:42:46,823 [BP-1309332346-172.26.0.186-1533098566550 heartb= eating to localhost/x.x.x.x:port1] INFO datanode.DataNode (BPServiceActor.j= ava:register(782)) - Block pool Block pool BP-1309332346-172.26.0.186-15330= 98566550 (Datanode Uuid 785a0dd8-bdae-40c1-94f6-a6b639dafb14) service to lo= calhost/x.x.x.x:port1 successfully registered with NN > 2018-07-31 21:42:46,823 [BP-1309332346-172.26.0.186-1533098566550 heartb= eating to localhost/x.x.x.x:port1] INFO datanode.DataNode (BPServiceActor.j= ava:offerService(612)) - For namenode localhost/x.x.x.x:port1 using BLOCKRE= PORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial del= ay: 0msec; heartBeatInterval=3D3000 > 2018-07-31 21:42:46,824 [VolumeScannerThread(/tmp/tmp.u8GhlLcdks/src/CDH= /hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1)] I= NFO datanode.VolumeScanner (VolumeScanner.java:findNextUsableBlockIter(403)= ) - VolumeScanner(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/ha= doop-hdfs/target/test/data/dfs/data/data1, DS-1d19d47a-b472-463a-a1ed-43be0= 73676f5): no suitable block pools found to scan. Waiting 1814399995 ms. > 2018-07-31 21:42:46,824 [VolumeScannerThread(/tmp/tmp.u8GhlLcdks/src/CDH= /hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2)] I= NFO datanode.VolumeScanner (VolumeScanner.java:findNextUsableBlockIter(403)= ) - VolumeScanner(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/ha= doop-hdfs/target/test/data/dfs/data/data2, DS-c307fe32-1d7b-4c82-9d8f-6c43b= b6ebe3f): no suitable block pools found to scan. Waiting 1814399995 ms. 
> 2018-07-31 21:42:46,825 [IPC Server handler 4 on port1] INFO blockmanage= ment.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(949)) - Addi= ng new storage ID DS-1d19d47a-b472-463a-a1ed-43be073676f5 for DN x.x.x.x:po= rt2 > 2018-07-31 21:42:46,825 [IPC Server handler 4 on port1] INFO blockmanage= ment.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(949)) - Addi= ng new storage ID DS-c307fe32-1d7b-4c82-9d8f-6c43bb6ebe3f for DN x.x.x.x:po= rt2 > 2018-07-31 21:42:46,827 [Block report processor] INFO BlockStateChange (= BlockManager.java:processReport(2462)) - BLOCK* processReport 0xa2b93752cc4= 45e4b: Processing first storage report for DS-1d19d47a-b472-463a-a1ed-43be0= 73676f5 from datanode 785a0dd8-bdae-40c1-94f6-a6b639dafb14 > 2018-07-31 21:42:46,828 [Block report processor] INFO BlockStateChange (= BlockManager.java:processReport(2488)) - BLOCK* processReport 0xa2b93752cc4= 45e4b: from storage DS-1d19d47a-b472-463a-a1ed-43be073676f5 node DatanodeRe= gistration(x.x.x.x:port2, datanodeUuid=3D785a0dd8-bdae-40c1-94f6-a6b639dafb= 14, infoPort=3Dport3, infoSecurePort=3D0, ipcPort=3Dport4, storageInfo=3Dlv= =3D-57;cid=3DtestClusterID;nsid=3D1027244560;c=3D1533098566550), blocks: 0,= hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 > 2018-07-31 21:42:46,828 [Block report processor] INFO BlockStateChange (= BlockManager.java:processReport(2462)) - BLOCK* processReport 0xa2b93752cc4= 45e4b: Processing first storage report for DS-c307fe32-1d7b-4c82-9d8f-6c43b= b6ebe3f from datanode 785a0dd8-bdae-40c1-94f6-a6b639dafb14 > 2018-07-31 21:42:46,828 [Thread-2237] INFO datanode.DataNode (DataNode.j= ava:checkDatanodeUuid(1517)) - Generated and persisted new Datanode UUID 48= eaddaf-0bf7-47e5-bc33-4a640b4cde9d > 2018-07-31 21:42:46,828 [Block report processor] INFO BlockStateChange (= BlockManager.java:processReport(2488)) - BLOCK* processReport 0xa2b93752cc4= 45e4b: from storage DS-c307fe32-1d7b-4c82-9d8f-6c43bb6ebe3f node DatanodeRe= gistration(x.x.x.x:port2, datanodeUuid=3D785a0dd8-bdae-40c1-94f6-a6b639dafb= 14, infoPort=3Dport3, infoSecurePort=3D0, ipcPort=3Dport4, storageInfo=3Dlv= =3D-57;cid=3DtestClusterID;nsid=3D1027244560;c=3D1533098566550), blocks: 0,= hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 > 2018-07-31 21:42:46,830 [Thread-2237] INFO impl.FsDatasetImpl (FsVolumeL= ist.java:addVolume(306)) - Added new volume: DS-386f4969-f72d-4a3a-baad-ef5= 9d4bc3016 > 2018-07-31 21:42:46,830 [Thread-2237] INFO impl.FsDatasetImpl (FsDataset= Impl.java:addVolume(431)) - Added volume - [DISK]file:/tmp/tmp.u8GhlLcdks/s= rc/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dat= a3, StorageType: DISK > 2018-07-31 21:42:46,830 [BP-1309332346-172.26.0.186-1533098566550 heartb= eating to localhost/x.x.x.x:port1] INFO datanode.DataNode (BPServiceActor.j= ava:blockReport(422)) - Successfully sent block report 0xa2b93752cc445e4b, = containing 2 storage report(s), of which we sent 2. The reports had 0 total= blocks and used 1 RPC(s). This took 0 msec to generate and 3 msecs for RPC= and NN processing. Got back one command: FinalizeCommand/5. 
> 2018-07-31 21:42:46,830 [BP-1309332346-172.26.0.186-1533098566550 heartb= eating to localhost/x.x.x.x:port1] INFO datanode.DataNode (BPOfferService.j= ava:processCommandFromActive(759)) - Got finalize command for block pool BP= -1309332346-172.26.0.186-1533098566550 > 2018-07-31 21:42:46,831 [Thread-2237] INFO impl.FsDatasetImpl (FsVolumeL= ist.java:addVolume(306)) - Added new volume: DS-cac8e9dc-026d-474d-a76b-57d= 577c44846 > 2018-07-31 21:42:46,831 [Thread-2237] INFO impl.FsDatasetImpl (FsDataset= Impl.java:addVolume(431)) - Added volume - [DISK]file:/tmp/tmp.u8GhlLcdks/s= rc/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dat= a4, StorageType: DISK > 2018-07-31 21:42:46,831 [Thread-2237] INFO impl.FsDatasetImpl (FsDataset= Impl.java:registerMBean(2169)) - Registered FSDatasetState MBean > 2018-07-31 21:42:46,832 [Thread-2237] INFO checker.ThrottledAsyncChecker= (ThrottledAsyncChecker.java:schedule(122)) - Scheduling a check for /tmp/t= mp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/da= ta/dfs/data/data3 > 2018-07-31 21:42:46,833 [Thread-2237] INFO checker.DatasetVolumeChecker = (DatasetVolumeChecker.java:checkAllVolumes(219)) - Scheduled health check f= or volume /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdf= s/target/test/data/dfs/data/data3 > 2018-07-31 21:42:46,833 [Thread-2237] INFO checker.ThrottledAsyncChecker= (ThrottledAsyncChecker.java:schedule(122)) - Scheduling a check for /tmp/t= mp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/da= ta/dfs/data/data4 > 2018-07-31 21:42:46,833 [Thread-2237] INFO checker.DatasetVolumeChecker = (DatasetVolumeChecker.java:checkAllVolumes(219)) - Scheduled health check f= or volume /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdf= s/target/test/data/dfs/data/data4 > 2018-07-31 21:42:46,833 [Thread-2237] INFO impl.FsDatasetImpl (FsDataset= Impl.java:addBlockPool(2655)) - Adding block pool BP-1309332346-172.26.0.18= 6-1533098566550 > 2018-07-31 21:42:46,834 [Thread-2262] INFO impl.FsDatasetImpl (FsVolumeL= ist.java:run(409)) - Scanning block pool BP-1309332346-172.26.0.186-1533098= 566550 on volume /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/had= oop-hdfs/target/test/data/dfs/data/data3... > 2018-07-31 21:42:46,834 [Thread-2263] INFO impl.FsDatasetImpl (FsVolumeL= ist.java:run(409)) - Scanning block pool BP-1309332346-172.26.0.186-1533098= 566550 on volume /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/had= oop-hdfs/target/test/data/dfs/data/data4... 
> 2018-07-31 21:42:46,840 [Thread-2263] INFO impl.FsDatasetImpl (FsVolumeL= ist.java:run(414)) - Time taken to scan block pool BP-1309332346-172.26.0.1= 86-1533098566550 on /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/= hadoop-hdfs/target/test/data/dfs/data/data4: 6ms > 2018-07-31 21:42:46,842 [Thread-2262] INFO impl.FsDatasetImpl (FsVolumeL= ist.java:run(414)) - Time taken to scan block pool BP-1309332346-172.26.0.1= 86-1533098566550 on /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/= hadoop-hdfs/target/test/data/dfs/data/data3: 8ms > 2018-07-31 21:42:46,842 [Thread-2237] INFO impl.FsDatasetImpl (FsVolumeL= ist.java:addBlockPool(440)) - Total time to scan all replicas for block poo= l BP-1309332346-172.26.0.186-1533098566550: 9ms > 2018-07-31 21:42:46,842 [Thread-2266] INFO impl.FsDatasetImpl (FsVolumeL= ist.java:run(198)) - Adding replicas to map for block pool BP-1309332346-17= 2.26.0.186-1533098566550 on volume /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoo= p-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3... > 2018-07-31 21:42:46,843 [Thread-2266] INFO impl.BlockPoolSlice (BlockPoo= lSlice.java:readReplicasFromCache(770)) - Replica Cache file: /tmp/tmp.u8Gh= lLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/= data/data3/current/BP-1309332346-172.26.0.186-1533098566550/current/replica= s doesn't exist=20 > 2018-07-31 21:42:46,843 [Thread-2267] INFO impl.FsDatasetImpl (FsVolumeL= ist.java:run(198)) - Adding replicas to map for block pool BP-1309332346-17= 2.26.0.186-1533098566550 on volume /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoo= p-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4... > 2018-07-31 21:42:46,843 [Thread-2266] INFO impl.FsDatasetImpl (FsVolumeL= ist.java:run(203)) - Time to add replicas to map for block pool BP-13093323= 46-172.26.0.186-1533098566550 on volume /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/= hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3: 0ms > 2018-07-31 21:42:46,843 [Thread-2267] INFO impl.BlockPoolSlice (BlockPoo= lSlice.java:readReplicasFromCache(770)) - Replica Cache file: /tmp/tmp.u8Gh= lLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/= data/data4/current/BP-1309332346-172.26.0.186-1533098566550/current/replica= s doesn't exist=20 > 2018-07-31 21:42:46,843 [Thread-2267] INFO impl.FsDatasetImpl (FsVolumeL= ist.java:run(203)) - Time to add replicas to map for block pool BP-13093323= 46-172.26.0.186-1533098566550 on volume /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/= hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4: 0ms > 2018-07-31 21:42:46,843 [Thread-2237] INFO impl.FsDatasetImpl (FsVolumeL= ist.java:getAllVolumesMap(229)) - Total time to add all replicas to map: 1m= s > 2018-07-31 21:42:46,844 [VolumeScannerThread(/tmp/tmp.u8GhlLcdks/src/CDH= /hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3)] I= NFO datanode.VolumeScanner (VolumeScanner.java:findNextUsableBlockIter(386)= ) - Now scanning bpid BP-1309332346-172.26.0.186-1533098566550 on volume /t= mp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/tes= t/data/dfs/data/data3 > 2018-07-31 21:42:46,844 [VolumeScannerThread(/tmp/tmp.u8GhlLcdks/src/CDH= /hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4)] I= NFO datanode.VolumeScanner (VolumeScanner.java:findNextUsableBlockIter(386)= ) - Now scanning bpid BP-1309332346-172.26.0.186-1533098566550 on volume /t= mp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/tes= 
t/data/dfs/data/data4 > 2018-07-31 21:42:46,844 [VolumeScannerThread(/tmp/tmp.u8GhlLcdks/src/CDH= /hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3)] I= NFO datanode.VolumeScanner (VolumeScanner.java:runLoop(544)) - VolumeScanne= r(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target= /test/data/dfs/data/data3, DS-386f4969-f72d-4a3a-baad-ef59d4bc3016): finish= ed scanning block pool BP-1309332346-172.26.0.186-1533098566550 > 2018-07-31 21:42:46,844 [VolumeScannerThread(/tmp/tmp.u8GhlLcdks/src/CDH= /hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4)] I= NFO datanode.VolumeScanner (VolumeScanner.java:runLoop(544)) - VolumeScanne= r(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target= /test/data/dfs/data/data4, DS-cac8e9dc-026d-474d-a76b-57d577c44846): finish= ed scanning block pool BP-1309332346-172.26.0.186-1533098566550 > 2018-07-31 21:42:46,844 [Thread-2237] INFO datanode.DirectoryScanner (Di= rectoryScanner.java:start(281)) - Periodic Directory Tree Verification scan= starting at 7/31/18 10:08 PM with interval of 21600000ms > 2018-07-31 21:42:46,844 [VolumeScannerThread(/tmp/tmp.u8GhlLcdks/src/CDH= /hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3)] I= NFO datanode.VolumeScanner (VolumeScanner.java:findNextUsableBlockIter(403)= ) - VolumeScanner(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/ha= doop-hdfs/target/test/data/dfs/data/data3, DS-386f4969-f72d-4a3a-baad-ef59d= 4bc3016): no suitable block pools found to scan. Waiting 1814400000 ms. > 2018-07-31 21:42:46,844 [VolumeScannerThread(/tmp/tmp.u8GhlLcdks/src/CDH= /hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4)] I= NFO datanode.VolumeScanner (VolumeScanner.java:findNextUsableBlockIter(403)= ) - VolumeScanner(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/ha= doop-hdfs/target/test/data/dfs/data/data4, DS-cac8e9dc-026d-474d-a76b-57d57= 7c44846): no suitable block pools found to scan. Waiting 1814400000 ms. > 2018-07-31 21:42:46,845 [BP-1309332346-172.26.0.186-1533098566550 heartb= eating to localhost/x.x.x.x:port1] INFO datanode.DataNode (BPServiceActor.j= ava:register(763)) - Block pool BP-1309332346-172.26.0.186-1533098566550 (D= atanode Uuid 48eaddaf-0bf7-47e5-bc33-4a640b4cde9d) service to localhost/x.x= .x.x:port1 beginning handshake with NN > 2018-07-31 21:42:46,846 [IPC Server handler 6 on port1] INFO hdfs.StateC= hange (DatanodeManager.java:registerDatanode(1038)) - BLOCK* registerDatano= de: from DatanodeRegistration(x.x.x.x:port5, datanodeUuid=3D48eaddaf-0bf7-4= 7e5-bc33-4a640b4cde9d, infoPort=3Dport6, infoSecurePort=3D0, ipcPort=3Dport= 7, storageInfo=3Dlv=3D-57;cid=3DtestClusterID;nsid=3D1027244560;c=3D1533098= 566550) storage 48eaddaf-0bf7-47e5-bc33-4a640b4cde9d > 2018-07-31 21:42:46,846 [IPC Server handler 6 on port1] INFO net.Network= Topology (NetworkTopology.java:add(145)) - Adding a new node: /default-rack= /x.x.x.x:port5 > 2018-07-31 21:42:46,846 [IPC Server handler 6 on port1] INFO blockmanage= ment.BlockReportLeaseManager (BlockReportLeaseManager.java:registerNode(204= )) - Registered DN 48eaddaf-0bf7-47e5-bc33-4a640b4cde9d (x.x.x.x:port5). 
> 2018-07-31 21:42:46,847 [BP-1309332346-172.26.0.186-1533098566550 heartb= eating to localhost/x.x.x.x:port1] INFO datanode.DataNode (BPServiceActor.j= ava:register(782)) - Block pool Block pool BP-1309332346-172.26.0.186-15330= 98566550 (Datanode Uuid 48eaddaf-0bf7-47e5-bc33-4a640b4cde9d) service to lo= calhost/x.x.x.x:port1 successfully registered with NN > 2018-07-31 21:42:46,847 [BP-1309332346-172.26.0.186-1533098566550 heartb= eating to localhost/x.x.x.x:port1] INFO datanode.DataNode (BPServiceActor.j= ava:offerService(612)) - For namenode localhost/x.x.x.x:port1 using BLOCKRE= PORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial del= ay: 0msec; heartBeatInterval=3D3000 > 2018-07-31 21:42:46,848 [IPC Server handler 7 on port1] INFO blockmanage= ment.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(949)) - Addi= ng new storage ID DS-386f4969-f72d-4a3a-baad-ef59d4bc3016 for DN x.x.x.x:po= rt5 > 2018-07-31 21:42:46,850 [IPC Server handler 7 on port1] INFO blockmanage= ment.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(949)) - Addi= ng new storage ID DS-cac8e9dc-026d-474d-a76b-57d577c44846 for DN x.x.x.x:po= rt5 > 2018-07-31 21:42:46,852 [Block report processor] INFO BlockStateChange (= BlockManager.java:processReport(2462)) - BLOCK* processReport 0xcb3b9727741= 549e0: Processing first storage report for DS-cac8e9dc-026d-474d-a76b-57d57= 7c44846 from datanode 48eaddaf-0bf7-47e5-bc33-4a640b4cde9d > 2018-07-31 21:42:46,852 [Block report processor] INFO BlockStateChange (= BlockManager.java:processReport(2488)) - BLOCK* processReport 0xcb3b9727741= 549e0: from storage DS-cac8e9dc-026d-474d-a76b-57d577c44846 node DatanodeRe= gistration(x.x.x.x:port5, datanodeUuid=3D48eaddaf-0bf7-47e5-bc33-4a640b4cde= 9d, infoPort=3Dport6, infoSecurePort=3D0, ipcPort=3Dport7, storageInfo=3Dlv= =3D-57;cid=3DtestClusterID;nsid=3D1027244560;c=3D1533098566550), blocks: 0,= hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 > 2018-07-31 21:42:46,852 [Block report processor] INFO BlockStateChange (= BlockManager.java:processReport(2462)) - BLOCK* processReport 0xcb3b9727741= 549e0: Processing first storage report for DS-386f4969-f72d-4a3a-baad-ef59d= 4bc3016 from datanode 48eaddaf-0bf7-47e5-bc33-4a640b4cde9d > 2018-07-31 21:42:46,852 [Block report processor] INFO BlockStateChange (= BlockManager.java:processReport(2488)) - BLOCK* processReport 0xcb3b9727741= 549e0: from storage DS-386f4969-f72d-4a3a-baad-ef59d4bc3016 node DatanodeRe= gistration(x.x.x.x:port5, datanodeUuid=3D48eaddaf-0bf7-47e5-bc33-4a640b4cde= 9d, infoPort=3Dport6, infoSecurePort=3D0, ipcPort=3Dport7, storageInfo=3Dlv= =3D-57;cid=3DtestClusterID;nsid=3D1027244560;c=3D1533098566550), blocks: 0,= hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 > 2018-07-31 21:42:46,853 [BP-1309332346-172.26.0.186-1533098566550 heartb= eating to localhost/x.x.x.x:port1] INFO datanode.DataNode (BPServiceActor.j= ava:blockReport(422)) - Successfully sent block report 0xcb3b9727741549e0, = containing 2 storage report(s), of which we sent 2. The reports had 0 total= blocks and used 1 RPC(s). This took 0 msec to generate and 3 msecs for RPC= and NN processing. Got back one command: FinalizeCommand/5. 
> 2018-07-31 21:42:46,853 [BP-1309332346-172.26.0.186-1533098566550 heartb= eating to localhost/x.x.x.x:port1] INFO datanode.DataNode (BPOfferService.j= ava:processCommandFromActive(759)) - Got finalize command for block pool BP= -1309332346-172.26.0.186-1533098566550 > 2018-07-31 21:42:46,878 [IPC Server handler 9 on port1] INFO FSNamesyste= m.audit (FSNamesystem.java:logAuditMessage(7816)) - allowed=3Dtrue ugi=3Dje= nkins (auth:SIMPLE) ip=3D/x.x.x.x cmd=3DdatanodeReport src=3Dnull dst=3Dnul= l perm=3Dnull proto=3Drpc > 2018-07-31 21:42:46,878 [main] INFO hdfs.MiniDFSCluster (MiniDFSCluster.= java:waitActive(2629)) - Cluster is active > 2018-07-31 21:42:46,880 [main] DEBUG hdfs.DFSClient (DFSClient.java:(311)) - Sets dfs.client.block.write.replace-datanode-on-failure.min-repl= ication to 0 > 2018-07-31 21:42:46,881 [main] INFO hdfs.MiniDFSCluster (MiniDFSCluster.= java:shutdown(1965)) - Shutting down the Mini HDFS Cluster > 2018-07-31 21:42:46,881 [main] INFO hdfs.MiniDFSCluster (MiniDFSCluster.= java:shutdownDataNode(2013)) - Shutting down DataNode 1 > 2018-07-31 21:42:46,881 [main] WARN datanode.DirectoryScanner (Directory= Scanner.java:shutdown(340)) - DirectoryScanner: shutdown has been called > 2018-07-31 21:42:46,881 [org.apache.hadoop.hdfs.server.datanode.DataXcei= verServer@6aba5d30] INFO datanode.DataNode (DataXceiverServer.java:closeAll= Peers(281)) - Closing all peers. > 2018-07-31 21:42:46,882 [VolumeScannerThread(/tmp/tmp.u8GhlLcdks/src/CDH= /hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3)] I= NFO datanode.VolumeScanner (VolumeScanner.java:run(642)) - VolumeScanner(/t= mp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/tes= t/data/dfs/data/data3, DS-386f4969-f72d-4a3a-baad-ef59d4bc3016) exiting. > 2018-07-31 21:42:46,882 [VolumeScannerThread(/tmp/tmp.u8GhlLcdks/src/CDH= /hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4)] I= NFO datanode.VolumeScanner (VolumeScanner.java:run(642)) - VolumeScanner(/t= mp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/tes= t/data/dfs/data/data4, DS-cac8e9dc-026d-474d-a76b-57d577c44846) exiting. 
> 2018-07-31 21:42:46,887 [main] INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.w.WebAppContext@41ffaeb8{/,null,UNAVAILABLE}{/datanode}
> 2018-07-31 21:42:46,888 [main] INFO server.AbstractConnector (AbstractConnector.java:doStop(318)) - Stopped ServerConnector@43f0c2d1{HTTP/1.1,[http/1.1]}{localhost:0}
> 2018-07-31 21:42:46,888 [main] INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.s.ServletContextHandler@7acfb656{/static,file:///tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/webapps/static/,UNAVAILABLE}
> 2018-07-31 21:42:46,888 [main] INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.s.ServletContextHandler@2c6aed22{/logs,file:///tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/log/,UNAVAILABLE}
> 2018-07-31 21:42:46,890 [main] INFO ipc.Server (Server.java:stop(3074)) - Stopping server on port7
> 2018-07-31 21:42:46,891 [IPC Server listener on port7] INFO ipc.Server (Server.java:run(1178)) - Stopping IPC Server listener on port7
> 2018-07-31 21:42:46,892 [BP-1309332346-172.26.0.186-1533098566550 heartbeating to localhost/x.x.x.x:port1] WARN datanode.IncrementalBlockReportManager (IncrementalBlockReportManager.java:waitTillNextIBR(160)) - IncrementalBlockReportManager interrupted
> 2018-07-31 21:42:46,891 [IPC Server Responder] INFO ipc.Server (Server.java:run(1312)) - Stopping IPC Server Responder
> 2018-07-31 21:42:46,892 [BP-1309332346-172.26.0.186-1533098566550 heartbeating to localhost/x.x.x.x:port1] WARN datanode.DataNode (BPServiceActor.java:run(852)) - Ending block pool service for: Block pool BP-1309332346-172.26.0.186-1533098566550 (Datanode Uuid 48eaddaf-0bf7-47e5-bc33-4a640b4cde9d) service to localhost/x.x.x.x:port1
> 2018-07-31 21:42:46,892 [BP-1309332346-172.26.0.186-1533098566550 heartbeating to localhost/x.x.x.x:port1] INFO datanode.DataNode (BlockPoolManager.java:remove(102)) - Removed Block pool BP-1309332346-172.26.0.186-1533098566550 (Datanode Uuid 48eaddaf-0bf7-47e5-bc33-4a640b4cde9d)
> 2018-07-31 21:42:46,892 [BP-1309332346-172.26.0.186-1533098566550 heartbeating to localhost/x.x.x.x:port1] INFO impl.FsDatasetImpl (FsDatasetImpl.java:shutdownBlockPool(2666)) - Removing block pool BP-1309332346-172.26.0.186-1533098566550
> 2018-07-31 21:42:46,893 [refreshUsed-/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4/current/BP-1309332346-172.26.0.186-1533098566550] WARN fs.CachingGetSpaceUsed (CachingGetSpaceUsed.java:run(183)) - Thread Interrupted waiting to refresh disk information: sleep interrupted
> 2018-07-31 21:42:46,893 [refreshUsed-/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3/current/BP-1309332346-172.26.0.186-1533098566550] WARN fs.CachingGetSpaceUsed (CachingGetSpaceUsed.java:run(183)) - Thread Interrupted waiting to refresh disk information: sleep interrupted
> 2018-07-31 21:42:46,894 [main] INFO impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(191)) - Shutting down all async disk service threads
> 2018-07-31 21:42:46,894 [main] INFO impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(199)) - All async disk service threads have been shut down
> 2018-07-31 21:42:46,895 [main] INFO impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(176)) - Shutting down all async lazy persist service threads
> 2018-07-31 21:42:46,895 [main] INFO impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(183)) - All async lazy persist service threads have been shut down
> 2018-07-31 21:42:46,895 [main] INFO datanode.DataNode (DataNode.java:shutdown(2113)) - Shutdown complete.
> 2018-07-31 21:42:46,896 [main] INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNode(2013)) - Shutting down DataNode 0
> 2018-07-31 21:42:46,896 [main] WARN datanode.DirectoryScanner (DirectoryScanner.java:shutdown(340)) - DirectoryScanner: shutdown has been called
> 2018-07-31 21:42:46,896 [org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@74db12c2] INFO datanode.DataNode (DataXceiverServer.java:closeAllPeers(281)) - Closing all peers.
> 2018-07-31 21:42:46,897 [VolumeScannerThread(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1)] INFO datanode.VolumeScanner (VolumeScanner.java:run(642)) - VolumeScanner(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1, DS-1d19d47a-b472-463a-a1ed-43be073676f5) exiting.
> 2018-07-31 21:42:46,897 [VolumeScannerThread(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2)] INFO datanode.VolumeScanner (VolumeScanner.java:run(642)) - VolumeScanner(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2, DS-c307fe32-1d7b-4c82-9d8f-6c43bb6ebe3f) exiting.
> 2018-07-31 21:42:46,901 [main] INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.w.WebAppContext@68fe48d7{/,null,UNAVAILABLE}{/datanode}
> 2018-07-31 21:42:46,902 [main] INFO server.AbstractConnector (AbstractConnector.java:doStop(318)) - Stopped ServerConnector@379ce046{HTTP/1.1,[http/1.1]}{localhost:0}
> 2018-07-31 21:42:46,902 [main] INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.s.ServletContextHandler@67594471{/static,file:///tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/webapps/static/,UNAVAILABLE}
> 2018-07-31 21:42:46,902 [main] INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.s.ServletContextHandler@38b5f25{/logs,file:///tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/log/,UNAVAILABLE}
> 2018-07-31 21:42:46,904 [main] INFO ipc.Server (Server.java:stop(3074)) - Stopping server on port4
> 2018-07-31 21:42:46,905 [IPC Server listener on port4] INFO ipc.Server (Server.java:run(1178)) - Stopping IPC Server listener on port4
> 2018-07-31 21:42:46,906 [IPC Server Responder] INFO ipc.Server (Server.java:run(1312)) - Stopping IPC Server Responder
> 2018-07-31 21:42:46,906 [BP-1309332346-172.26.0.186-1533098566550 heartbeating to localhost/x.x.x.x:port1] WARN datanode.IncrementalBlockReportManager (IncrementalBlockReportManager.java:waitTillNextIBR(160)) - IncrementalBlockReportManager interrupted
> 2018-07-31 21:42:46,906 [BP-1309332346-172.26.0.186-1533098566550 heartbeating to localhost/x.x.x.x:port1] WARN datanode.DataNode (BPServiceActor.java:run(852)) - Ending block pool service for: Block pool BP-1309332346-172.26.0.186-1533098566550 (Datanode Uuid 785a0dd8-bdae-40c1-94f6-a6b639dafb14) service to localhost/x.x.x.x:port1
> 2018-07-31 21:42:47,006 [BP-1309332346-172.26.0.186-1533098566550 heartbeating to localhost/x.x.x.x:port1] INFO datanode.DataNode (BlockPoolManager.java:remove(102)) - Removed Block pool BP-1309332346-172.26.0.186-1533098566550 (Datanode Uuid 785a0dd8-bdae-40c1-94f6-a6b639dafb14)
> 2018-07-31 21:42:47,007 [BP-1309332346-172.26.0.186-1533098566550 heartbeating to localhost/x.x.x.x:port1] INFO impl.FsDatasetImpl (FsDatasetImpl.java:shutdownBlockPool(2666)) - Removing block pool BP-1309332346-172.26.0.186-1533098566550
> 2018-07-31 21:42:47,007 [refreshUsed-/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current/BP-1309332346-172.26.0.186-1533098566550] WARN fs.CachingGetSpaceUsed (CachingGetSpaceUsed.java:run(183)) - Thread Interrupted waiting to refresh disk information: sleep interrupted
> 2018-07-31 21:42:47,007 [refreshUsed-/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2/current/BP-1309332346-172.26.0.186-1533098566550] WARN fs.CachingGetSpaceUsed (CachingGetSpaceUsed.java:run(183)) - Thread Interrupted waiting to refresh disk information: sleep interrupted
> 2018-07-31 21:42:47,008 [main] INFO impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(191)) - Shutting down all async disk service threads
> 2018-07-31 21:42:47,008 [main] INFO impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(199)) - All async disk service threads have been shut down
> 2018-07-31 21:42:47,009 [main] INFO impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(176)) - Shutting down all async lazy persist service threads
> 2018-07-31 21:42:47,009 [main] INFO impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(183)) - All async lazy persist service threads have been shut down
> 2018-07-31 21:42:47,009 [main] INFO datanode.DataNode (DataNode.java:shutdown(2113)) - Shutdown complete.
> 2018-07-31 21:42:47,010 [main] INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:stopAndJoinNameNode(2046)) - Shutting down the namenode
> 2018-07-31 21:42:47,010 [main] INFO namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1339)) - Stopping services started for active state
> 2018-07-31 21:42:47,010 [main] INFO namenode.FSEditLog (FSEditLog.java:endCurrentLogSegment(1407)) - Ending log segment 1, 1
> 2018-07-31 21:42:47,010 [org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@263bbfeb] INFO namenode.FSNamesystem (FSNamesystem.java:run(4010)) - NameNodeEditLogRoller was interrupted, exiting
> 2018-07-31 21:42:47,011 [org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@31edeac] INFO namenode.FSNamesystem (FSNamesystem.java:run(4101)) - LazyPersistFileScrubber was interrupted, exiting
> 2018-07-31 21:42:47,011 [main] INFO namenode.FSEditLog (FSEditLog.java:printStatistics(775)) - Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 1 1
> 2018-07-31 21:42:47,011 [main] INFO namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(143)) - Finalizing edits file /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name-0-1/current/edits_inprogress_0000000000000000001 -> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name-0-1/current/edits_0000000000000000001-0000000000000000002
> 2018-07-31 21:42:47,012 [main] INFO namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(143)) - Finalizing edits file /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name-0-2/current/edits_inprogress_0000000000000000001 -> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name-0-2/current/edits_0000000000000000001-0000000000000000002
> 2018-07-31 21:42:47,012 [FSEditLogAsync] INFO namenode.FSEditLog (FSEditLogAsync.java:run(198)) - FSEditLogAsync was interrupted, exiting
> 2018-07-31 21:42:47,013 [CacheReplicationMonitor(1111752355)] INFO blockmanagement.CacheReplicationMonitor (CacheReplicationMonitor.java:run(169)) - Shutting down CacheReplicationMonitor
> 2018-07-31 21:42:47,014 [main] INFO ipc.Server (Server.java:stop(3074)) - Stopping server on port1
> 2018-07-31 21:42:47,015 [IPC Server Responder] INFO ipc.Server (Server.java:run(1312)) - Stopping IPC Server Responder
> 2018-07-31 21:42:47,015 [IPC Server listener on port1] INFO ipc.Server (Server.java:run(1178)) - Stopping IPC Server listener on port1
> 2018-07-31 21:42:47,015 [StorageInfoMonitor] INFO blockmanagement.BlockManager (BlockManager.java:run(4479)) - Stopping thread.
> 2018-07-31 21:42:47,017 [RedundancyMonitor] INFO blockmanagement.BlockManager (BlockManager.java:run(4444)) - Stopping RedundancyMonitor.
> 2018-07-31 21:42:47,026 [main] INFO namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1339)) - Stopping services started for active state
> 2018-07-31 21:42:47,026 [main] INFO namenode.FSNamesystem (FSNamesystem.java:stopStandbyServices(1435)) - Stopping services started for standby state
> 2018-07-31 21:42:47,028 [main] INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.w.WebAppContext@1c758545{/,null,UNAVAILABLE}{/hdfs}
> 2018-07-31 21:42:47,029 [main] INFO server.AbstractConnector (AbstractConnector.java:doStop(318)) - Stopped ServerConnector@117bcfdc{HTTP/1.1,[http/1.1]}{localhost:0}
> 2018-07-31 21:42:47,029 [main] INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.s.ServletContextHandler@6f4ade6e{/static,file:///tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/webapps/static/,UNAVAILABLE}
> 2018-07-31 21:42:47,029 [main] INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.s.ServletContextHandler@514cd540{/logs,file:///tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/log/,UNAVAILABLE}
> 2018-07-31 21:42:47,030 [main] INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(210)) - Stopping DataNode metrics system...
> 2018-07-31 21:42:47,031 [main] INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(216)) - DataNode metrics system stopped.
> 2018-07-31 21:42:47,031 [main] INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(607)) - DataNode metrics system shutdown complete.
> {noformat}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org