hadoop-hdfs-issues mailing list archives

From "Wang Hao (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-8851) datanode fails to start due to a bad disk
Date Thu, 06 Aug 2015 03:48:04 GMT

    [ https://issues.apache.org/jira/browse/HDFS-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14659431#comment-14659431 ]

Wang Hao commented on HDFS-8851:
--------------------------------

Here is the full log:
STARTUP_MSG:   java = 1.7.0_71
************************************************************/
2015-08-04 10:18:50,204 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX
signal handlers for [TERM, HUP, INT]
2015-08-04 10:18:50,344 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data1/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:50,344 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data2/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:50,345 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data3/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:50,345 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data4/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:50,345 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data5/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:50,345 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data6/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:50,345 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data7/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:50,345 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data8/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:50,345 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data9/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:50,345 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data10/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:50,345 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data11/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:50,346 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data12/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:50,673 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties
from hadoop-metrics2.properties
2015-08-04 10:18:50,692 INFO org.apache.hadoop.metrics2.impl.MetricsSinkAdapter: Sink ganglia
started
2015-08-04 10:18:50,705 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled
snapshot period at 10 second(s).
2015-08-04 10:18:50,705 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics
system started
2015-08-04 10:18:50,706 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname
is hadoop070.dx.momo.com
2015-08-04 10:18:50,707 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode
with maxLockedMemory = 0
2015-08-04 10:18:50,725 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming
server at /0.0.0.0:50010
2015-08-04 10:18:50,728 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith
is 67108864 bytes/s
2015-08-04 10:18:50,728 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Number threads
for balancing is 30
2015-08-04 10:18:50,791 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log)
via org.mortbay.log.Slf4jLog
2015-08-04 10:18:50,794 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.datanode
is not defined
2015-08-04 10:18:50,804 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety'
(class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-08-04 10:18:50,807 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2015-08-04 10:18:50,807 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2015-08-04 10:18:50,807 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2015-08-04 10:18:50,820 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage:
packageName=org.apache.hadoop.hdfs.server.datanode.web.resources;org.apache.hadoop.hdfs.web.resources,
pathSpec=/webhdfs/v1/*
2015-08-04 10:18:50,822 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50075
2015-08-04 10:18:50,822 INFO org.mortbay.log: jetty-6.1.26
2015-08-04 10:18:50,983 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50075
2015-08-04 10:18:51,078 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnUserName =
hadoop
2015-08-04 10:18:51,078 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: supergroup =
supergroup
2015-08-04 10:18:51,107 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class
java.util.concurrent.LinkedBlockingQueue
2015-08-04 10:18:51,120 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port
50020
2015-08-04 10:18:51,120 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #2 for port
50020
2015-08-04 10:18:51,120 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #3 for port
50020
2015-08-04 10:18:51,120 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #4 for port
50020
2015-08-04 10:18:51,120 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #5 for port
50020
2015-08-04 10:18:51,120 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #6 for port
50020
2015-08-04 10:18:51,121 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #7 for port
50020
2015-08-04 10:18:51,121 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #8 for port
50020
2015-08-04 10:18:51,121 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #9 for port
50020
2015-08-04 10:18:51,121 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #10 for
port 50020
2015-08-04 10:18:51,142 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server
at /0.0.0.0:50020
2015-08-04 10:18:51,208 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request
received for nameservices: nameservice1
2015-08-04 10:18:51,226 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices
for nameservices: nameservice1
2015-08-04 10:18:51,230 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data1/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:51,231 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data2/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:51,231 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data3/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:51,231 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data4/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:51,231 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data5/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:51,231 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data6/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:51,231 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data7/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:51,231 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data8/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:51,231 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data9/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:51,231 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data10/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:51,231 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data11/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:51,231 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data12/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:51,234 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering>
(Datanode Uuid unassigned) service to hadoop002.dx.momo.com/10.84.100.191:8022 starting to
offer service
2015-08-04 10:18:51,234 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data1/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:51,234 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data2/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:51,234 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data3/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:51,234 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data4/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:51,234 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data5/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:51,234 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data6/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:51,234 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data7/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:51,234 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data8/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:51,234 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data9/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:51,235 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data10/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:51,235 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data11/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:51,235 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data12/dfs/dn
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-04 10:18:51,235 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering>
(Datanode Uuid unassigned) service to hadoop001.dx.momo.com/10.84.100.171:8022 starting to
offer service
2015-08-04 10:18:51,241 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2015-08-04 10:18:51,241 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2015-08-04 10:18:51,405 INFO org.apache.hadoop.hdfs.server.common.Storage: Data-node version:
-55 and name-node layout version: -57
2015-08-04 10:18:51,408 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /data1/dfs/dn/in_use.lock
acquired by nodename 23928@hadoop070.dx.momo.com
2015-08-04 10:18:51,409 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /data2/dfs/dn/in_use.lock
acquired by nodename 23928@hadoop070.dx.momo.com
2015-08-04 10:18:51,409 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /data3/dfs/dn/in_use.lock
acquired by nodename 23928@hadoop070.dx.momo.com
2015-08-04 10:18:51,410 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /data4/dfs/dn/in_use.lock
acquired by nodename 23928@hadoop070.dx.momo.com
2015-08-04 10:18:51,410 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /data5/dfs/dn/in_use.lock
acquired by nodename 23928@hadoop070.dx.momo.com
2015-08-04 10:18:51,411 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /data6/dfs/dn/in_use.lock
acquired by nodename 23928@hadoop070.dx.momo.com
2015-08-04 10:18:51,411 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /data7/dfs/dn/in_use.lock
acquired by nodename 23928@hadoop070.dx.momo.com
2015-08-04 10:18:51,412 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /data8/dfs/dn/in_use.lock
acquired by nodename 23928@hadoop070.dx.momo.com
2015-08-04 10:18:51,412 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /data9/dfs/dn/in_use.lock
acquired by nodename 23928@hadoop070.dx.momo.com
2015-08-04 10:18:51,413 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /data10/dfs/dn/in_use.lock
acquired by nodename 23928@hadoop070.dx.momo.com
2015-08-04 10:18:51,413 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /data11/dfs/dn/in_use.lock
acquired by nodename 23928@hadoop070.dx.momo.com
2015-08-04 10:18:51,417 WARN org.apache.hadoop.hdfs.server.common.Storage: Ignoring storage
directory /data12/dfs/dn due to an exception
java.io.FileNotFoundException: /data12/dfs/dn/in_use.lock (Input/output error)
	at java.io.RandomAccessFile.open(Native Method)
	at java.io.RandomAccessFile.<init>(RandomAccessFile.java:241)
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:697)
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:669)
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:493)
	at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:186)
	at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:254)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:975)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:946)
	at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:278)
	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:812)
	at java.lang.Thread.run(Thread.java:745)
2015-08-04 10:18:51,470 INFO org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage
directories for bpid BP-454299492-10.84.100.171-1416301904728
2015-08-04 10:18:51,470 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:51,471 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:51,471 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:51,471 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:51,471 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:51,472 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:51,472 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:51,472 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:51,472 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:51,472 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:51,472 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:51,472 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:51,472 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:51,472 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:51,472 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:51,473 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:51,473 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:51,473 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:51,473 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:51,473 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:51,473 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:51,473 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:51,473 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:51,473 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:51,474 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization
failed for Block pool <registering> (Datanode Uuid unassigned) service to hadoop002.dx.momo.com/10.84.100.191:8022
Input/output error
2015-08-04 10:18:56,464 INFO org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage
directories for bpid BP-454299492-10.84.100.171-1416301904728
2015-08-04 10:18:56,464 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:56,464 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:56,464 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:56,464 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:56,464 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:56,465 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:56,465 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:56,465 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:56,465 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:56,465 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:56,465 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:56,465 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:56,465 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:56,466 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:56,466 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:56,466 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:56,466 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:56,466 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:56,466 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:56,466 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:56,466 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:56,467 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:56,467 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:56,467 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:56,467 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization
failed for Block pool <registering> (Datanode Uuid unassigned) service to hadoop001.dx.momo.com/10.84.100.171:8022.
Exiting.
java.io.IOException: Input/output error
	at java.io.FileInputStream.readBytes(Native Method)
	at java.io.FileInputStream.read(FileInputStream.java:243)
	at java.util.Properties$LineReader.readLine(Properties.java:434)
	at java.util.Properties.load0(Properties.java:353)
	at java.util.Properties.load(Properties.java:341)
	at org.apache.hadoop.hdfs.server.common.StorageInfo.readPropertiesFile(StorageInfo.java:247)
	at org.apache.hadoop.hdfs.server.common.StorageInfo.readProperties(StorageInfo.java:227)
	at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.doTransition(BlockPoolSliceStorage.java:256)
	at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.recoverTransitionRead(BlockPoolSliceStorage.java:155)
	at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:269)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:975)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:946)
	at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:278)
	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:812)
	at java.lang.Thread.run(Thread.java:745)
2015-08-04 10:18:56,468 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block
pool service for: Block pool <registering> (Datanode Uuid unassigned) service to hadoop001.dx.momo.com/10.84.100.171:8022
2015-08-04 10:18:56,523 INFO org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage
directories for bpid BP-454299492-10.84.100.171-1416301904728
2015-08-04 10:18:56,523 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:56,523 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:56,524 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:56,524 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:56,524 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:56,524 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:56,524 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:56,524 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:56,524 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:56,524 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:56,524 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:56,524 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2015-08-04 10:18:56,524 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:56,524 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:56,525 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:56,525 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:56,525 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:56,525 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:56,525 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:56,525 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:56,525 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:56,525 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:56,525 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:56,525 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block
files from trash.
2015-08-04 10:18:56,526 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization
failed for Block pool <registering> (Datanode Uuid unassigned) service to hadoop002.dx.momo.com/10.84.100.191:8022.
Exiting.
java.io.IOException: Input/output error
	at java.io.FileInputStream.readBytes(Native Method)
	at java.io.FileInputStream.read(FileInputStream.java:243)
	at java.util.Properties$LineReader.readLine(Properties.java:434)
	at java.util.Properties.load0(Properties.java:353)
	at java.util.Properties.load(Properties.java:341)
	at org.apache.hadoop.hdfs.server.common.StorageInfo.readPropertiesFile(StorageInfo.java:247)
	at org.apache.hadoop.hdfs.server.common.StorageInfo.readProperties(StorageInfo.java:227)
	at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.doTransition(BlockPoolSliceStorage.java:256)
	at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.recoverTransitionRead(BlockPoolSliceStorage.java:155)
	at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:269)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:975)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:946)
	at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:278)
	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:812)
	at java.lang.Thread.run(Thread.java:745)
2015-08-04 10:18:56,526 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block
pool service for: Block pool <registering> (Datanode Uuid unassigned) service to hadoop002.dx.momo.com/10.84.100.191:8022
2015-08-04 10:18:56,526 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block
pool <registering> (Datanode Uuid unassigned)
2015-08-04 10:18:58,527 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2015-08-04 10:18:58,529 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2015-08-04 10:18:58,530 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
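
For reference, two configuration properties relate directly to what this log shows: the repeated org.apache.hadoop.hdfs.server.common.Util warnings go away once the dfs.datanode.data.dir entries are written as file:// URIs, and dfs.datanode.failed.volumes.tolerated (default 0) sets how many failed volumes a DataNode accepts before shutting down. Below is a minimal hdfs-site.xml sketch assuming the twelve /dataN/dfs/dn directories from this log; the values are illustrative, and the log alone does not establish whether volume tolerance would avoid the block-pool initialization failure shown above.

  <property>
    <name>dfs.datanode.data.dir</name>
    <!-- Illustrative: list all twelve directories as file:// URIs; only some are shown here. -->
    <value>file:///data1/dfs/dn,file:///data2/dfs/dn, ... ,file:///data12/dfs/dn</value>
  </property>
  <property>
    <!-- Number of volumes allowed to fail before the DataNode stops offering service;
         the default of 0 makes any volume failure abort startup. -->
    <name>dfs.datanode.failed.volumes.tolerated</name>
    <value>1</value>
  </property>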

> datanode fails to start due to a bad disk
> -----------------------------------------
>
>                 Key: HDFS-8851
>                 URL: https://issues.apache.org/jira/browse/HDFS-8851
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.5.1
>            Reporter: Wang Hao
>
> The DataNode cannot start due to a bad disk. I found that a similar issue, HDFS-6245, was
> reported, but our situation is different.



