Date: Mon, 10 Oct 2016 23:51:20 +0000 (UTC)
From: "Mingliang Liu (JIRA)"
To: hdfs-issues@hadoop.apache.org
Subject: [jira] [Updated] (HDFS-10986) DFSAdmin should log detailed error message if any

[ https://issues.apache.org/jira/browse/HDFS-10986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mingliang Liu updated HDFS-10986:
---------------------------------

Description:

Some subcommands in {{DFSAdmin}} swallow IOException and print a very limited error message, if any, to stderr:

{code}
$ hdfs dfsadmin -getBalancerBandwidth 127.0.0.1:9866
Datanode unreachable.

$ hdfs dfsadmin -getDatanodeInfo localhost:9866
Datanode unreachable.

$ hdfs dfsadmin -evictWriters 127.0.0.1:9866
$ echo $?
-1
{code}

The user cannot see the exception stack even when the log level is DEBUG, which is not very user friendly. Fortunately, if the port is not reachable at all (say 9999), users can infer the underlying error from the IPC client logs:

{code}
$ hdfs dfsadmin -getBalancerBandwidth 127.0.0.1:9999
2016-10-07 18:01:35,115 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2016-10-07 18:01:36,335 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9999. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
.....
2016-10-07 18:01:45,361 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9999. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-10-07 18:01:45,362 WARN ipc.Client: Failed to connect to server: localhost/127.0.0.1:9999: retries get failed due to exceeded maximum allowed retries number: 10
java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
        ...
        at org.apache.hadoop.hdfs.tools.DFSAdmin.run(DFSAdmin.java:2073)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
        at org.apache.hadoop.hdfs.tools.DFSAdmin.main(DFSAdmin.java:2225)
Datanode unreachable.
{code}

We should fix this by providing a detailed error message. {{DFSAdmin#run}} already handles exceptions carefully; it:
# sets the exit value to -1
# prints the error message to stderr
# logs the exception stack trace (at DEBUG level)

All we need to do is stop swallowing exceptions without good reason, as sketched below.
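To make the intent concrete, here is a minimal, self-contained sketch of the anti-pattern and the proposed direction. It is not the actual DFSAdmin source: {{queryDatanode}} is a hypothetical stand-in for the datanode RPC, and the handler below only mirrors the behavior of {{DFSAdmin#run}} described above.

{code}
import java.io.IOException;
import java.net.ConnectException;

public class SwallowedExceptionSketch {

  // Hypothetical stand-in for the datanode RPC call that fails.
  static long queryDatanode(String address) throws IOException {
    throw new ConnectException("Connection refused: " + address);
  }

  // Anti-pattern: the root cause is discarded, so the user sees only
  // "Datanode unreachable." no matter what actually went wrong.
  static int getBalancerBandwidthSwallowing(String address) {
    try {
      System.out.println("Bandwidth: " + queryDatanode(address));
      return 0;
    } catch (IOException ioe) {
      System.err.println("Datanode unreachable.");
      return -1;
    }
  }

  // Proposed direction: let the exception propagate to the central
  // handler instead of catching it in every subcommand.
  static int getBalancerBandwidth(String address) throws IOException {
    System.out.println("Bandwidth: " + queryDatanode(address));
    return 0;
  }

  // Central handler mirroring what DFSAdmin#run already does: exit value
  // -1, a detailed message on stderr, the stack trace only when debugging.
  static int run(String address, boolean debug) {
    try {
      return getBalancerBandwidth(address);
    } catch (IOException ioe) {
      System.err.println("getBalancerBandwidth: " + ioe.getMessage());
      if (debug) {
        ioe.printStackTrace();  // the real code logs this at DEBUG level
      }
      return -1;
    }
  }

  public static void main(String[] args) {
    int ret = run(args.length > 0 ? args[0] : "127.0.0.1:9866", false);
    System.exit(ret == 0 ? 0 : 1);
  }
}
{code}

With the propagation in place, the same "Connection refused" detail that today only surfaces through the IPC retry logs would be printed for every failing subcommand.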
> DFSAdmin should log detailed error message if any
> -------------------------------------------------
>
>                 Key: HDFS-10986
>                 URL: https://issues.apache.org/jira/browse/HDFS-10986
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: tools
>            Reporter: Mingliang Liu
>            Assignee: Mingliang Liu
>         Attachments: HDFS-10986.000.patch

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)