hadoop-common-user mailing list archives

From Brahma Reddy Battula <brahmareddy.batt...@huawei.com>
Subject RE: hdfs ls command took more time to get response after update
Date Wed, 08 Apr 2015 03:40:10 GMT
Please send vendor specific questions to that vendor's support mechanism.

Since your issue appears to be with CDH, please use http://community.cloudera.com/




Thanks & Regards

Brahma Reddy Battula




________________________________
From: ZhuGe [tczg@outlook.com]
Sent: Wednesday, April 08, 2015 8:41 AM
To: user@hadoop.apache.org
Subject: hdfs ls command took more time to get response after update

Hi all:
Recently, I updated my Hadoop cluster from hadoop-2.0.0-cdh4.3.0 to hadoop-2.5.0-cdh5.2.0.
It works fine; however, a small problem is that when I use the hadoop fs -ls command in the terminal
to list files in HDFS, it takes much longer (10+ sec) to get the response,
compared to 2-3 secs before I updated the version of Hadoop (get is slow too).
Can anyone explain a little what might cause the problem, or which configuration might have gone
wrong?
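For reference, timing like this can be checked from the shell, and the debug output below can be reproduced by turning on client-side DEBUG logging; a minimal sketch, assuming the standard HADOOP_ROOT_LOGGER environment variable and using /tmp purely as an example path:

  # time a plain listing (the /tmp path is only an example)
  time hadoop fs -ls /tmp

  # re-run with client DEBUG output on the console to see where the time goes
  HADOOP_ROOT_LOGGER=DEBUG,console hadoop fs -ls /tmp
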
Below is the log:

15/04/08 10:51:18 DEBUG util.Shell: setsid exited with exit code 0
15/04/08 10:51:18 DEBUG conf.Configuration: parsing URL jar:file:/data/dbcenter/cdh5/hadoop-2.5.0-cdh5.2.0/share/hadoop/common/hadoop-common-2.5.0-cdh5.2.0.jar!/core-default.xml
15/04/08 10:51:18 DEBUG conf.Configuration: parsing input stream sun.net.www.protocol.jar.JarURLConnection$JarURLInputStream@57316e85
15/04/08 10:51:18 DEBUG conf.Configuration: parsing URL file:/data/dbcenter/cdh5/hadoop-2.5.0-cdh5.2.0/etc/hadoop/core-site.xml
15/04/08 10:51:18 DEBUG conf.Configuration: parsing input stream java.io.BufferedInputStream@31818dbc
15/04/08 10:51:19 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Rate of successful kerberos logins and latency (milliseconds)], about=, type=DEFAULT, always=false, sampleName=Ops)
15/04/08 10:51:19 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Rate of failed kerberos logins and latency (milliseconds)], about=, type=DEFAULT, always=false, sampleName=Ops)
15/04/08 10:51:19 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[GetGroups], about=, type=DEFAULT, always=false, sampleName=Ops)
15/04/08 10:51:19 DEBUG impl.MetricsSystemImpl: UgiMetrics, User and group related metrics
15/04/08 10:51:19 DEBUG security.Groups:  Creating new Groups object
15/04/08 10:51:19 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
15/04/08 10:51:19 DEBUG util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: no hadoop in java.library.path
15/04/08 10:51:19 DEBUG util.NativeCodeLoader: java.library.path=/data/dbcenter/cdh5/hadoop-2.5.0-cdh5.2.0/lib/native
15/04/08 10:51:19 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/04/08 10:51:19 DEBUG util.PerformanceAdvisory: Falling back to shell based
15/04/08 10:51:19 DEBUG security.JniBasedUnixGroupsMappingWithFallback: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
15/04/08 10:51:19 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
15/04/08 10:51:19 DEBUG security.UserGroupInformation: hadoop login
15/04/08 10:51:19 DEBUG security.UserGroupInformation: hadoop login commit
15/04/08 10:51:19 DEBUG security.UserGroupInformation: using local user:UnixPrincipal: test
15/04/08 10:51:19 DEBUG security.UserGroupInformation: UGI loginUser:test (auth:SIMPLE)
15/04/08 10:51:19 DEBUG hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
15/04/08 10:51:19 DEBUG hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = false
15/04/08 10:51:19 DEBUG hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
15/04/08 10:51:19 DEBUG hdfs.BlockReaderLocal: dfs.domain.socket.path =
15/04/08 10:51:19 DEBUG hdfs.DFSClient: No KeyProvider found.
15/04/08 10:51:19 DEBUG hdfs.HAUtil: No HA service delegation token found for logical URI hdfs://tccluster:8020
15/04/08 10:51:19 DEBUG hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
15/04/08 10:51:19 DEBUG hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = false
15/04/08 10:51:19 DEBUG hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
15/04/08 10:51:19 DEBUG hdfs.BlockReaderLocal: dfs.domain.socket.path =
15/04/08 10:51:19 DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
15/04/08 10:51:19 DEBUG ipc.Server: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcRequestWrapper, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@5a2611a6
15/04/08 10:51:19 DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@285d4a6a
15/04/08 10:51:30 DEBUG util.PerformanceAdvisory: Both short-circuit local reads and UNIX domain socket are disabled.
15/04/08 10:51:30 DEBUG sasl.DataTransferSaslUtil: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
15/04/08 10:51:30 DEBUG ipc.Client: The ping interval is 60000 ms.
15/04/08 10:51:30 DEBUG ipc.Client: Connecting to master/192.168.1.13:8020
15/04/08 10:51:30 DEBUG ipc.Client: IPC Client (246890776) connection to master/192.168.1.13:8020 from test: starting, having connections 1
15/04/08 10:51:30 DEBUG ipc.Client: IPC Client (246890776) connection to master/192.168.1.13:8020 from test sending #0
15/04/08 10:51:30 DEBUG ipc.Client: IPC Client (246890776) connection to master/192.168.1.13:8020 from test got value #0
15/04/08 10:51:30 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 88ms
15/04/08 10:51:30 DEBUG ipc.Client: IPC Client (246890776) connection to master/192.168.1.13:8020 from test sending #1
15/04/08 10:51:30 DEBUG ipc.Client: IPC Client (246890776) connection to master/192.168.1.13:8020 from test got value #1
15/04/08 10:51:30 DEBUG ipc.ProtobufRpcEngine: Call: getListing took 2ms
15/04/08 10:51:30 DEBUG ipc.Client: stopping client from cache: org.apache.hadoop.ipc.Client@285d4a6a
15/04/08 10:51:30 DEBUG ipc.Client: removing client from cache: org.apache.hadoop.ipc.Client@285d4a6a
15/04/08 10:51:30 DEBUG ipc.Client: stopping actual client because no more references remain: org.apache.hadoop.ipc.Client@285d4a6a
15/04/08 10:51:30 DEBUG ipc.Client: Stopping client
15/04/08 10:51:30 DEBUG ipc.Client: IPC Client (246890776) connection to master/192.168.1.13:8020 from test: closed
15/04/08 10:51:30 DEBUG ipc.Client: IPC Client (246890776) connection to master/192.168.1.13:8020 from test: stopped, remaining connections 0

