hbase-user mailing list archives

From Abraham Tom <work2m...@gmail.com>
Subject Re: Thrift reverse scan out of order exception
Date Tue, 05 May 2015 19:42:47 GMT
Sorry, it took me a while.
We are using the CDH 5.3 distribution (HBase 0.98.6-cdh5.3.3), but I swapped the hbase-thrift-0.98.6.jar for the ASF version.
Thrift is enabled as an hsha server with framed transport.
The following JVM options are passed from CDH:
-XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:-CMSConcurrentMTEnabled
-XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled
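
Since the warning further down in the trace asks "was there a rpc timeout?", the timeout-related client settings are relevant context alongside the GC flags. A hedged hbase-site.xml sketch of the properties usually involved — the property names are the stock 0.98-era ones, but the values are illustrative, not our actual configuration:

```xml
<!-- Illustrative hbase-site.xml fragment; values are examples only. -->
<property>
  <name>hbase.rpc.timeout</name>
  <value>60000</value> <!-- ms; per-RPC client timeout -->
</property>
<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>60000</value> <!-- ms; scanner lease / next() timeout -->
</property>
<property>
  <name>hbase.client.scanner.caching</name>
  <value>100</value> <!-- rows fetched per scanner next() RPC -->
</property>
```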

Below is the stack trace

7:09:59.694 PM    TRACE    org.apache.hadoop.ipc.RpcClient
Call: Get, callTime: 9ms
7:09:59.695 PM    DEBUG    org.apache.hadoop.ipc.RpcClient
IPC Client (1615948530) connection to hdpnode1/172.30.1.73:60020 from
hbase: wrote request header call_id: 253149 method_name: "Scan"
request_param: true
7:09:59.697 PM    DEBUG    org.apache.hadoop.ipc.RpcClient
IPC Client (1615948530) connection to hdpnode1/172.30.1.73:60020 from
hbase: got response header call_id: 253149 exception {
exception_class_name:
"org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException"
stack_trace:
"org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException:
Expected nextCallSeq: 1 But the nextCallSeq got from client: 0;
request=scanner_id: 158294 number_of_rows: 100 close_scanner: false
next_call_seq: 0\n\tat
org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3198)\n\tat
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)\n\tat
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)\n\tat
org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)\n\tat
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116)\n\tat
org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96)\n\tat
java.lang.Thread.run(Thread.java:745)\n" do_not_retry: true }, totalSize:
830 bytes
7:09:59.697 PM    TRACE
org.apache.hadoop.hbase.client.RpcRetryingCaller
Call exception, tries=1, retries=35, retryTime=4277ms
org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException:
org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected
nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id:
158294 number_of_rows: 100 close_scanner: false next_call_seq: 0
    at
org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3198)
    at
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
    at
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96)
    at java.lang.Thread.run(Thread.java:745)

    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
    at
org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:304)
    at
org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:204)
    at
org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:59)
    at
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:114)
    at
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:90)
    at
org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:355)
    at
org.apache.hadoop.hbase.client.AbstractClientScanner.next(AbstractClientScanner.java:67)
    at
org.apache.hadoop.hbase.thrift.ThriftServerRunner$HBaseHandler.scannerGetList(ThriftServerRunner.java:1296)
    at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at
org.apache.hadoop.hbase.thrift.HbaseHandlerMetricsProxy.invoke(HbaseHandlerMetricsProxy.java:67)
    at com.sun.proxy.$Proxy13.scannerGetList(Unknown Source)
    at
org.apache.hadoop.hbase.thrift.generated.Hbase$Processor$scannerGetList.getResult(Hbase.java:4609)
    at
org.apache.hadoop.hbase.thrift.generated.Hbase$Processor$scannerGetList.getResult(Hbase.java:4593)
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
    at
org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.invoke(AbstractNonblockingServer.java:478)
    at org.apache.thrift.server.Invocation.run(Invocation.java:18)
    at org.apache.hadoop.hbase.thrift.CallQueue$Call.run(CallQueue.java:64)
    at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by:
org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException):
org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected
nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id:
158294 number_of_rows: 100 close_scanner: false next_call_seq: 0
    at
org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3198)
    at
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
    at
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96)
    at java.lang.Thread.run(Thread.java:745)

    at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1457)
    at
org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1661)
    at
org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1719)
    at
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:30328)
    at
org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:174)
    ... 21 more
7:09:59.698 PM    WARN
org.apache.hadoop.hbase.thrift.ThriftServerRunner$HBaseHandler
Failed after retry of OutOfOrderScannerNextException: was there a rpc
timeout?
org.apache.hadoop.hbase.DoNotRetryIOException: Failed after retry of
OutOfOrderScannerNextException: was there a rpc timeout?
    at
org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:410)
    at
org.apache.hadoop.hbase.client.AbstractClientScanner.next(AbstractClientScanner.java:67)
    at
org.apache.hadoop.hbase.thrift.ThriftServerRunner$HBaseHandler.scannerGetList(ThriftServerRunner.java:1296)
    at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at
org.apache.hadoop.hbase.thrift.HbaseHandlerMetricsProxy.invoke(HbaseHandlerMetricsProxy.java:67)
    at com.sun.proxy.$Proxy13.scannerGetList(Unknown Source)
    at
org.apache.hadoop.hbase.thrift.generated.Hbase$Processor$scannerGetList.getResult(Hbase.java:4609)
    at
org.apache.hadoop.hbase.thrift.generated.Hbase$Processor$scannerGetList.getResult(Hbase.java:4593)
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
    at
org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.invoke(AbstractNonblockingServer.java:478)
    at org.apache.thrift.server.Invocation.run(Invocation.java:18)
    at org.apache.hadoop.hbase.thrift.CallQueue$Call.run(CallQueue.java:64)
    at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by:
org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException:
org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected
nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id:
158294 number_of_rows: 100 close_scanner: false next_call_seq: 0
    at
org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3198)
    at
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
    at
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96)
    at java.lang.Thread.run(Thread.java:745)

    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
    at
org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:304)
    at
org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:204)
    at
org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:59)
    at
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:114)
    at
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:90)
    at
org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:355)
    ... 17 more
Caused by:
org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException):
org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected
nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id:
158294 number_of_rows: 100 close_scanner: false next_call_seq: 0
    at
org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3198)
    at
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
    at
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96)
    at java.lang.Thread.run(Thread.java:745)

    at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1457)
    at
org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1661)
    at
org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1719)
    at
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:30328)
    at
org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:174)
    ... 21 more
7:09:59.773 PM    DEBUG
org.apache.hadoop.hbase.thrift.ThriftServerRunner$HBaseHandler
scannerClose: id=46225
7:14:15.008 PM    TRACE
org.apache.hadoop.hbase.client.ZooKeeperRegistry
Looking up meta region location in ZK,
connection=org.apache.hadoop.hbase.client.ZooKeeperRegistry@521e9ffa
7:14:15.010 PM    TRACE
org.apache.hadoop.hbase.client.ZooKeeperRegistry
Looked up meta region location,
connection=org.apache.hadoop.hbase.client.ZooKeeperRegistry@521e9ffa;
serverName=hdpnode5,60020,1430432113223
7:14:15.010 PM    DEBUG
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation

Removed hdpnode1:60020 as a location of
prod_payments,20150305030700|000012070050|00021,1429765399978.8e25c45e4ef97376ccc14aaaebe955e0.
for tableName=prod_payments from cache
7:14:15.010 PM    DEBUG    org.apache.hadoop.ipc.RpcClient
IPC Client (1615948530) connection to hdpnode5/172.30.2.55:60020 from
hbase: wrote request header call_id: 253166 method_name: "Get"
request_param: true
7:14:15.014 PM    DEBUG    org.apache.hadoop.ipc.RpcClient
IPC Client (1615948530) connection to hdpnode5/172.30.2.55:60020 from
hbase: got response header call_id: 253166, totalSize: 641 bytes
7:14:15.014 PM    TRACE    org.apache.hadoop.ipc.RpcClient
Call: Get, callTime: 4ms
7:14:15.014 PM    DEBUG    org.apache.hadoop.ipc.RpcClient
IPC Client (1615948530) connection to hdpnode1/172.30.1.73:60020 from
hbase: wrote request header call_id: 253167 method_name: "Scan"
request_param: true
7:14:15.018 PM    DEBUG    org.apache.hadoop.ipc.RpcClient
IPC Client (1615948530) connection to hdpnode1/172.30.1.73:60020 from
hbase: got response header call_id: 253167 exception {
exception_class_name:
"org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException"
stack_trace:
"org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException:
Expected nextCallSeq: 1 But the nextCallSeq got from client: 0;
request=scanner_id: 158362 number_of_rows: 100 close_scanner: false
next_call_seq: 0\n\tat
org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3198)\n\tat
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)\n\tat
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)\n\tat
org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)\n\tat
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116)\n\tat
org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96)\n\tat
java.lang.Thread.run(Thread.java:745)\n" do_not_retry: true }, totalSize:
830 bytes
7:14:15.018 PM    TRACE
org.apache.hadoop.hbase.client.RpcRetryingCaller
Call exception, tries=1, retries=35, retryTime=3346ms
org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException:
org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected
nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id:
158362 number_of_rows: 100 close_scanner: false next_call_seq: 0
    at
org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3198)
    at
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
    at
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96)
    at java.lang.Thread.run(Thread.java:745)

    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
    at
org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:304)
    at
org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:204)
    at
org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:59)
    at
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:114)
    at
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:90)
    at
org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:355)
    at
org.apache.hadoop.hbase.client.AbstractClientScanner.next(AbstractClientScanner.java:67)
    at
org.apache.hadoop.hbase.thrift.ThriftServerRunner$HBaseHandler.scannerGetList(ThriftServerRunner.java:1296)
    at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at
org.apache.hadoop.hbase.thrift.HbaseHandlerMetricsProxy.invoke(HbaseHandlerMetricsProxy.java:67)
    at com.sun.proxy.$Proxy13.scannerGetList(Unknown Source)
    at
org.apache.hadoop.hbase.thrift.generated.Hbase$Processor$scannerGetList.getResult(Hbase.java:4609)
    at
org.apache.hadoop.hbase.thrift.generated.Hbase$Processor$scannerGetList.getResult(Hbase.java:4593)
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
    at
org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.invoke(AbstractNonblockingServer.java:478)
    at org.apache.thrift.server.Invocation.run(Invocation.java:18)
    at org.apache.hadoop.hbase.thrift.CallQueue$Call.run(CallQueue.java:64)
    at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by:
org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException):
org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected
nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id:
158362 number_of_rows: 100 close_scanner: false next_call_seq: 0
    at
org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3198)
    at
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
    at
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96)
    at java.lang.Thread.run(Thread.java:745)

    at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1457)
    at
org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1661)
    at
org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1719)
    at
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:30328)
    at
org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:174)
    ... 21 more
7:14:15.019 PM    WARN
org.apache.hadoop.hbase.thrift.ThriftServerRunner$HBaseHandler
Failed after retry of OutOfOrderScannerNextException: was there a rpc
timeout?
org.apache.hadoop.hbase.DoNotRetryIOException: Failed after retry of
OutOfOrderScannerNextException: was there a rpc timeout?
    at
org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:410)
    at
org.apache.hadoop.hbase.client.AbstractClientScanner.next(AbstractClientScanner.java:67)
    at
org.apache.hadoop.hbase.thrift.ThriftServerRunner$HBaseHandler.scannerGetList(ThriftServerRunner.java:1296)
    at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at
org.apache.hadoop.hbase.thrift.HbaseHandlerMetricsProxy.invoke(HbaseHandlerMetricsProxy.java:67)
    at com.sun.proxy.$Proxy13.scannerGetList(Unknown Source)
    at
org.apache.hadoop.hbase.thrift.generated.Hbase$Processor$scannerGetList.getResult(Hbase.java:4609)
    at
org.apache.hadoop.hbase.thrift.generated.Hbase$Processor$scannerGetList.getResult(Hbase.java:4593)
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
    at
org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.invoke(AbstractNonblockingServer.java:478)
    at org.apache.thrift.server.Invocation.run(Invocation.java:18)
    at org.apache.hadoop.hbase.thrift.CallQueue$Call.run(CallQueue.java:64)
    at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by:
org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException:
org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected
nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id:
158362 number_of_rows: 100 close_scanner: false next_call_seq: 0
    at
org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3198)
    at
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
    at
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96)
    at java.lang.Thread.run(Thread.java:745)

    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
    at
org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:304)
    at
org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:204)
    at
org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:59)
    at
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:114)
    at
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:90)
    at
org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:355)
    ... 17 more
Caused by:
org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException):
org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected
nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id:
158362 number_of_rows: 100 close_scanner: false next_call_seq: 0
    at
org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3198)
    at
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
    at
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96)
    at java.lang.Thread.run(Thread.java:745)

    at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1457)
    at
org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1661)
    at
org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1719)
    at
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:30328)
    at
org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:174)
    ... 21 more
7:14:15.096 PM    DEBUG
org.apache.hadoop.hbase.thrift.ThriftServerRunner$HBaseHandler
scannerClose: id=46227
7:14:15.096 PM    DEBUG    org.apache.hadoop.ipc.RpcClient
IPC Client (1615948530) connection to hdpnode1/172.30.1.73:60020 from
hbase: wrote request header call_id: 253168 method_name: "Scan"
request_param: true
7:14:15.097 PM    DEBUG    org.apache.hadoop.ipc.RpcClient
IPC Client (1615948530) connection to hdpnode1/172.30.1.73:60020 from
hbase: got response header call_id: 253168, totalSize: 12 bytes
7:14:15.097 PM    TRACE    org.apache.hadoop.ipc.RpcClient
Call: Scan, callTime: 1ms
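
For context on why the server rejects the retry: the region server tracks an expected nextCallSeq per open scanner and refuses any scan RPC whose sequence number does not match. A client whose RPC times out after the server has already processed the call will naively re-send the old sequence number, producing exactly the mismatch in the log above ("Expected nextCallSeq: 1 But the nextCallSeq got from client: 0"). A simplified sketch of that handshake — this is an illustrative model, not actual HBase code:

```python
# Simplified model of the region server's scanner call-sequence check.
class OutOfOrderScannerNextException(Exception):
    pass

class RegionServerScanner:
    def __init__(self, scanner_id):
        self.scanner_id = scanner_id
        self.expected_seq = 0  # server-side nextCallSeq for this scanner

    def scan(self, next_call_seq, number_of_rows):
        # Reject any request whose sequence number is stale or ahead.
        if next_call_seq != self.expected_seq:
            raise OutOfOrderScannerNextException(
                f"Expected nextCallSeq: {self.expected_seq} "
                f"But the nextCallSeq got from client: {next_call_seq}")
        self.expected_seq += 1
        return [f"row-{i}" for i in range(number_of_rows)]

server = RegionServerScanner(scanner_id=158294)

# First next() succeeds; the server advances its expected sequence to 1.
rows = server.scan(next_call_seq=0, number_of_rows=100)

# If the client's RPC timed out *after* the server processed the call,
# a naive retry re-sends seq 0 and the server throws, as in the log.
try:
    server.scan(next_call_seq=0, number_of_rows=100)
except OutOfOrderScannerNextException as e:
    print(e)
```

In practice the usual mitigations are raising hbase.client.scanner.timeout.period / hbase.rpc.timeout or fetching fewer rows per scannerGetList call, so each next() RPC completes well inside the timeout.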


HBase config dump

Master status for hdpnode5,60000,1430432112654 as of Tue May 05
19:39:48 UTC 2015


Version Info:
===========================================================
HBase 0.98.6-cdh5.3.3
Subversion file:///data/jenkins/workspace/generic-package-ubuntu64-12-04/CDH5.3.3-Packaging-HBase-2015-04-08_14-43-16/hbase-0.98.6+cdh5.3.3+85-1.cdh5.3.3.p0.8~precise
-r Unknown
Compiled by jenkins on Wed Apr  8 15:00:41 PDT 2015
Hadoop 2.5.0-cdh5.3.3
Subversion http://github.com/cloudera/hadoop -r
82a65209d6e9e4a2b41fdbcd8190c7ea38730627
Compiled by jenkins on 2015-04-08T21:56Z


Tasks:
===========================================================
Task: RpcServer.reader=2,port=60000
Status: WAITING:Waiting for a call
Running for 422670s

Task: RpcServer.reader=4,port=60000
Status: WAITING:Waiting for a call
Running for 422670s

Task: RpcServer.reader=3,port=60000
Status: WAITING:Waiting for a call
Running for 422670s

Task: RpcServer.reader=1,port=60000
Status: WAITING:Waiting for a call
Running for 422670s

Task: RpcServer.reader=5,port=60000
Status: WAITING:Waiting for a call
Running for 422666s

Task: RpcServer.reader=6,port=60000
Status: WAITING:Waiting for a call
Running for 422665s

Task: RpcServer.reader=7,port=60000
Status: WAITING:Waiting for a call
Running for 422646s

Task: RpcServer.reader=8,port=60000
Status: WAITING:Waiting for a call
Running for 422645s

Task: RpcServer.reader=9,port=60000
Status: WAITING:Waiting for a call
Running for 422645s

Task: RpcServer.reader=0,port=60000
Status: WAITING:Waiting for a call
Running for 422645s



Servers:
===========================================================
hdpnode4,60020,1430432112731: requestsPerSecond=23.0,
numberOfOnlineRegions=67, usedHeapMB=1223, maxHeapMB=1830,
numberOfStores=133, numberOfStorefiles=206,
storefileUncompressedSizeMB=447206, storefileSizeMB=136487,
compressionRatio=0.3052, memstoreSizeMB=15, storefileIndexSizeMB=0,
readRequestsCount=-1297090836, writeRequestsCount=1738839,
rootIndexSizeKB=3402, totalStaticIndexSizeKB=668375,
totalStaticBloomSizeKB=1964676, totalCompactingKVs=45000591,
currentCompactedKVs=45000591, compactionProgressPct=1.0,
coprocessors=[]
hdpnode1,60020,1430432113362: requestsPerSecond=7.0,
numberOfOnlineRegions=63, usedHeapMB=1081, maxHeapMB=1830,
numberOfStores=126, numberOfStorefiles=184,
storefileUncompressedSizeMB=494821, storefileSizeMB=150587,
compressionRatio=0.3043, memstoreSizeMB=13, storefileIndexSizeMB=0,
readRequestsCount=-1085332652, writeRequestsCount=1967706,
rootIndexSizeKB=2452, totalStaticIndexSizeKB=722866,
totalStaticBloomSizeKB=2298886, totalCompactingKVs=99319350,
currentCompactedKVs=99319350, compactionProgressPct=1.0,
coprocessors=[]
hdpnode5,60020,1430432113223: requestsPerSecond=128.0,
numberOfOnlineRegions=66, usedHeapMB=916, maxHeapMB=1830,
numberOfStores=132, numberOfStorefiles=206,
storefileUncompressedSizeMB=608815, storefileSizeMB=167718,
compressionRatio=0.2755, memstoreSizeMB=22, storefileIndexSizeMB=0,
readRequestsCount=577986971, writeRequestsCount=3532785,
rootIndexSizeKB=2856, totalStaticIndexSizeKB=920817,
totalStaticBloomSizeKB=2916158, totalCompactingKVs=63516142,
currentCompactedKVs=63516142, compactionProgressPct=1.0,
coprocessors=[]
hdpnode3,60020,1430432112597: requestsPerSecond=28.0,
numberOfOnlineRegions=66, usedHeapMB=1572, maxHeapMB=2875,
numberOfStores=128, numberOfStorefiles=202,
storefileUncompressedSizeMB=533528, storefileSizeMB=158024,
compressionRatio=0.2962, memstoreSizeMB=18, storefileIndexSizeMB=0,
readRequestsCount=-968245474, writeRequestsCount=1773956,
rootIndexSizeKB=2654, totalStaticIndexSizeKB=810507,
totalStaticBloomSizeKB=2251813, totalCompactingKVs=60361924,
currentCompactedKVs=60361924, compactionProgressPct=1.0,
coprocessors=[]
hdpnode6,60020,1430432119065: requestsPerSecond=19.0,
numberOfOnlineRegions=69, usedHeapMB=1008, maxHeapMB=1877,
numberOfStores=139, numberOfStorefiles=201,
storefileUncompressedSizeMB=475462, storefileSizeMB=135666,
compressionRatio=0.2853, memstoreSizeMB=9, storefileIndexSizeMB=0,
readRequestsCount=2146376407, writeRequestsCount=1344441,
rootIndexSizeKB=2024, totalStaticIndexSizeKB=719340,
totalStaticBloomSizeKB=2175311, totalCompactingKVs=59750678,
currentCompactedKVs=59750678, compactionProgressPct=1.0,
coprocessors=[]
hdpnode2,60020,1430432120211: requestsPerSecond=60.0,
numberOfOnlineRegions=67, usedHeapMB=1280, maxHeapMB=2439,
numberOfStores=135, numberOfStorefiles=219,
storefileUncompressedSizeMB=504602, storefileSizeMB=153622,
compressionRatio=0.3044, memstoreSizeMB=19, storefileIndexSizeMB=0,
readRequestsCount=-1212167664, writeRequestsCount=2320024,
rootIndexSizeKB=2752, totalStaticIndexSizeKB=755669,
totalStaticBloomSizeKB=2333458, totalCompactingKVs=116096517,
currentCompactedKVs=116096517, compactionProgressPct=1.0,
coprocessors=[]


Regions-in-transition:
===========================================================
Region bfdf0cf7760df61bdbe6b5f62b0d5184:
SYSTEM.CATALOG,,1420485411002.bfdf0cf7760df61bdbe6b5f62b0d5184.
state=FAILED_OPEN, ts=Thu Apr 30 22:15:30 UTC 2015 (422657s ago),
server=hdpnode4,60020,1430432112731
Region 7b60115dc00f4c44ed321514e002a0c6:
SYSTEM.STATS,,1420485424511.7b60115dc00f4c44ed321514e002a0c6.
state=FAILED_OPEN, ts=Thu Apr 30 22:15:42 UTC 2015 (422646s ago),
server=hdpnode3,60020,1430432112597


Executors:
===========================================================
  Status for executor: Executor-1-MASTER_OPEN_REGION-hdpnode5:60000
  =======================================
  0 events queued, 0 running
  Status for executor: Executor-6-MASTER_TABLE_OPERATIONS-hdpnode5:60000
  =======================================
  0 events queued, 0 running
  Status for executor: Executor-4-MASTER_META_SERVER_OPERATIONS-hdpnode5:60000
  =======================================
  0 events queued, 0 running
  Status for executor: Executor-2-MASTER_CLOSE_REGION-hdpnode5:60000
  =======================================
  0 events queued, 0 running
  Status for executor: Executor-5-M_LOG_REPLAY_OPS-hdpnode5:60000
  =======================================
  0 events queued, 0 running
  Status for executor: Executor-3-MASTER_SERVER_OPERATIONS-hdpnode5:60000
  =======================================
  0 events queued, 0 running


Stacks:
===========================================================
Process Thread Dump:
72 active threads
Thread 9684 (1402064838@qtp-1989625974-561):
  State: RUNNABLE
  Blocked count: 195
  Waited count: 194
  Stack:
    sun.management.ThreadImpl.getThreadInfo1(Native Method)
    sun.management.ThreadImpl.getThreadInfo(ThreadImpl.java:174)
    sun.management.ThreadImpl.getThreadInfo(ThreadImpl.java:139)
    org.apache.hadoop.util.ReflectionUtils.printThreadInfo(ReflectionUtils.java:165)
    org.apache.hadoop.hbase.master.MasterDumpServlet.doGet(MasterDumpServlet.java:80)
    javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
    javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
    org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
    org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
    org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1122)
    org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
    org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
    org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
    org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
    org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:767)
    org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
Thread 9025 (IPC Parameter Sending Thread #13):
  State: TIMED_WAITING
  Blocked count: 336
  Waited count: 356932
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
    java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 7374 (MASTER_TABLE_OPERATIONS-hdpnode5:60000-0):
  State: WAITING
  Blocked count: 1461
  Waited count: 2367
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@276d0d8a
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 146 (hdpnode5,60000,1430432112654-ExpiredMobFileCleanerChore):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 5
  Stack:
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:92)
    org.apache.hadoop.hbase.Chore.run(Chore.java:95)
    java.lang.Thread.run(Thread.java:745)
Thread 144 (FifoRpcScheduler.handler1-thread-25):
  State: WAITING
  Blocked count: 1645
  Waited count: 80120
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@622ebe4a
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 143 (FifoRpcScheduler.handler1-thread-24):
  State: WAITING
  Blocked count: 1554
  Waited count: 80030
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@622ebe4a
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 142 (FifoRpcScheduler.handler1-thread-23):
  State: WAITING
  Blocked count: 1680
  Waited count: 80119
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@622ebe4a
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 141 (FifoRpcScheduler.handler1-thread-22):
  State: WAITING
  Blocked count: 1515
  Waited count: 79997
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@622ebe4a
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 134 (FifoRpcScheduler.handler1-thread-21):
  State: WAITING
  Blocked count: 1518
  Waited count: 79975
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@622ebe4a
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 133 (CatalogJanitor-hdpnode5:60000):
  State: TIMED_WAITING
  Blocked count: 11185
  Waited count: 13904
  Stack:
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:92)
    org.apache.hadoop.hbase.Chore.run(Chore.java:95)
    java.lang.Thread.run(Thread.java:745)
Thread 132 (hdpnode5,60000,1430432112654-BalancerChore):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 1409
  Stack:
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:92)
    org.apache.hadoop.hbase.Chore.run(Chore.java:95)
    java.lang.Thread.run(Thread.java:745)
Thread 131 (hdpnode5,60000,1430432112654-ClusterStatusChore):
  State: TIMED_WAITING
  Blocked count: 14088
  Waited count: 21133
  Stack:
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:92)
    org.apache.hadoop.hbase.Chore.run(Chore.java:95)
    java.lang.Thread.run(Thread.java:745)
Thread 79 (MASTER_SERVER_OPERATIONS-hdpnode5:60000-4):
  State: WAITING
  Blocked count: 15
  Waited count: 24
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@61b5436
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 78 (MASTER_SERVER_OPERATIONS-hdpnode5:60000-3):
  State: WAITING
  Blocked count: 23
  Waited count: 32
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@61b5436
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 77 (MASTER_SERVER_OPERATIONS-hdpnode5:60000-2):
  State: WAITING
  Blocked count: 28
  Waited count: 40
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@61b5436
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 76 (MASTER_SERVER_OPERATIONS-hdpnode5:60000-1):
  State: WAITING
  Blocked count: 24
  Waited count: 37
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@61b5436
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 75 (MASTER_SERVER_OPERATIONS-hdpnode5:60000-0):
  State: WAITING
  Blocked count: 14
  Waited count: 24
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@61b5436
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 74 (FifoRpcScheduler.handler1-thread-20):
  State: WAITING
  Blocked count: 1506
  Waited count: 79989
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@622ebe4a
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 73 (FifoRpcScheduler.handler1-thread-19):
  State: WAITING
  Blocked count: 1642
  Waited count: 80106
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@622ebe4a
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 72 (FifoRpcScheduler.handler1-thread-18):
  State: WAITING
  Blocked count: 1622
  Waited count: 80206
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@622ebe4a
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 71 (FifoRpcScheduler.handler1-thread-17):
  State: WAITING
  Blocked count: 1631
  Waited count: 80069
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@622ebe4a
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 70 (FifoRpcScheduler.handler1-thread-16):
  State: WAITING
  Blocked count: 1730
  Waited count: 80146
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@622ebe4a
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 63 (FifoRpcScheduler.handler1-thread-15):
  State: WAITING
  Blocked count: 1778
  Waited count: 80306
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@622ebe4a
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 62 (FifoRpcScheduler.handler1-thread-14):
  State: WAITING
  Blocked count: 1627
  Waited count: 80093
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@622ebe4a
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 61 (FifoRpcScheduler.handler1-thread-13):
  State: WAITING
  Blocked count: 1641
  Waited count: 80111
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@622ebe4a
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 60 (FifoRpcScheduler.handler1-thread-12):
  State: WAITING
  Blocked count: 1597
  Waited count: 80034
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@622ebe4a
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 59 (FifoRpcScheduler.handler1-thread-11):
  State: WAITING
  Blocked count: 1665
  Waited count: 80105
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@622ebe4a
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 58 (FifoRpcScheduler.handler1-thread-10):
  State: WAITING
  Blocked count: 1567
  Waited count: 80011
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@622ebe4a
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 57 (FifoRpcScheduler.handler1-thread-9):
  State: WAITING
  Blocked count: 1583
  Waited count: 80045
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@622ebe4a
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 56 (FifoRpcScheduler.handler1-thread-8):
  State: WAITING
  Blocked count: 1549
  Waited count: 80039
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@622ebe4a
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 55 (FifoRpcScheduler.handler1-thread-7):
  State: WAITING
  Blocked count: 1642
  Waited count: 80087
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@622ebe4a
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 54 (FifoRpcScheduler.handler1-thread-6):
  State: WAITING
  Blocked count: 1728
  Waited count: 80263
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@622ebe4a
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 53 (FifoRpcScheduler.handler1-thread-5):
  State: WAITING
  Blocked count: 1669
  Waited count: 80211
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@622ebe4a
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 52 (FifoRpcScheduler.handler1-thread-4):
  State: WAITING
  Blocked count: 1581
  Waited count: 80032
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@622ebe4a
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 50 (FifoRpcScheduler.handler1-thread-2):
  State: WAITING
  Blocked count: 1578
  Waited count: 80042
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@622ebe4a
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 51 (FifoRpcScheduler.handler1-thread-3):
  State: WAITING
  Blocked count: 1474
  Waited count: 79965
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@622ebe4a
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 49 (FifoRpcScheduler.handler1-thread-1):
  State: WAITING
  Blocked count: 1560
  Waited count: 80034
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@622ebe4a
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 47 (master:hdpnode5:60000.archivedHFileCleaner):
  State: TIMED_WAITING
  Blocked count: 9314944
  Waited count: 17758522
  Stack:
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:92)
    org.apache.hadoop.hbase.Chore.run(Chore.java:95)
    java.lang.Thread.run(Thread.java:745)
Thread 48 (snapshot-hfile-cleaner-cache-refresher):
  State: TIMED_WAITING
  Blocked count: 5588
  Waited count: 9844
  Stack:
    java.lang.Object.wait(Native Method)
    java.util.TimerThread.mainLoop(Timer.java:552)
    java.util.TimerThread.run(Timer.java:505)
Thread 43 (master:hdpnode5:60000.oldLogCleaner):
  State: TIMED_WAITING
  Blocked count: 13296
  Waited count: 30494
  Stack:
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:92)
    org.apache.hadoop.hbase.Chore.run(Chore.java:95)
    java.lang.Thread.run(Thread.java:745)
Thread 46 (snapshot-log-cleaner-cache-refresher):
  State: TIMED_WAITING
  Blocked count: 4306
  Waited count: 9837
  Stack:
    java.lang.Object.wait(Native Method)
    java.util.TimerThread.mainLoop(Timer.java:552)
    java.util.TimerThread.run(Timer.java:505)
Thread 45 (master:hdpnode5:60000-EventThread):
  State: WAITING
  Blocked count: 0
  Waited count: 6
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@186bd87
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
Thread 44 (master:hdpnode5:60000-SendThread(hdpnode2:2181)):
  State: RUNNABLE
  Blocked count: 2
  Waited count: 2
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:338)
    org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
Thread 42 (master:hdpnode5:60000-EventThread):
  State: WAITING
  Blocked count: 0
  Waited count: 3
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@3504ead9
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
Thread 41 (master:hdpnode5:60000-SendThread(hdpnode2:2181)):
  State: RUNNABLE
  Blocked count: 8
  Waited count: 0
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:338)
    org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
Thread 39 (hdpnode5,60000,1430432112654.splitLogManagerTimeoutMonitor):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 422523
  Stack:
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:92)
    org.apache.hadoop.hbase.Chore.run(Chore.java:95)
    java.lang.Thread.run(Thread.java:745)
Thread 38 (org.apache.hadoop.hdfs.PeerCache@7fb1ae1d):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 140873
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:250)
    org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:41)
    org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:119)
    java.lang.Thread.run(Thread.java:745)
Thread 35 (Timer-0):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 14089
  Stack:
    java.lang.Object.wait(Native Method)
    java.util.TimerThread.mainLoop(Timer.java:552)
    java.util.TimerThread.run(Timer.java:505)
Thread 34 (520843222@qtp-1989625974-1 - Acceptor0 HttpServer$SelectChannelConnectorWithSafeStartup@0.0.0.0:60010):
  State: RUNNABLE
  Blocked count: 32380
  Waited count: 1
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:511)
    org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:193)
    org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
    org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
    org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
Thread 13 (master:hdpnode5:60000):
  State: TIMED_WAITING
  Blocked count: 974
  Waited count: 4214358
  Stack:
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:92)
    org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:56)
    org.apache.hadoop.hbase.master.HMaster.loop(HMaster.java:756)
    org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:614)
    java.lang.Thread.run(Thread.java:745)
Thread 31 (JvmPauseMonitor):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 844718
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hbase.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:159)
    java.lang.Thread.run(Thread.java:745)
Thread 14 (RpcServer.listener,port=60000):
  State: RUNNABLE
  Blocked count: 62312
  Waited count: 0
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
    org.apache.hadoop.hbase.ipc.RpcServer$Listener.run(RpcServer.java:688)
Thread 26 (RpcServer.responder):
  State: RUNNABLE
  Blocked count: 10334
  Waited count: 10300
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.hadoop.hbase.ipc.RpcServer$Responder.doRunLoop(RpcServer.java:878)
    org.apache.hadoop.hbase.ipc.RpcServer$Responder.run(RpcServer.java:861)
Thread 30 (main-EventThread):
  State: WAITING
  Blocked count: 282
  Waited count: 2341
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@64862dc1
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
Thread 29 (main-SendThread(hdpnode2:2181)):
  State: RUNNABLE
  Blocked count: 494
  Waited count: 2
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:338)
    org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
Thread 27 (Thread-6):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hadoop.net.unix.DomainSocketWatcher.doPoll0(Native Method)
    org.apache.hadoop.net.unix.DomainSocketWatcher.access$900(DomainSocketWatcher.java:52)
    org.apache.hadoop.net.unix.DomainSocketWatcher$1.run(DomainSocketWatcher.java:474)
    java.lang.Thread.run(Thread.java:745)
Thread 25 (Timer for 'HBase' metrics system):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 42268
  Stack:
    java.lang.Object.wait(Native Method)
    java.util.TimerThread.mainLoop(Timer.java:552)
    java.util.TimerThread.run(Timer.java:505)
Thread 24 (RpcServer.reader=9,port=60000):
  State: RUNNABLE
  Blocked count: 6266
  Waited count: 6260
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
    org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:582)
    org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:568)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 23 (RpcServer.reader=8,port=60000):
  State: RUNNABLE
  Blocked count: 6270
  Waited count: 6256
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
    org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:582)
    org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:568)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 22 (RpcServer.reader=7,port=60000):
  State: RUNNABLE
  Blocked count: 6266
  Waited count: 6255
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
    org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:582)
    org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:568)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 21 (RpcServer.reader=6,port=60000):
  State: RUNNABLE
  Blocked count: 6273
  Waited count: 6263
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
    org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:582)
    org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:568)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 20 (RpcServer.reader=5,port=60000):
  State: RUNNABLE
  Blocked count: 6272
  Waited count: 6269
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
    org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:582)
    org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:568)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 19 (RpcServer.reader=4,port=60000):
  State: RUNNABLE
  Blocked count: 6277
  Waited count: 6263
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
    org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:582)
    org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:568)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 18 (RpcServer.reader=3,port=60000):
  State: RUNNABLE
  Blocked count: 6283
  Waited count: 6261
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
    org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:582)
    org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:568)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 17 (RpcServer.reader=2,port=60000):
  State: RUNNABLE
  Blocked count: 6273
  Waited count: 6272
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
    org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:582)
    org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:568)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 16 (RpcServer.reader=1,port=60000):
  State: RUNNABLE
  Blocked count: 6280
  Waited count: 6258
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
    org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:582)
    org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:568)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 15 (RpcServer.reader=0,port=60000):
  State: RUNNABLE
  Blocked count: 6271
  Waited count: 6249
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
    org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:582)
    org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:568)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Thread 9 (event filterer):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 211359
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    com.cloudera.cmf.eventcatcher.client.logs.LogEventProcessor.runFiltering(LogEventProcessor.java:132)
    com.cloudera.cmf.eventcatcher.client.logs.LogEventProcessor.access$000(LogEventProcessor.java:28)
    com.cloudera.cmf.eventcatcher.client.logs.LogEventProcessor$1.run(LogEventProcessor.java:81)
Thread 5 (Signal Dispatcher):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
Thread 3 (Finalizer):
  State: WAITING
  Blocked count: 50
  Waited count: 33
  Waiting on java.lang.ref.ReferenceQueue$Lock@119cb1a8
  Stack:
    java.lang.Object.wait(Native Method)
    java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
    java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
    java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:209)
Thread 2 (Reference Handler):
  State: WAITING
  Blocked count: 3023
  Waited count: 3021
  Waiting on java.lang.ref.Reference$Lock@465395a0
  Stack:
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:503)
    java.lang.ref.Reference$ReferenceHandler.run(Reference.java:133)
Thread 1 (main):
  State: WAITING
  Blocked count: 10
  Waited count: 12
  Waiting on java.lang.Thread@25260d76
  Stack:
    java.lang.Object.wait(Native Method)
    java.lang.Thread.join(Thread.java:1281)
    java.lang.Thread.join(Thread.java:1355)
    org.apache.hadoop.hbase.util.HasThread.join(HasThread.java:89)
    org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:192)
    org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:135)
    org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
    org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2822)
Master configuration:
===========================================================
<?xml version="1.0" encoding="UTF-8" standalone="no"?><configuration>
<property><name>dfs.journalnode.rpc-address</name><value>0.0.0.0:8485</value><source>hdfs-default.xml</source></property>
<property><name>io.storefile.bloom.block.size</name><value>131072</value><source>hbase-default.xml</source></property>
<property><name>yarn.ipc.rpc.class</name><value>org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.job.maxtaskfailures.per.tracker</name><value>3</value><source>mapred-default.xml</source></property>
<property><name>yarn.client.max-cached-nodemanagers-proxies</name><value>0</value><source>yarn-default.xml</source></property>
<property><name>hbase.rest.threads.min</name><value>2</value><source>hbase-default.xml</source></property>
<property><name>hbase.rs.cacheblocksonwrite</name><value>false</value><source>hbase-default.xml</source></property>
<property><name>ha.health-monitor.connect-retry-interval.ms</name><value>1000</value><source>core-default.xml</source></property>
<property><name>yarn.resourcemanager.work-preserving-recovery.enabled</name><value>false</value><source>yarn-default.xml</source></property>
<property><name>dfs.client.mmap.cache.size</name><value>256</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.reduce.markreset.buffer.percent</name><value>0.0</value><source>mapred-default.xml</source></property>
<property><name>dfs.datanode.data.dir</name><value>file://${hadoop.tmp.dir}/dfs/data</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.jobhistory.max-age-ms</name><value>604800000</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.job.ubertask.enable</name><value>false</value><source>mapred-default.xml</source></property>
<property><name>dfs.namenode.delegation.token.renew-interval</name><value>86400000</value><source>hdfs-default.xml</source></property>
<property><name>yarn.nodemanager.log-aggregation.compression-type</name><value>none</value><source>yarn-default.xml</source></property>
<property><name>dfs.namenode.replication.considerLoad</name><value>true</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.job.complete.cancel.delegation.tokens</name><value>true</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.jobhistory.datestring.cache.size</name><value>200000</value><source>mapred-default.xml</source></property>
<property><name>hadoop.security.kms.client.authentication.retry-count</name><value>1</value><source>core-default.xml</source></property>
<property><name>hadoop.ssl.enabled.protocols</name><value>TLSv1</value><source>core-default.xml</source></property>
<property><name>hbase.status.multicast.address.ip</name><value>226.1.1.3</value><source>hbase-default.xml</source></property>
<property><name>dfs.namenode.retrycache.heap.percent</name><value>0.03f</value><source>hdfs-default.xml</source></property>
<property><name>yarn.resourcemanager.scheduler.address</name><value>${yarn.resourcemanager.hostname}:8030</value><source>yarn-default.xml</source></property>
<property><name>dfs.namenode.logging.level</name><value>info</value><source>hdfs-default.xml</source></property>
<property><name>hadoop.proxyuser.HTTP.groups</name><value>*</value><source>core-site.xml</source></property>
<property><name>dfs.client.file-block-storage-locations.num-threads</name><value>10</value><source>hdfs-default.xml</source></property>
<property><name>yarn.resourcemanager.proxy-user-privileges.enabled</name><value>false</value><source>yarn-default.xml</source></property>
<property><name>dfs.datanode.balance.bandwidthPerSec</name><value>1048576</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.reduce.shuffle.fetch.retry.enabled</name><value>${yarn.nodemanager.recovery.enabled}</value><source>mapred-default.xml</source></property>
<property><name>io.mapfile.bloom.error.rate</name><value>0.005</value><source>core-default.xml</source></property>
<property><name>yarn.nodemanager.resourcemanager.minimum.version</name><value>NONE</value><source>yarn-default.xml</source></property>
<property><name>yarn.resourcemanager.nodemanagers.heartbeat-interval-ms</name><value>1000</value><source>yarn-default.xml</source></property>
<property><name>dfs.secondary.namenode.kerberos.internal.spnego.principal</name><value>${dfs.web.authentication.kerberos.principal}</value><source>hdfs-default.xml</source></property>
<property><name>yarn.nodemanager.delete.debug-delay-sec</name><value>0</value><source>yarn-default.xml</source></property>
<property><name>hadoop.proxyuser.flume.groups</name><value>*</value><source>core-site.xml</source></property>
<property><name>dfs.client.read.shortcircuit.streams.cache.size</name><value>256</value><source>hdfs-default.xml</source></property>
<property><name>hadoop.proxyuser.hue.hosts</name><value>*</value><source>core-site.xml</source></property>
<property><name>yarn.scheduler.maximum-allocation-vcores</name><value>32</value><source>yarn-default.xml</source></property>
<property><name>dfs.image.transfer.bandwidthPerSec</name><value>0</value><source>hdfs-default.xml</source></property>
<property><name>hfile.block.bloom.cacheonwrite</name><value>false</value><source>hbase-default.xml</source></property>
<property><name>hbase.zookeeper.quorum</name><value>hdpnode1,hdpnode2,hdpnode3,hdpnode4,hdpnode5</value><source>hbase-site.xml</source></property>
<property><name>yarn.timeline-service.address</name><value>${yarn.timeline-service.hostname}:10200</value><source>yarn-default.xml</source></property>
<property><name>yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb</name><value>0</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.job.hdfs-servers</name><value>${fs.defaultFS}</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.task.profile.reduce.params</name><value>${mapreduce.task.profile.params}</value><source>mapred-default.xml</source></property>
<property><name>hbase.zookeeper.property.syncLimit</name><value>5</value><source>hbase-default.xml</source></property>
<property><name>dfs.namenode.fs-limits.min-block-size</name><value>1048576</value><source>hdfs-default.xml</source></property>
<property><name>ftp.stream-buffer-size</name><value>4096</value><source>core-default.xml</source></property>
<property><name>dfs.client.use.legacy.blockreader.local</name><value>false</value><source>hdfs-default.xml</source></property>
<property><name>dfs.short.circuit.shared.memory.watcher.interrupt.check.ms</name><value>60000</value><source>hdfs-default.xml</source></property>
<property><name>dfs.datanode.directoryscan.threads</name><value>1</value><source>hdfs-default.xml</source></property>
<property><name>fs.s3a.buffer.dir</name><value>${hadoop.tmp.dir}/s3a</value><source>core-default.xml</source></property>
<property><name>yarn.client.application-client-protocol.poll-interval-ms</name><value>200</value><source>yarn-default.xml</source></property>
<property><name>yarn.timeline-service.leveldb-timeline-store.path</name><value>${hadoop.tmp.dir}/yarn/timeline</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.job.split.metainfo.maxsize</name><value>10000000</value><source>mapred-default.xml</source></property>
<property><name>dfs.namenode.edits.noeditlogchannelflush</name><value>false</value><source>hdfs-default.xml</source></property>
<property><name>s3native.bytes-per-checksum</name><value>512</value><source>core-default.xml</source></property>
<property><name>hbase.rest.filter.classes</name><value>org.apache.hadoop.hbase.rest.filter.GzipFilter</value><source>hbase-default.xml</source></property>
<property><name>yarn.client.failover-retries-on-socket-timeouts</name><value>0</value><source>yarn-default.xml</source></property>
<property><name>dfs.namenode.startup.delay.block.deletion.sec</name><value>0</value><source>hdfs-default.xml</source></property>
<property><name>dfs.webhdfs.user.provider.user.pattern</name><value>^[A-Za-z_][A-Za-z0-9._-]*[$]?$</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.tasktracker.tasks.sleeptimebeforesigkill</name><value>5000</value><source>mapred-default.xml</source></property>
<property><name>hadoop.http.authentication.type</name><value>simple</value><source>core-default.xml</source></property>
<property><name>dfs.namenode.path.based.cache.refresh.interval.ms</name><value>30000</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.local.clientfactory.class.name</name><value>org.apache.hadoop.mapred.LocalClientFactory</value><source>mapred-default.xml</source></property>
<property><name>dfs.datanode.cache.revocation.timeout.ms</name><value>900000</value><source>hdfs-default.xml</source></property>
<property><name>ipc.client.connection.maxidletime</name><value>10000</value><source>core-default.xml</source></property>
<property><name>dfs.namenode.safemode.threshold-pct</name><value>0.999f</value><source>hdfs-default.xml</source></property>
<property><name>hfile.block.cache.size</name><value>0.0</value><source>programatically</source></property>
<property><name>fs.s3a.multipart.purge.age</name><value>86400</value><source>core-default.xml</source></property>
<property><name>dfs.namenode.num.checkpoints.retained</name><value>2</value><source>hdfs-default.xml</source></property>
<property><name>hbase.hregion.memstore.mslab.enabled</name><value>true</value><source>hbase-default.xml</source></property>
<property><name>hbase.master.ipc.address</name><value>0.0.0.0</value><source>hbase-site.xml</source></property>
<property><name>mapreduce.job.ubertask.maxmaps</name><value>9</value><source>mapred-default.xml</source></property>
<property><name>dfs.namenode.stale.datanode.interval</name><value>30000</value><source>hdfs-default.xml</source></property>
<property><name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name><value>90.0</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.tasktracker.http.address</name><value>0.0.0.0:50060</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.ifile.readahead.bytes</name><value>4194304</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.jobhistory.admin.address</name><value>0.0.0.0:10033</value><source>mapred-default.xml</source></property>
<property><name>s3.client-write-packet-size</name><value>65536</value><source>core-default.xml</source></property>
<property><name>hbase.master.port</name><value>60000</value><source>hbase-site.xml</source></property>
<property><name>dfs.block.access.token.lifetime</name><value>600</value><source>hdfs-default.xml</source></property>
<property><name>yarn.app.mapreduce.am.resource.cpu-vcores</name><value>1</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.input.lineinputformat.linespermap</name><value>1</value><source>mapred-default.xml</source></property>
<property><name>hbase.regionserver.checksum.verify</name><value>true</value><source>hbase-default.xml</source></property>
<property><name>dfs.namenode.num.extra.edits.retained</name><value>1000000</value><source>hdfs-default.xml</source></property>
<property><name>hbase.security.visibility.mutations.checkauths</name><value>false</value><source>hbase-default.xml</source></property>
<property><name>mapreduce.reduce.shuffle.input.buffer.percent</name><value>0.70</value><source>mapred-default.xml</source></property>
<property><name>hadoop.http.staticuser.user</name><value>dr.who</value><source>core-default.xml</source></property>
<property><name>mapreduce.reduce.maxattempts</name><value>4</value><source>mapred-default.xml</source></property>
<property><name>hbase.security.authorization</name><value>false</value><source>hbase-site.xml</source></property>
<property><name>hadoop.security.group.mapping.ldap.search.filter.user</name><value>(&amp;(objectClass=user)(sAMAccountName={0}))</value><source>core-default.xml</source></property>
<property><name>mapreduce.jobhistory.admin.acl</name><value>*</value><source>mapred-default.xml</source></property>
<property><name>dfs.client.context</name><value>default</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.map.maxattempts</name><value>4</value><source>mapred-default.xml</source></property>
<property><name>yarn.resourcemanager.zk-retry-interval-ms</name><value>1000</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.jobhistory.cleaner.interval-ms</name><value>86400000</value><source>mapred-default.xml</source></property>
<property><name>dfs.datanode.drop.cache.behind.reads</name><value>false</value><source>hdfs-default.xml</source></property>
<property><name>hbase.server.versionfile.writeattempts</name><value>3</value><source>hbase-default.xml</source></property>
<property><name>dfs.permissions.superusergroup</name><value>supergroup</value><source>hdfs-default.xml</source></property>
<property><name>hbase.zookeeper.useMulti</name><value>true</value><source>hbase-default.xml</source></property>
<property><name>fs.s3n.block.size</name><value>67108864</value><source>core-default.xml</source></property>
<property><name>dfs.namenode.list.cache.pools.num.responses</name><value>100</value><source>hdfs-default.xml</source></property>
<property><name>hbase.zookeeper.leaderport</name><value>3888</value><source>hbase-default.xml</source></property>
<property><name>dfs.datanode.slow.io.warning.threshold.ms</name><value>300</value><source>hdfs-default.xml</source></property>
<property><name>hbase.master.info.port</name><value>60010</value><source>hbase-site.xml</source></property>
<property><name>dfs.namenode.fs-limits.max-blocks-per-file</name><value>1048576</value><source>hdfs-default.xml</source></property>
<property><name>yarn.nodemanager.vmem-check-enabled</name><value>false</value><source>yarn-default.xml</source></property>
<property><name>hadoop.proxyuser.HTTP.hosts</name><value>*</value><source>core-site.xml</source></property>
<property><name>hadoop.security.authentication</name><value>simple</value><source>core-site.xml</source></property>
<property><name>mapreduce.reduce.cpu.vcores</name><value>1</value><source>mapred-default.xml</source></property>
<property><name>net.topology.node.switch.mapping.impl</name><value>org.apache.hadoop.net.ScriptBasedMapping</value><source>core-default.xml</source></property>
<property><name>fs.s3.sleepTimeSeconds</name><value>10</value><source>core-default.xml</source></property>
<property><name>yarn.timeline-service.ttl-ms</name><value>604800000</value><source>yarn-default.xml</source></property>
<property><name>yarn.resourcemanager.keytab</name><value>/etc/krb5.keytab</value><source>yarn-default.xml</source></property>
<property><name>yarn.resourcemanager.container.liveness-monitor.interval-ms</name><value>600000</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.jobtracker.heartbeats.in.second</name><value>100</value><source>mapred-default.xml</source></property>
<property><name>hbase.mob.cache.evict.period</name><value>3600</value><source>hbase-default.xml</source></property>
<property><name>yarn.app.mapreduce.am.scheduler.heartbeat.interval-ms</name><value>1000</value><source>mapred-default.xml</source></property>
<property><name>yarn.app.mapreduce.client-am.ipc.max-retries-on-timeouts</name><value>3</value><source>mapred-default.xml</source></property>
<property><name>yarn.nodemanager.linux-container-executor.cgroups.hierarchy</name><value>/hadoop-yarn</value><source>yarn-default.xml</source></property>
<property><name>s3.bytes-per-checksum</name><value>512</value><source>core-default.xml</source></property>
<property><name>hbase.regionserver.dns.nameserver</name><value>default</value><source>hbase-default.xml</source></property>
<property><name>hadoop.ssl.require.client.cert</name><value>false</value><source>core-site.xml</source></property>
<property><name>dfs.journalnode.http-address</name><value>0.0.0.0:8480</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.output.fileoutputformat.compress</name><value>false</value><source>mapred-default.xml</source></property>
<property><name>fs.default.name</name><value>hdfs://hdpnode2:8020</value></property>
<property><name>dfs.ha.automatic-failover.enabled</name><value>false</value><source>hdfs-default.xml</source></property>
<property><name>hbase.ipc.server.callqueue.read.ratio</name><value>0</value><source>hbase-default.xml</source></property>
<property><name>hbase.cluster.distributed</name><value>true</value><source>hbase-site.xml</source></property>
<property><name>hbase.rootdir</name><value>hdfs://hdpnode2:8020/hbase</value><source>hbase-site.xml</source></property>
<property><name>yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled</name><value>true</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.shuffle.max.threads</name><value>0</value><source>mapred-default.xml</source></property>
<property><name>dfs.namenode.invalidate.work.pct.per.iteration</name><value>0.32f</value><source>hdfs-default.xml</source></property>
<property><name>s3native.client-write-packet-size</name><value>65536</value><source>core-default.xml</source></property>
<property><name>dfs.client.block.write.replace-datanode-on-failure.policy</name><value>DEFAULT</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.client.submit.file.replication</name><value>10</value><source>mapred-default.xml</source></property>
<property><name>yarn.app.mapreduce.am.job.committer.commit-window</name><value>10000</value><source>mapred-default.xml</source></property>
<property><name>yarn.nodemanager.sleep-delay-before-sigkill.ms</name><value>250</value><source>yarn-default.xml</source></property>
<property><name>yarn.nodemanager.env-whitelist</name><value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,HADOOP_YARN_HOME</value><source>yarn-default.xml</source></property>
<property><name>dfs.namenode.acls.enabled</name><value>false</value><source>hdfs-site.xml</source></property>
<property><name>dfs.namenode.secondary.http-address</name><value>0.0.0.0:50090</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.map.speculative</name><value>true</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.job.speculative.slowtaskthreshold</name><value>1.0</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.task.tmp.dir</name><value>./tmp</value><source>mapred-default.xml</source></property>
<property><name>yarn.nodemanager.linux-container-executor.cgroups.mount</name><value>false</value><source>yarn-default.xml</source></property>
<property><name>hbase.auth.token.max.lifetime</name><value>604800000</value><source>hbase-default.xml</source></property>
<property><name>hbase.regionserver.msginterval</name><value>3000</value><source>hbase-default.xml</source></property>
<property><name>mapreduce.tasktracker.http.threads</name><value>40</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.jobhistory.http.policy</name><value>HTTP_ONLY</value><source>mapred-default.xml</source></property>
<property><name>hbase.ipc.client.fallback-to-simple-auth-allowed</name><value>false</value><source>hbase-default.xml</source></property>
<property><name>fs.s3a.paging.maximum</name><value>5000</value><source>core-default.xml</source></property>
<property><name>hbase.rest.threads.max</name><value>100</value><source>hbase-default.xml</source></property>
<property><name>fs.s3.buffer.dir</name><value>${hadoop.tmp.dir}/s3</value><source>core-default.xml</source></property>
<property><name>hadoop.proxyuser.flume.hosts</name><value>*</value><source>core-site.xml</source></property>
<property><name>hbase.snapshot.enabled</name><value>true</value><source>hbase-site.xml</source></property>
<property><name>hbase.dynamic.jars.dir</name><value>${hbase.rootdir}/lib</value><source>hbase-default.xml</source></property>
<property><name>hbase.defaults.for.version</name><value>0.98.6-cdh5.3.3</value><source>hbase-default.xml</source></property>
<property><name>io.native.lib.available</name><value>true</value><source>core-default.xml</source></property>
<property><name>mapreduce.jobhistory.done-dir</name><value>${yarn.app.mapreduce.am.staging-dir}/history/done</value><source>mapred-default.xml</source></property>
<property><name>hbase.regions.slop</name><value>0.2</value><source>hbase-default.xml</source></property>
<property><name>dfs.namenode.avoid.write.stale.datanode</name><value>false</value><source>hdfs-default.xml</source></property>
<property><name>dfs.namenode.checkpoint.txns</name><value>1000000</value><source>hdfs-default.xml</source></property>
<property><name>hadoop.ssl.hostname.verifier</name><value>DEFAULT</value><source>core-default.xml</source></property>
<property><name>zookeeper.znode.rootserver</name><value>root-region-server</value><source>hbase-site.xml</source></property>
<property><name>mapreduce.task.timeout</name><value>600000</value><source>mapred-default.xml</source></property>
<property><name>hbase.client.max.perserver.tasks</name><value>5</value><source>hbase-default.xml</source></property>
<property><name>yarn.nodemanager.disk-health-checker.interval-ms</name><value>120000</value><source>yarn-default.xml</source></property>
<property><name>dfs.journalnode.https-address</name><value>0.0.0.0:8481</value><source>hdfs-default.xml</source></property>
<property><name>hadoop.security.groups.cache.secs</name><value>300</value><source>core-default.xml</source></property>
<property><name>mapreduce.input.fileinputformat.split.minsize</name><value>0</value><source>mapred-default.xml</source></property>
<property><name>dfs.datanode.sync.behind.writes</name><value>false</value><source>hdfs-default.xml</source></property>
<property><name>zookeeper.session.timeout</name><value>60000</value><source>hbase-site.xml</source></property>
<property><name>hadoop.proxyuser.hue.groups</name><value>*</value><source>core-site.xml</source></property>
<property><name>ipc.server.tcpnodelay</name><value>false</value><source>core-default.xml</source></property>
<property><name>mapreduce.shuffle.port</name><value>13562</value><source>mapred-default.xml</source></property>
<property><name>hadoop.rpc.protection</name><value>authentication</value><source>core-site.xml</source></property>
<property><name>replication.source.ratio</name><value>1.0</value><source>hbase-site.xml</source></property>
<property><name>dfs.client.https.keystore.resource</name><value>ssl-client.xml</value><source>hdfs-default.xml</source></property>
<property><name>dfs.namenode.list.encryption.zones.num.responses</name><value>100</value><source>hdfs-default.xml</source></property>
<property><name>yarn.client.failover-proxy-provider</name><value>org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.jobtracker.retiredjobs.cache.size</name><value>1000</value><source>mapred-default.xml</source></property>
<property><name>hbase.balancer.period</name><value>300000</value><source>hbase-default.xml</source></property>
<property><name>yarn.nodemanager.resourcemanager.connect.retry_interval.secs</name><value>30</value><source>yarn-default.xml</source></property>
<property><name>ipc.client.tcpnodelay</name><value>false</value><source>core-default.xml</source></property>
<property><name>dfs.ha.tail-edits.period</name><value>60</value><source>hdfs-default.xml</source></property>
<property><name>fs.s3.maxRetries</name><value>4</value><source>core-default.xml</source></property>
<property><name>dfs.datanode.drop.cache.behind.writes</name><value>false</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.jobtracker.address</name><value>local</value><source>mapred-default.xml</source></property>
<property><name>hadoop.http.authentication.kerberos.principal</name><value>HTTP/_HOST@LOCALHOST</value><source>core-default.xml</source></property>
<property><name>nfs.server.port</name><value>2049</value><source>hdfs-default.xml</source></property>
<property><name>yarn.resourcemanager.webapp.address</name><value>${yarn.resourcemanager.hostname}:8088</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.task.profile.reduces</name><value>0-2</value><source>mapred-default.xml</source></property>
<property><name>yarn.resourcemanager.am.max-attempts</name><value>2</value><source>yarn-default.xml</source></property>
<property><name>hbase.hstore.blockingWaitTime</name><value>90000</value><source>hbase-default.xml</source></property>
<property><name>nfs.dump.dir</name><value>/tmp/.hdfs-nfs</value><source>hdfs-default.xml</source></property>
<property><name>hbase.client.pause</name><value>100</value><source>hbase-site.xml</source></property>
<property><name>hbase.client.write.buffer</name><value>2097152</value><source>hbase-site.xml</source></property>
<property><name>dfs.bytes-per-checksum</name><value>512</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.job.end-notification.max.retry.interval</name><value>5000</value><source>mapred-default.xml</source></property>
<property><name>ipc.client.connect.retry.interval</name><value>1000</value><source>core-default.xml</source></property>
<property><name>fs.s3a.multipart.size</name><value>104857600</value><source>core-default.xml</source></property>
<property><name>yarn.app.mapreduce.am.command-opts</name><value>-Xmx1024m</value><source>mapred-default.xml</source></property>
<property><name>yarn.nodemanager.process-kill-wait.ms</name><value>2000</value><source>yarn-default.xml</source></property>
<property><name>hbase.rpc.timeout</name><value>60000</value><source>hbase-site.xml</source></property>
<property><name>hbase.metrics.exposeOperationTimes</name><value>true</value><source>hbase-default.xml</source></property>
<property><name>dfs.namenode.safemode.min.datanodes</name><value>0</value><source>hdfs-default.xml</source></property>
<property><name>hbase.thrift.maxWorkerThreads</name><value>1000</value><source>hbase-default.xml</source></property>
<property><name>dfs.namenode.write.stale.datanode.ratio</name><value>0.5f</value><source>hdfs-default.xml</source></property>
<property><name>hadoop.jetty.logs.serve.aliases</name><value>true</value><source>core-default.xml</source></property>
<property><name>mapreduce.reduce.shuffle.fetch.retry.timeout-ms</name><value>30000</value><source>mapred-default.xml</source></property>
<property><name>hbase.regionserver.global.memstore.upperLimit</name><value>0.4</value><source>hbase-default.xml</source></property>
<property><name>fs.du.interval</name><value>600000</value><source>core-default.xml</source></property>
<property><name>mapreduce.tasktracker.dns.nameserver</name><value>default</value><source>mapred-default.xml</source></property>
<property><name>hadoop.proxyuser.httpfs.groups</name><value>*</value><source>core-site.xml</source></property>
<property><name>hbase.master.catalog.timeout</name><value>600000</value><source>hbase-default.xml</source></property>
<property><name>hadoop.security.random.device.file.path</name><value>/dev/urandom</value><source>core-default.xml</source></property>
<property><name>mapreduce.task.merge.progress.records</name><value>10000</value><source>mapred-default.xml</source></property>
<property><name>dfs.webhdfs.enabled</name><value>true</value><source>hdfs-default.xml</source></property>
<property><name>hadoop.ssl.client.conf</name><value>ssl-client.xml</value><source>core-site.xml</source></property>
<property><name>mapreduce.job.counters.max</name><value>120</value><source>mapred-default.xml</source></property>
<property><name>yarn.nodemanager.localizer.fetch.thread-count</name><value>4</value><source>yarn-default.xml</source></property>
<property><name>io.mapfile.bloom.size</name><value>1048576</value><source>core-default.xml</source></property>
<property><name>yarn.nodemanager.localizer.client.thread-count</name><value>5</value><source>yarn-default.xml</source></property>
<property><name>fs.automatic.close</name><value>true</value><source>core-default.xml</source></property>
<property><name>mapreduce.task.profile</name><value>false</value><source>mapred-default.xml</source></property>
<property><name>dfs.namenode.edit.log.autoroll.multiplier.threshold</name><value>2.0</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.task.combine.progress.records</name><value>10000</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.shuffle.ssl.file.buffer.size</name><value>65536</value><source>mapred-default.xml</source></property>
<property><name>fs.swift.impl</name><value>org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem</value><source>core-default.xml</source></property>
<property><name>yarn.app.mapreduce.am.container.log.backups</name><value>0</value><source>mapred-default.xml</source></property>
<property><name>hbase.hstore.bytes.per.checksum</name><value>16384</value><source>hbase-default.xml</source></property>
<property><name>yarn.ipc.serializer.type</name><value>protocolbuffers</value><source>yarn-default.xml</source></property>
<property><name>dfs.datanode.available-space-volume-choosing-policy.balanced-space-preference-fraction</name><value>0.75f</value><source>hdfs-default.xml</source></property>
<property><name>hbase.hstore.flusher.count</name><value>2</value><source>hbase-default.xml</source></property>
<property><name>dfs.namenode.backup.address</name><value>0.0.0.0:50100</value><source>hdfs-default.xml</source></property>
<property><name>dfs.client.https.need-auth</name><value>false</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.app-submission.cross-platform</name><value>false</value><source>mapred-default.xml</source></property>
<property><name>yarn.timeline-service.ttl-enable</name><value>true</value><source>yarn-default.xml</source></property>
<property><name>dfs.user.home.dir.prefix</name><value>/user</value><source>hdfs-default.xml</source></property>
<property><name>yarn.nodemanager.container-monitor.procfs-tree.smaps-based-rss.enabled</name><value>false</value><source>yarn-default.xml</source></property>
<property><name>yarn.nodemanager.keytab</name><value>/etc/krb5.keytab</value><source>yarn-default.xml</source></property>
<property><name>dfs.namenode.xattrs.enabled</name><value>true</value><source>hdfs-default.xml</source></property>
<property><name>dfs.client.write.exclude.nodes.cache.expiry.interval.millis</name><value>600000</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.jobtracker.restart.recover</name><value>false</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.map.skip.proc.count.autoincr</name><value>true</value><source>mapred-default.xml</source></property>
<property><name>dfs.namenode.datanode.registration.ip-hostname-check</name><value>true</value><source>hdfs-default.xml</source></property>
<property><name>dfs.image.transfer.chunksize</name><value>65536</value><source>hdfs-default.xml</source></property>
<property><name>hadoop.security.instrumentation.requires.admin</name><value>false</value><source>core-site.xml</source></property>
<property><name>io.compression.codec.bzip2.library</name><value>system-native</value><source>core-default.xml</source></property>
<property><name>dfs.namenode.name.dir.restore</name><value>false</value><source>hdfs-default.xml</source></property>
<property><name>hbase.client.retries.number</name><value>350</value><source>programatically</source></property>
<property><name>hadoop.ssl.keystores.factory.class</name><value>org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory</value><source>core-site.xml</source></property>
<property><name>dfs.namenode.list.cache.directives.num.responses</name><value>100</value><source>hdfs-default.xml</source></property>
<property><name>hbase.status.multicast.address.port</name><value>60100</value><source>hbase-default.xml</source></property>
<property><name>fs.ftp.host</name><value>0.0.0.0</value><source>core-default.xml</source></property>
<property><name>hbase.hstore.checksum.algorithm</name><value>CRC32</value><source>hbase-default.xml</source></property>
<property><name>s3.blocksize</name><value>67108864</value><source>core-default.xml</source></property>
<property><name>s3native.stream-buffer-size</name><value>4096</value><source>core-default.xml</source></property>
<property><name>mapreduce.jobtracker.taskscheduler</name><value>org.apache.hadoop.mapred.JobQueueTaskScheduler</value><source>mapred-default.xml</source></property>
<property><name>dfs.datanode.dns.nameserver</name><value>default</value><source>hdfs-default.xml</source></property>
<property><name>yarn.nodemanager.resource.memory-mb</name><value>8192</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.task.userlog.limit.kb</name><value>0</value><source>mapred-default.xml</source></property>
<property><name>hadoop.security.crypto.codec.classes.aes.ctr.nopadding</name><value>org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec,org.apache.hadoop.crypto.JceAesCtrCryptoCodec</value><source>core-default.xml</source></property>
<property><name>mapreduce.reduce.speculative</name><value>true</value><source>mapred-default.xml</source></property>
<property><name>yarn.nodemanager.container-monitor.interval-ms</name><value>3000</value><source>yarn-default.xml</source></property>
<property><name>dfs.replication.max</name><value>512</value><source>hdfs-default.xml</source></property>
<property><name>dfs.replication</name><value>3</value><source>hdfs-site.xml</source></property>
<property><name>yarn.client.failover-retries</name><value>0</value><source>yarn-default.xml</source></property>
<property><name>yarn.nodemanager.resource.cpu-vcores</name><value>8</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.jobhistory.recovery.enable</name><value>false</value><source>mapred-default.xml</source></property>
<property><name>hbase.server.thread.wakefrequency</name><value>10000</value><source>hbase-site.xml</source></property>
<property><name>nfs.exports.allowed.hosts</name><value>* rw</value><source>core-default.xml</source></property>
<property><name>hbase.lease.recovery.timeout</name><value>900000</value><source>hbase-default.xml</source></property>
<property><name>mapreduce.reduce.shuffle.memory.limit.percent</name><value>0.25</value><source>mapred-default.xml</source></property>
<property><name>file.replication</name><value>1</value><source>core-default.xml</source></property>
<property><name>mapreduce.job.reduce.shuffle.consumer.plugin.class</name><value>org.apache.hadoop.mapreduce.task.reduce.Shuffle</value><source>mapred-default.xml</source></property>
<property><name>hfile.format.version</name><value>2</value><source>hbase-default.xml</source></property>
<property><name>mapreduce.job.jvm.numtasks</name><value>1</value><source>mapred-default.xml</source></property>
<property><name>dfs.datanode.fsdatasetcache.max.threads.per.volume</name><value>4</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.am.max-attempts</name><value>2</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.shuffle.connection-keep-alive.timeout</name><value>5</value><source>mapred-default.xml</source></property>
<property><name>hadoop.fuse.timer.period</name><value>5</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.job.reduces</name><value>1</value><source>mapred-default.xml</source></property>
<property><name>hbase.mob.sweep.tool.compaction.ratio</name><value>0.5f</value><source>hbase-default.xml</source></property>
<property><name>hbase.thrift.minWorkerThreads</name><value>16</value><source>hbase-default.xml</source></property>
<property><name>hbase.zookeeper.dns.interface</name><value>default</value><source>hbase-default.xml</source></property>
<property><name>yarn.app.mapreduce.am.job.task.listener.thread-count</name><value>30</value><source>mapred-default.xml</source></property>
<property><name>yarn.resourcemanager.store.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.job.speculative.slownodethreshold</name><value>1.0</value><source>mapred-default.xml</source></property>
<property><name>s3native.replication</name><value>3</value><source>core-default.xml</source></property>
<property><name>mapreduce.tasktracker.reduce.tasks.maximum</name><value>2</value><source>mapred-default.xml</source></property>
<property><name>hbase.snapshot.restore.failsafe.name</name><value>hbase-failsafe-{snapshot.name}-{restore.timestamp}</value><source>hbase-default.xml</source></property>
<property><name>fs.permissions.umask-mode</name><value>022</value><source>hdfs-site.xml</source></property>
<property><name>mapreduce.cluster.local.dir</name><value>${hadoop.tmp.dir}/mapred/local</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.client.output.filter</name><value>FAILED</value><source>mapred-default.xml</source></property>
<property><name>yarn.nodemanager.pmem-check-enabled</name><value>true</value><source>yarn-default.xml</source></property>
<property><name>dfs.client.failover.connection.retries.on.timeouts</name><value>0</value><source>hdfs-default.xml</source></property>
<property><name>hadoop.proxyuser.httpfs.hosts</name><value>*</value><source>core-site.xml</source></property>
<property><name>mapreduce.jobtracker.instrumentation</name><value>org.apache.hadoop.mapred.JobTrackerMetricsInst</value><source>mapred-default.xml</source></property>
<property><name>ftp.replication</name><value>3</value><source>core-default.xml</source></property>
<property><name>hbase.hstore.blockingStoreFiles</name><value>10</value><source>hbase-default.xml</source></property>
<property><name>hadoop.security.group.mapping.ldap.search.attr.member</name><value>member</value><source>core-default.xml</source></property>
<property><name>hbase.regionserver.hlog.reader.impl</name><value>org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader</value><source>hbase-default.xml</source></property>
<property><name>dfs.namenode.replication.work.multiplier.per.iteration</name><value>2</value><source>hdfs-default.xml</source></property>
<property><name>yarn.resourcemanager.resource-tracker.address</name><value>${yarn.resourcemanager.hostname}:8031</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.tasktracker.outofband.heartbeat</name><value>false</value><source>mapred-default.xml</source></property>
<property><name>hbase.master.info.bindAddress</name><value>0.0.0.0</value><source>hbase-default.xml</source></property>
<property><name>dfs.namenode.edits.dir</name><value>${dfs.namenode.name.dir}</value><source>hdfs-default.xml</source></property>
<property><name>dfs.https.port</name><value>50470</value><source>hdfs-site.xml</source></property>
<property><name>yarn.resourcemanager.scheduler.monitor.enable</name><value>false</value><source>yarn-default.xml</source></property>
<property><name>fs.trash.checkpoint.interval</name><value>0</value><source>core-default.xml</source></property>
<property><name>dfs.client.read.shortcircuit.streams.cache.expiry.ms</name><value>300000</value><source>hdfs-default.xml</source></property>
<property><name>yarn.timeline-service.leveldb-timeline-store.start-time-write-cache-size</name><value>10000</value><source>yarn-default.xml</source></property>
<property><name>s3.stream-buffer-size</name><value>4096</value><source>core-default.xml</source></property>
<property><name>fs.s3a.connection.maximum</name><value>15</value><source>core-default.xml</source></property>
<property><name>file.client-write-packet-size</name><value>65536</value><source>core-default.xml</source></property>
<property><name>mapreduce.tasktracker.healthchecker.script.timeout</name><value>600000</value><source>mapred-default.xml</source></property>
<property><name>hbase.status.listener.class</name><value>org.apache.hadoop.hbase.client.ClusterStatusListener$MulticastListener</value><source>hbase-default.xml</source></property>
<property><name>dfs.namenode.fs-limits.max-directory-items</name><value>1048576</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.tasktracker.taskcontroller</name><value>org.apache.hadoop.mapred.DefaultTaskController</value><source>mapred-default.xml</source></property>
<property><name>dfs.namenode.path.based.cache.block.map.allocation.percent</name><value>0.25</value><source>hdfs-default.xml</source></property>
<property><name>fs.s3a.impl</name><value>org.apache.hadoop.fs.s3a.S3AFileSystem</value><source>core-default.xml</source></property>
<property><name>hbase.replication</name><value>true</value><source>hbase-site.xml</source></property>
<property><name>dfs.namenode.checkpoint.dir</name><value>file://${hadoop.tmp.dir}/dfs/namesecondary</value><source>hdfs-default.xml</source></property>
<property><name>hbase.regionserver.metahandler.count</name><value>10</value><source>hbase-site.xml</source></property>
<property><name>hbase.regionserver.dns.interface</name><value>default</value><source>hbase-default.xml</source></property>
<property><name>yarn.nodemanager.remote-app-log-dir</name><value>/tmp/logs</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.reduce.shuffle.retry-delay.max.ms</name><value>60000</value><source>mapred-default.xml</source></property>
<property><name>io.map.index.interval</name><value>128</value><source>core-default.xml</source></property>
<property><name>dfs.namenode.servicerpc-address</name><value>hdpnode2:8022</value><source>hdfs-site.xml</source></property>
<property><name>dfs.client.block.write.replace-datanode-on-failure.enable</name><value>true</value><source>hdfs-default.xml</source></property>
<property><name>dfs.namenode.replication.interval</name><value>3</value><source>hdfs-default.xml</source></property>
<property><name>hbase.rest.port</name><value>8080</value><source>hbase-default.xml</source></property>
<property><name>hbase.regionserver.handler.count</name><value>30</value><source>hbase-site.xml</source></property>
<property><name>hadoop.ssl.server.conf</name><value>ssl-server.xml</value><source>core-site.xml</source></property>
<property><name>hadoop.rpc.socket.factory.class.default</name><value>org.apache.hadoop.net.StandardSocketFactory</value><source>core-default.xml</source></property>
<property><name>yarn.app.mapreduce.client.max-retries</name><value>3</value><source>mapred-default.xml</source></property>
<property><name>yarn.nodemanager.address</name><value>${yarn.nodemanager.hostname}:0</value><source>yarn-default.xml</source></property>
<property><name>hbase.ipc.server.callqueue.scan.ratio</name><value>0</value><source>hbase-default.xml</source></property>
<property><name>dfs.datanode.max.transfer.threads</name><value>4096</value><source>hdfs-default.xml</source></property>
<property><name>ha.failover-controller.graceful-fence.rpc-timeout.ms</name><value>5000</value><source>core-default.xml</source></property>
<property><name>dfs.datanode.ipc.address</name><value>0.0.0.0:50020</value><source>hdfs-default.xml</source></property>
<property><name>yarn.resourcemanager.delayed.delegation-token.removal-interval-ms</name><value>30000</value><source>yarn-default.xml</source></property>
<property><name>yarn.timeline-service.enabled</name><value>false</value><source>yarn-default.xml</source></property>
<property><name>dfs.client.cached.conn.retry</name><value>3</value><source>hdfs-default.xml</source></property>
<property><name>dfs.namenode.backup.http-address</name><value>0.0.0.0:50105</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.tasktracker.report.address</name><value>127.0.0.1:0</value><source>mapred-default.xml</source></property>
<property><name>hbase.bulkload.retries.number</name><value>0</value><source>hbase-default.xml</source></property>
<property><name>dfs.namenode.checkpoint.period</name><value>3600</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.task.attempt.id</name><value>hb_m_hdpnode5,60000,1430432112654</value><source>because mapred.task.id is deprecated</source></property>
<property><name>hbase.hregion.max.filesize</name><value>10737418240</value><source>hbase-default.xml</source></property>
<property><name>dfs.datanode.shared.file.descriptor.paths</name><value>/dev/shm,/tmp</value><source>hdfs-default.xml</source></property>
<property><name>hbase.master.loadbalancer.class</name><value>org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer</value><source>hbase-default.xml</source></property>
<property><name>dfs.http.policy</name><value>HTTP_ONLY</value><source>hdfs-default.xml</source></property>
<property><name>hadoop.security.groups.cache.warn.after.ms</name><value>5000</value><source>core-default.xml</source></property>
<property><name>hadoop.security.auth_to_local</name><value>DEFAULT</value><source>core-site.xml</source></property>
<property><name>dfs.namenode.fs-limits.max-xattrs-per-inode</name><value>32</value><source>hdfs-default.xml</source></property>
<property><name>yarn.resourcemanager.zk-acl</name><value>world:anyone:rwcda</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.job.speculative.speculativecap</name><value>0.1</value><source>mapred-default.xml</source></property>
<property><name>dfs.namenode.support.allow.format</name><value>true</value><source>hdfs-default.xml</source></property>
<property><name>dfs.namenode.checkpoint.max-retries</name><value>3</value><source>hdfs-default.xml</source></property>
<property><name>zookeeper.znode.acl.parent</name><value>acl</value><source>hbase-default.xml</source></property>
<property><name>hbase.status.publisher.class</name><value>org.apache.hadoop.hbase.master.ClusterStatusPublisher$MulticastPublisher</value><source>hbase-default.xml</source></property>
<property><name>yarn.resourcemanager.fs.state-store.retry-policy-spec</name><value>2000, 500</value><source>yarn-default.xml</source></property>
<property><name>hbase.tmp.dir</name><value>${java.io.tmpdir}/hbase-${user.name}</value><source>hbase-default.xml</source></property>
<property><name>dfs.namenode.decommission.nodes.per.interval</name><value>5</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.job.committer.setup.cleanup.needed</name><value>true</value><source>mapred-default.xml</source></property>
<property><name>dfs.datanode.cache.revocation.polling.ms</name><value>500</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.job.end-notification.retry.attempts</name><value>0</value><source>mapred-default.xml</source></property>
<property><name>yarn.resourcemanager.state-store.max-completed-applications</name><value>${yarn.resourcemanager.max-completed-applications}</value><source>yarn-default.xml</source></property>
<property><name>replication.source.nb.capacity</name><value>1000</value><source>hbase-site.xml</source></property>
<property><name>hbase.snapshot.format.version</name><value>2</value><source>hbase-default.xml</source></property>
<property><name>mapreduce.map.output.compress</name><value>false</value><source>mapred-default.xml</source></property>
<property><name>hbase.client.localityCheck.threadPoolSize</name><value>2</value><source>hbase-default.xml</source></property>
<property><name>yarn.timeline-service.generic-application-history.store-class</name><value>org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.jobhistory.cleaner.enable</name><value>true</value><source>mapred-default.xml</source></property>
<property><name>io.seqfile.local.dir</name><value>${hadoop.tmp.dir}/io/local</value><source>core-default.xml</source></property>
<property><name>dfs.blockreport.split.threshold</name><value>1000000</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.reduce.shuffle.read.timeout</name><value>180000</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.job.queuename</name><value>default</value><source>mapred-default.xml</source></property>
<property><name>ipc.client.connect.max.retries</name><value>10</value><source>core-default.xml</source></property>
<property><name>io.seqfile.lazydecompress</name><value>true</value><source>core-default.xml</source></property>
<property><name>yarn.app.mapreduce.am.staging-dir</name><value>/tmp/hadoop-yarn/staging</value><source>mapred-default.xml</source></property>
<property><name>yarn.nodemanager.linux-container-executor.resources-handler.class</name><value>org.apache.hadoop.yarn.server.nodemanager.util.DefaultLCEResourcesHandler</value><source>yarn-default.xml</source></property>
<property><name>yarn.timeline-service.leveldb-timeline-store.read-cache-size</name><value>104857600</value><source>yarn-default.xml</source></property>
<property><name>io.file.buffer.size</name><value>4096</value><source>core-default.xml</source></property>
<property><name>ha.zookeeper.parent-znode</name><value>/hadoop-ha</value><source>core-default.xml</source></property>
<property><name>mapreduce.tasktracker.indexcache.mb</name><value>10</value><source>mapred-default.xml</source></property>
<property><name>tfile.io.chunk.size</name><value>1048576</value><source>core-default.xml</source></property>
<property><name>yarn.resourcemanager.work-preserving-recovery.scheduling-wait-ms</name><value>10000</value><source>yarn-default.xml</source></property>
<property><name>yarn.timeline-service.keytab</name><value>/etc/krb5.keytab</value><source>yarn-default.xml</source></property>
<property><name>yarn.acl.enable</name><value>false</value><source>yarn-default.xml</source></property>
<property><name>rpc.engine.org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB</name><value>org.apache.hadoop.ipc.ProtobufRpcEngine</value><source>programatically</source></property>
<property><name>hbase.regionserver.regionSplitLimit</name><value>2147483647</value><source>hbase-default.xml</source></property>
<property><name>hadoop.security.group.mapping.ldap.directory.search.timeout</name><value>10000</value><source>core-default.xml</source></property>
<property><name>yarn.nodemanager.resourcemanager.connect.wait.secs</name><value>900</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.job.token.tracking.ids.enabled</name><value>false</value><source>mapred-default.xml</source></property>
<property><name>hbase.thrift.maxQueuedRequests</name><value>1000</value><source>hbase-default.xml</source></property>
<property><name>mapreduce.map.output.compress.codec</name><value>org.apache.hadoop.io.compress.DefaultCodec</value><source>mapred-default.xml</source></property>
<property><name>s3.replication</name><value>3</value><source>core-default.xml</source></property>
<property><name>tfile.fs.input.buffer.size</name><value>262144</value><source>core-default.xml</source></property>
<property><name>ha.failover-controller.graceful-fence.connection.retries</name><value>1</value><source>core-default.xml</source></property>
<property><name>net.topology.script.number.args</name><value>100</value><source>core-default.xml</source></property>
<property><name>fs.s3n.multipart.uploads.block.size</name><value>67108864</value><source>core-default.xml</source></property>
<property><name>hfile.block.index.cacheonwrite</name><value>false</value><source>hbase-default.xml</source></property>
<property><name>yarn.nodemanager.recovery.dir</name><value>${hadoop.tmp.dir}/yarn-nm-recovery</value><source>yarn-default.xml</source></property>
<property><name>hadoop.ssl.enabled</name><value>false</value><source>core-site.xml</source></property>
<property><name>yarn.timeline-service.handler-thread-count</name><value>10</value><source>yarn-default.xml</source></property>
<property><name>hbase.config.read.zookeeper.config</name><value>false</value><source>hbase-default.xml</source></property>
<property><name>hbase.column.max.version</name><value>1</value><source>hbase-default.xml</source></property>
<property><name>dfs.namenode.reject-unresolved-dn-topology-mapping</name><value>false</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.jobhistory.recovery.store.class</name><value>org.apache.hadoop.mapreduce.v2.hs.HistoryServerFileSystemStateStoreService</value><source>mapred-default.xml</source></property>
<property><name>yarn.nodemanager.log.retain-seconds</name><value>10800</value><source>yarn-default.xml</source></property>
<property><name>yarn.resourcemanager.admin.address</name><value>${yarn.resourcemanager.hostname}:8033</value><source>yarn-default.xml</source></property>
<property><name>yarn.resourcemanager.recovery.enabled</name><value>false</value><source>yarn-default.xml</source></property>
<property><name>dfs.client.slow.io.warning.threshold.ms</name><value>30000</value><source>hdfs-default.xml</source></property>
<property><name>yarn.resourcemanager.ha.automatic-failover.zk-base-path</name><value>/yarn-leader-election</value><source>yarn-default.xml</source></property>
<property><name>fs.AbstractFileSystem.viewfs.impl</name><value>org.apache.hadoop.fs.viewfs.ViewFs</value><source>core-default.xml</source></property>
<property><name>mapreduce.tasktracker.dns.interface</name><value>default</value><source>mapred-default.xml</source></property>
<property><name>hbase.offheapcache.percentage</name><value>0</value><source>hbase-default.xml</source></property>
<property><name>mapreduce.jobtracker.handler.count</name><value>10</value><source>mapred-default.xml</source></property>
<property><name>dfs.blockreport.initialDelay</name><value>0</value><source>hdfs-default.xml</source></property>
<property><name>hbase.mob.sweep.tool.compaction.memstore.flush.size</name><value>134217728</value><source>hbase-default.xml</source></property>
<property><name>fs.AbstractFileSystem.hdfs.impl</name><value>org.apache.hadoop.fs.Hdfs</value><source>core-default.xml</source></property>
<property><name>dfs.namenode.retrycache.expirytime.millis</name><value>600000</value><source>hdfs-default.xml</source></property>
<property><name>dfs.client.failover.sleep.max.millis</name><value>15000</value><source>hdfs-default.xml</source></property>
<property><name>mapred.task.id</name><value>hb_m_hdpnode5,60000,1430432112654</value></property>
<property><name>dfs.namenode.blocks.per.postponedblocks.rescan</name><value>10000</value><source>hdfs-default.xml</source></property>
<property><name>hbase.zookeeper.property.clientPort</name><value>2181</value><source>hbase-site.xml</source></property>
<property><name>yarn.resourcemanager.max-completed-applications</name><value>10000</value><source>yarn-default.xml</source></property>
<property><name>hadoop.proxyuser.oozie.groups</name><value>*</value><source>core-site.xml</source></property>
<property><name>yarn.nodemanager.log-dirs</name><value>${yarn.log.dir}/userlogs</value><source>yarn-default.xml</source></property>
<property><name>dfs.client.failover.sleep.base.millis</name><value>500</value><source>hdfs-default.xml</source></property>
<property><name>yarn.nodemanager.linux-container-executor.nonsecure-mode.user-pattern</name><value>^[_.A-Za-z0-9][-@_.A-Za-z0-9]{0,255}?[$]?$</value><source>yarn-default.xml</source></property>
<property><name>hbase.rest.readonly</name><value>false</value><source>hbase-default.xml</source></property>
<property><name>dfs.default.chunk.view.size</name><value>32768</value><source>hdfs-default.xml</source></property>
<property><name>hbase.rpc.server.engine</name><value>org.apache.hadoop.hbase.ipc.ProtobufRpcServerEngine</value><source>hbase-default.xml</source></property>
<property><name>dfs.client.read.shortcircuit</name><value>false</value><source>hdfs-site.xml</source></property>
<property><name>ftp.blocksize</name><value>67108864</value><source>core-default.xml</source></property>
<property><name>mapreduce.job.acl-modify-job</name><value> </value><source>mapred-default.xml</source></property>
<property><name>zookeeper.znode.parent</name><value>/hbase</value><source>hbase-site.xml</source></property>
<property><name>fs.defaultFS</name><value>hdfs://hdpnode2:8020</value><source>because fs.default.name is deprecated</source></property>
<property><name>hbase.rpc.shortoperation.timeout</name><value>10000</value><source>hbase-default.xml</source></property>
<property><name>hadoop.http.filter.initializers</name><value>org.apache.hadoop.http.lib.StaticUserWebFilter</value><source>core-default.xml</source></property>
<property><name>fs.s3n.multipart.copy.block.size</name><value>5368709120</value><source>core-default.xml</source></property>
<property><name>yarn.resourcemanager.connect.max-wait.ms</name><value>900000</value><source>yarn-default.xml</source></property>
<property><name>hadoop.security.group.mapping.ldap.ssl</name><value>false</value><source>core-default.xml</source></property>
<property><name>dfs.namenode.max.extra.edits.segments.retained</name><value>10000</value><source>hdfs-default.xml</source></property>
<property><name>dfs.namenode.https-address</name><value>hdpnode2:50470</value><source>hdfs-site.xml</source></property>
<property><name>yarn.resourcemanager.admin.client.thread-count</name><value>1</value><source>yarn-default.xml</source></property>
<property><name>hadoop.security.kms.client.encrypted.key.cache.size</name><value>500</value><source>core-default.xml</source></property>
<property><name>ipc.client.kill.max</name><value>10</value><source>core-default.xml</source></property>
<property><name>hadoop.security.group.mapping.ldap.search.filter.group</name><value>(objectClass=group)</value><source>core-default.xml</source></property>
<property><name>fs.AbstractFileSystem.file.impl</name><value>org.apache.hadoop.fs.local.LocalFs</value><source>core-default.xml</source></property>
<property><name>hadoop.http.authentication.kerberos.keytab</name><value>${user.home}/hadoop.keytab</value><source>core-default.xml</source></property>
<property><name>yarn.client.nodemanager-connect.max-wait-ms</name><value>900000</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.job.map.output.collector.class</name><value>org.apache.hadoop.mapred.MapTask$MapOutputBuffer</value><source>mapred-default.xml</source></property>
<property><name>dfs.namenode.path.based.cache.retry.interval.ms</name><value>30000</value><source>hdfs-default.xml</source></property>
<property><name>hadoop.security.uid.cache.secs</name><value>14400</value><source>core-default.xml</source></property>
<property><name>mapreduce.map.cpu.vcores</name><value>1</value><source>mapred-default.xml</source></property>
<property><name>yarn.log-aggregation.retain-check-interval-seconds</name><value>-1</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.map.log.level</name><value>INFO</value><source>mapred-default.xml</source></property>
<property><name>mapred.child.java.opts</name><value>-Xmx200m</value><source>mapred-default.xml</source></property>
<property><name>hbase.mob.sweep.tool.compaction.mergeable.size</name><value>134217728</value><source>hbase-default.xml</source></property>
<property><name>hfile.index.block.max.size</name><value>131072</value><source>hbase-default.xml</source></property>
<property><name>hbase.client.scanner.timeout.period</name><value>60000</value><source>hbase-default.xml</source></property>
<property><name>yarn.nodemanager.local-cache.max-files-per-directory</name><value>8192</value><source>yarn-default.xml</source></property>
<property><name>dfs.https.server.keystore.resource</name><value>ssl-server.xml</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.jobtracker.taskcache.levels</name><value>2</value><source>mapred-default.xml</source></property>
<property><name>dfs.datanode.handler.count</name><value>10</value><source>hdfs-default.xml</source></property>
<property><name>s3native.blocksize</name><value>67108864</value><source>core-default.xml</source></property>
<property><name>yarn.resourcemanager.nm.liveness-monitor.interval-ms</name><value>1000</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.client.completion.pollinterval</name><value>5000</value><source>mapred-default.xml</source></property>
<property><name>hbase.hstore.compactionThreshold</name><value>3</value><source>hbase-default.xml</source></property>
<property><name>dfs.stream-buffer-size</name><value>4096</value><source>hdfs-default.xml</source></property>
<property><name>dfs.namenode.delegation.key.update-interval</name><value>86400000</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.job.maps</name><value>2</value><source>mapred-default.xml</source></property>
<property><name>hbase.master.logcleaner.ttl</name><value>60000</value><source>hbase-site.xml</source></property>
<property><name>mapreduce.job.acl-view-job</name><value> </value><source>mapred-default.xml</source></property>
<property><name>dfs.namenode.enable.retrycache</name><value>true</value><source>hdfs-default.xml</source></property>
<property><name>yarn.resourcemanager.connect.retry-interval.ms</name><value>30000</value><source>yarn-default.xml</source></property>
<property><name>yarn.timeline-service.leveldb-timeline-store.ttl-interval-ms</name><value>300000</value><source>yarn-default.xml</source></property>
<property><name>fs.s3a.multipart.threshold</name><value>2147483647</value><source>core-default.xml</source></property>
<property><name>dfs.namenode.decommission.interval</name><value>30</value><source>hdfs-default.xml</source></property>
<property><name>hbase.hregion.majorcompaction</name><value>604800000</value><source>hbase-default.xml</source></property>
<property><name>mapreduce.shuffle.max.connections</name><value>0</value><source>mapred-default.xml</source></property>
<property><name>hadoop.proxyuser.hdfs.hosts</name><value>*</value><source>core-site.xml</source></property>
<property><name>yarn.log-aggregation-enable</name><value>false</value><source>yarn-default.xml</source></property>
<property><name>dfs.client-write-packet-size</name><value>65536</value><source>hdfs-default.xml</source></property>
<property><name>dfs.client.file-block-storage-locations.timeout.millis</name><value>1000</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.jobtracker.expire.trackers.interval</name><value>600000</value><source>mapred-default.xml</source></property>
<property><name>dfs.client.block.write.retries</name><value>3</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.task.io.sort.factor</name><value>10</value><source>mapred-default.xml</source></property>
<property><name>hbase.hregion.memstore.flush.size</name><value>134217728</value><source>hbase-default.xml</source></property>
<property><name>ha.health-monitor.sleep-after-disconnect.ms</name><value>1000</value><source>core-default.xml</source></property>
<property><name>ha.zookeeper.session-timeout.ms</name><value>5000</value><source>core-default.xml</source></property>
<property><name>hbase.client.prefetch</name><value>true</value><source>hbase-default.xml</source></property>
<property><name>yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users</name><value>true</value><source>yarn-default.xml</source></property>
<property><name>dfs.support.append</name><value>true</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.input.fileinputformat.list-status.num-threads</name><value>1</value><source>mapred-default.xml</source></property>
<property><name>io.skip.checksum.errors</name><value>false</value><source>core-default.xml</source></property>
<property><name>hbase.ipc.client.tcpnodelay</name><value>true</value><source>hbase-default.xml</source></property>
<property><name>hbase.regionserver.optionalcacheflushinterval</name><value>3600000</value><source>hbase-default.xml</source></property>
<property><name>yarn.resourcemanager.scheduler.client.thread-count</name><value>50</value><source>yarn-default.xml</source></property>
<property><name>dfs.namenode.safemode.extension</name><value>30000</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.jobhistory.move.thread-count</name><value>3</value><source>mapred-default.xml</source></property>
<property><name>yarn.resourcemanager.zk-state-store.parent-path</name><value>/rmstore</value><source>yarn-default.xml</source></property>
<property><name>hadoop.proxyuser.hdfs.groups</name><value>*</value><source>core-site.xml</source></property>
<property><name>ipc.client.idlethreshold</name><value>4000</value><source>core-default.xml</source></property>
<property><name>hbase.regionserver.port</name><value>60020</value><source>hbase-default.xml</source></property>
<property><name>dfs.namenode.accesstime.precision</name><value>3600000</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.task.profile.params</name><value>-agentlib:hprof=cpu=samples,heap=sites,force=n,thread=y,verbose=n,file=%s</value><source>mapred-default.xml</source></property>
<property><name>hbase.regionserver.logroll.errors.tolerated</name><value>2</value><source>hbase-default.xml</source></property>
<property><name>mapreduce.jobhistory.keytab</name><value>/etc/security/keytab/jhs.service.keytab</value><source>mapred-default.xml</source></property>
<property><name>hbase.hstore.compaction.max</name><value>10</value><source>hbase-default.xml</source></property>
<property><name>yarn.resourcemanager.amliveliness-monitor.interval-ms</name><value>1000</value><source>yarn-default.xml</source></property>
<property><name>dfs.datanode.hdfs-blocks-metadata.enabled</name><value>true</value><source>hdfs-site.xml</source></property>
<property><name>yarn.scheduler.minimum-allocation-mb</name><value>1024</value><source>yarn-default.xml</source></property>
<property><name>yarn.resourcemanager.container-tokens.master-key-rolling-interval-secs</name><value>86400</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.reduce.shuffle.fetch.retry.interval-ms</name><value>1000</value><source>mapred-default.xml</source></property>
<property><name>hadoop.user.group.static.mapping.overrides</name><value>dr.who=;</value><source>core-default.xml</source></property>
<property><name>hadoop.security.kms.client.encrypted.key.cache.low-watermark</name><value>0.3f</value><source>core-default.xml</source></property>
<property><name>fs.s3a.connection.ssl.enabled</name><value>true</value><source>core-default.xml</source></property>
<property><name>dfs.datanode.directoryscan.interval</name><value>21600</value><source>hdfs-default.xml</source></property>
<property><name>hbase.zookeeper.property.initLimit</name><value>10</value><source>hbase-default.xml</source></property>
<property><name>yarn.resourcemanager.scheduler.monitor.policies</name><value>org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy</value><source>yarn-default.xml</source></property>
<property><name>ipc.server.listen.queue.size</name><value>128</value><source>core-default.xml</source></property>
<property><name>rpc.metrics.quantile.enable</name><value>false</value><source>core-default.xml</source></property>
<property><name>mapreduce.jobtracker.persist.jobstatus.dir</name><value>/jobtracker/jobsInfo</value><source>mapred-default.xml</source></property>
<property><name>dfs.domain.socket.path</name><value>/var/run/hdfs-sockets/dn</value><source>hdfs-site.xml</source></property>
<property><name>yarn.client.nodemanager-client-async.thread-pool-max-size</name><value>500</value><source>yarn-default.xml</source></property>
<property><name>hadoop.security.group.mapping</name><value>org.apache.hadoop.security.ShellBasedUnixGroupsMapping</value><source>core-site.xml</source></property>
<property><name>dfs.namenode.name.dir</name><value>file:///dfs/nn,file:///data2/dfs/nn</value><source>hdfs-site.xml</source></property>
<property><name>hbase.coprocessor.abortonerror</name><value>false</value><source>hbase-site.xml</source></property>
<property><name>yarn.am.liveness-monitor.expiry-interval-ms</name><value>600000</value><source>yarn-default.xml</source></property>
<property><name>yarn.nm.liveness-monitor.expiry-interval-ms</name><value>600000</value><source>yarn-default.xml</source></property>
<property><name>hbase.hstore.compaction.kv.max</name><value>10</value><source>hbase-default.xml</source></property>
<property><name>hbase.hregion.preclose.flush.size</name><value>5242880</value><source>hbase-default.xml</source></property>
<property><name>ftp.bytes-per-checksum</name><value>512</value><source>core-default.xml</source></property>
<property><name>dfs.namenode.max.objects</name><value>0</value><source>hdfs-default.xml</source></property>
<property><name>hbase.master.hfilecleaner.plugins</name><value>org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner,org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner,org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner</value><source>programatically</source></property>
<property><name>io.compression.codecs</name><value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.DeflateCodec,org.apache.hadoop.io.compress.SnappyCodec,org.apache.hadoop.io.compress.Lz4Codec</value><source>core-site.xml</source></property>
<property><name>hbase.metrics.showTableName</name><value>true</value><source>hbase-default.xml</source></property>
<property><name>mapreduce.map.memory.mb</name><value>1024</value><source>mapred-default.xml</source></property>
<property><name>yarn.client.nodemanager-connect.retry-interval-ms</name><value>10000</value><source>yarn-default.xml</source></property>
<property><name>dfs.namenode.edits.journal-plugin.qjournal</name><value>org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.tasktracker.healthchecker.interval</name><value>60000</value><source>mapred-default.xml</source></property>
<property><name>nfs.wtmax</name><value>1048576</value><source>hdfs-default.xml</source></property>
<property><name>yarn.timeline-service.leveldb-timeline-store.start-time-read-cache-size</name><value>10000</value><source>yarn-default.xml</source></property>
<property><name>hbase.lease.recovery.dfs.timeout</name><value>64000</value><source>hbase-default.xml</source></property>
<property><name>yarn.resourcemanager.address</name><value>${yarn.resourcemanager.hostname}:8032</value><source>yarn-default.xml</source></property>
<property><name>dfs.cachereport.intervalMsec</name><value>10000</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.task.skip.start.attempts</name><value>2</value><source>mapred-default.xml</source></property>
<property><name>fail.fast.expired.active.master</name><value>false</value><source>hbase-default.xml</source></property>
<property><name>yarn.resourcemanager.zk-timeout-ms</name><value>10000</value><source>yarn-default.xml</source></property>
<property><name>dfs.namenode.checkpoint.edits.dir</name><value>${dfs.namenode.checkpoint.dir}</value><source>hdfs-default.xml</source></property>
<property><name>hadoop.hdfs.configuration.version</name><value>1</value><source>hdfs-default.xml</source></property>
<property><name>hadoop.proxyuser.hive.groups</name><value>*</value><source>core-site.xml</source></property>
<property><name>mapreduce.map.skip.maxrecords</name><value>0</value><source>mapred-default.xml</source></property>
<property><name>dfs.datanode.available-space-volume-choosing-policy.balanced-space-threshold</name><value>10737418240</value><source>hdfs-default.xml</source></property>
<property><name>nfs.allow.insecure.ports</name><value>true</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.jobtracker.system.dir</name><value>${hadoop.tmp.dir}/mapred/system</value><source>mapred-default.xml</source></property>
<property><name>yarn.timeline-service.hostname</name><value>0.0.0.0</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.job.reducer.preempt.delay.sec</name><value>0</value><source>mapred-default.xml</source></property>
<property><name>hbase.zookeeper.dns.nameserver</name><value>default</value><source>hbase-default.xml</source></property>
<property><name>hbase.ipc.server.callqueue.handler.factor</name><value>0.1</value><source>hbase-default.xml</source></property>
<property><name>hadoop.proxyuser.oozie.hosts</name><value>*</value><source>core-site.xml</source></property>
<property><name>hbase.master.dns.nameserver</name><value>default</value><source>hbase-default.xml</source></property>
<property><name>mapreduce.shuffle.ssl.enabled</name><value>false</value><source>mapred-default.xml</source></property>
<property><name>yarn.nodemanager.vmem-pmem-ratio</name><value>2.1</value><source>yarn-default.xml</source></property>
<property><name>yarn.nodemanager.container-manager.thread-count</name><value>20</value><source>yarn-default.xml</source></property>
<property><name>dfs.encrypt.data.transfer</name><value>false</value><source>hdfs-default.xml</source></property>
<property><name>dfs.block.access.key.update.interval</name><value>600</value><source>hdfs-default.xml</source></property>
<property><name>hadoop.tmp.dir</name><value>/tmp/hadoop-${user.name}</value><source>core-default.xml</source></property>
<property><name>dfs.namenode.audit.loggers</name><value>default</value><source>hdfs-default.xml</source></property>
<property><name>yarn.timeline-service.generic-application-history.fs-history-store.compression-type</name><value>none</value><source>yarn-default.xml</source></property>
<property><name>fs.AbstractFileSystem.har.impl</name><value>org.apache.hadoop.fs.HarFs</value><source>core-default.xml</source></property>
<property><name>yarn.nodemanager.localizer.cache.target-size-mb</name><value>10240</value><source>yarn-default.xml</source></property>
<property><name>yarn.http.policy</name><value>HTTP_ONLY</value><source>yarn-default.xml</source></property>
<property><name>hbase.regionserver.logroll.period</name><value>3600000</value><source>hbase-default.xml</source></property>
<property><name>dfs.client.short.circuit.replica.stale.threshold.ms</name><value>1800000</value><source>hdfs-default.xml</source></property>
<property><name>yarn.timeline-service.webapp.https.address</name><value>${yarn.timeline-service.hostname}:8190</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.jobtracker.persist.jobstatus.hours</name><value>1</value><source>mapred-default.xml</source></property>
<property><name>tfile.fs.output.buffer.size</name><value>262144</value><source>core-default.xml</source></property>
<property><name>hbase.hregion.memstore.block.multiplier</name><value>4</value><source>hbase-default.xml</source></property>
<property><name>dfs.namenode.checkpoint.check.period</name><value>60</value><source>hdfs-default.xml</source></property>
<property><name>dfs.datanode.dns.interface</name><value>default</value><source>hdfs-default.xml</source></property>
<property><name>fs.ftp.host.port</name><value>21</value><source>core-default.xml</source></property>
<property><name>mapreduce.task.io.sort.mb</name><value>100</value><source>mapred-default.xml</source></property>
<property><name>dfs.namenode.inotify.max.events.per.rpc</name><value>1000</value><source>hdfs-default.xml</source></property>
<property><name>hadoop.security.group.mapping.ldap.search.attr.group.name</name><value>cn</value><source>core-default.xml</source></property>
<property><name>dfs.namenode.avoid.read.stale.datanode</name><value>false</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.output.fileoutputformat.compress.type</name><value>RECORD</value><source>mapred-default.xml</source></property>
<property><name>hbase.storescanner.parallel.seek.enable</name><value>false</value><source>hbase-default.xml</source></property>
<property><name>mapreduce.reduce.skip.proc.count.autoincr</name><value>true</value><source>mapred-default.xml</source></property>
<property><name>hbase.dfs.client.read.shortcircuit.buffer.size</name><value>131072</value><source>hbase-default.xml</source></property>
<property><name>file.bytes-per-checksum</name><value>512</value><source>core-default.xml</source></property>
<property><name>mapreduce.job.userlog.retain.hours</name><value>24</value><source>mapred-default.xml</source></property>
<property><name>dfs.datanode.http.address</name><value>0.0.0.0:50075</value><source>hdfs-default.xml</source></property>
<property><name>dfs.image.compress</name><value>false</value><source>hdfs-default.xml</source></property>
<property><name>ha.health-monitor.check-interval.ms</name><value>1000</value><source>core-default.xml</source></property>
<property><name>dfs.permissions.enabled</name><value>true</value><source>hdfs-default.xml</source></property>
<property><name>hbase.thrift.htablepool.size.max</name><value>1000</value><source>hbase-default.xml</source></property>
<property><name>yarn.resourcemanager.resource-tracker.client.thread-count</name><value>50</value><source>yarn-default.xml</source></property>
<property><name>dfs.client.domain.socket.data.traffic</name><value>false</value><source>hdfs-site.xml</source></property>
<property><name>dfs.image.compression.codec</name><value>org.apache.hadoop.io.compress.DefaultCodec</value><source>hdfs-default.xml</source></property>
<property><name>dfs.datanode.address</name><value>0.0.0.0:50010</value><source>hdfs-default.xml</source></property>
<property><name>dfs.block.access.token.enable</name><value>false</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.reduce.input.buffer.percent</name><value>0.0</value><source>mapred-default.xml</source></property>
<property><name>hbase.client.scanner.caching</name><value>100</value><source>hbase-site.xml</source></property>
<property><name>mapreduce.tasktracker.local.dir.minspacestart</name><value>0</value><source>mapred-default.xml</source></property>
<property><name>dfs.blockreport.intervalMsec</name><value>21600000</value><source>hdfs-default.xml</source></property>
<property><name>hbase.snapshot.restore.take.failsafe.snapshot</name><value>true</value><source>hbase-default.xml</source></property>
<property><name>ha.health-monitor.rpc-timeout.ms</name><value>45000</value><source>core-default.xml</source></property>
<property><name>dfs.client.failover.connection.retries</name><value>0</value><source>hdfs-default.xml</source></property>
<property><name>dfs.namenode.kerberos.internal.spnego.principal</name><value>${dfs.web.authentication.kerberos.principal}</value><source>hdfs-default.xml</source></property>
<property><name>hadoop.policy.file</name><value>hbase-policy.xml</value><source>hbase-default.xml</source></property>
<property><name>yarn.scheduler.maximum-allocation-mb</name><value>8192</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.task.files.preserve.failedtasks</name><value>false</value><source>mapred-default.xml</source></property>
<property><name>hbase.status.published</name><value>false</value><source>hbase-default.xml</source></property>
<property><name>yarn.nodemanager.delete.thread-count</name><value>4</value><source>yarn-default.xml</source></property>
<property><name>dfs.https.enable</name><value>false</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.output.fileoutputformat.compress.codec</name><value>org.apache.hadoop.io.compress.DefaultCodec</value><source>mapred-default.xml</source></property>
<property><name>map.sort.class</name><value>org.apache.hadoop.util.QuickSort</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.job.classloader.system.classes</name><value>java.,javax.,org.apache.commons.logging.,org.apache.log4j.,
          org.apache.hadoop.,core-default.xml,hdfs-default.xml,
          mapred-default.xml,yarn-default.xml</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.job.classloader</name><value>false</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.jobtracker.tasktracker.maxblacklists</name><value>4</value><source>mapred-default.xml</source></property>
<property><name>io.seqfile.compress.blocksize</name><value>1000000</value><source>core-default.xml</source></property>
<property><name>dfs.blocksize</name><value>134217728</value><source>hdfs-site.xml</source></property>
<property><name>mapreduce.task.profile.maps</name><value>0-2</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.jobtracker.staging.root.dir</name><value>${hadoop.tmp.dir}/mapred/staging</value><source>mapred-default.xml</source></property>
<property><name>yarn.nodemanager.localizer.cache.cleanup.interval-ms</name><value>600000</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.jobtracker.http.address</name><value>0.0.0.0:50030</value><source>mapred-default.xml</source></property>
<property><name>hbase.regionserver.info.bindAddress</name><value>0.0.0.0</value><source>hbase-default.xml</source></property>
<property><name>hadoop.proxyuser.hive.hosts</name><value>*</value><source>core-site.xml</source></property>
<property><name>dfs.client.mmap.cache.timeout.ms</name><value>3600000</value><source>hdfs-default.xml</source></property>
<property><name>yarn.timeline-service.generic-application-history.fs-history-store.uri</name><value>${hadoop.tmp.dir}/yarn/timeline/generic-history</value><source>yarn-default.xml</source></property>
<property><name>hadoop.security.java.secure.random.algorithm</name><value>SHA1PRNG</value><source>core-default.xml</source></property>
<property><name>fs.client.resolve.remote.symlinks</name><value>true</value><source>core-default.xml</source></property>
<property><name>hbase.master.logcleaner.plugins</name><value>org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner,org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner,org.apache.hadoop.hbase.master.snapshot.SnapshotLogCleaner</value><source>programatically</source></property>
<property><name>hbase.data.umask.enable</name><value>false</value><source>hbase-default.xml</source></property>
<property><name>hbase.master.dns.interface</name><value>default</value><source>hbase-default.xml</source></property>
<property><name>mapreduce.tasktracker.local.dir.minspacekill</name><value>0</value><source>mapred-default.xml</source></property>
<property><name>nfs.mountd.port</name><value>4242</value><source>hdfs-default.xml</source></property>
<property><name>yarn.nodemanager.disk-health-checker.min-healthy-disks</name><value>0.25</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.tasktracker.taskmemorymanager.monitoringinterval</name><value>5000</value><source>mapred-default.xml</source></property>
<property><name>hbase.local.dir</name><value>${hbase.tmp.dir}/local/</value><source>hbase-default.xml</source></property>
<property><name>hbase.master.executor.closeregion.threads</name><value>5</value><source>hbase-site.xml</source></property>
<property><name>hadoop.proxyuser.mapred.hosts</name><value>*</value><source>core-site.xml</source></property>
<property><name>hbase.master.handler.count</name><value>25</value><source>hbase-site.xml</source></property>
<property><name>mapreduce.job.end-notification.retry.interval</name><value>1000</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.jobhistory.loadedjobs.cache.size</name><value>5</value><source>mapred-default.xml</source></property>
<property><name>dfs.client.datanode-restart.timeout</name><value>30</value><source>hdfs-default.xml</source></property>
<property><name>hadoop.proxyuser.mapred.groups</name><value>*</value><source>core-site.xml</source></property>
<property><name>yarn.nodemanager.local-dirs</name><value>${hadoop.tmp.dir}/nm-local-dir</value><source>yarn-default.xml</source></property>
<property><name>hbase.table.lock.enable</name><value>true</value><source>hbase-default.xml</source></property>
<property><name>dfs.datanode.block.id.layout.upgrade.threads</name><value>12</value><source>hdfs-default.xml</source></property>
<property><name>hbase.storescanner.parallel.seek.threads</name><value>10</value><source>hbase-default.xml</source></property>
<property><name>hbase.client.prefetch.limit</name><value>10</value><source>hbase-default.xml</source></property>
<property><name>yarn.timeline-service.webapp.address</name><value>${yarn.timeline-service.hostname}:8188</value><source>yarn-default.xml</source></property>
<property><name>hbase.online.schema.update.enable</name><value>true</value><source>hbase-default.xml</source></property>
<property><name>mapreduce.jobhistory.address</name><value>0.0.0.0:10020</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.jobtracker.persist.jobstatus.active</name><value>true</value><source>mapred-default.xml</source></property>
<property><name>hbase.mob.file.cache.size</name><value>1000</value><source>hbase-default.xml</source></property>
<property><name>file.blocksize</name><value>67108864</value><source>core-default.xml</source></property>
<property><name>dfs.datanode.readahead.bytes</name><value>4193404</value><source>hdfs-default.xml</source></property>
<property><name>hbase.zookeeper.property.dataDir</name><value>${hbase.tmp.dir}/zookeeper</value><source>hbase-default.xml</source></property>
<property><name>dfs.namenode.http-address</name><value>hdpnode2:50070</value><source>hdfs-site.xml</source></property>
<property><name>hadoop.security.kms.client.encrypted.key.cache.expiry</name><value>43200000</value><source>core-default.xml</source></property>
<property><name>dfs.client.hedged.read.threadpool.size</name><value>0</value><source>hdfs-site.xml</source></property>
<property><name>hadoop.work.around.non.threadsafe.getpwuid</name><value>false</value><source>core-default.xml</source></property>
<property><name>yarn.resourcemanager.configuration.provider-class</name><value>org.apache.hadoop.yarn.LocalConfigurationProvider</value><source>yarn-default.xml</source></property>
<property><name>yarn.nodemanager.recovery.enabled</name><value>false</value><source>yarn-default.xml</source></property>
<property><name>yarn.resourcemanager.hostname</name><value>0.0.0.0</value><source>yarn-default.xml</source></property>
<property><name>fs.s3n.multipart.uploads.enabled</name><value>false</value><source>core-default.xml</source></property>
<property><name>hbase.security.exec.permission.checks</name><value>false</value><source>hbase-default.xml</source></property>
<property><name>dfs.namenode.fs-limits.max-component-length</name><value>255</value><source>hdfs-default.xml</source></property>
<property><name>hbase.regionserver.info.port.auto</name><value>false</value><source>hbase-default.xml</source></property>
<property><name>ha.failover-controller.cli-check.rpc-timeout.ms</name><value>20000</value><source>core-default.xml</source></property>
<property><name>hbase.auth.key.update.interval</name><value>86400000</value><source>hbase-default.xml</source></property>
<property><name>ftp.client-write-packet-size</name><value>65536</value><source>core-default.xml</source></property>
<property><name>mapreduce.reduce.shuffle.parallelcopies</name><value>5</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.jobhistory.principal</name><value>jhs/_HOST@REALM.TLD</value><source>mapred-default.xml</source></property>
<property><name>hadoop.http.authentication.simple.anonymous.allowed</name><value>true</value><source>core-default.xml</source></property>
<property><name>yarn.log-aggregation.retain-seconds</name><value>-1</value><source>yarn-default.xml</source></property>
<property><name>hbase.regionserver.thrift.framed</name><value>false</value><source>hbase-default.xml</source></property>
<property><name>hbase.zookeeper.property.maxClientCnxns</name><value>300</value><source>hbase-default.xml</source></property>
<property><name>hbase.splitlog.manager.timeout</name><value>120000</value><source>hbase-site.xml</source></property>
<property><name>mapreduce.client.genericoptionsparser.used</name><value>true</value><source>programatically</source></property>
<property><name>dfs.namenode.secondary.https-address</name><value>0.0.0.0:50091</value><source>hdfs-default.xml</source></property>
<property><name>hbase.mob.cache.evict.remain.ratio</name><value>0.5f</value><source>hbase-default.xml</source></property>
<property><name>mapreduce.job.ubertask.maxreduces</name><value>1</value><source>mapred-default.xml</source></property>
<property><name>yarn.nodemanager.health-checker.interval-ms</name><value>600000</value><source>yarn-default.xml</source></property>
<property><name>dfs.namenode.fs-limits.max-xattr-size</name><value>16384</value><source>hdfs-default.xml</source></property>
<property><name>fs.s3a.multipart.purge</name><value>false</value><source>core-default.xml</source></property>
<property><name>hadoop.security.kms.client.encrypted.key.cache.num.refill.threads</name><value>2</value><source>core-default.xml</source></property>
<property><name>hbase.server.compactchecker.interval.multiplier</name><value>1000</value><source>hbase-default.xml</source></property>
<property><name>yarn.timeline-service.store-class</name><value>org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.shuffle.transfer.buffer.size</name><value>131072</value><source>mapred-default.xml</source></property>
<property><name>yarn.resourcemanager.zk-num-retries</name><value>1000</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.jobtracker.jobhistory.task.numberprogresssplits</name><value>12</value><source>mapred-default.xml</source></property>
<property><name>yarn.nodemanager.webapp.address</name><value>${yarn.nodemanager.hostname}:8042</value><source>yarn-default.xml</source></property>
<property><name>yarn.app.mapreduce.client-am.ipc.max-retries</name><value>3</value><source>mapred-default.xml</source></property>
<property><name>ha.failover-controller.new-active.rpc-timeout.ms</name><value>60000</value><source>core-default.xml</source></property>
<property><name>mapreduce.jobhistory.client.thread-count</name><value>10</value><source>mapred-default.xml</source></property>
<property><name>fs.trash.interval</name><value>1</value><source>core-site.xml</source></property>
<property><name>hbase.client.max.perregion.tasks</name><value>1</value><source>hbase-default.xml</source></property>
<property><name>mapreduce.fileoutputcommitter.algorithm.version</name><value>1</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.reduce.skip.maxgroups</name><value>0</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.reduce.memory.mb</name><value>1024</value><source>mapred-default.xml</source></property>
<property><name>yarn.nodemanager.health-checker.script.timeout-ms</name><value>1200000</value><source>yarn-default.xml</source></property>
<property><name>dfs.datanode.du.reserved</name><value>0</value><source>hdfs-default.xml</source></property>
<property><name>hbase.master.mob.ttl.cleaner.period</name><value>86400000</value><source>hbase-default.xml</source></property>
<property><name>mapreduce.client.progressmonitor.pollinterval</name><value>1000</value><source>mapred-default.xml</source></property>
<property><name>yarn.resourcemanager.application-tokens.master-key-rolling-interval-secs</name><value>86400</value><source>yarn-default.xml</source></property>
<property><name>yarn.nodemanager.hostname</name><value>0.0.0.0</value><source>yarn-default.xml</source></property>
<property><name>yarn.resourcemanager.ha.enabled</name><value>false</value><source>yarn-default.xml</source></property>
<property><name>dfs.ha.log-roll.period</name><value>120</value><source>hdfs-default.xml</source></property>
<property><name>yarn.scheduler.minimum-allocation-vcores</name><value>1</value><source>yarn-default.xml</source></property>
<property><name>dfs.client.block.write.replace-datanode-on-failure.best-effort</name><value>false</value><source>hdfs-default.xml</source></property>
<property><name>yarn.app.mapreduce.am.container.log.limit.kb</name><value>0</value><source>mapred-default.xml</source></property>
<property><name>hadoop.http.authentication.signature.secret.file</name><value>${user.home}/hadoop-http-auth-signature-secret</value><source>core-default.xml</source></property>
<property><name>mapreduce.jobhistory.move.interval-ms</name><value>180000</value><source>mapred-default.xml</source></property>
<property><name>yarn.nodemanager.container-executor.class</name><value>org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor</value><source>yarn-default.xml</source></property>
<property><name>hadoop.security.authorization</name><value>false</value><source>core-site.xml</source></property>
<property><name>dfs.datanode.https.address</name><value>0.0.0.0:50475</value><source>hdfs-default.xml</source></property>
<property><name>yarn.nodemanager.localizer.address</name><value>${yarn.nodemanager.hostname}:8040</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.jobhistory.recovery.store.fs.uri</name><value>${hadoop.tmp.dir}/mapred/history/recoverystore</value><source>mapred-default.xml</source></property>
<property><name>dfs.namenode.replication.min</name><value>1</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.shuffle.connection-keep-alive.enable</name><value>false</value><source>mapred-default.xml</source></property>
<property><name>hadoop.common.configuration.version</name><value>0.23.0</value><source>core-default.xml</source></property>
<property><name>yarn.app.mapreduce.task.container.log.backups</name><value>0</value><source>mapred-default.xml</source></property>
<property><name>hadoop.security.groups.negative-cache.secs</name><value>30</value><source>core-default.xml</source></property>
<property><name>mapreduce.ifile.readahead</name><value>true</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.job.max.split.locations</name><value>10</value><source>mapred-default.xml</source></property>
<property><name>dfs.datanode.max.locked.memory</name><value>0</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.jobhistory.joblist.cache.size</name><value>20000</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.job.end-notification.max.attempts</name><value>5</value><source>mapred-default.xml</source></property>
<property><name>dfs.image.transfer.timeout</name><value>60000</value><source>hdfs-default.xml</source></property>
<property><name>dfs.client.read.shortcircuit.skip.checksum</name><value>false</value><source>hdfs-site.xml</source></property>
<property><name>nfs.rtmax</name><value>1048576</value><source>hdfs-default.xml</source></property>
<property><name>dfs.namenode.edit.log.autoroll.check.interval.ms</name><value>300000</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.reduce.shuffle.connect.timeout</name><value>180000</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.jobhistory.webapp.address</name><value>0.0.0.0:19888</value><source>mapred-default.xml</source></property>
<property><name>dfs.datanode.failed.volumes.tolerated</name><value>0</value><source>hdfs-default.xml</source></property>
<property><name>fs.s3a.connection.timeout</name><value>5000</value><source>core-default.xml</source></property>
<property><name>dfs.client.mmap.retry.timeout.ms</name><value>300000</value><source>hdfs-default.xml</source></property>
<property><name>dfs.datanode.data.dir.perm</name><value>700</value><source>hdfs-default.xml</source></property>
<property><name>hadoop.http.authentication.token.validity</name><value>36000</value><source>core-default.xml</source></property>
<property><name>ipc.client.connect.max.retries.on.timeouts</name><value>45</value><source>core-default.xml</source></property>
<property><name>yarn.app.mapreduce.am.job.committer.cancel-timeout</name><value>60000</value><source>mapred-default.xml</source></property>
<property><name>dfs.ha.fencing.ssh.connect-timeout</name><value>30000</value><source>core-default.xml</source></property>
<property><name>hbase.data.umask</name><value>000</value><source>hbase-default.xml</source></property>
<property><name>mapreduce.reduce.log.level</name><value>INFO</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.reduce.shuffle.merge.percent</name><value>0.66</value><source>mapred-default.xml</source></property>
<property><name>ipc.client.fallback-to-simple-auth-allowed</name><value>false</value><source>core-default.xml</source></property>
<property><name>hbase.master.executor.openregion.threads</name><value>5</value><source>hbase-site.xml</source></property>
<property><name>io.serializations</name><value>org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerialization</value><source>core-default.xml</source></property>
<property><name>hbase.regionserver.hlog.writer.impl</name><value>org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter</value><source>hbase-default.xml</source></property>
<property><name>fs.s3.block.size</name><value>67108864</value><source>core-default.xml</source></property>
<property><name>yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user</name><value>nobody</value><source>yarn-default.xml</source></property>
<property><name>hadoop.kerberos.kinit.command</name><value>kinit</value><source>core-default.xml</source></property>
<property><name>hbase.regionserver.global.memstore.lowerLimit</name><value>0.38</value><source>hbase-default.xml</source></property>
<property><name>yarn.resourcemanager.fs.state-store.uri</name><value>${hadoop.tmp.dir}/yarn/system/rmstore</value><source>yarn-default.xml</source></property>
<property><name>hbase.regionserver.region.split.policy</name><value>org.apache.hadoop.hbase.regionserver.IncreasingToUpperBoundRegionSplitPolicy</value><source>hbase-default.xml</source></property>
<property><name>yarn.admin.acl</name><value>*</value><source>yarn-default.xml</source></property>
<property><name>dfs.namenode.delegation.token.max-lifetime</name><value>604800000</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.reduce.merge.inmem.threshold</name><value>1000</value><source>mapred-default.xml</source></property>
<property><name>net.topology.impl</name><value>org.apache.hadoop.net.NetworkTopology</value><source>core-default.xml</source></property>
<property><name>yarn.resourcemanager.ha.automatic-failover.enabled</name><value>true</value><source>yarn-default.xml</source></property>
<property><name>dfs.datanode.use.datanode.hostname</name><value>false</value><source>hdfs-default.xml</source></property>
<property><name>dfs.heartbeat.interval</name><value>3</value><source>hdfs-default.xml</source></property>
<property><name>yarn.resourcemanager.scheduler.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value><source>yarn-default.xml</source></property>
<property><name>io.map.index.skip</name><value>0</value><source>core-default.xml</source></property>
<property><name>yarn.resourcemanager.webapp.https.address</name><value>${yarn.resourcemanager.hostname}:8090</value><source>yarn-default.xml</source></property>
<property><name>dfs.namenode.handler.count</name><value>10</value><source>hdfs-default.xml</source></property>
<property><name>yarn.nodemanager.admin-env</name><value>MALLOC_ARENA_MAX=$MALLOC_ARENA_MAX</value><source>yarn-default.xml</source></property>
<property><name>hbase.client.max.total.tasks</name><value>100</value><source>hbase-default.xml</source></property>
<property><name>hadoop.security.crypto.cipher.suite</name><value>AES/CTR/NoPadding</value><source>core-default.xml</source></property>
<property><name>mapreduce.task.profile.map.params</name><value>${mapreduce.task.profile.params}</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.jobtracker.jobhistory.block.size</name><value>3145728</value><source>mapred-default.xml</source></property>
<property><name>hbase.zookeeper.peerport</name><value>2888</value><source>hbase-default.xml</source></property>
<property><name>hadoop.security.crypto.buffer.size</name><value>8192</value><source>core-default.xml</source></property>
<property><name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name><value>org.apache.hadoop.mapred.ShuffleHandler</value><source>yarn-default.xml</source></property>
<property><name>hbase.master.executor.serverops.threads</name><value>5</value><source>hbase-site.xml</source></property>
<property><name>mapreduce.cluster.acls.enabled</name><value>false</value><source>mapred-default.xml</source></property>
<property><name>hbase.regionserver.info.port</name><value>60030</value><source>hbase-site.xml</source></property>
<property><name>hbase.hregion.majorcompaction.jitter</name><value>0.50</value><source>hbase-default.xml</source></property>
<property><name>dfs.client.hedged.read.threshold.millis</name><value>500</value><source>hdfs-site.xml</source></property>
<property><name>fs.har.impl.disable.cache</name><value>true</value><source>core-default.xml</source></property>
<property><name>mapreduce.tasktracker.map.tasks.maximum</name><value>2</value><source>mapred-default.xml</source></property>
<property><name>ipc.client.connect.timeout</name><value>20000</value><source>core-default.xml</source></property>
<property><name>yarn.timeline-service.generic-application-history.enabled</name><value>false</value><source>yarn-default.xml</source></property>
<property><name>yarn.nodemanager.remote-app-log-dir-suffix</name><value>logs</value><source>yarn-default.xml</source></property>
<property><name>fs.df.interval</name><value>60000</value><source>core-default.xml</source></property>
<property><name>hbase.regionserver.thrift.framed.max_frame_size_in_mb</name><value>5</value><source>hbase-site.xml</source></property>
<property><name>hadoop.util.hash.type</name><value>murmur</value><source>core-default.xml</source></property>
<property><name>mapreduce.jobhistory.minicluster.fixed.ports</name><value>false</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.jobtracker.jobhistory.lru.cache.size</name><value>5</value><source>mapred-default.xml</source></property>
<property><name>dfs.client.failover.max.attempts</name><value>15</value><source>hdfs-default.xml</source></property>
<property><name>dfs.client.use.datanode.hostname</name><value>false</value><source>hdfs-site.xml</source></property>
<property><name>ha.zookeeper.acl</name><value>world:anyone:rwcda</value><source>core-default.xml</source></property>
<property><name>mapreduce.jobtracker.maxtasks.perjob</name><value>-1</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.map.sort.spill.percent</name><value>0.80</value><source>mapred-default.xml</source></property>
<property><name>file.stream-buffer-size</name><value>4096</value><source>core-default.xml</source></property>
<property><name>yarn.resourcemanager.ha.automatic-failover.embedded</name><value>true</value><source>yarn-default.xml</source></property>
<property><name>hbase.regionserver.catalog.timeout</name><value>600000</value><source>hbase-default.xml</source></property>
<property><name>hbase.security.authentication</name><value>simple</value><source>hbase-site.xml</source></property>
<property><name>yarn.resourcemanager.nodemanager.minimum.version</name><value>NONE</value><source>yarn-default.xml</source></property>
<property><name>hadoop.fuse.connection.timeout</name><value>300</value><source>hdfs-default.xml</source></property>
<property><name>hbase.client.keyvalue.maxsize</name><value>10485760</value><source>hbase-site.xml</source></property>
<property><name>yarn.resourcemanager.history-writer.multi-threaded-dispatcher.pool-size</name><value>10</value><source>yarn-default.xml</source></property>
<property><name>hbase.regionserver.thrift.compact</name><value>false</value><source>hbase-default.xml</source></property>
<property><name>mapreduce.tasktracker.instrumentation</name><value>org.apache.hadoop.mapred.TaskTrackerMetricsInst</value><source>mapred-default.xml</source></property>
<property><name>io.seqfile.sorter.recordlimit</name><value>1000000</value><source>core-default.xml</source></property>
<property><name>yarn.app.mapreduce.am.resource.mb</name><value>1536</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.framework.name</name><value>local</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.job.reduce.slowstart.completedmaps</name><value>0.05</value><source>mapred-default.xml</source></property>
<property><name>yarn.resourcemanager.client.thread-count</name><value>50</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.cluster.temp.dir</name><value>${hadoop.tmp.dir}/mapred/temp</value><source>mapred-default.xml</source></property>
<property><name>dfs.client.mmap.enabled</name><value>true</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.jobhistory.intermediate-done-dir</name><value>${yarn.app.mapreduce.am.staging-dir}/history/done_intermediate</value><source>mapred-default.xml</source></property>
<property><name>hbase.defaults.for.version.skip</name><value>false</value><source>hbase-default.xml</source></property>
<property><name>fs.s3a.attempts.maximum</name><value>10</value><source>core-default.xml</source></property>
<property><name>hbase.rest.support.proxyuser</name><value>false</value><source>hbase-default.xml</source></property>
</configuration>

Recent regionserver aborts:
===========================================================


Logs
===========================================================
+++++++++++++++++++++++++++++++
/var/log/hbase/hbase-cmf-hbase-MASTER-hdpnode5.log.out
+++++++++++++++++++++++++++++++
2015-04-23 04:02:33,098 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.library.path=/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/native
2015-04-23 04:02:33,098 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2015-04-23 04:02:33,098 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2015-04-23 04:02:33,098 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux
2015-04-23 04:02:33,098 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.arch=amd64
2015-04-23 04:02:33,098 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.version=3.2.0-59-virtual
2015-04-23 04:02:33,098 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.name=hbase
2015-04-23 04:02:33,098 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.home=/var/lib/hbase
2015-04-23 04:02:33,099 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.dir=/run/cloudera-scm-agent/process/1911-hbase-MASTER
2015-04-23 04:02:33,100 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=hdpnode4:2181,hdpnode3:2181,hdpnode2:2181,hdpnode1:2181,hdpnode5:2181 sessionTimeout=60000 watcher=master:60000, quorum=hdpnode4:2181,hdpnode3:2181,hdpnode2:2181,hdpnode1:2181,hdpnode5:2181, baseZNode=/hbase
2015-04-23 04:02:33,128 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hdpnode4/172.30.1.230:2181. Will not attempt to authenticate using SASL (unknown error)
2015-04-23 04:02:33,135 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hdpnode4/172.30.1.230:2181, initiating session
2015-04-23 04:02:33,149 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server hdpnode4/172.30.1.230:2181, sessionid = 0x44ce41ad766013f, negotiated timeout = 60000
2015-04-23 04:02:35,673 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=hdpnode4:2181,hdpnode3:2181,hdpnode2:2181,hdpnode1:2181,hdpnode5:2181 sessionTimeout=60000 watcher=hconnection-0x9fc618, quorum=hdpnode4:2181,hdpnode3:2181,hdpnode2:2181,hdpnode1:2181,hdpnode5:2181, baseZNode=/hbase
2015-04-23 04:02:35,674 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hdpnode3/172.30.2.189:2181. Will not attempt to authenticate using SASL (unknown error)
2015-04-23 04:02:35,675 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hdpnode3/172.30.2.189:2181, initiating session
2015-04-23 04:02:35,679 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server hdpnode3/172.30.2.189:2181, sessionid = 0x34cd490344e39ac, negotiated timeout = 60000
2015-04-23 04:02:36,196 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=hdpnode4:2181,hdpnode3:2181,hdpnode2:2181,hdpnode1:2181,hdpnode5:2181 sessionTimeout=60000 watcher=replicationLogCleaner, quorum=hdpnode4:2181,hdpnode3:2181,hdpnode2:2181,hdpnode1:2181,hdpnode5:2181, baseZNode=/hbase
2015-04-23 04:02:36,197 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hdpnode3/172.30.2.189:2181. Will not attempt to authenticate using SASL (unknown error)
2015-04-23 04:02:36,198 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hdpnode3/172.30.2.189:2181, initiating session
2015-04-23 04:02:36,202 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server hdpnode3/172.30.2.189:2181, sessionid = 0x34cd490344e39ad, negotiated timeout = 60000
2015-04-30 13:42:45,060 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x44ce41ad766013f, likely server has closed socket, closing socket connection and attempting reconnect
2015-04-30 13:42:45,061 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x34cd490344e39ac, likely server has closed socket, closing socket connection and attempting reconnect
2015-04-30 13:42:45,060 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x34cd490344e39ad, likely server has closed socket, closing socket connection and attempting reconnect
2015-04-30 13:42:46,917 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hdpnode1/172.30.1.73:2181. Will not attempt to authenticate using SASL (unknown error)
2015-04-30 13:42:46,918 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hdpnode2/172.30.0.99:2181. Will not attempt to authenticate using SASL (unknown error)
2015-04-30 13:42:46,919 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hdpnode2/172.30.0.99:2181, initiating session
2015-04-30 13:42:46,920 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hdpnode1/172.30.1.73:2181, initiating session
2015-04-30 13:42:46,923 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x34cd490344e39ad, likely server has closed socket, closing socket connection and attempting reconnect
2015-04-30 13:42:46,988 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x44ce41ad766013f, likely server has closed socket, closing socket connection and attempting reconnect
2015-04-30 13:42:47,115 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hdpnode4/172.30.1.230:2181. Will not attempt to authenticate using SASL (unknown error)
2015-04-30 13:42:47,225 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hdpnode4/172.30.1.230:2181, initiating session
2015-04-30 13:42:47,227 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x34cd490344e39ac, likely server has closed socket, closing socket connection and attempting reconnect
2015-04-30 13:42:47,351 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hdpnode2/172.30.0.99:2181. Will not attempt to authenticate using SASL (unknown error)
2015-04-30 13:42:47,352 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hdpnode2/172.30.0.99:2181, initiating session
2015-04-30 13:42:47,353 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x34cd490344e39ad, likely server has closed socket, closing socket connection and attempting reconnect
2015-04-30 13:42:47,442 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hdpnode1/172.30.1.73:2181. Will not attempt to authenticate using SASL (unknown error)
2015-04-30 13:42:47,444 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hdpnode1/172.30.1.73:2181, initiating session
2015-04-30 13:42:47,446 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x44ce41ad766013f, likely server has closed socket, closing socket
attempting reconnect
2015-04-30 13:42:47,617 INFO org.apache.zookeeper.ClientCnxn: Opening
socket connection to server hdpnode4/172.30.1.230:2181. Will not
attempt to authenticate using SASL (unknown error)
2015-04-30 13:42:47,618 INFO org.apache.zookeeper.ClientCnxn: Socket
connection established to hdpnode4/172.30.1.230:2181, initiating
session
2015-04-30 13:42:47,620 INFO org.apache.zookeeper.ClientCnxn: Unable
to read additional data from server sessionid 0x34cd490344e39ad,
likely server has closed socket, closing socket connection and
attempting reconnect
2015-04-30 13:42:47,750 INFO org.apache.zookeeper.ClientCnxn: Opening
socket connection to server hdpnode3/172.30.2.189:2181. Will not
attempt to authenticate using SASL (unknown error)
2015-04-30 13:42:47,751 INFO org.apache.zookeeper.ClientCnxn: Socket
connection established to hdpnode3/172.30.2.189:2181, initiating
session
2015-04-30 13:42:47,754 INFO org.apache.zookeeper.ClientCnxn: Session
establishment complete on server hdpnode3/172.30.2.189:2181, sessionid
= 0x44ce41ad766013f, negotiated timeout = 60000
2015-04-30 13:42:48,238 INFO org.apache.zookeeper.ClientCnxn: Opening
socket connection to server hdpnode2/172.30.0.99:2181. Will not
attempt to authenticate using SASL (unknown error)
2015-04-30 13:42:48,240 INFO org.apache.zookeeper.ClientCnxn: Socket
connection established to hdpnode2/172.30.0.99:2181, initiating
session
2015-04-30 13:42:48,241 INFO org.apache.zookeeper.ClientCnxn: Session
establishment complete on server hdpnode2/172.30.0.99:2181, sessionid
= 0x34cd490344e39ac, negotiated timeout = 60000
2015-04-30 13:42:48,504 INFO org.apache.zookeeper.ClientCnxn: Opening
socket connection to server hdpnode5/172.30.2.55:2181. Will not
attempt to authenticate using SASL (unknown error)
2015-04-30 13:42:48,504 INFO org.apache.zookeeper.ClientCnxn: Socket
connection established to hdpnode5/172.30.2.55:2181, initiating
session
2015-04-30 13:42:48,504 INFO org.apache.zookeeper.ClientCnxn: Unable
to read additional data from server sessionid 0x34cd490344e39ad,
likely server has closed socket, closing socket connection and
attempting reconnect
2015-04-30 13:42:50,344 INFO org.apache.zookeeper.ClientCnxn: Opening
socket connection to server hdpnode3/172.30.2.189:2181. Will not
attempt to authenticate using SASL (unknown error)
2015-04-30 13:42:50,345 INFO org.apache.zookeeper.ClientCnxn: Socket
connection established to hdpnode3/172.30.2.189:2181, initiating
session
2015-04-30 13:42:50,348 INFO org.apache.zookeeper.ClientCnxn: Session
establishment complete on server hdpnode3/172.30.2.189:2181, sessionid
= 0x34cd490344e39ad, negotiated timeout = 60000
2015-04-30 22:12:41,896 INFO org.apache.zookeeper.ZooKeeper: Session:
0x34cd490344e39ad closed
2015-04-30 22:12:41,896 INFO org.apache.zookeeper.ClientCnxn:
EventThread shut down
2015-04-30 22:12:48,678 INFO org.apache.zookeeper.ZooKeeper: Session:
0x34cd490344e39ac closed
2015-04-30 22:12:48,678 INFO org.apache.zookeeper.ClientCnxn:
EventThread shut down
2015-04-30 22:12:48,684 INFO org.apache.zookeeper.ZooKeeper: Session:
0x44ce41ad766013f closed
2015-04-30 22:12:48,684 INFO org.apache.zookeeper.ClientCnxn:
EventThread shut down
2015-04-30 22:15:13,943 INFO org.apache.zookeeper.ZooKeeper: Client
environment:zookeeper.version=3.4.5-cdh5.3.3--1, built on 04/08/2015
21:54 GMT
2015-04-30 22:15:13,965 INFO org.apache.zookeeper.ZooKeeper: Client
environment:host.name=hdpnode5
2015-04-30 22:15:13,966 INFO org.apache.zookeeper.ZooKeeper: Client
environment:java.version=1.7.0_67
2015-04-30 22:15:13,966 INFO org.apache.zookeeper.ZooKeeper: Client
environment:java.vendor=Oracle Corporation
2015-04-30 22:15:13,966 INFO org.apache.zookeeper.ZooKeeper: Client
environment:java.home=/usr/lib/jvm/java-7-oracle-cloudera/jre
2015-04-30 22:15:13,966 INFO org.apache.zookeeper.ZooKeeper: Client
environment:java.class.path=/run/cloudera-scm-agent/process/2091-hbase-MASTER:/usr/lib/jvm/java-7-oracle-cloudera/lib/tools.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/activation-1.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/apacheds-i18n-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/api-asn1-api-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/api-util-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/avro.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/commons-beanutils-1.7.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/commons-beanutils-core-1.8.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/commons-codec-1.7.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/commons-collections-3.2.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/commons-compress-1.4.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/commons-configuration-1.6.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/commons-daemon-1.0.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/commons-digester-1.8.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/commons-el-1.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/commons-httpclient-3.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/commons-logging-1.1.1.jar:/opt
/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/commons-math-2.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/commons-math3-3.1.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/commons-net-3.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/core-3.1.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/curator-client-2.6.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/curator-framework-2.6.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/curator-recipes-2.6.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/findbugs-annotations-1.3.9-1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/gson-2.2.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/guava-12.0.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/hamcrest-core-1.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/hbase-client-0.98.6-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/hbase-common-0.98.6-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/hbase-common-0.98.6-cdh5.3.3-tests.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/hbase-examples-0.98.6-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/hbase-hadoop2-compat-0.98.6-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/hbase-hadoop2-compat-0.98.6-cdh5.3.3-tests.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/hbase-hadoop-compat-0.98.6-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/hbase-hadoop-compat-0.98.6-cdh5.3.3-tests.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/hbase-it-0.98.6-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/hbase-it-0.98.6-cdh5.3.3-tests.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/hbase-prefix-tre
e-0.98.6-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/hbase-protocol-0.98.6-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/hbase-server-0.98.6-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/hbase-server-0.98.6-cdh5.3.3-tests.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/hbase-shell-0.98.6-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/hbase-testing-util-0.98.6-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/hbase-thrift-0.98.6-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/high-scale-lib-1.1.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/hsqldb-1.8.0.10.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/htrace-core-2.04.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/htrace-core.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/httpclient-4.2.5.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/httpcore-4.2.5.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/jackson-jaxrs-1.8.8.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/jackson-xc-1.8.8.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/jamon-runtime-2.3.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/jasper-compiler-5.5.23.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/jasper-runtime-5.5.23.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/java-xmlbuilder-0.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/jaxb-api-2.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/jaxb-impl-2.2.3-1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3
.3.p0.5/lib/hbase/lib/jersey-core-1.8.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/jersey-json-1.8.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/jersey-server-1.8.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/jets3t-0.9.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/jettison-1.3.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/jetty-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/jetty-sslengine-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/jetty-util-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/jruby-complete-1.6.8.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/jsch-0.1.42.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/jsp-2.1-6.1.14.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/jsp-api-2.1-6.1.14.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/jsp-api-2.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/jsr305-1.3.9.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/junit-4.11.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/libthrift-0.9.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/metrics-core-2.2.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/netty-3.6.6.Final.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/paranamer-2.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/servlet-api-2.5-6.1.14.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/slf4j-api-1.7.5.jar:/opt/cloudera/parcels/CDH-5.3.3-1
.cdh5.3.3.p0.5/lib/hbase/lib/slf4j-log4j12.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/snappy-java-1.0.4.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/xmlenc-0.52.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/xz-1.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hbase/lib/zookeeper.jar:/run/cloudera-scm-agent/process/2091-hbase-MASTER:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/hamcrest-core-1.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/jsr305-1.3.9.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/snappy-java-1.0.4.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/httpclient-4.2.5.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/activation-1.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/gson-2.2.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/jackson-xc-1.8.8.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/junit-4.11.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/commons-httpclient-3.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/commons-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/stax-api-1.0-2.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/commons-beanutils-1.7.0.jar:/opt/cloudera/p
arcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/curator-recipes-2.6.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/zookeeper.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/jsp-api-2.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/api-util-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/commons-net-3.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/xz-1.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/curator-client-2.6.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/slf4j-log4j12.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/httpcore-4.2.5.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/hue-plugins-3.7.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/avro.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/commons-collections-3.2.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/jsch-0.1.42.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.3.3-
1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/jetty-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/jackson-jaxrs-1.8.8.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/api-asn1-api-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/jaxb-impl-2.2.3-1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/curator-framework-2.6.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/jaxb-api-2.2.2.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/java-xmlbuilder-0.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/mockito-all-1.8.5.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/commons-digester-1.8.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/commons-beanutils-core-1.8.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/xmlenc-0.52.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/commons-math3-3.1.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/commons-configuration-1.6.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/jettison-1.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/netty-3.6.2.Final.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/commons-el-1.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/jasper-compiler-5.5.23.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/commons-c
li-1.2.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/jetty-util-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/commons-compress-1.4.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/jersey-json-1.9.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/jets3t-0.9.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/paranamer-2.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/aws-java-sdk-1.7.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/jasper-runtime-5.5.23.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/lib/slf4j-api-1.7.5.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/.//parquet-hadoop-bundle.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/.//parquet-hadoop.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/.//hadoop-common.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/.//hadoop-auth-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/.//hadoop-annotations-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/.//parquet-column.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3
.p0.5/lib/hadoop/libexec/../../hadoop/.//hadoop-common-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/.//parquet-format.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/.//parquet-tools.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/.//parquet-scala_2.10.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/.//parquet-protobuf.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/.//parquet-format-sources.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/.//parquet-scrooge_2.10.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/.//parquet-format-javadoc.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/.//parquet-generator.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/.//parquet-common.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/.//parquet-test-hadoop2.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/.//parquet-cascading.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/.//hadoop-annotations.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/.//parquet-pig.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/.//parquet-pig-bundle.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/.//parquet-avro.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/.//hadoop-common-2.5.0-cdh5.3.3-tests.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/.//hadoop-nfs.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/.//hadoop-auth.jar:/opt/cloudera/parcels/CDH-5.3.3-1.c
dh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/.//parquet-encoding.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/.//parquet-thrift.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/.//hadoop-aws.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/.//hadoop-nfs-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/.//hadoop-aws-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop/.//parquet-jackson.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-hdfs/./:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-hdfs/lib/jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-hdfs/lib/jsr305-1.3.9.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-hdfs/lib/commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-hdfs/lib/commons-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-hdfs/lib/jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-hdfs/lib/jsp-api-2.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-hdfs/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-hdfs/lib/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-hdfs/lib/commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-hdfs/lib/log4j-1.2.17.jar:/opt/clou
dera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-hdfs/lib/jetty-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-hdfs/lib/xmlenc-0.52.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-hdfs/lib/servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-hdfs/lib/netty-3.6.2.Final.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-hdfs/lib/commons-el-1.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-hdfs/lib/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-hdfs/lib/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-hdfs/lib/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-hdfs/lib/jetty-util-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-hdfs/lib/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-hdfs/lib/jasper-runtime-5.5.23.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-hdfs/.//hadoop-hdfs-nfs-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-hdfs/.//hadoop-hdfs-nfs.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-hdfs/.//hadoop-hdfs-2.5.0-cdh5.3.3-tests.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-hdfs/.//hadoop-hdfs-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-hdfs/.//hadoop-hdfs.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoo
p-yarn/lib/jsr305-1.3.9.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/activation-1.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/jackson-xc-1.8.8.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/commons-httpclient-3.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/commons-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/javax.inject-1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/stax-api-1.0-2.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/zookeeper.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/jline-0.9.94.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/xz-1.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/guice-3.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/commons-collections-3.2.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/aopalliance-1.0.jar:/opt/cloudera/parcel
s/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/jetty-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/jackson-jaxrs-1.8.8.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/guice-servlet-3.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/jaxb-api-2.2.2.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/leveldbjni-all-1.8.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/jettison-1.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/jersey-guice-1.9.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/jersey-client-1.9.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/jetty-util-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/commons-compress-1.4.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/jersey-json-1.9.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/lib/jackson-mapper-asl-1.8.8.jar:/opt/
cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/.//hadoop-yarn-server-tests.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/.//hadoop-yarn-server-nodemanager.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/.//hadoop-yarn-server-resourcemanager-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/.//hadoop-yarn-server-common-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/.//hadoop-yarn-applications-distributedshell-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/.//hadoop-yarn-common-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/.//hadoop-yarn-client.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/.//hadoop-yarn-server-common.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/.//hadoop-yarn-server-web-proxy-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/.//hadoop-yarn-client-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/.//hadoop-yarn-server-resourcemanager.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/.//hadoop-yarn-server-tests-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/.//hadoop-yarn-server-nodemanager-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cd
h5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/.//hadoop-yarn-common.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/.//hadoop-yarn-applications-distributedshell.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/.//hadoop-yarn-server-web-proxy.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/.//hadoop-yarn-api-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-yarn/.//hadoop-yarn-api.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/lib/hamcrest-core-1.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/lib/jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/lib/commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/lib/snappy-java-1.0.4.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/lib/junit-4.11.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/lib/jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/lib/javax.inject-1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/lib/xz-1.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/lib/protobuf-java-2.5.0.jar:/opt/cloudera/parcel
s/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/lib/guice-3.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/lib/avro.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/lib/aopalliance-1.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/lib/guice-servlet-3.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/lib/leveldbjni-all-1.8.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/lib/netty-3.6.2.Final.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/lib/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/lib/jersey-guice-1.9.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/lib/commons-compress-1.4.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/lib/paranamer-2.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hamcrest-core-1.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-mapreduce-client-core-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//jsr305-1.3.9.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/li
b/hadoop/libexec/../../hadoop-mapreduce/.//snappy-java-1.0.4.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//httpclient-4.2.5.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//activation-1.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-auth-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//jackson-databind-2.2.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//gson-2.2.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-mapreduce-client-nativetask.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-sls-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//jackson-xc-1.8.8.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//junit-4.11.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//commons-httpclient-3.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-datajoin-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//commons-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-mapreduce-client-nativetask-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//stax-api-1.
0-2.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//jackson-core-2.2.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//commons-beanutils-1.7.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-archives.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-mapreduce-client-hs.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//curator-recipes-2.6.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//zookeeper.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//jsp-api-2.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//api-util-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-rumen-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//commons-net-3.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-archives-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-mapreduce-client-hs-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//asm-3.2.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-tests.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p
0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-gridmix-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-azure.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-mapreduce-client-core.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//xz-1.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-extras.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//curator-client-2.6.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.5.0-cdh5.3.3-tests.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//httpcore-4.2.5.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-mapreduce-client-app.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//jackson-annotations-2.2.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//apacheds-kerberos-codec-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-mapreduce-client-app-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//microsoft-windowsazure-storage-sdk-0.6.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/ha
doop/libexec/../../hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-distcp-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//avro.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//commons-collections-3.2.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-streaming-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//jsch-0.1.42.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//jetty-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-streaming.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//jackson-jaxrs-1.8.8.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//api-asn1-api-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//jaxb-impl-2.2.3-1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//curator-framework-2.6.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//jaxb-api-2.2.2.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-sls.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//java-xmlbuilder-0.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexe
c/../../hadoop-mapreduce/.//commons-digester-1.8.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-extras-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-azure-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//commons-beanutils-core-1.8.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-mapreduce-examples.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-distcp.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-rumen.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//xmlenc-0.52.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//metrics-core-3.0.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//commons-math3-3.1.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//commons-configuration-1.6.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//jettison-1.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//commons-el-1.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-auth.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//jasper-compiler-5.5.23.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapr
educe/.//jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//jetty-util-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//commons-compress-1.4.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//jersey-json-1.9.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//jets3t-0.9.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//apacheds-i18n-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//paranamer-2.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-datajoin.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-mapreduce-examples-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-mapreduce-client-common-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-mapreduce-client-common.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//jasper-runtime-5.5.23.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/libexec/../../hadoop-mapreduce/.//hadoop-gridmix.jar:/run/cloudera-scm-agent/process/2091-hbase-MASTER:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/parquet-hadoop-bundle.jar:/opt/clou
dera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/parquet-hadoop.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/hadoop-common.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/hadoop-auth-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/hadoop-annotations-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/parquet-column.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/hadoop-common-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/parquet-format.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/parquet-tools.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/parquet-scala_2.10.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/parquet-protobuf.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/parquet-format-sources.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/parquet-scrooge_2.10.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/parquet-format-javadoc.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/parquet-generator.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/parquet-common.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/parquet-test-hadoop2.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/parquet-cascading.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/hadoop-annotations.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/parquet-pig.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/parquet-pig-bundle.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/parquet-avro.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/hadoop-common-2.5.0-cdh5.3.3-tests.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/hadoop-nfs.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/hadoop-auth.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5
.3.3.p0.5/lib/hadoop/parquet-encoding.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/parquet-thrift.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/hadoop-aws.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/hadoop-nfs-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/hadoop-aws-2.5.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/parquet-jackson.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/hamcrest-core-1.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/jsr305-1.3.9.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/snappy-java-1.0.4.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/httpclient-4.2.5.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/activation-1.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/gson-2.2.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/jackson-xc-1.8.8.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/junit-4.11.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/commons-httpclient-3.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/commons-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/stax-api-1.0-2.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/curator-recipes-2.6.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/zookeeper.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/jsp-api-2.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p
0.5/lib/hadoop/lib/api-util-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/commons-net-3.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/xz-1.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/curator-client-2.6.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/slf4j-log4j12.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/httpcore-4.2.5.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/hue-plugins-3.7.0-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/avro.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/commons-collections-3.2.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/jsch-0.1.42.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/jetty-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/jackson-jaxrs-1.8.8.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/curator-framework-2.6.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/jaxb-api-2.2.2.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/java-xmlbuilder-0.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/mockito-all-1.8.5.jar:/opt/cl
oudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/commons-digester-1.8.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/xmlenc-0.52.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/commons-math3-3.1.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/commons-configuration-1.6.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/jettison-1.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/netty-3.6.2.Final.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/commons-el-1.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/jasper-compiler-5.5.23.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/jetty-util-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/commons-compress-1.4.1.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/jersey-json-1.9.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/jets3t-0.9.0.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/paranamer-2.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/aws-java-sdk-1.7.4.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/jasper-runtime-5.5.23.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/slf4j-api-1.7.5.jar:/opt/cloudera/p
arcels/CDH-5.3.3-1.cdh5.3.3.p0.5/bin/../lib/zookeeper/zookeeper.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/bin/../lib/zookeeper/zookeeper-3.4.5-cdh5.3.3.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/bin/../lib/zookeeper/lib/log4j-1.2.16.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/bin/../lib/zookeeper/lib/slf4j-log4j12-1.7.5.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/bin/../lib/zookeeper/lib/netty-3.2.2.Final.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/bin/../lib/zookeeper/lib/jline-0.9.94.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/bin/../lib/zookeeper/lib/slf4j-log4j12.jar:/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/bin/../lib/zookeeper/lib/slf4j-api-1.7.5.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.2.1.jar:/usr/share/cmf/lib/plugins/event-publish-5.2.1-shaded.jar:/usr/share/cmf/lib/plugins/cdh5/audit-plugin-cdh5-2.1.1-shaded.jar
2015-04-30 22:15:14,065 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.library.path=/opt/cloudera/parcels/CDH-5.3.3-1.cdh5.3.3.p0.5/lib/hadoop/lib/native
2015-04-30 22:15:14,065 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2015-04-30 22:15:14,065 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2015-04-30 22:15:14,065 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux
2015-04-30 22:15:14,066 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.arch=amd64
2015-04-30 22:15:14,066 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.version=3.2.0-59-virtual
2015-04-30 22:15:14,066 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.name=hbase
2015-04-30 22:15:14,066 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.home=/var/lib/hbase
2015-04-30 22:15:14,066 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.dir=/run/cloudera-scm-agent/process/2091-hbase-MASTER
2015-04-30 22:15:14,067 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=hdpnode4:2181,hdpnode3:2181,hdpnode2:2181,hdpnode1:2181,hdpnode5:2181 sessionTimeout=60000 watcher=master:60000, quorum=hdpnode4:2181,hdpnode3:2181,hdpnode2:2181,hdpnode1:2181,hdpnode5:2181, baseZNode=/hbase
2015-04-30 22:15:14,103 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hdpnode4/172.30.1.230:2181. Will not attempt to authenticate using SASL (unknown error)
2015-04-30 22:15:14,110 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hdpnode4/172.30.1.230:2181, initiating session
2015-04-30 22:15:14,140 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server hdpnode4/172.30.1.230:2181, sessionid = 0x44d0c6468dd0002, negotiated timeout = 60000
2015-04-30 22:15:17,137 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=hdpnode4:2181,hdpnode3:2181,hdpnode2:2181,hdpnode1:2181,hdpnode5:2181 sessionTimeout=60000 watcher=hconnection-0x57df1f4c, quorum=hdpnode4:2181,hdpnode3:2181,hdpnode2:2181,hdpnode1:2181,hdpnode5:2181, baseZNode=/hbase
2015-04-30 22:15:17,138 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hdpnode2/172.30.0.99:2181. Will not attempt to authenticate using SASL (unknown error)
2015-04-30 22:15:17,139 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hdpnode2/172.30.0.99:2181, initiating session
2015-04-30 22:15:17,145 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server hdpnode2/172.30.0.99:2181, sessionid = 0x24d0c649c6a0004, negotiated timeout = 60000
2015-04-30 22:15:17,636 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=hdpnode4:2181,hdpnode3:2181,hdpnode2:2181,hdpnode1:2181,hdpnode5:2181 sessionTimeout=60000 watcher=replicationLogCleaner, quorum=hdpnode4:2181,hdpnode3:2181,hdpnode2:2181,hdpnode1:2181,hdpnode5:2181, baseZNode=/hbase
2015-04-30 22:15:17,638 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hdpnode4/172.30.1.230:2181. Will not attempt to authenticate using SASL (unknown error)
2015-04-30 22:15:17,639 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hdpnode4/172.30.1.230:2181, initiating session
2015-04-30 22:15:17,647 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server hdpnode4/172.30.1.230:2181, sessionid = 0x44d0c6468dd0006, negotiated timeout = 60000
2015-05-01 20:22:40,309 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=hdpnode4:2181,hdpnode3:2181,hdpnode2:2181,hdpnode1:2181,hdpnode5:2181 sessionTimeout=60000 watcher=catalogtracker-on-hconnection-0x57df1f4c, quorum=hdpnode4:2181,hdpnode3:2181,hdpnode2:2181,hdpnode1:2181,hdpnode5:2181, baseZNode=/hbase
2015-05-01 20:22:40,310 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hdpnode1/172.30.1.73:2181. Will not attempt to authenticate using SASL (unknown error)
2015-05-01 20:22:40,312 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hdpnode1/172.30.1.73:2181, initiating session
2015-05-01 20:22:40,319 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server hdpnode1/172.30.1.73:2181, sessionid = 0x14d0c6468ea10dd, negotiated timeout = 60000
2015-05-01 20:22:40,334 INFO org.apache.zookeeper.ZooKeeper: Session: 0x14d0c6468ea10dd closed
2015-05-01 20:22:40,335 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down
2015-05-01 20:22:40,338 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=hdpnode4:2181,hdpnode3:2181,hdpnode2:2181,hdpnode1:2181,hdpnode5:2181 sessionTimeout=60000 watcher=catalogtracker-on-hconnection-0x57df1f4c, quorum=hdpnode4:2181,hdpnode3:2181,hdpnode2:2181,hdpnode1:2181,hdpnode5:2181, baseZNode=/hbase
2015-05-01 20:22:40,339 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hdpnode3/172.30.2.189:2181. Will not attempt to authenticate using SASL (unknown error)
2015-05-01 20:22:40,339 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hdpnode3/172.30.2.189:2181, initiating session
2015-05-01 20:22:40,343 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server hdpnode3/172.30.2.189:2181, sessionid = 0x34d0c6468e11067, negotiated timeout = 60000
2015-05-01 20:22:40,440 INFO org.apache.zookeeper.ZooKeeper: Session: 0x34d0c6468e11067 closed
2015-05-01 20:22:40,440 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down
2015-05-01 20:23:08,302 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=hdpnode4:2181,hdpnode3:2181,hdpnode2:2181,hdpnode1:2181,hdpnode5:2181 sessionTimeout=60000 watcher=catalogtracker-on-hconnection-0x57df1f4c, quorum=hdpnode4:2181,hdpnode3:2181,hdpnode2:2181,hdpnode1:2181,hdpnode5:2181, baseZNode=/hbase
2015-05-01 20:23:08,303 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hdpnode1/172.30.1.73:2181. Will not attempt to authenticate using SASL (unknown error)
2015-05-01 20:23:08,305 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hdpnode1/172.30.1.73:2181, initiating session
2015-05-01 20:23:08,312 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server hdpnode1/172.30.1.73:2181, sessionid = 0x14d0c6468ea1105, negotiated timeout = 60000
2015-05-01 20:23:08,534 INFO org.apache.zookeeper.ZooKeeper: Session: 0x14d0c6468ea1105 closed
2015-05-01 20:23:08,534 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down
2015-05-01 20:23:13,709 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=hdpnode4:2181,hdpnode3:2181,hdpnode2:2181,hdpnode1:2181,hdpnode5:2181 sessionTimeout=60000 watcher=catalogtracker-on-hconnection-0x57df1f4c, quorum=hdpnode4:2181,hdpnode3:2181,hdpnode2:2181,hdpnode1:2181,hdpnode5:2181, baseZNode=/hbase
2015-05-01 20:23:13,710 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hdpnode3/172.30.2.189:2181. Will not attempt to authenticate using SASL (unknown error)
2015-05-01 20:23:13,711 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hdpnode3/172.30.2.189:2181, initiating session
2015-05-01 20:23:13,715 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server hdpnode3/172.30.2.189:2181, sessionid = 0x34d0c6468e1108f, negotiated timeout = 60000
2015-05-01 20:23:13,725 INFO org.apache.zookeeper.ZooKeeper: Session: 0x34d0c6468e1108f closed
2015-05-01 20:23:13,725 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down
2015-05-01 20:23:13,727 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=hdpnode4:2181,hdpnode3:2181,hdpnode2:2181,hdpnode1:2181,hdpnode5:2181 sessionTimeout=60000 watcher=catalogtracker-on-hconnection-0x57df1f4c, quorum=hdpnode4:2181,hdpnode3:2181,hdpnode2:2181,hdpnode1:2181,hdpnode5:2181, baseZNode=/hbase
2015-05-01 20:23:13,728 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hdpnode3/172.30.2.189:2181. Will not attempt to authenticate using SASL (unknown error)
2015-05-01 20:23:13,729 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hdpnode3/172.30.2.189:2181, initiating session
2015-05-01 20:23:13,733 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server hdpnode3/172.30.2.189:2181, sessionid = 0x34d0c6468e11090, negotiated timeout = 60000
2015-05-01 20:23:13,813 INFO org.apache.zookeeper.ZooKeeper: Session: 0x34d0c6468e11090 closed
2015-05-01 20:23:13,813 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down
2015-05-04 03:29:07,767 INFO org.apache.zookeeper.ClientCnxn: Client session timed out, have not heard from server in 40011ms for sessionid 0x44d0c6468dd0002, closing socket connection and attempting reconnect
2015-05-04 03:29:08,270 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hdpnode2/172.30.0.99:2181. Will not attempt to authenticate using SASL (unknown error)
2015-05-04 03:29:08,271 INFO org.apache.zookeeper.ClientCnxn: Socket
connection established to hdpnode2/172.30.0.99:2181, initiating
session
2015-05-04 03:29:08,275 INFO org.apache.zookeeper.ClientCnxn: Session
establishment complete on server hdpnode2/172.30.0.99:2181, sessionid
= 0x44d0c6468dd0002, negotiated timeout = 60000
2015-05-04 03:29:13,232 INFO org.apache.zookeeper.ClientCnxn: Unable
to read additional data from server sessionid 0x44d0c6468dd0006,
likely server has closed socket, closing socket connection and
attempting reconnect
2015-05-04 03:29:13,941 INFO org.apache.zookeeper.ClientCnxn: Opening
socket connection to server hdpnode2/172.30.0.99:2181. Will not
attempt to authenticate using SASL (unknown error)
2015-05-04 03:29:13,942 INFO org.apache.zookeeper.ClientCnxn: Socket
connection established to hdpnode2/172.30.0.99:2181, initiating
session
2015-05-04 03:29:13,945 INFO org.apache.zookeeper.ClientCnxn: Session
establishment complete on server hdpnode2/172.30.0.99:2181, sessionid
= 0x44d0c6468dd0006, negotiated timeout = 60000
2015-05-05 19:22:24,188 INFO org.apache.zookeeper.ZooKeeper:
Initiating client connection,
connectString=hdpnode4:2181,hdpnode3:2181,hdpnode2:2181,hdpnode1:2181,hdpnode5:2181
sessionTimeout=60000 watcher=catalogtracker-on-hconnection-0x57df1f4c,
quorum=hdpnode4:2181,hdpnode3:2181,hdpnode2:2181,hdpnode1:2181,hdpnode5:2181,
baseZNode=/hbase
2015-05-05 19:22:24,190 INFO org.apache.zookeeper.ClientCnxn: Opening
socket connection to server hdpnode1/172.30.1.73:2181. Will not
attempt to authenticate using SASL (unknown error)
2015-05-05 19:22:24,191 INFO org.apache.zookeeper.ClientCnxn: Socket
connection established to hdpnode1/172.30.1.73:2181, initiating
session
2015-05-05 19:22:24,197 INFO org.apache.zookeeper.ClientCnxn: Session
establishment complete on server hdpnode1/172.30.1.73:2181, sessionid
= 0x14d0c6468ea589b, negotiated timeout = 60000
2015-05-05 19:22:24,208 INFO org.apache.zookeeper.ZooKeeper: Session:
0x14d0c6468ea589b closed
2015-05-05 19:22:24,208 INFO org.apache.zookeeper.ClientCnxn:
EventThread shut down
2015-05-05 19:22:24,213 INFO org.apache.zookeeper.ZooKeeper:
Initiating client connection,
connectString=hdpnode4:2181,hdpnode3:2181,hdpnode2:2181,hdpnode1:2181,hdpnode5:2181
sessionTimeout=60000 watcher=catalogtracker-on-hconnection-0x57df1f4c,
quorum=hdpnode4:2181,hdpnode3:2181,hdpnode2:2181,hdpnode1:2181,hdpnode5:2181,
baseZNode=/hbase
2015-05-05 19:22:24,214 INFO org.apache.zookeeper.ClientCnxn: Opening
socket connection to server hdpnode1/172.30.1.73:2181. Will not
attempt to authenticate using SASL (unknown error)
2015-05-05 19:22:24,216 INFO org.apache.zookeeper.ClientCnxn: Socket
connection established to hdpnode1/172.30.1.73:2181, initiating
session
2015-05-05 19:22:24,222 INFO org.apache.zookeeper.ClientCnxn: Session
establishment complete on server hdpnode1/172.30.1.73:2181, sessionid
= 0x14d0c6468ea589c, negotiated timeout = 60000
2015-05-05 19:22:24,334 INFO org.apache.zookeeper.ZooKeeper: Session:
0x14d0c6468ea589c closed
2015-05-05 19:22:24,334 INFO org.apache.zookeeper.ClientCnxn:
EventThread shut down
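
For anyone else hitting this: the stack trace in my earlier message shows the server complaining "Expected nextCallSeq: 1 But the nextCallSeq got from client: 0". A rough simulation (plain Python, a toy model, not actual HBase code) of that sequence-number handshake shows how a next() that is retried after a timeout produces exactly this mismatch:

```python
class OutOfOrderScannerNextException(Exception):
    pass


class RegionScanner:
    """Toy model of the RegionServer-side scanner sequence check.

    The real server bumps nextCallSeq after every successful next()
    and rejects any call whose sequence number does not match.
    """

    def __init__(self):
        self.next_call_seq = 0

    def next_rows(self, client_seq):
        if client_seq != self.next_call_seq:
            raise OutOfOrderScannerNextException(
                "Expected nextCallSeq: %d But the nextCallSeq got "
                "from client: %d" % (self.next_call_seq, client_seq))
        self.next_call_seq += 1        # server advances its counter
        return ["row-%d" % self.next_call_seq]


scanner = RegionScanner()
scanner.next_rows(0)                   # first next() succeeds; server seq is now 1

try:
    # Client never saw the reply (timeout / long GC pause) and
    # retries the same call with its old, stale sequence number.
    scanner.next_rows(0)
except OutOfOrderScannerNextException as e:
    print(e)                           # same message as in the stack trace
```

So a slow region (or a GC pause like the 40-second ZooKeeper silence in the log above) can make the client retry with a stale sequence number, and the server answers with this do_not_retry exception; it is not necessarily the reverse scan itself that is broken.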
On Fri, Mar 27, 2015 at 7:09 PM, Ted Yu <yuzhihong@gmail.com> wrote:

> Do you mind providing a bit more information?
>
> Such as the release of hbase.
>
> Can you pastebin complete stack trace ?
>
> If you can encapsulate this in a unit test, I will debug it.
>
> Thanks
>
>
>
> > On Mar 27, 2015, at 7:03 PM, Abraham Tom <work2much@gmail.com> wrote:
> >
> > Every so often, using the reverse key scan on the Thrift API seems to
> > throw an error.
> > We have even isolated it to a specific record.
> > What is surprising to us is that when we make our start row the very
> > next record before it, it scans correctly.  Of course that is not
> > feasible by any means.
> >
> > Thoughts on this?
> >
> > --
> > Abraham Tom
> > Email:   work2much@gmail.com
> > Phone:  415-515-3621
>



-- 
Abraham Tom
Email:   work2much@gmail.com
Phone:  415-515-3621
