hbase-user mailing list archives

From Pablo Musa <pa...@psafe.com>
Subject RegionServers Crashing every hour in production env
Date Fri, 08 Mar 2013 15:44:24 GMT
Hey guys,
as I mentioned in an email a long time ago, the RSs in my cluster did not get along
and crashed 3 times a day. I tried a lot of the options we discussed in those
emails, but none of them solved the problem. Since I was using an old version of
hadoop, I thought that was the cause.

So, I upgraded from hadoop 0.20 / hbase 0.90 / zookeeper 3.3.5 to
hadoop 2.0.0 / hbase 0.94 / zookeeper 3.4.5.

Unfortunately the RSs did not stop crashing, and it got worse! Now they crash every
hour, and sometimes when the RS that holds the .ROOT. region crashes, the whole
cluster gets stuck in transition and everything stops working.
When that happens I need to clean the zookeeper znodes and restart the master and
the RSs.
To avoid this scenario I am running production with only ONE RS plus a monitoring
script that checks every minute whether the RS is OK and, if not, restarts it.
* This setup does not get the cluster stuck.
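
For reference, the watchdog is basically the sketch below (simplified; the port
probe and the restart command are placeholders for what actually runs here, not
the exact script):

   #!/bin/bash
   # rs-watchdog.sh -- run from cron once a minute, e.g.:
   #   * * * * * /usr/local/bin/rs-watchdog.sh
   # assumption: answering on the RS RPC port 60020 counts as "RS is ok"
   RS_HOST=PSLBHDN002
   if ! nc -z -w 5 "$RS_HOST" 60020; then
       echo "$(date) RS not answering on 60020, restarting" >> /var/log/rs-watchdog.log
       /etc/init.d/hbase-regionserver restart    # placeholder restart command
   fi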

This is driving me crazy, and I really can't find a solution for the cluster.
I tracked all the logs from the start time 16:49 on all the interesting nodes (zoo,
namenode, master, rs, dn2, dn9, dn10) and copied here what I think is useful.
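
(For anyone who wants to reproduce the extraction, something along these lines
works; the hostnames and log paths below are placeholders, since each service
logs to its own directory:)

   # pull the 16:49-17:27 window out of each node's logs
   for host in pslbhnn001 pslbhdn002 pslbhdn009 pslbhdn010; do
     ssh "$host" \
       "sed -n '/^2013-03-07 16:49/,/^2013-03-07 17:27/p' /var/log/hbase/*.log" \
       > "window-$host.log"   # prints to EOF if no 17:27 line exists; fine for a manual pass
   done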

There are some strange errors in DATANODE2, such as an error copying a block
to itself.

The gc log points to a GC pause long enough to cause the timeout. However it is
very weird that the RS spends so much time in that one GC when the other
collections take around 0.001 sec. Besides, the time spent is in sys, which makes
me think the problem might be somewhere else.
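
Since the time is in sys, the usual suspects would be swapping or transparent huge
pages on the RS box rather than the JVM itself; the quick checks I have in mind are
just plain Linux commands (paths can vary a bit by kernel), e.g.:

   free -m                                           # is any swap actually in use?
   vmstat 5 5                                        # si/so columns should stay at 0
   cat /proc/sys/vm/swappiness                       # usually lowered (0-10) on HBase nodes
   cat /sys/kernel/mm/*transparent_hugepage/enabled  # THP is a known cause of sys-heavy pauses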

I know this is a bunch of logs, and that it is very difficult to find the
problem without more context, but I REALLY need some help. If not the full
solution, then at least what I should read, where I should look, or what I
should monitor.

Thank you very much,
Pablo Musa

Start RS at 16:49
####ZOOKEEPER-PSLBHNN001####
2013-03-07 16:49:04,369 [myid:] - INFO 
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - 
Accepted socket connection from /172.17.2.18:33549
2013-03-07 16:49:04,372 [myid:] - INFO 
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@839] - Client 
attempting to establish new session at /172.17.2.18:33549
2013-03-07 16:49:04,373 [myid:] - INFO 
[SyncThread:0:ZooKeeperServer@595] - Established session 
0x13d3c4bcba600a7 with negotiated timeout 150000 for client 
/172.17.2.18:33549
2013-03-07 16:49:04,458 [myid:] - INFO  [ProcessThread(sid:0 
cport:-1)::PrepRequestProcessor@627] - Got user-level KeeperException 
when processing sessionid:0x13d3c4bcba600a7 type:create cxid:0x8 
zxid:0x11010ade3a txntype:-1 reqpath:n/a Error 
Path:/hbase/online-snapshot/acquired Error:KeeperErrorCode = NodeExists 
for /hbase/online-snapshot/acquired
2013-03-07 16:49:09,036 [myid:] - INFO 
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - 
Accepted socket connection from /172.17.2.18:33675
2013-03-07 16:49:09,036 [myid:] - INFO 
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@839] - Client 
attempting to establish new session at /172.17.2.18:33675
2013-03-07 16:49:09,041 [myid:] - INFO 
[SyncThread:0:ZooKeeperServer@595] - Established session 
0x13d3c4bcba600a8 with negotiated timeout 150000 for client 
/172.17.2.18:33675
2013-03-07 17:24:50,001 [myid:] - INFO 
[SessionTracker:ZooKeeperServer@325] - Expiring session 
0x13d3c4bcba600a7, timeout of 150000ms exceeded
2013-03-07 17:24:50,001 [myid:] - INFO  [ProcessThread(sid:0 
cport:-1)::PrepRequestProcessor@476] - Processed session termination for 
sessionid: 0x13d3c4bcba600a7
2013-03-07 17:24:50,002 [myid:] - INFO [SyncThread:0:NIOServerCnxn@1001] 
- Closed socket connection for client /172.17.2.18:33549 which had 
sessionid 0x13d3c4bcba600a7
2013-03-07 17:24:57,889 [myid:] - WARN 
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@349] - caught 
end of stream exception
EndOfStreamException: Unable to read additional data from client 
sessionid 0x13d3c4bcba600a8, likely client has closed socket
         at 
org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
         at 
org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
         at java.lang.Thread.run(Thread.java:722)
2013-03-07 17:24:57,890 [myid:] - INFO 
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1001] - Closed 
socket connection for client /172.17.2.18:33675 which had sessionid 
0x13d3c4bcba600a8
2013-03-07 17:25:00,001 [myid:] - INFO 
[SessionTracker:ZooKeeperServer@325] - Expiring session 
0x13d3c4bcba600a8, timeout of 150000ms exceeded
2013-03-07 17:25:00,001 [myid:] - INFO  [ProcessThread(sid:0 
cport:-1)::PrepRequestProcessor@476] - Processed session termination for 
sessionid: 0x13d3c4bcba600a8
2013-03-07 17:26:03,211 [myid:] - INFO 
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - 
Accepted socket connection from /172.17.2.18:35983
2013-03-07 17:26:03,214 [myid:] - INFO 
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@839] - Client 
attempting to establish new session at /172.17.2.18:35983
2013-03-07 17:26:03,215 [myid:] - INFO 
[SyncThread:0:ZooKeeperServer@595] - Established session 
0x13d3c4bcba600a9 with negotiated timeout 150000 for client 
/172.17.2.18:35983

####MASTER-PSLBHNN001####
2013-03-07 16:49:00,379 DEBUG 
org.apache.hadoop.hbase.master.SplitLogManager: total tasks = 1 
unassigned = 1
2013-03-07 16:49:04,517 DEBUG 
org.apache.hadoop.hbase.master.ServerManager: STARTUP: Server 
PSLBHDN002,60020,1362685744154 came back up, removed it from the dead 
servers list
2013-03-07 16:49:04,517 INFO 
org.apache.hadoop.hbase.master.ServerManager: Registering 
server=PSLBHDN002,60020,1362685744154
2013-03-07 16:49:05,380 DEBUG 
org.apache.hadoop.hbase.master.SplitLogManager: total tasks = 1 
unassigned = 1
2013-03-07 16:49:05,483 INFO 
org.apache.hadoop.hbase.master.SplitLogManager: task 
/hbase/splitlog/hdfs%3A%2F%2Fpslbhnn001.psafe.net%2Fhbase%2F.logs%2FPSLBHDN002%2C60020%2C1362683643918-splitting%2FPSLBHDN002%252C60020%252C1362683643918.1362683644662 
acquired by PSLBHDN002,60020,1362685744154
2013-03-07 16:49:06,380 DEBUG 
org.apache.hadoop.hbase.master.SplitLogManager: total tasks = 1 
unassigned = 0
... regions assigned, no problems until...
2013-03-07 17:24:50,002 INFO 
org.apache.hadoop.hbase.zookeeper.RegionServerTracker: RegionServer 
ephemeral node deleted, processing expiration 
[PSLBHDN002,60020,1362685744154]
2013-03-07 17:24:50,003 DEBUG 
org.apache.hadoop.hbase.master.AssignmentManager: based on AM, current 
region=-ROOT-,,0.70236052 is on server=PSLBHDN002,60020,1362685744154 
server being checked: PSLBHDN002,60020,1362685744154
2013-03-07 17:24:50,004 DEBUG 
org.apache.hadoop.hbase.master.AssignmentManager: based on AM, current 
region=.META.,,1.1028785192 is on server=PSLBHDN002,60020,1362685744154 
server being checked: PSLBHDN002,60020,1362685744154
2013-03-07 17:24:50,004 DEBUG 
org.apache.hadoop.hbase.master.ServerManager: 
Added=PSLBHDN002,60020,1362685744154 to dead servers, submitted shutdown 
handler to be executed, root=true, meta=true
2013-03-07 17:24:50,004 INFO 
org.apache.hadoop.hbase.master.handler.ServerShutdownHandler: Splitting 
logs for PSLBHDN002,60020,1362685744154
2013-03-07 17:24:50,009 DEBUG 
org.apache.hadoop.hbase.master.MasterFileSystem: Renamed region 
directory: 
hdfs://pslbhnn001.psafe.net/hbase/.logs/PSLBHDN002,60020,1362685744154-splitting
2013-03-07 17:24:50,009 INFO 
org.apache.hadoop.hbase.master.SplitLogManager: dead splitlog workers 
[PSLBHDN002,60020,1362685744154]
2013-03-07 17:24:50,011 DEBUG 
org.apache.hadoop.hbase.master.SplitLogManager: Scheduling batch of logs 
to split
2013-03-07 17:24:50,011 INFO 
org.apache.hadoop.hbase.master.SplitLogManager: started splitting logs 
in 
[hdfs://pslbhnn001.psafe.net/hbase/.logs/PSLBHDN002,60020,1362685744154-splitting]
2013-03-07 17:24:50,012 DEBUG 
org.apache.hadoop.hbase.master.SplitLogManager: put up splitlog task at 
znode 
/hbase/splitlog/hdfs%3A%2F%2Fpslbhnn001.psafe.net%2Fhbase%2F.logs%2FPSLBHDN002%2C60020%2C1362685744154-splitting%2FPSLBHDN002%252C60020%252C1362685744154.1362685744953
2013-03-07 17:24:50,012 DEBUG 
org.apache.hadoop.hbase.master.SplitLogManager: task not yet acquired 
/hbase/splitlog/hdfs%3A%2F%2Fpslbhnn001.psafe.net%2Fhbase%2F.logs%2FPSLBHDN002%2C60020%2C1362685744154-splitting%2FPSLBHDN002%252C60020%252C1362685744154.1362685744953 
ver = 0
2013-03-07 17:24:50,681 DEBUG 
org.apache.hadoop.hbase.master.SplitLogManager: total tasks = 1 
unassigned = 1
2013-03-07 17:24:51,681 DEBUG 
org.apache.hadoop.hbase.master.SplitLogManager: total tasks = 1 
unassigned = 1
2013-03-07 17:24:52,681 DEBUG 
org.apache.hadoop.hbase.master.SplitLogManager: total tasks = 1 
unassigned = 1
2013-03-07 17:24:53,681 DEBUG 
org.apache.hadoop.hbase.master.SplitLogManager: total tasks = 1 
unassigned = 1
2013-03-07 17:24:54,681 DEBUG 
org.apache.hadoop.hbase.master.SplitLogManager: total tasks = 1 
unassigned = 1
2013-03-07 17:24:55,682 DEBUG 
org.apache.hadoop.hbase.master.SplitLogManager: total tasks = 1 
unassigned = 1
2013-03-07 17:24:56,682 DEBUG 
org.apache.hadoop.hbase.master.SplitLogManager: total tasks = 1 
unassigned = 1
2013-03-07 17:24:57,682 DEBUG 
org.apache.hadoop.hbase.master.SplitLogManager: total tasks = 1 
unassigned = 1
2013-03-07 17:24:57,918 ERROR org.apache.hadoop.hbase.master.HMaster: 
Region server ^@^@PSLBHDN002,60020,1362685744154 reported a fatal error:
ABORTING region server PSLBHDN002,60020,1362685744154: Unhandled 
exception: org.apache.hadoop.hbase.YouAreDeadException: Server REPORT 
rejected; currently processing PSLBHDN002,60020,1362685744154 as dead server
         at 
org.apache.hadoop.hbase.master.ServerManager.checkIsDead(ServerManager.java:242)
         at 
org.apache.hadoop.hbase.master.ServerManager.regionServerReport(ServerManager.java:160)
         at 
org.apache.hadoop.hbase.master.HMaster.regionServerReport(HMaster.java:966)
         at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
         at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
         at java.lang.reflect.Method.invoke(Method.java:601)
         at 
org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
         at 
org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1400)

Cause:
org.apache.hadoop.hbase.YouAreDeadException: 
org.apache.hadoop.hbase.YouAreDeadException: Server REPORT rejected; 
currently processing PSLBHDN002,60020,1362685744154 as dead server
         at 
org.apache.hadoop.hbase.master.ServerManager.checkIsDead(ServerManager.java:242)
         at 
org.apache.hadoop.hbase.master.ServerManager.regionServerReport(ServerManager.java:160)
         at 
org.apache.hadoop.hbase.master.HMaster.regionServerReport(HMaster.java:966)
         at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
         at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
         at java.lang.reflect.Method.invoke(Method.java:601)
         at 
org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
         at 
org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1400)

         at 
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
         at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
         at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
         at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
         at 
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
         at 
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:79)
         at 
org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:883)
         at 
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:741)
         at java.lang.Thread.run(Thread.java:722)
Caused by: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hbase.YouAreDeadException): 
org.apache.hadoop.hbase.YouAreDeadException: Server REPORT rejected; 
currently processing PSLBHDN002,60020,1362685744154 as dead server
         at 
org.apache.hadoop.hbase.master.ServerManager.checkIsDead(ServerManager.java:242)
         at 
org.apache.hadoop.hbase.master.ServerManager.regionServerReport(ServerManager.java:160)
         at 
org.apache.hadoop.hbase.master.HMaster.regionServerReport(HMaster.java:966)
         at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
         at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
         at java.lang.reflect.Method.invoke(Method.java:601)
         at 
org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
         at 
org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1400)

         at 
org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1021)
         at 
org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
         at $Proxy10.regionServerReport(Unknown Source)
         at 
org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:880)
         ... 2 more

####NAMENODE-PSLBHNN001####
2013-03-07 16:49:05,506 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease.  
Holder: DFSClient_NONMAPREDUCE_1668790704_25, pendingcreates: 1], 
src=/hbase/.logs/PSLBHDN002,60020,1362683643918-splitting/PSLBHDN002%2C60020%2C1362683643918.1362683644662
2013-03-07 16:49:05,506 INFO BlockStateChange: BLOCK* 
blk_-8211539468888583598_25099031{blockUCState=UNDER_RECOVERY, 
primaryNodeIndex=0, 
replicas=[ReplicaUnderConstruction[172.17.2.32:50010|RBW], 
ReplicaUnderConstruction[172.17.2.30:50010|RBW], 
ReplicaUnderConstruction[172.17.2.31:50010|RBW]]} recovery started, 
primary=172.17.2.32:50010
2013-03-07 16:49:05,506 WARN org.apache.hadoop.hdfs.StateChange: DIR* 
NameSystem.internalReleaseLease: File 
/hbase/.logs/PSLBHDN002,60020,1362683643918-splitting/PSLBHDN002%2C60020%2C1362683643918.1362683644662 
has not been closed. Lease recovery is in progress. RecoveryId = 
25099081 for block 
blk_-8211539468888583598_25099031{blockUCState=UNDER_RECOVERY, 
primaryNodeIndex=0, 
replicas=[ReplicaUnderConstruction[172.17.2.32:50010|RBW], 
ReplicaUnderConstruction[172.17.2.30:50010|RBW], 
ReplicaUnderConstruction[172.17.2.31:50010|RBW]]}
2013-03-07 16:49:06,853 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
allocateBlock: 
/hbase/.META./1028785192/recovered.edits/0000000000263914400.temp. 
BP-43236042-172.17.2.10-1362490844340 
blk_7156899642437814505_25099084{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[172.17.2.18:50010|RBW], 
ReplicaUnderConstruction[172.17.2.32:50010|RBW], 
ReplicaUnderConstruction[172.17.2.14:50010|RBW]]}
2013-03-07 16:49:07,018 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
allocateBlock: 
/hbase/TMeta/8326651d47600abd9058305966559ef7/recovered.edits/0000000000263916443.temp. 
BP-43236042-172.17.2.10-1362490844340 
blk_336947258563431268_25099090{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[172.17.2.18:50010|RBW], 
ReplicaUnderConstruction[172.17.2.33:50010|RBW], 
ReplicaUnderConstruction[172.17.2.32:50010|RBW]]}
2013-03-07 16:49:07,402 INFO BlockStateChange: BLOCK* addStoredBlock: 
blockMap updated: 172.17.2.32:50010 is added to 
blk_-8211539468888583598_25099031{blockUCState=UNDER_RECOVERY, 
primaryNodeIndex=0, 
replicas=[ReplicaUnderConstruction[172.17.2.32:50010|RBW], 
ReplicaUnderConstruction[172.17.2.30:50010|RBW], 
ReplicaUnderConstruction[172.17.2.31:50010|RBW]]} size 1218634
2013-03-07 16:49:07,404 INFO BlockStateChange: BLOCK* addStoredBlock: 
blockMap updated: 172.17.2.30:50010 is added to 
blk_-8211539468888583598_25099031{blockUCState=UNDER_RECOVERY, 
primaryNodeIndex=0, 
replicas=[ReplicaUnderConstruction[172.17.2.32:50010|RBW], 
ReplicaUnderConstruction[172.17.2.30:50010|RBW], 
ReplicaUnderConstruction[172.17.2.31:50010|RBW]]} size 1218634
2013-03-07 16:49:07,406 INFO BlockStateChange: BLOCK* addStoredBlock: 
blockMap updated: 172.17.2.31:50010 is added to 
blk_-8211539468888583598_25099031{blockUCState=UNDER_RECOVERY, 
primaryNodeIndex=0, 
replicas=[ReplicaUnderConstruction[172.17.2.32:50010|RBW], 
ReplicaUnderConstruction[172.17.2.30:50010|RBW], 
ReplicaUnderConstruction[172.17.2.31:50010|RBW]]} size 1218634
2013-03-07 16:49:07,406 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: 
commitBlockSynchronization(lastblock=BP-43236042-172.17.2.10-1362490844340:blk_-8211539468888583598_25099031, 
newgenerationstamp=25099081, newlength=1480669, 
newtargets=[172.17.2.32:50010, 172.17.2.30:50010, 172.17.2.31:50010], 
closeFile=true, deleteBlock=false)
2013-03-07 16:49:07,407 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: 
commitBlockSynchronization(newblock=BP-43236042-172.17.2.10-1362490844340:blk_-8211539468888583598_25099031, 
file=/hbase/.logs/PSLBHDN002,60020,1362683643918-splitting/PSLBHDN002%2C60020%2C1362683643918.1362683644662, 
newgenerationstamp=25099081, newlength=1480669, 
newtargets=[172.17.2.32:50010, 172.17.2.30:50010, 172.17.2.31:50010]) 
successful
2013-03-07 16:49:07,453 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
allocateBlock: 
/hbase/Meta/fa0771a96bc960c60fa2dcce8d968893/recovered.edits/0000000000263916447.temp. 
BP-43236042-172.17.2.10-1362490844340 
blk_8884451173425697631_25099130{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[172.17.2.18:50010|RBW], 
ReplicaUnderConstruction[172.17.2.33:50010|RBW], 
ReplicaUnderConstruction[172.17.2.30:50010|RBW]]}
2013-03-07 16:49:07,469 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
allocateBlock: 
/hbase/Plan/e9837362b3a39f2482234d30cad8a459/recovered.edits/0000000000263916393.temp. 
BP-43236042-172.17.2.10-1362490844340 
blk_3730904837329292545_25099132{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[172.17.2.18:50010|RBW], 
ReplicaUnderConstruction[172.17.2.31:50010|RBW], 
ReplicaUnderConstruction[172.17.2.30:50010|RBW]]}
2013-03-07 16:49:07,554 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
allocateBlock: 
/hbase/Icon/005c80c9afed76369027a61a09c67f7c/recovered.edits/0000000000263916916.temp. 
BP-43236042-172.17.2.10-1362490844340 
blk_-8067763750327121256_25099140{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[172.17.2.18:50010|RBW], 
ReplicaUnderConstruction[172.17.2.34:50010|RBW], 
ReplicaUnderConstruction[172.17.2.35:50010|RBW]]}
2013-03-07 16:49:07,564 INFO BlockStateChange: BLOCK* addStoredBlock: 
blockMap updated: 172.17.2.35:50010 is added to 
blk_-8067763750327121256_25099140{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[172.17.2.18:50010|RBW], 
ReplicaUnderConstruction[172.17.2.34:50010|RBW], 
ReplicaUnderConstruction[172.17.2.35:50010|RBW]]} size 0
2013-03-07 16:49:07,564 INFO BlockStateChange: BLOCK* addStoredBlock: 
blockMap updated: 172.17.2.34:50010 is added to 
blk_-8067763750327121256_25099140{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[172.17.2.18:50010|RBW], 
ReplicaUnderConstruction[172.17.2.34:50010|RBW], 
ReplicaUnderConstruction[172.17.2.35:50010|RBW]]} size 0
2013-03-07 16:49:07,565 INFO BlockStateChange: BLOCK* addStoredBlock: 
blockMap updated: 172.17.2.18:50010 is added to 
blk_-8067763750327121256_25099140{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[172.17.2.18:50010|RBW], 
ReplicaUnderConstruction[172.17.2.34:50010|RBW], 
ReplicaUnderConstruction[172.17.2.35:50010|RBW]]} size 0
2013-03-07 16:49:07,567 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: 
/hbase/Icon/005c80c9afed76369027a61a09c67f7c/recovered.edits/0000000000263916916.temp 
is closed by DFSClient_NONMAPREDUCE_-1491694821_78
2013-03-07 16:49:07,577 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
allocateBlock: 
/hbase/Meta/01c75cda10ea063f900a3ea935b29119/recovered.edits/0000000000263916474.temp. 
BP-43236042-172.17.2.10-1362490844340 
blk_-8466118669853895314_25099141{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[172.17.2.18:50010|RBW], 
ReplicaUnderConstruction[172.17.2.30:50010|RBW], 
ReplicaUnderConstruction[172.17.2.31:50010|RBW]]}
2013-03-07 16:49:07,587 INFO BlockStateChange: BLOCK* addStoredBlock: 
blockMap updated: 172.17.2.31:50010 is added to 
blk_-8466118669853895314_25099141{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[172.17.2.18:50010|RBW], 
ReplicaUnderConstruction[172.17.2.30:50010|RBW], 
ReplicaUnderConstruction[172.17.2.31:50010|RBW]]} size 0
2013-03-07 16:49:07,587 INFO BlockStateChange: BLOCK* addStoredBlock: 
blockMap updated: 172.17.2.30:50010 is added to 
blk_-8466118669853895314_25099141{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[172.17.2.18:50010|RBW], 
ReplicaUnderConstruction[172.17.2.30:50010|RBW], 
ReplicaUnderConstruction[172.17.2.31:50010|RBW]]} size 0
2013-03-07 16:49:07,588 INFO BlockStateChange: BLOCK* addStoredBlock: 
blockMap updated: 172.17.2.18:50010 is added to 
blk_-8466118669853895314_25099141{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[172.17.2.18:50010|RBW], 
ReplicaUnderConstruction[172.17.2.30:50010|RBW], 
ReplicaUnderConstruction[172.17.2.31:50010|RBW]]} size 0
2013-03-07 16:49:07,589 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: 
/hbase/Meta/01c75cda10ea063f900a3ea935b29119/recovered.edits/0000000000263916474.temp 
is closed by DFSClient_NONMAPREDUCE_1792119388_78
2013-03-07 16:49:07,595 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
allocateBlock: 
/hbase/Icon/07325dc7da9ab82e427278b5e6dc0d76/recovered.edits/0000000000263916527.temp. 
BP-43236042-172.17.2.10-1362490844340 
blk_4901477316103379126_25099142{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[172.17.2.18:50010|RBW], 
ReplicaUnderConstruction[172.17.2.31:50010|RBW], 
ReplicaUnderConstruction[172.17.2.33:50010|RBW]]}
2013-03-07 16:49:07,603 INFO BlockStateChange: BLOCK* addStoredBlock: 
blockMap updated: 172.17.2.33:50010 is added to 
blk_4901477316103379126_25099142{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[172.17.2.18:50010|RBW], 
ReplicaUnderConstruction[172.17.2.31:50010|RBW], 
ReplicaUnderConstruction[172.17.2.33:50010|RBW]]} size 0
2013-03-07 16:49:07,604 INFO BlockStateChange: BLOCK* addStoredBlock: 
blockMap updated: 172.17.2.31:50010 is added to 
blk_4901477316103379126_25099142{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[172.17.2.18:50010|RBW], 
ReplicaUnderConstruction[172.17.2.31:50010|RBW], 
ReplicaUnderConstruction[172.17.2.33:50010|RBW]]} size 0
2013-03-07 16:49:07,604 INFO BlockStateChange: BLOCK* addStoredBlock: 
blockMap updated: 172.17.2.18:50010 is added to 
blk_4901477316103379126_25099142{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[172.17.2.18:50010|RBW], 
ReplicaUnderConstruction[172.17.2.31:50010|RBW], 
ReplicaUnderConstruction[172.17.2.33:50010|RBW]]} size 0
2013-03-07 16:49:07,606 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: 
/hbase/Icon/07325dc7da9ab82e427278b5e6dc0d76/recovered.edits/0000000000263916527.temp 
is closed by DFSClient_NONMAPREDUCE_985447659_78
2013-03-07 16:49:07,612 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
allocateBlock: 
/hbase/Icon/074356c2f3e47394967b365e2a347bf2/recovered.edits/0000000000263916820.temp. 
BP-43236042-172.17.2.10-1362490844340 
blk_3839186567268918505_25099143{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[172.17.2.18:50010|RBW], 
ReplicaUnderConstruction[172.17.2.30:50010|RBW], 
ReplicaUnderConstruction[172.17.2.31:50010|RBW]]}
2013-03-07 16:49:07,620 INFO BlockStateChange: BLOCK* addStoredBlock: 
blockMap updated: 172.17.2.31:50010 is added to 
blk_3839186567268918505_25099143{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[172.17.2.18:50010|RBW], 
ReplicaUnderConstruction[172.17.2.30:50010|RBW], 
ReplicaUnderConstruction[172.17.2.31:50010|RBW]]} size 0
2013-03-07 16:49:07,621 INFO BlockStateChange: BLOCK* addStoredBlock: 
blockMap updated: 172.17.2.30:50010 is added to 
blk_3839186567268918505_25099143{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[172.17.2.18:50010|RBW], 
ReplicaUnderConstruction[172.17.2.30:50010|RBW], 
ReplicaUnderConstruction[172.17.2.31:50010|RBW]]} size 0

... 260 lines later (all seems ok)

2013-03-07 16:49:09,071 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
allocateBlock: 
/hbase/.logs/PSLBHDN002,60020,1362685744154/PSLBHDN002%2C60020%2C1362685744154.1362685744953. 
BP-43236042-172.17.2.10-1362490844340 
blk_-242393173458437587_25099194{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[172.17.2.18:50010|RBW], 
ReplicaUnderConstruction[172.17.2.31:50010|RBW], 
ReplicaUnderConstruction[172.17.2.30:50010|RBW]]}
2013-03-07 16:49:09,079 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
fsync: 
/hbase/.logs/PSLBHDN002,60020,1362685744154/PSLBHDN002%2C60020%2C1362685744154.1362685744953 
for DFSClient_NONMAPREDUCE_-420681946_25

... 687 lines later (all seems ok)

2013-03-07 17:22:13,716 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /hdfs/3f551d97f19975dcd1dc49c83bf0ace1 is closed by 
DFSClient_NONMAPREDUCE_1581590909_11
2013-03-07 17:24:50,008 INFO 
org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of 
transactions: 170274 Total time for transactions(ms): 2842Number of 
transactions batched in Syncs: 599 Number of syncs: 73443 SyncTimes(ms): 
36787 33553
2013-03-07 17:24:57,896 ERROR 
org.apache.hadoop.security.UserGroupInformation: 
PriviledgedActionException as:hbase (auth:SIMPLE) 
cause:org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No 
lease on 
/hbase/.logs/PSLBHDN002,60020,1362685744154/PSLBHDN002%2C60020%2C1362685744154.1362685744953 
File does not exist. [Lease.  Holder: 
DFSClient_NONMAPREDUCE_-420681946_25, pendingcreates: 1]
2013-03-07 17:24:57,896 INFO org.apache.hadoop.ipc.Server: IPC Server 
handler 31 on 8020, call 
org.apache.hadoop.hdfs.protocol.ClientProtocol.getAdditionalDatanode 
from 172.17.2.18:33979: error: 
org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease 
on 
/hbase/.logs/PSLBHDN002,60020,1362685744154/PSLBHDN002%2C60020%2C1362685744154.1362685744953 
File does not exist. [Lease.  Holder: 
DFSClient_NONMAPREDUCE_-420681946_25, pendingcreates: 1]
org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease 
on 
/hbase/.logs/PSLBHDN002,60020,1362685744154/PSLBHDN002%2C60020%2C1362685744154.1362685744953 
File does not exist. [Lease.  Holder: 
DFSClient_NONMAPREDUCE_-420681946_25, pendingcreates: 1]
         at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2396)
         at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2387)
         at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:2319)
         at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:511)
         at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolServerSideTranslatorPB.java:316)
         at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44082)
         at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
         at java.security.AccessController.doPrivileged(Native Method)
         at javax.security.auth.Subject.doAs(Subject.java:415)
         at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)

... RS restart "two" minutes later

2013-03-07 17:26:03,756 INFO 
org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of 
transactions: 170275 Total time for transactions(ms): 2842Number of 
transactions batched in Syncs: 599 Number of syncs: 73444 SyncTimes(ms): 
36787 33554
2013-03-07 17:26:04,304 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: recoverLease: 
[Lease.  Holder: DFSClient_NONMAPREDUCE_-420681946_25, pendingcreates: 
1], 
src=/hbase/.logs/PSLBHDN002,60020,1362685744154-splitting/PSLBHDN002%2C60020%2C1362685744154.1362685744953 
from client DFSClient_NONMAPREDUCE_-420681946_25
2013-03-07 17:26:04,304 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease.  
Holder: DFSClient_NONMAPREDUCE_-420681946_25, pendingcreates: 1], 
src=/hbase/.logs/PSLBHDN002,60020,1362685744154-splitting/PSLBHDN002%2C60020%2C1362685744154.1362685744953

####REGIONSERVER-PSLBHDN002####
Thu Mar  7 16:49:02 BRT 2013 Starting regionserver on PSLBHDN002
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 579434
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 32768
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1024
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
2013-03-07 16:49:03,499 INFO org.apache.hadoop.hbase.util.VersionInfo: 
HBase 0.94.2-cdh4.2.0
2013-03-07 16:49:03,500 INFO org.apache.hadoop.hbase.util.VersionInfo: 
Subversion 
file:///data/1/jenkins/workspace/generic-package-rhel64-6-0/topdir/BUILD/hbase-0.94.2-cdh4.2.0 
-r Unknown
2013-03-07 16:49:03,500 INFO org.apache.hadoop.hbase.util.VersionInfo: 
Compiled by jenkins on Fri Feb 15 11:51:18 PST 2013
2013-03-07 16:49:03,762 INFO 
org.apache.hadoop.hbase.util.ServerCommandLine: vmName=Java HotSpot(TM) 
64-Bit Server VM, vmVendor=Oracle Corporation, vmVersion=21.0-b17
2013-03-07 16:49:03,763 INFO 
org.apache.hadoop.hbase.util.ServerCommandLine: 
vmInputArguments=[-XX:OnOutOfMemoryError=kill -9 %p, -Xmx8192m, 
-XX:NewSize=64m, -XX:MaxNewSize=64m, -ea, -XX:+UseConcMarkSweepGC, 
-verbose:gc, -XX:+PrintGCDetails, -XX:+PrintGCTimeStamps, 
-Xloggc:/var/logs/hbase/gc-hbase.log, -Dhbase.log.dir=/var/log/hbase, 
-Dhbase.log.file=hbase-hbase-regionserver-PSLBHDN002.log, 
-Dhbase.home.dir=/usr/lib/hbase/bin/.., -Dhbase.id.str=hbase, 
-Dhbase.root.logger=INFO,RFA, 
-Djava.library.path=/usr/lib/hadoop/lib/native:/usr/lib/hbase/bin/../lib/native/Linux-amd64-64, 
-Dhbase.security.logger=INFO,DRFAS]
2013-03-07 16:49:03,843 DEBUG 
org.apache.hadoop.hbase.regionserver.HRegionServer: Set serverside 
HConnection retries=100
2013-03-07 16:49:04,073 INFO org.apache.hadoop.ipc.HBaseServer: Starting 
Thread-1
2013-03-07 16:49:04,075 INFO org.apache.hadoop.ipc.HBaseServer: Starting 
Thread-1
2013-03-07 16:49:04,076 INFO org.apache.hadoop.ipc.HBaseServer: Starting 
Thread-1
2013-03-07 16:49:04,077 INFO org.apache.hadoop.ipc.HBaseServer: Starting 
Thread-1
2013-03-07 16:49:04,078 INFO org.apache.hadoop.ipc.HBaseServer: Starting 
Thread-1
2013-03-07 16:49:04,079 INFO org.apache.hadoop.ipc.HBaseServer: Starting 
Thread-1
2013-03-07 16:49:04,081 INFO org.apache.hadoop.ipc.HBaseServer: Starting 
Thread-1
2013-03-07 16:49:04,081 INFO org.apache.hadoop.ipc.HBaseServer: Starting 
Thread-1
2013-03-07 16:49:04,082 INFO org.apache.hadoop.ipc.HBaseServer: Starting 
Thread-1
2013-03-07 16:49:04,082 INFO org.apache.hadoop.ipc.HBaseServer: Starting 
Thread-1
2013-03-07 16:49:04,096 INFO 
org.apache.hadoop.hbase.ipc.HBaseRpcMetrics: Initializing RPC Metrics 
with hostName=HRegionServer, port=60020
2013-03-07 16:49:04,323 INFO 
org.apache.hadoop.hbase.io.hfile.CacheConfig: Allocating LruBlockCache 
with maximum size 2.0g
2013-03-07 16:49:04,345 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:zookeeper.version=3.4.5-cdh4.2.0--1, built on 02/15/2013 
18:36 GMT
2013-03-07 16:49:04,345 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:host.name=PSLBHDN002
2013-03-07 16:49:04,345 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:java.version=1.7.0
2013-03-07 16:49:04,345 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:java.vendor=Oracle Corporation
2013-03-07 16:49:04,345 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:java.home=/usr/java/jdk1.7.0/jre
2013-03-07 16:49:04,345 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:java.library.path=/usr/lib/hadoop/lib/native:/usr/lib/hbase/bin/../lib/native/Linux-amd64-64
2013-03-07 16:49:04,345 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:java.io.tmpdir=/tmp
2013-03-07 16:49:04,345 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:java.compiler=<NA>
2013-03-07 16:49:04,345 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:os.name=Linux
2013-03-07 16:49:04,345 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:os.arch=amd64
2013-03-07 16:49:04,345 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:os.version=2.6.32-220.7.1.el6.x86_64
2013-03-07 16:49:04,346 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:user.name=hbase
2013-03-07 16:49:04,346 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:user.home=/var/run/hbase
2013-03-07 16:49:04,346 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:user.dir=/
2013-03-07 16:49:04,346 INFO org.apache.zookeeper.ZooKeeper: Initiating 
client connection, connectString=PSLBHNN001:2181 sessionTimeout=180000 
watcher=regionserver:60020
2013-03-07 16:49:04,360 INFO 
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier 
of this process is 20060@PSLBHDN002
2013-03-07 16:49:04,365 INFO org.apache.zookeeper.ClientCnxn: Opening 
socket connection to server PSLBHNN001/172.17.2.10:2181. Will not 
attempt to authenticate using SASL (unknown error)
2013-03-07 16:49:04,368 INFO org.apache.zookeeper.ClientCnxn: Socket 
connection established to PSLBHNN001/172.17.2.10:2181, initiating session
2013-03-07 16:49:04,374 INFO org.apache.zookeeper.ClientCnxn: Session 
establishment complete on server PSLBHNN001/172.17.2.10:2181, sessionid 
= 0x13d3c4bcba600a7, negotiated timeout = 150000
2013-03-07 16:49:04,444 DEBUG 
org.apache.hadoop.hbase.catalog.CatalogTracker: Starting catalog tracker 
org.apache.hadoop.hbase.catalog.CatalogTracker@62b970cc
2013-03-07 16:49:04,463 INFO 
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node 
/hbase/online-snapshot/acquired already exists and this is not a retry
2013-03-07 16:49:04,471 INFO 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher: 
globalMemStoreLimit=3.2g, globalMemStoreLimitLowMark=2.8g, maxHeap=8.0g
2013-03-07 16:49:04,474 INFO 
org.apache.hadoop.hbase.regionserver.HRegionServer: Runs every 2hrs, 
46mins, 40sec
2013-03-07 16:49:04,479 INFO 
org.apache.hadoop.hbase.regionserver.HRegionServer: Attempting connect 
to Master server at PSLBHNN001,60000,1362581895355
2013-03-07 16:49:04,514 INFO 
org.apache.hadoop.hbase.regionserver.HRegionServer: Connected to master 
at PSLBHNN001/172.17.2.10:60000
2013-03-07 16:49:04,514 INFO 
org.apache.hadoop.hbase.regionserver.HRegionServer: Telling master at 
PSLBHNN001,60000,1362581895355 that we are up with port=60020, 
startcode=1362685744154
2013-03-07 16:49:04,550 INFO 
org.apache.hadoop.hbase.regionserver.HRegionServer: Master passed us 
hostname to use. Was=PSLBHDN002, Now=PSLBHDN002
2013-03-07 16:49:04,550 DEBUG 
org.apache.hadoop.hbase.regionserver.HRegionServer: Config from master: 
fs.default.name=hdfs://pslbhnn001.psafe.net
2013-03-07 16:49:04,550 WARN org.apache.hadoop.conf.Configuration: 
fs.default.name is deprecated. Instead, use fs.defaultFS
2013-03-07 16:49:04,551 DEBUG 
org.apache.hadoop.hbase.regionserver.HRegionServer: Config from master: 
hbase.rootdir=hdfs://pslbhnn001.psafe.net/hbase
2013-03-07 16:49:04,832 INFO 
org.apache.hadoop.hbase.regionserver.ShutdownHook: Installed shutdown 
hook thread: Shutdownhook:regionserver60020
2013-03-07 16:49:04,836 DEBUG 
org.apache.hadoop.hbase.regionserver.HRegionServer: 
logdir=hdfs://pslbhnn001.psafe.net/hbase/.logs/PSLBHDN002,60020,1362685744154
2013-03-07 16:49:04,952 INFO 
org.apache.hadoop.hbase.regionserver.wal.HLog: HLog configuration: 
blocksize=64 MB, rollsize=60.8 MB, enabled=true, 
optionallogflushinternal=1000ms
2013-03-07 16:49:05,030 INFO 
org.apache.hadoop.io.compress.zlib.ZlibFactory: Successfully loaded & 
initialized native-zlib library
2013-03-07 16:49:05,032 INFO org.apache.hadoop.io.compress.CodecPool: 
Got brand-new compressor [.deflate]
2013-03-07 16:49:05,032 DEBUG 
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter: using 
new createWriter -- HADOOP-6840
2013-03-07 16:49:05,033 DEBUG 
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter: 
Path=hdfs://pslbhnn001.psafe.net/hbase/.logs/PSLBHDN002,60020,1362685744154/PSLBHDN002%2C60020%2C1362685744154.1362685744953, 
syncFs=true, hflush=true, compression=false
2013-03-07 16:49:05,035 INFO 
org.apache.hadoop.hbase.regionserver.wal.HLog:  for 
/hbase/.logs/PSLBHDN002,60020,1362685744154/PSLBHDN002%2C60020%2C1362685744154.1362685744953
2013-03-07 16:49:05,036 INFO 
org.apache.hadoop.hbase.regionserver.wal.HLog: Using 
getNumCurrentReplicas--HDFS-826
2013-03-07 16:49:05,045 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: 
Initializing JVM Metrics with processName=RegionServer, 
sessionId=regionserver60020
2013-03-07 16:49:05,051 INFO org.apache.hadoop.hbase.metrics: 
MetricsString added: revision
2013-03-07 16:49:05,051 INFO org.apache.hadoop.hbase.metrics: 
MetricsString added: hdfsUser
2013-03-07 16:49:05,051 INFO org.apache.hadoop.hbase.metrics: 
MetricsString added: hdfsDate
2013-03-07 16:49:05,051 INFO org.apache.hadoop.hbase.metrics: 
MetricsString added: hdfsUrl
2013-03-07 16:49:05,051 INFO org.apache.hadoop.hbase.metrics: 
MetricsString added: date
2013-03-07 16:49:05,051 INFO org.apache.hadoop.hbase.metrics: 
MetricsString added: hdfsRevision
2013-03-07 16:49:05,051 INFO org.apache.hadoop.hbase.metrics: 
MetricsString added: user
2013-03-07 16:49:05,051 INFO org.apache.hadoop.hbase.metrics: 
MetricsString added: hdfsVersion
2013-03-07 16:49:05,051 INFO org.apache.hadoop.hbase.metrics: 
MetricsString added: url
2013-03-07 16:49:05,051 INFO org.apache.hadoop.hbase.metrics: 
MetricsString added: version
2013-03-07 16:49:05,051 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo
2013-03-07 16:49:05,053 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo
2013-03-07 16:49:05,053 INFO 
org.apache.hadoop.hbase.regionserver.metrics.RegionServerMetrics: 
Initialized
2013-03-07 16:49:05,057 DEBUG 
org.apache.hadoop.hbase.executor.ExecutorService: Starting executor 
service name=RS_OPEN_REGION-PSLBHDN002,60020,1362685744154, 
corePoolSize=3, maxPoolSize=3
2013-03-07 16:49:05,057 DEBUG 
org.apache.hadoop.hbase.executor.ExecutorService: Starting executor 
service name=RS_OPEN_ROOT-PSLBHDN002,60020,1362685744154, 
corePoolSize=1, maxPoolSize=1
2013-03-07 16:49:05,058 DEBUG 
org.apache.hadoop.hbase.executor.ExecutorService: Starting executor 
service name=RS_OPEN_META-PSLBHDN002,60020,1362685744154, 
corePoolSize=1, maxPoolSize=1
2013-03-07 16:49:05,058 DEBUG 
org.apache.hadoop.hbase.executor.ExecutorService: Starting executor 
service name=RS_CLOSE_REGION-PSLBHDN002,60020,1362685744154, 
corePoolSize=3, maxPoolSize=3
2013-03-07 16:49:05,058 DEBUG 
org.apache.hadoop.hbase.executor.ExecutorService: Starting executor 
service name=RS_CLOSE_ROOT-PSLBHDN002,60020,1362685744154, 
corePoolSize=1, maxPoolSize=1
2013-03-07 16:49:05,058 DEBUG 
org.apache.hadoop.hbase.executor.ExecutorService: Starting executor 
service name=RS_CLOSE_META-PSLBHDN002,60020,1362685744154, 
corePoolSize=1, maxPoolSize=1
2013-03-07 16:49:05,099 INFO org.mortbay.log: Logging to 
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via 
org.mortbay.log.Slf4jLog
2013-03-07 16:49:05,147 INFO org.apache.hadoop.http.HttpServer: Added 
global filter 'safety' 
(class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2013-03-07 16:49:05,149 INFO org.apache.hadoop.http.HttpServer: Added 
filter static_user_filter 
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) 
to context regionserver
2013-03-07 16:49:05,149 INFO org.apache.hadoop.http.HttpServer: Added 
filter static_user_filter 
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) 
to context static
2013-03-07 16:49:05,157 INFO org.apache.hadoop.http.HttpServer: Jetty 
bound to port 60030
2013-03-07 16:49:05,157 INFO org.mortbay.log: jetty-6.1.26.cloudera.2
2013-03-07 16:49:05,448 INFO org.mortbay.log: Started 
SelectChannelConnector@0.0.0.0:60030
2013-03-07 16:49:05,449 INFO org.apache.hadoop.ipc.HBaseServer: IPC 
Server Responder: starting
2013-03-07 16:49:05,449 INFO org.apache.hadoop.ipc.HBaseServer: IPC 
Server listener on 60020: starting
2013-03-07 16:49:05,459 INFO org.apache.hadoop.ipc.HBaseServer: IPC 
Server handler 0 on 60020: starting
2013-03-07 16:49:05,459 INFO org.apache.hadoop.ipc.HBaseServer: IPC 
Server handler 1 on 60020: starting
2013-03-07 16:49:05,460 INFO org.apache.hadoop.ipc.HBaseServer: IPC 
Server handler 2 on 60020: starting
2013-03-07 16:49:05,460 INFO org.apache.hadoop.ipc.HBaseServer: IPC 
Server handler 3 on 60020: starting
2013-03-07 16:49:05,460 INFO org.apache.hadoop.ipc.HBaseServer: IPC 
Server handler 4 on 60020: starting
2013-03-07 16:49:05,460 INFO org.apache.hadoop.ipc.HBaseServer: IPC 
Server handler 5 on 60020: starting
2013-03-07 16:49:05,460 INFO org.apache.hadoop.ipc.HBaseServer: IPC 
Server handler 6 on 60020: starting
2013-03-07 16:49:05,461 INFO org.apache.hadoop.ipc.HBaseServer: IPC 
Server handler 7 on 60020: starting
2013-03-07 16:49:05,461 INFO org.apache.hadoop.ipc.HBaseServer: IPC 
Server handler 8 on 60020: starting
2013-03-07 16:49:05,461 INFO org.apache.hadoop.ipc.HBaseServer: IPC 
Server handler 9 on 60020: starting
2013-03-07 16:49:05,461 INFO org.apache.hadoop.ipc.HBaseServer: IPC 
Server handler 10 on 60020: starting
2013-03-07 16:49:05,461 INFO org.apache.hadoop.ipc.HBaseServer: IPC 
Server handler 11 on 60020: starting
2013-03-07 16:49:05,462 INFO org.apache.hadoop.ipc.HBaseServer: IPC 
Server handler 12 on 60020: starting
2013-03-07 16:49:05,462 INFO org.apache.hadoop.ipc.HBaseServer: IPC 
Server handler 13 on 60020: starting
2013-03-07 16:49:05,462 INFO org.apache.hadoop.ipc.HBaseServer: IPC 
Server handler 14 on 60020: starting
2013-03-07 16:49:05,463 INFO org.apache.hadoop.ipc.HBaseServer: IPC 
Server handler 15 on 60020: starting
2013-03-07 16:49:05,463 INFO org.apache.hadoop.ipc.HBaseServer: IPC 
Server handler 16 on 60020: starting
2013-03-07 16:49:05,464 INFO org.apache.hadoop.ipc.HBaseServer: IPC 
Server handler 17 on 60020: starting
2013-03-07 16:49:05,464 INFO org.apache.hadoop.ipc.HBaseServer: IPC 
Server handler 18 on 60020: starting
2013-03-07 16:49:05,464 INFO org.apache.hadoop.ipc.HBaseServer: IPC 
Server handler 19 on 60020: starting
2013-03-07 16:49:05,464 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC 
Server handler 0 on 60020: starting
2013-03-07 16:49:05,464 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC 
Server handler 1 on 60020: starting
2013-03-07 16:49:05,464 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC 
Server handler 2 on 60020: starting
2013-03-07 16:49:05,469 INFO 
org.apache.hadoop.hbase.regionserver.HRegionServer: Serving as 
PSLBHDN002,60020,1362685744154, RPC listening on 
PSLBHDN002/172.17.2.18:60020, sessionid=0x13d3c4bcba600a7
2013-03-07 16:49:05,470 INFO 
org.apache.hadoop.hbase.regionserver.SplitLogWorker: SplitLogWorker 
PSLBHDN002,60020,1362685744154 starting
2013-03-07 16:49:05,471 INFO 
org.apache.hadoop.hbase.regionserver.HRegionServer: Registered 
RegionServer MXBean
2013-03-07 16:49:05,471 DEBUG 
org.apache.hadoop.hbase.procedure.ZKProcedureMemberRpcs: Starting 
procedure member 'PSLBHDN002,60020,1362685744154'
2013-03-07 16:49:05,471 DEBUG 
org.apache.hadoop.hbase.procedure.ZKProcedureMemberRpcs: Starting the 
procedure member
2013-03-07 16:49:05,471 DEBUG 
org.apache.hadoop.hbase.procedure.ZKProcedureMemberRpcs: Checking for 
aborted procedures on node: '/hbase/online-snapshot/abort'
2013-03-07 16:49:05,473 DEBUG 
org.apache.hadoop.hbase.procedure.ZKProcedureMemberRpcs: Looking for new 
procedures under znode: '/hbase/online-snapshot/acquired'
2013-03-07 16:49:05,482 INFO 
org.apache.hadoop.hbase.regionserver.SplitLogWorker: worker 
PSLBHDN002,60020,1362685744154 acquired task 
/hbase/splitlog/hdfs%3A%2F%2Fpslbhnn001.psafe.net%2Fhbase%2F.logs%2FPSLBHDN002%2C60020%2C1362683643918-splitting%2FPSLBHDN002%252C60020%252C1362683643918.1362683644662
2013-03-07 16:49:05,503 INFO 
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Splitting hlog: 
hdfs://pslbhnn001.psafe.net/hbase/.logs/PSLBHDN002,60020,1362683643918-splitting/PSLBHDN002%2C60020%2C1362683643918.1362683644662, 
length=1218634
2013-03-07 16:49:05,503 INFO org.apache.hadoop.hbase.util.FSHDFSUtils: 
Recovering file 
hdfs://pslbhnn001.psafe.net/hbase/.logs/PSLBHDN002,60020,1362683643918-splitting/PSLBHDN002%2C60020%2C1362683643918.1362683644662
2013-03-07 16:49:06,508 INFO org.apache.hadoop.hbase.util.FSHDFSUtils: 
Finished lease recover attempt for 
hdfs://pslbhnn001.psafe.net/hbase/.logs/PSLBHDN002,60020,1362683643918-splitting/PSLBHDN002%2C60020%2C1362683643918.1362683644662
2013-03-07 16:49:06,707 WARN org.apache.hadoop.conf.Configuration: 
hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2013-03-07 16:49:06,812 INFO org.apache.hadoop.io.compress.CodecPool: 
Got brand-new compressor [.deflate]
2013-03-07 16:49:06,813 DEBUG 
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter: using 
new createWriter -- HADOOP-6840
2013-03-07 16:49:06,813 DEBUG 
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter: 
Path=hdfs://pslbhnn001.psafe.net/hbase/-ROOT-/70236052/recovered.edits/0000000000263914399.temp, 
syncFs=true, hflush=true, compression=false
2013-03-07 16:49:06,813 DEBUG 
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Creating writer 
path=hdfs://pslbhnn001.psafe.net/hbase/-ROOT-/70236052/recovered.edits/0000000000263914399.temp 
region=70236052
2013-03-07 16:49:06,822 INFO org.apache.hadoop.io.compress.CodecPool: 
Got brand-new compressor [.deflate]
2013-03-07 16:49:06,822 DEBUG 
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter: using 
new createWriter -- HADOOP-6840
2013-03-07 16:49:06,822 DEBUG 
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter: 
Path=hdfs://pslbhnn001.psafe.net/hbase/.META./1028785192/recovered.edits/0000000000263914400.temp, 
syncFs=true, hflush=true, compression=false
2013-03-07 16:49:06,822 DEBUG 
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Creating writer 
path=hdfs://pslbhnn001.psafe.net/hbase/.META./1028785192/recovered.edits/0000000000263914400.temp 
region=1028785192
2013-03-07 16:49:06,975 INFO org.apache.hadoop.io.compress.CodecPool: 
Got brand-new compressor [.deflate]
2013-03-07 16:49:06,975 DEBUG 
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter: using 
new createWriter -- HADOOP-6840
2013-03-07 16:49:06,975 DEBUG 
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter: 
Path=hdfs://pslbhnn001.psafe.net/hbase/Meta/bfb277e8f379fcc1e98d5a3750ff015d/recovered.edits/0000000000263916389.temp, 
syncFs=true, hflush=true, compression=false
2013-03-07 16:49:06,975 DEBUG 
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Creating writer 
path=hdfs://pslbhnn001.psafe.net/hbase/Meta/bfb277e8f379fcc1e98d5a3750ff015d/recovered.edits/0000000000263916389.temp 
region=bfb277e8f379fcc1e98d5a3750ff015d
2013-03-07 16:49:06,983 INFO org.apache.hadoop.io.compress.CodecPool: 
Got brand-new compressor [.deflate]
2013-03-07 16:49:06,983 DEBUG 
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter: using 
new createWriter -- HADOOP-6840
2013-03-07 16:49:06,983 DEBUG 
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter: 
Path=hdfs://pslbhnn001.psafe.net/hbase/Plan/e9837362b3a39f2482234d30cad8a459/recovered.edits/0000000000263916393.temp, 
syncFs=true, hflush=true, compression=false
2013-03-07 16:49:06,984 DEBUG 
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Creating writer 
path=hdfs://pslbhnn001.psafe.net/hbase/Plan/e9837362b3a39f2482234d30cad8a459/recovered.edits/0000000000263916393.temp 
region=e9837362b3a39f2482234d30cad8a459
2013-03-07 16:49:06,996 INFO org.apache.hadoop.io.compress.CodecPool: 
Got brand-new compressor [.deflate]
2013-03-07 16:49:06,996 DEBUG 
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter: using 
new createWriter -- HADOOP-6840
2013-03-07 16:49:06,996 DEBUG 
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter: 
Path=hdfs://pslbhnn001.psafe.net/hbase/TMeta/b10a299d71dae69e6aa6740b5180f009/recovered.edits/0000000000263916442.temp, 
syncFs=true, hflush=true, compression=false
2013-03-07 16:49:06,996 DEBUG 
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Creating writer 
path=hdfs://pslbhnn001.psafe.net/hbase/TMeta/b10a299d71dae69e6aa6740b5180f009/recovered.edits/0000000000263916442.temp 
region=b10a299d71dae69e6aa6740b5180f009

... 25288 lines later

2013-03-07 16:49:55,785 DEBUG 
org.apache.hadoop.hbase.backup.HFileArchiver: No existing file in 
archive 
for:hdfs://pslbhnn001.psafe.net/hbase/.archive/TMeta/f6abb1d3dd9371d771c99fbb6398278f/usermeta/3f2cac15ca9645859e21ce9ff6f3ede6, 
free to archive original file.
2013-03-07 16:49:55,789 DEBUG 
org.apache.hadoop.hbase.backup.HFileArchiver: Finished archiving file 
from: class 
org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, 
file:hdfs://pslbhnn001.psafe.net/hbase/TMeta/f6abb1d3dd9371d771c99fbb6398278f/usermeta/3f2cac15ca9645859e21ce9ff6f3ede6, 
to: 
hdfs://pslbhnn001.psafe.net/hbase/.archive/TMeta/f6abb1d3dd9371d771c99fbb6398278f/usermeta/3f2cac15ca9645859e21ce9ff6f3ede6
2013-03-07 16:49:55,789 INFO org.apache.hadoop.hbase.regionserver.Store: 
Completed major compaction of 4 file(s) in usermeta of 
TMeta,\x00\x00\x00\x00\x00\x12-$\x00\x00\x00\x00\x01Oqn\x88\xA4\x0B\x06{\xDF\xF3\xEE,1358082944531.f6abb1d3dd9371d771c99fbb6398278f. 
into 3a5d2b750e1e4869ac270196657076b2, size=138.3m; total size for store 
is 138.3m
2013-03-07 16:49:55,789 INFO 
org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: 
completed compaction: 
regionName=TMeta,\x00\x00\x00\x00\x00\x12-$\x00\x00\x00\x00\x01Oqn\x88\xA4\x0B\x06{\xDF\xF3\xEE,1358082944531.f6abb1d3dd9371d771c99fbb6398278f., 
storeName=usermeta, fileCount=4, fileSize=138.4m, priority=3, 
time=25481694644208519; duration=4sec
2013-03-07 16:49:55,790 DEBUG 
org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: 
CompactSplitThread status: compaction_queue=(0:0), split_queue=0
2013-03-07 16:54:04,330 DEBUG 
org.apache.hadoop.hbase.io.hfile.LruBlockCache: Stats: total=32.12 MB, 
free=1.97 GB, max=2 GB, blocks=686, accesses=16361, hits=7881, 
hitRatio=48.16%, , cachingAccesses=8886, cachingHits=7580, 
cachingHitsRatio=85.30%, , evictions=0, evicted=617, evictedPerRun=Infinity
2013-03-07 16:59:04,329 DEBUG 
org.apache.hadoop.hbase.io.hfile.LruBlockCache: Stats: total=32.5 MB, 
free=1.97 GB, max=2 GB, blocks=692, accesses=17919, hits=9433, 
hitRatio=52.64%, , cachingAccesses=10444, cachingHits=9132, 
cachingHitsRatio=87.43%, , evictions=0, evicted=617, evictedPerRun=Infinity
2013-03-07 17:04:04,329 DEBUG 
org.apache.hadoop.hbase.io.hfile.LruBlockCache: Stats: total=36.72 MB, 
free=1.96 GB, max=2 GB, blocks=762, accesses=19549, hits=10993, 
hitRatio=56.23%, , cachingAccesses=12074, cachingHits=10692, 
cachingHitsRatio=88.55%, , evictions=0, evicted=617, evictedPerRun=Infinity
2013-03-07 17:09:04,329 DEBUG 
org.apache.hadoop.hbase.io.hfile.LruBlockCache: Stats: total=37.16 MB, 
free=1.96 GB, max=2 GB, blocks=768, accesses=20973, hits=12411, 
hitRatio=59.17%, , cachingAccesses=13498, cachingHits=12110, 
cachingHitsRatio=89.71%, , evictions=0, evicted=617, evictedPerRun=Infinity
2013-03-07 17:14:04,329 DEBUG 
org.apache.hadoop.hbase.io.hfile.LruBlockCache: Stats: total=37.28 MB, 
free=1.96 GB, max=2 GB, blocks=770, accesses=22474, hits=13910, 
hitRatio=61.89%, , cachingAccesses=14999, cachingHits=13609, 
cachingHitsRatio=90.73%, , evictions=0, evicted=617, evictedPerRun=Infinity
2013-03-07 17:19:04,329 DEBUG 
org.apache.hadoop.hbase.io.hfile.LruBlockCache: Stats: total=37.35 MB, 
free=1.96 GB, max=2 GB, blocks=771, accesses=23824, hits=15259, 
hitRatio=64.04%, , cachingAccesses=16349, cachingHits=14958, 
cachingHitsRatio=91.49%, , evictions=0, evicted=617, evictedPerRun=Infinity
2013-03-07 17:24:57,887 INFO org.apache.zookeeper.ClientCnxn: Client 
session timed out, have not heard from server in 159348ms for sessionid 
0x13d3c4bcba600a7, closing socket connection and attempting reconnect
2013-03-07 17:24:57,887 INFO org.apache.zookeeper.ClientCnxn: Client 
session timed out, have not heard from server in 147934ms for sessionid 
0x13d3c4bcba600a8, closing socket connection and attempting reconnect
2013-03-07 17:24:57,887 DEBUG 
org.apache.hadoop.hbase.io.hfile.LruBlockCache: Stats: total=38.04 MB, 
free=1.96 GB, max=2 GB, blocks=783, accesses=25099, hits=16522, 
hitRatio=65.82%, , cachingAccesses=17624, cachingHits=16221, 
cachingHitsRatio=92.03%, , evictions=0, evicted=617, evictedPerRun=Infinity
2013-03-07 17:24:57,887 WARN org.apache.hadoop.hdfs.DFSClient: 
DFSOutputStream ResponseProcessor exception  for block 
BP-43236042-172.17.2.10-1362490844340:blk_-242393173458437587_25099194
java.io.EOFException: Premature EOF: no length prefix available
         at 
org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:171)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:114)
         at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:670)
2013-03-07 17:24:57,891 WARN org.apache.hadoop.hdfs.DFSClient: Error 
Recovery for block 
BP-43236042-172.17.2.10-1362490844340:blk_-242393173458437587_25099194 
in pipeline 172.17.2.18:50010, 172.17.2.31:50010, 172.17.2.30:50010: bad 
datanode 172.17.2.18:50010
2013-03-07 17:24:57,893 ERROR 
org.apache.hadoop.hbase.regionserver.HRegionServer:
org.apache.hadoop.hbase.ipc.CallerDisconnectedException: Aborting call 
get([B@2a92c388, 
{"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"details":["ALL"]},"maxVersions":1,"row":"\\x00\\x00\\x00\\x00\\x00\\x12\\x81K"}), 
rpc version=1, client version=29, methodsFingerPrint=1891768260 from 
172.17.1.61:40494 after 0 ms, since caller disconnected
         at 
org.apache.hadoop.hbase.ipc.HBaseServer$Call.throwExceptionIfCallerDisconnected(HBaseServer.java:427)
         at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3593)
         at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3551)
         at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3575)
         at 
org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4370)
         at 
org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4343)
         at 
org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2021)
         at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
         at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
         at java.lang.reflect.Method.invoke(Method.java:601)
         at 
org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
         at 
org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1400)
2013-03-07 17:24:57,893 ERROR 
org.apache.hadoop.hbase.regionserver.HRegionServer:
org.apache.hadoop.hbase.ipc.CallerDisconnectedException: Aborting call 
get([B@7475cdd3, 
{"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"details":["ALL"]},"maxVersions":1,"row":"\\x00\\x00\\x00\\x00\\x00\\x11\\x0C\\x88"}), 
rpc version=1, client version=29, methodsFingerPrint=1891768260 from 
172.17.1.60:60179 after 2 ms, since caller disconnected
         at 
org.apache.hadoop.hbase.ipc.HBaseServer$Call.throwExceptionIfCallerDisconnected(HBaseServer.java:427)
         at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3593)
         at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3551)
         at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3575)
         at 
org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4370)
         at 
org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4343)
         at 
org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2021)
         at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
         at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
         at java.lang.reflect.Method.invoke(Method.java:601)
         at 
org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
         at 
org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1400)
2013-03-07 17:24:57,893 ERROR 
org.apache.hadoop.hbase.regionserver.HRegionServer:
org.apache.hadoop.hbase.ipc.CallerDisconnectedException: Aborting call 
get([B@46ce989e, 
{"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"details":["ALL"]},"maxVersions":1,"row":"\\x00\\x00\\x00\\x00\\x00\\x13\\x0E\\xDE"}), 
rpc version=1, client version=29, methodsFingerPrint=1891768260 from 
172.17.1.71:58512 after 2 ms, since caller disconnected
         at 
org.apache.hadoop.hbase.ipc.HBaseServer$Call.throwExceptionIfCallerDisconnected(HBaseServer.java:427)
         at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3593)
         at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3551)
         at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3575)
         at 
org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4370)
         at 
org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4343)
         at 
org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2021)
         at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
         at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
         at java.lang.reflect.Method.invoke(Method.java:601)
         at 
org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
         at 
org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1400)
2013-03-07 17:24:57,896 WARN org.apache.hadoop.ipc.HBaseServer: IPC 
Server Responder, call get([B@7475cdd3, 
{"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"details":["ALL"]},"maxVersions":1,"row":"\\x00\\x00\\x00\\x00\\x00\\x11\\x0C\\x88"}), 
rpc version=1, client version=29, methodsFingerPrint=1891768260 from 
172.17.1.60:60179: output error
2013-03-07 17:24:57,897 WARN org.apache.hadoop.ipc.HBaseServer: IPC 
Server handler 11 on 60020 caught a ClosedChannelException, this means 
that the server was processing a request but the client went away. The 
error message was: null
2013-03-07 17:24:57,897 WARN org.apache.hadoop.ipc.HBaseServer: IPC 
Server Responder, call get([B@2a92c388, 
{"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"details":["ALL"]},"maxVersions":1,"row":"\\x00\\x00\\x00\\x00\\x00\\x12\\x81K"}), 
rpc version=1, client version=29, methodsFingerPrint=1891768260 from 
172.17.1.61:40494: output error
2013-03-07 17:24:57,897 WARN org.apache.hadoop.ipc.HBaseServer: IPC 
Server handler 1 on 60020 caught a ClosedChannelException, this means 
that the server was processing a request but the client went away. The 
error message was: null
2013-03-07 17:24:57,898 WARN org.apache.hadoop.ipc.HBaseServer: IPC 
Server Responder, call get([B@46ce989e, 
{"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"details":["ALL"]},"maxVersions":1,"row":"\\x00\\x00\\x00\\x00\\x00\\x13\\x0E\\xDE"}), 
rpc version=1, client version=29, methodsFingerPrint=1891768260 from 
172.17.1.71:58512: output error
2013-03-07 17:24:57,898 WARN org.apache.hadoop.ipc.HBaseServer: IPC 
Server handler 9 on 60020 caught a ClosedChannelException, this means 
that the server was processing a request but the client went away. The 
error message was: null
2013-03-07 17:24:57,900 WARN org.apache.hadoop.hdfs.DFSClient: 
DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): 
No lease on 
/hbase/.logs/PSLBHDN002,60020,1362685744154/PSLBHDN002%2C60020%2C1362685744154.1362685744953 
File does not exist. [Lease.  Holder: 
DFSClient_NONMAPREDUCE_-420681946_25, pendingcreates: 1]
         at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2396)
         at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2387)
         at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:2319)
         at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:511)
         at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolServerSideTranslatorPB.java:316)
         at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44082)
         at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
         at java.security.AccessController.doPrivileged(Native Method)
         at javax.security.auth.Subject.doAs(Subject.java:415)
         at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)

         at org.apache.hadoop.ipc.Client.call(Client.java:1225)
         at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
         at $Proxy14.getAdditionalDatanode(Unknown Source)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
         at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
         at java.lang.reflect.Method.invoke(Method.java:601)
         at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
         at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
         at $Proxy14.getAdditionalDatanode(Unknown Source)
         at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolTranslatorPB.java:312)
         at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:846)
         at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:958)
         at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:755)
         at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:424)

org.apache.hadoop.hbase.YouAreDeadException: 
org.apache.hadoop.hbase.YouAreDeadException: Server REPORT rejected; 
currently processing PSLBHDN002,60020,1362685744154 as dead server
         at 
org.apache.hadoop.hbase.master.ServerManager.checkIsDead(ServerManager.java:242)
         at 
org.apache.hadoop.hbase.master.ServerManager.regionServerReport(ServerManager.java:160)
         at 
org.apache.hadoop.hbase.master.HMaster.regionServerReport(HMaster.java:966)
         at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
         at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
         at java.lang.reflect.Method.invoke(Method.java:601)
         at 
org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
         at 
org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1400)

         at 
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
         at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
         at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
         at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
         at 
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
         at 
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:79)
         at 
org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:883)
         at 
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:741)
         at java.lang.Thread.run(Thread.java:722)
Caused by: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hbase.YouAreDeadException): 
org.apache.hadoop.hbase.YouAreDeadException: Server REPORT rejected; 
currently processing PSLBHDN002,60020,1362685744154 as dead server
         at 
org.apache.hadoop.hbase.master.ServerManager.checkIsDead(ServerManager.java:242)
         at 
org.apache.hadoop.hbase.master.ServerManager.regionServerReport(ServerManager.java:160)
         at 
org.apache.hadoop.hbase.master.HMaster.regionServerReport(HMaster.java:966)
         at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
         at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
         at java.lang.reflect.Method.invoke(Method.java:601)
         at 
org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
         at 
org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1400)

         at 
org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1021)
         at 
org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
         at $Proxy10.regionServerReport(Unknown Source)
         at 
org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:880)
         ... 2 more
2013-03-07 17:24:57,914 FATAL 
org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort: 
loaded coprocessors are: []
2013-03-07 17:24:57,916 INFO 
org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics: 
requestsPerSecond=0, numberOfOnlineRegions=1252, numberOfStores=1272, 
numberOfStorefiles=1698, storefileIndexSizeMB=67, rootIndexSizeKB=69282, 
totalStaticIndexSizeKB=53668, totalStaticBloomSizeKB=0, 
memstoreSizeMB=1, mbInMemoryWithoutWAL=0, numberOfPutsWithoutWAL=0, 
readRequestsCount=47756, writeRequestsCount=1542, compactionQueueSize=0, 
flushQueueSize=0, usedHeapMB=225, maxHeapMB=8185, 
blockCacheSizeMB=38.04, blockCacheFreeMB=2008.36, blockCacheCount=783, 
blockCacheHitCount=16522, blockCacheMissCount=8577, 
blockCacheEvictedCount=617, blockCacheHitRatio=65%, 
blockCacheHitCachingRatio=92%, hdfsBlocksLocalityIndex=53, 
slowHLogAppendCount=0, fsReadLatencyHistogramMean=0, 
fsReadLatencyHistogramCount=0, fsReadLatencyHistogramMedian=0, 
fsReadLatencyHistogram75th=0, fsReadLatencyHistogram95th=0, 
fsReadLatencyHistogram99th=0, fsReadLatencyHistogram999th=0, 
fsPreadLatencyHistogramMean=0, fsPreadLatencyHistogramCount=0, 
fsPreadLatencyHistogramMedian=0, fsPreadLatencyHistogram75th=0, 
fsPreadLatencyHistogram95th=0, fsPreadLatencyHistogram99th=0, 
fsPreadLatencyHistogram999th=0, fsWriteLatencyHistogramMean=0, 
fsWriteLatencyHistogramCount=0, fsWriteLatencyHistogramMedian=0, 
fsWriteLatencyHistogram75th=0, fsWriteLatencyHistogram95th=0, 
fsWriteLatencyHistogram99th=0, fsWriteLatencyHistogram999th=0
2013-03-07 17:24:57,917 INFO 
org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Unhandled 
exception: org.apache.hadoop.hbase.YouAreDeadException: Server REPORT 
rejected; currently processing PSLBHDN002,60020,1362685744154 as dead server
         at 
org.apache.hadoop.hbase.master.ServerManager.checkIsDead(ServerManager.java:242)
         at 
org.apache.hadoop.hbase.master.ServerManager.regionServerReport(ServerManager.java:160)
         at 
org.apache.hadoop.hbase.master.HMaster.regionServerReport(HMaster.java:966)
         at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
         at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
         at java.lang.reflect.Method.invoke(Method.java:601)
         at 
org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
         at 
org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1400)

2013-03-07 17:24:57,917 INFO org.apache.hadoop.ipc.HBaseServer: Stopping 
server on 60020
2013-03-07 17:24:57,917 INFO org.apache.hadoop.ipc.HBaseServer: IPC 
Server handler 8 on 60020: exiting

####REGIONSERVER-PSLBHDN002 GC LOG####
890.465: [GC 890.465: [ParNew: 52682K->4799K(59008K), 0.0037900 secs] 
259996K->212112K(1152704K), 0.0038640 secs] [Times: user=0.03 sys=0.00, 
real=0.01 secs]
928.767: [GC 928.767: [ParNew: 57279K->3304K(59008K), 0.0050600 secs] 
264592K->214659K(1152704K), 0.0051490 secs] [Times: user=0.04 sys=0.00, 
real=0.00 secs]
961.518: [GC 961.518: [ParNew: 55784K->5238K(59008K), 0.0041390 secs] 
267139K->216593K(1152704K), 0.0042240 secs] [Times: user=0.04 sys=0.00, 
real=0.01 secs]
994.120: [GC 994.120: [ParNew: 57657K->6490K(59008K), 0.0050850 secs] 
269012K->222617K(1152704K), 0.0051780 secs] [Times: user=0.05 sys=0.00, 
real=0.01 secs]
1044.081: [GC 1044.081: [ParNew: 58970K->402K(59008K), 0.0040990 secs] 
275097K->216577K(1152704K), 0.0041820 secs] [Times: user=0.03 sys=0.00, 
real=0.01 secs]
1087.319: [GC 1087.319: [ParNew: 52873K->6528K(59008K), 0.0055000 secs] 
269048K->223592K(1152704K), 0.0055930 secs] [Times: user=0.04 sys=0.01, 
real=0.00 secs]
1087.834: [GC 1087.834: [ParNew: 59008K->6527K(59008K), 137.6353620 
secs] 276072K->235097K(1152704K), 137.6354700 secs] [Times: user=0.08 
sys=137.62, real=137.62 secs]
1226.638: [GC 1226.638: [ParNew: 59007K->1897K(59008K), 0.0079960 secs] 
287577K->230937K(1152704K), 0.0080770 secs] [Times: user=0.05 sys=0.00, 
real=0.01 secs]
1227.251: [GC 1227.251: [ParNew: 54377K->2379K(59008K), 0.0095650 secs] 
283417K->231420K(1152704K), 0.0096340 secs] [Times: user=0.06 sys=0.00, 
real=0.01 secs]
Heap
  par new generation   total 59008K, used 32759K [0x00000005fae00000, 
0x00000005fee00000, 0x00000005fee00000)
   eden space 52480K,  57% used [0x00000005fae00000, 0x00000005fcbaaf68, 
0x00000005fe140000)
   from space 6528K,  36% used [0x00000005fe7a0000, 0x00000005fe9f2f98, 
0x00000005fee00000)
   to   space 6528K,   0% used [0x00000005fe140000, 0x00000005fe140000, 
0x00000005fe7a0000)
  concurrent mark-sweep generation total 1093696K, used 229040K 
[0x00000005fee00000, 0x0000000641a10000, 0x00000007fae00000)
  concurrent-mark-sweep perm gen total 49688K, used 30332K 
[0x00000007fae00000, 0x00000007fde86000, 0x0000000800000000)
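
A note on the GC log above: the collection at 1087.834 took 137.6 s of
wall-clock time with user=0.08 and sys=137.62, i.e. almost the whole
pause was spent in the kernel rather than in actual GC work. A stall of
that length lines up with the ZooKeeper client above complaining that it
had not heard from the server for 159348 ms, and with the
YouAreDeadException that follows once the master is already processing
this server as dead. Pauses where sys dwarfs user usually point at the
OS (for example the heap being swapped out, or transparent huge page
defragmentation) rather than at the collector itself. The following is a
minimal sketch, not part of the original thread, for scanning a ParNew
GC log for this pattern; the file name and thresholds are illustrative
assumptions.

import re

# Matches lines like the one at 1087.834 above:
# 1087.834: [GC 1087.834: [ParNew: ...] ... [Times: user=0.08 sys=137.62, real=137.62 secs]
PAUSE_RE = re.compile(
    r"^(?P<ts>[\d.]+): \[GC .*\[Times: "
    r"user=(?P<user>[\d.]+) sys=(?P<sys>[\d.]+), real=(?P<real>[\d.]+) secs\]"
)

def suspicious_pauses(path, min_real=10.0):
    """Yield GC events whose wall-clock pause is long and dominated by
    sys time, which normally means the JVM was stalled by the OS
    (swapping, THP compaction, ...) instead of doing GC work."""
    with open(path) as fh:
        for line in fh:
            m = PAUSE_RE.search(line)
            if not m:
                continue
            user = float(m.group("user"))
            sys_t = float(m.group("sys"))
            real = float(m.group("real"))
            if real >= min_real and sys_t > user:
                yield m.group("ts"), user, sys_t, real

if __name__ == "__main__":
    # "gc.log" is a placeholder path, not the file name used on this cluster.
    for ts, user, sys_t, real in suspicious_pauses("gc.log"):
        print("%s: real=%ss user=%ss sys=%ss  <-- kernel-dominated pause"
              % (ts, real, user, sys_t))

Any event this flags is worth lining up against swap and memory activity
on that host at the same moment (e.g. vmstat/sar history), since from
the JVM's point of view the collector was simply not being scheduled.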

####DATANODE-PSLBHDN002####
2013-03-07 16:49:06,877 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving 
BP-43236042-172.17.2.10-1362490844340:blk_7156899642437814505_25099084 
src: /172.17.2.18:36951 dest: /172.17.2.18:50010
2013-03-07 16:49:06,912 INFO 
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: 
Verification succeeded for 
BP-43236042-172.17.2.10-1362490844340:blk_6601684672810153961_14679425

... blocks ok...

2013-03-07 16:49:07,018 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving 
BP-43236042-172.17.2.10-1362490844340:blk_336947258563431268_25099090 
src: /172.17.2.18:36953 dest: /172.17.2.18:50010
2013-03-07 16:49:07,454 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving 
BP-43236042-172.17.2.10-1362490844340:blk_8884451173425697631_25099130 
src: /172.17.2.18:36955 dest: /172.17.2.18:50010
2013-03-07 16:49:07,469 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving 
BP-43236042-172.17.2.10-1362490844340:blk_3730904837329292545_25099132 
src: /172.17.2.18:36957 dest: /172.17.2.18:50010
2013-03-07 16:49:07,555 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving 
BP-43236042-172.17.2.10-1362490844340:blk_-8067763750327121256_25099140 
src: /172.17.2.18:36959 dest: /172.17.2.18:50010
2013-03-07 16:49:07,563 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
/172.17.2.18:36959, dest: /172.17.2.18:50010, bytes: 3949, op: 
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1491694821_78, offset: 0, 
srvID: DS-229334310-172.17.2.18-50010-1328651636364, blockid: 
BP-43236042-172.17.2.10-1362490844340:blk_-8067763750327121256_25099140, 
duration: 3062287
2013-03-07 16:49:07,563 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: 
BP-43236042-172.17.2.10-1362490844340:blk_-8067763750327121256_25099140, 
type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2013-03-07 16:49:07,578 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving 
BP-43236042-172.17.2.10-1362490844340:blk_-8466118669853895314_25099141 
src: /172.17.2.18:36961 dest: /172.17.2.18:50010
2013-03-07 16:49:07,586 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
/172.17.2.18:36961, dest: /172.17.2.18:50010, bytes: 3899, op: 
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1792119388_78, offset: 0, 
srvID: DS-229334310-172.17.2.18-50010-1328651636364, blockid: 
BP-43236042-172.17.2.10-1362490844340:blk_-8466118669853895314_25099141, 
duration: 3622365
2013-03-07 16:49:07,586 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: 
BP-43236042-172.17.2.10-1362490844340:blk_-8466118669853895314_25099141, 
type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2013-03-07 16:49:07,595 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving 
BP-43236042-172.17.2.10-1362490844340:blk_4901477316103379126_25099142 
src: /172.17.2.18:36963 dest: /172.17.2.18:50010
2013-03-07 16:49:07,603 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
/172.17.2.18:36963, dest: /172.17.2.18:50010, bytes: 7009, op: 
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_985447659_78, offset: 0, 
srvID: DS-229334310-172.17.2.18-50010-1328651636364, blockid: 
BP-43236042-172.17.2.10-1362490844340:blk_4901477316103379126_25099142, 
duration: 3294785
2013-03-07 16:49:07,603 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: 
BP-43236042-172.17.2.10-1362490844340:blk_4901477316103379126_25099142, 
type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2013-03-07 16:49:07,612 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving 
BP-43236042-172.17.2.10-1362490844340:blk_3839186567268918505_25099143 
src: /172.17.2.18:36965 dest: /172.17.2.18:50010
2013-03-07 16:49:07,619 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
/172.17.2.18:36965, dest: /172.17.2.18:50010, bytes: 4232, op: 
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-257606149_78, offset: 0, 
srvID: DS-229334310-172.17.2.18-50010-1328651636364, blockid: 
BP-43236042-172.17.2.10-1362490844340:blk_3839186567268918505_25099143, 
duration: 2916628

...

2013-03-07 16:57:50,587 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: exception:
java.net.SocketTimeoutException: 480000 millis timeout while waiting for 
channel to be ready for write. ch : 
java.nio.channels.SocketChannel[connected local=/172.17.2.18:50010 
remote=/172.17.2.18:39182]
         at 
org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:247)
         at 
org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:166)
         at 
org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:214)
         at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:510)
         at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:673)
         at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:344)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:92)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:64)
         at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
         at java.lang.Thread.run(Thread.java:722)
2013-03-07 16:57:50,587 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
/172.17.2.18:50010, dest: /172.17.2.18:39182, bytes: 1519104, op: 
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_1647875948_25, offset: 
38387712, srvID: DS-229334310-172.17.2.18-50010-1328651636364, blockid: 
BP-43236042-172.17.2.10-1362490844340:blk_-3903459174389329622_25084176, 
duration: 480101136702
2013-03-07 16:57:50,587 WARN 
org.apache.hadoop.hdfs.server.datanode.DataNode: 
DatanodeRegistration(172.17.2.18, 
storageID=DS-229334310-172.17.2.18-50010-1328651636364, infoPort=50075, 
ipcPort=50020, 
storageInfo=lv=-40;cid=CID-26cd999e-460a-4dbc-b940-9250a76930a8;nsid=276058127;c=1362491004838):Got 
exception while serving 
BP-43236042-172.17.2.10-1362490844340:blk_-3903459174389329622_25084176 
to /172.17.2.18:39182
java.net.SocketTimeoutException: 480000 millis timeout while waiting for 
channel to be ready for write. ch : 
java.nio.channels.SocketChannel[connected local=/172.17.2.18:50010 
remote=/172.17.2.18:39182]
         at 
org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:247)
         at 
org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:166)
         at 
org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:214)
         at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:510)
         at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:673)
         at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:344)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:92)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:64)
         at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
         at java.lang.Thread.run(Thread.java:722)
2013-03-07 16:57:50,587 ERROR 
org.apache.hadoop.hdfs.server.datanode.DataNode: 
PSLBHDN002:50010:DataXceiver error processing READ_BLOCK operation  src: 
/172.17.2.18:39182 dest: /172.17.2.18:50010
java.net.SocketTimeoutException: 480000 millis timeout while waiting for 
channel to be ready for write. ch : 
java.nio.channels.SocketChannel[connected local=/172.17.2.18:50010 
remote=/172.17.2.18:39182]
         at 
org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:247)
         at 
org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:166)
         at 
org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:214)
         at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:510)
         at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:673)
         at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:344)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:92)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:64)
         at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
         at java.lang.Thread.run(Thread.java:722)
2013-03-07 16:57:51,120 INFO 
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: 
Verification succeeded for 
BP-43236042-172.17.2.10-1362490844340:blk_4201959737217845852_18055629
2013-03-07 16:57:51,123 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: exception:
java.net.SocketTimeoutException: 480000 millis timeout while waiting for 
channel to be ready for write. ch : 
java.nio.channels.SocketChannel[connected local=/172.17.2.18:50010 
remote=/172.17.2.18:39187]
         at 
org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:247)
         at 
org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:166)
         at 
org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:214)
         at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:510)
         at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:673)
         at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:344)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:92)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:64)
         at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
         at java.lang.Thread.run(Thread.java:722)
2013-03-07 16:57:51,123 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
/172.17.2.18:50010, dest: /172.17.2.18:39187, bytes: 1320960, op: 
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_1647875948_25, offset: 
27687424, srvID: DS-229334310-172.17.2.18-50010-1328651636364, blockid: 
BP-43236042-172.17.2.10-1362490844340:blk_-1709825967095542317_25089483, 
duration: 480100936815
2013-03-07 16:57:51,123 WARN 
org.apache.hadoop.hdfs.server.datanode.DataNode: 
DatanodeRegistration(172.17.2.18, 
storageID=DS-229334310-172.17.2.18-50010-1328651636364, infoPort=50075, 
ipcPort=50020, 
storageInfo=lv=-40;cid=CID-26cd999e-460a-4dbc-b940-9250a76930a8;nsid=276058127;c=1362491004838):Got 
exception while serving 
BP-43236042-172.17.2.10-1362490844340:blk_-1709825967095542317_25089483 
to /172.17.2.18:39187
java.net.SocketTimeoutException: 480000 millis timeout while waiting for 
channel to be ready for write. ch : 
java.nio.channels.SocketChannel[connected local=/172.17.2.18:50010 
remote=/172.17.2.18:39187]
         at 
org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:247)
         at 
org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:166)
         at 
org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:214)
         at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:510)
         at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:673)
         at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:344)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:92)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:64)
         at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
         at java.lang.Thread.run(Thread.java:722)
2013-03-07 16:57:51,123 ERROR 
org.apache.hadoop.hdfs.server.datanode.DataNode: 
PSLBHDN002:50010:DataXceiver error processing READ_BLOCK operation  src: 
/172.17.2.18:39187 dest: /172.17.2.18:50010
java.net.SocketTimeoutException: 480000 millis timeout while waiting for 
channel to be ready for write. ch : 
java.nio.channels.SocketChannel[connected local=/172.17.2.18:50010 
remote=/172.17.2.18:39187]
         at 
org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:247)
         at 
org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:166)
         at 
org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:214)
         at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:510)
         at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:673)
         at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:344)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:92)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:64)
         at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
         at java.lang.Thread.run(Thread.java:722)
2013-03-07 16:57:51,140 INFO 
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: 
Verification succeeded for 
BP-43236042-172.17.2.10-1362490844340:blk_-5502531532991299708_19365616

...

2013-03-07 17:23:49,291 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for 
BP-43236042-172.17.2.10-1362490844340:blk_-242393173458437587_25099194
java.net.SocketTimeoutException: 60000 millis timeout while waiting for 
channel to be ready for read. ch : 
java.nio.channels.SocketChannel[connected local=/172.17.2.18:50010 
remote=/172.17.2.18:37074]
         at 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:165)
         at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:156)
         at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:129)
         at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
         at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
         at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
         at java.io.DataInputStream.read(DataInputStream.java:149)
         at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
         at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:414)
         at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:635)
         at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:570)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:103)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:67)
         at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
         at java.lang.Thread.run(Thread.java:722)
2013-03-07 17:23:49,292 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: 
BP-43236042-172.17.2.10-1362490844340:blk_-242393173458437587_25099194, 
type=HAS_DOWNSTREAM_IN_PIPELINE: Thread is interrupted.
2013-03-07 17:23:49,292 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: 
BP-43236042-172.17.2.10-1362490844340:blk_-242393173458437587_25099194, 
type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2013-03-07 17:23:49,292 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock 
BP-43236042-172.17.2.10-1362490844340:blk_-242393173458437587_25099194 
received exception java.net.SocketTimeoutException: 60000 millis timeout 
while waiting for channel to be ready for read. ch : 
java.nio.channels.SocketChannel[connected local=/172.17.2.18:50010 
remote=/172.17.2.18:37074]
2013-03-07 17:23:49,292 ERROR 
org.apache.hadoop.hdfs.server.datanode.DataNode: 
PSLBHDN002:50010:DataXceiver error processing WRITE_BLOCK operation  
src: /172.17.2.18:37074 dest: /172.17.2.18:50010
java.net.SocketTimeoutException: 60000 millis timeout while waiting for 
channel to be ready for read. ch : 
java.nio.channels.SocketChannel[connected local=/172.17.2.18:50010 
remote=/172.17.2.18:37074]
         at 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:165)
         at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:156)
         at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:129)
         at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
         at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
         at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
         at java.io.DataInputStream.read(DataInputStream.java:149)
         at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
         at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:414)
         at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:635)
         at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:570)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:103)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:67)
         at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
         at java.lang.Thread.run(Thread.java:722)

...

2013-03-07 17:24:56,722 INFO 
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: 
Verification succeeded for 
BP-43236042-172.17.2.10-1362490844340:blk_-6696144180006086952_22124135
2013-03-07 17:24:57,950 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
/172.17.2.18:50010, dest: /172.17.2.18:39374, bytes: 0, op: HDFS_READ, 
cliID: DFSClient_NONMAPREDUCE_1647875948_25, offset: 20673024, srvID: 
DS-229334310-172.17.2.18-50010-1328651636364, blockid: 
BP-43236042-172.17.2.10-1362490844340:blk_5852474891518087178_25099326, 
duration: 38325989
2013-03-07 17:24:57,953 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
/172.17.2.18:50010, dest: /172.17.2.18:39375, bytes: 0, op: HDFS_READ, 
cliID: DFSClient_NONMAPREDUCE_1647875948_25, offset: 17261056, srvID: 
DS-229334310-172.17.2.18-50010-1328651636364, blockid: 
BP-43236042-172.17.2.10-1362490844340:blk_5852474891518087178_25099326, 
duration: 42046136
2013-03-07 17:24:57,956 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
/172.17.2.18:50010, dest: /172.17.2.18:39373, bytes: 0, op: HDFS_READ, 
cliID: DFSClient_NONMAPREDUCE_1647875948_25, offset: 60156416, srvID: 
DS-229334310-172.17.2.18-50010-1328651636364, blockid: 
BP-43236042-172.17.2.10-1362490844340:blk_-5865708852897909851_25098079, 
duration: 41458926
2013-03-07 17:24:57,956 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
/172.17.2.18:50010, dest: /172.17.2.18:39377, bytes: 66564, op: 
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_1647875948_25, offset: 
60156416, srvID: DS-229334310-172.17.2.18-50010-1328651636364, blockid: 
BP-43236042-172.17.2.10-1362490844340:blk_-5865708852897909851_25098079, 
duration: 34990959
2013-03-07 17:24:57,961 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
/172.17.2.18:50010, dest: /172.17.2.18:39377, bytes: 66564, op: 
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_1647875948_25, offset: 
17261056, srvID: DS-229334310-172.17.2.18-50010-1328651636364, blockid: 
BP-43236042-172.17.2.10-1362490844340:blk_5852474891518087178_25099326, 
duration: 137842
2013-03-07 17:24:57,961 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
/172.17.2.18:50010, dest: /172.17.2.18:39377, bytes: 66564, op: 
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_1647875948_25, offset: 
20673024, srvID: DS-229334310-172.17.2.18-50010-1328651636364, blockid: 
BP-43236042-172.17.2.10-1362490844340:blk_5852474891518087178_25099326, 
duration: 85871
2013-03-07 17:24:59,444 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
/172.17.2.18:50010, dest: /172.17.2.18:39367, bytes: 858624, op: 
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_1647875948_25, offset: 
47441408, srvID: DS-229334310-172.17.2.18-50010-1328651636364, blockid: 
BP-43236042-172.17.2.10-1362490844340:blk_-4629995322786604332_25089514, 
duration: 165718556359
2013-03-07 17:25:13,719 INFO 
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: 
Verification succeeded for 
BP-43236042-172.17.2.10-1362490844340:blk_2896078552438954981_21046053

####DATANODE-PSLBHDN010####
2013-03-07 17:12:00,343 INFO 
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: 
Verification succeeded for 
BP-43236042-172.17.2.10-1362490844340:blk_-5635929760187386416_21468869
2013-03-07 17:15:10,538 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving 
BP-43236042-172.17.2.10-1362490844340:blk_-6822586111467020401_25099381 
src: /172.17.2.34:35557 dest: /172.17.2.35:50010
2013-03-07 17:19:11,106 INFO 
org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool 
BP-43236042-172.17.2.10-1362490844340 Total blocks: 7419098, missing 
metadata files:0, missing block files:0, missing blocks in memory:0, 
mismatched blocks:0
2013-03-07 17:19:11,123 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for 
BP-43236042-172.17.2.10-1362490844340:blk_-6822586111467020401_25099381
java.io.IOException: Premature EOF from inputStream
         at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
         at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:414)
         at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:635)
         at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:570)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:103)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:67)
         at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
         at java.lang.Thread.run(Thread.java:722)
2013-03-07 17:19:11,125 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: 
BP-43236042-172.17.2.10-1362490844340:blk_-6822586111467020401_25099381, 
type=LAST_IN_PIPELINE, downstreams=0:[]: Thread is interrupted.
2013-03-07 17:19:11,125 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: 
BP-43236042-172.17.2.10-1362490844340:blk_-6822586111467020401_25099381, 
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2013-03-07 17:19:11,125 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock 
BP-43236042-172.17.2.10-1362490844340:blk_-6822586111467020401_25099381 
received exception java.io.IOException: Premature EOF from inputStream
2013-03-07 17:19:11,125 ERROR 
org.apache.hadoop.hdfs.server.datanode.DataNode: 
PSLBHDN010:50010:DataXceiver error processing WRITE_BLOCK operation  
src: /172.17.2.34:35557 dest: /172.17.2.35:50010
java.io.IOException: Premature EOF from inputStream
         at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
         at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:414)
         at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:635)
         at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:570)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:103)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:67)
         at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
         at java.lang.Thread.run(Thread.java:722)
2013-03-07 17:19:11,133 INFO 
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: 
Verification succeeded for 
BP-43236042-172.17.2.10-1362490844340:blk_-8970028755927243885_5056991

####DATANODE-PSLBHDN009####
2013-03-06 17:20:27,719 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: exception:
java.net.SocketTimeoutException: 480000 millis timeout while waiting for 
channel to be ready for write. ch : 
java.nio.channels.SocketChannel[connected local=/172.17.2.34:50010 
remote=/172.17.2.14:46402]
         at 
org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:247)
         at 
org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:166)
         at 
org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:214)
         at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:510)
         at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:673)
         at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:344)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:92)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:64)
         at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
         at java.lang.Thread.run(Thread.java:722)
2013-03-06 17:20:27,719 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
/172.17.2.34:50010, dest: /172.17.2.14:46402, bytes: 990720, op: 
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_264701600_25, offset: 31173120, 
srvID: DS-1619403450-172.17.2.34-50010-1341236929007, blockid: 
BP-43236042-172.17.2.10-1362490844340:blk_-4670101896924293245_25068818, 
duration: 480020439336
2013-03-06 17:20:27,719 WARN 
org.apache.hadoop.hdfs.server.datanode.DataNode: 
DatanodeRegistration(172.17.2.34, 
storageID=DS-1619403450-172.17.2.34-50010-1341236929007, infoPort=50075, 
ipcPort=50020, 
storageInfo=lv=-40;cid=CID-26cd999e-460a-4dbc-b940-9250a76930a8;nsid=276058127;c=1362491004838):Got 
exception while serving 
BP-43236042-172.17.2.10-1362490844340:blk_-4670101896924293245_25068818 
to /172.17.2.14:46402
java.net.SocketTimeoutException: 480000 millis timeout while waiting for 
channel to be ready for write. ch : 
java.nio.channels.SocketChannel[connected local=/172.17.2.34:50010 
remote=/172.17.2.14:46402]
         at 
org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:247)
         at 
org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:166)
         at 
org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:214)
         at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:510)
         at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:673)
         at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:344)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:92)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:64)
         at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
         at java.lang.Thread.run(Thread.java:722)
2013-03-06 17:20:27,719 ERROR 
org.apache.hadoop.hdfs.server.datanode.DataNode: 
PSLBHDN009:50010:DataXceiver error processing READ_BLOCK operation  src: 
/172.17.2.14:46402 dest: /172.17.2.34:50010
java.net.SocketTimeoutException: 480000 millis timeout while waiting for 
channel to be ready for write. ch : 
java.nio.channels.SocketChannel[connected local=/172.17.2.34:50010 
remote=/172.17.2.14:46402]
         at 
org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:247)
         at 
org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:166)
         at 
org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:214)
         at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:510)
         at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:673)
         at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:344)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:92)
         at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:64)
         at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
         at java.lang.Thread.run(Thread.java:722)
