hbase-user mailing list archives

From Jean-Daniel Cryans <jdcry...@apache.org>
Subject Re: Killing and restarting of master caused AlreadyBeingCreatedException from HLogs
Date Sat, 09 Apr 2011 16:56:45 GMT
Well maybe that's what you did, but the log does say that it's splitting logs.
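For background: on startup the master splits any HLogs it finds under /hbase/.logs, and before it can read a log it asks HDFS to recover the previous writer's lease on that file. Below is a minimal sketch of that recover-by-append retry pattern, assuming Hadoop 0.20-append-era APIs; it is an illustration only, not the actual FSUtils.recoverFileLease() code, and the class name, path, and 1-second retry interval are made up. It shows why an AlreadyBeingCreatedException is expected while the NameNode's NN_Recovery holder still has the lease, and why retrying lets the master come up anyway.

// Minimal sketch (not HBase's FSUtils.recoverFileLease); names and timings are illustrative.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException;
import org.apache.hadoop.ipc.RemoteException;

public class LeaseRecoverySketch {

  /** Keep appending to the HLog until the NameNode has released the old writer's lease. */
  static void recoverLease(FileSystem fs, Path hlog) throws IOException {
    while (true) {
      try {
        // Opening the file for append forces the NameNode to start (or finish)
        // lease recovery; it only succeeds once the previous lease is gone.
        fs.append(hlog).close();
        return;
      } catch (IOException e) {
        IOException cause = (e instanceof RemoteException)
            ? ((RemoteException) e).unwrapRemoteException()
            : e;
        if (!(cause instanceof AlreadyBeingCreatedException)) {
          throw e;  // a different problem: give up
        }
        // Expected while NN_Recovery still holds the lease: wait and retry.
        try {
          Thread.sleep(1000);
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
          throw new IOException("interrupted while recovering lease on " + hlog);
        }
      }
    }
  }

  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    // Hypothetical HLog path, in the same layout as the one in the trace below.
    recoverLease(fs, new Path("/hbase/.logs/regionserver,60020,1234567890/regionserver%3A60020.1234567891"));
  }
}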

J-D

On Sat, Apr 9, 2011 at 7:19 AM, Ramkrishna S Vasudevan
<ramakrishnas@huawei.com> wrote:
> Hi
>
> Yes, I had gone through that, but those scenarios were ones where the region
> server or the data node went down.
>
> Here, only the master got restarted.
>
> Regards
> Ram
>
> -----Original Message-----
> From: Ted Yu [mailto:yuzhihong@gmail.com]
> Sent: Saturday, April 09, 2011 7:42 PM
> To: user@hbase.apache.org; ramakrishnas@huawei.com
> Subject: Re: Killing and restarting of master caused
> AlreadyBeingCreatedException from HLogs
>
> Have you read the email thread entitled 'file is already being created by
> NN_Recovery' on the user mailing list?
>
> On Sat, Apr 9, 2011 at 7:06 AM, Ramkrishna S Vasudevan <
> ramakrishnas@huawei.com> wrote:
>
>> If we kill the HMaster and then restart it, the following exceptions are
>> logged:
>>
>>
>>
>> Splitting hlog 2 of 2: hdfs://10.18.52.108:9000/hbase/.logs/linux108,60020,1302346754067/linux108%3A60020.1302350355407, length=1459
>>
>> 2011-04-09 18:02:56,017 INFO org.apache.hadoop.hbase.util.FSUtils: Recovering file hdfs://10.18.52.108:9000/hbase/.logs/linux108,60020,1302346754067/linux108%3A60020.1302350355407
>>
>> 2011-04-09 18:02:56,037 ERROR com.huawei.isap.ump.ha.client.RPCRetryAndSwitchInvoker: Exception occured while connecting to server : /10.18.52.108:9000
>>
>> org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed to create file /hbase/.logs/linux108,60020,1302346754067/linux108%3A60020.1302350355407 for DFSClient_hb_m_linux108:60000_1302352358592 on client 10.18.52.108, because this file is already being created by NN_Recovery on 10.18.52.108
>>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:1453)
>>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1291)
>>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:1473)
>>     at org.apache.hadoop.hdfs.server.namenode.NameNode.append(NameNode.java:628)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>     at java.lang.reflect.Method.invoke(Method.java:597)
>>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:541)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1105)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1101)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at javax.security.auth.Subject.doAs(Subject.java:396)
>>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1099)
>>
>>     at org.apache.hadoop.ipc.Client.call(Client.java:942)
>>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:239)
>>     at $Proxy5.append(Unknown Source)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>     at java.lang.reflect.Method.invoke(Method.java:597)
>>     at com.huawei.isap.ump.ha.client.RPCRetryAndSwitchInvoker.invokeMethod(RPCRetryAndSwitchInvoker.java:157)
>>     at com.huawei.isap.ump.ha.client.RPCRetryAndSwitchInvoker.invokeMethod(RPCRetryAndSwitchInvoker.java:145)
>>     at com.huawei.isap.ump.ha.client.RPCRetryAndSwitchInvoker.invoke(RPCRetryAndSwitchInvoker.java:54)
>>     at $Proxy5.append(Unknown Source)
>>     at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:741)
>>     at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:366)
>>     at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:665)
>>     at org.apache.hadoop.hbase.util.FSUtils.recoverFileLease(FSUtils.java:634)
>>     at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLog(HLogSplitter.java:261)
>>     at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLog(HLogSplitter.java:188)
>>     at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:196)
>>     at org.apache.hadoop.hbase.master.MasterFileSystem.splitLogAfterStartup(MasterFileSystem.java:180)
>>     at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:379)
>>     at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:278)
>>
>> But the HMaster does start correctly. Here I have only 2 datanodes
>> and the replication factor is 2.
>>
>>
>>
>> Regards
>>
>> Ram
>>
>>
>>
>
>
