hadoop-hdfs-issues mailing list archives

From "Tsz Wo (Nicholas), SZE (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-3385) ClassCastException when trying to append a file
Date Tue, 08 May 2012 18:51:51 GMT

     [ https://issues.apache.org/jira/browse/HDFS-3385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tsz Wo (Nicholas), SZE updated HDFS-3385:
-----------------------------------------

    Description: 
When I try to append to a file, I get:

{noformat}
2012-05-08 18:13:40,506 WARN  util.KerberosName (KerberosName.java:<clinit>(87)) - Kerberos krb5 configuration not found, setting default realm to empty
Exception in thread "main" java.lang.ClassCastException: org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo cannot be cast to org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoUnderConstruction
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:1787)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1584)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:1824)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:425)
        ...
	at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1150)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1189)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1177)
	at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:221)
	at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:1)
	at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:981)
	at org.apache.hadoop.hdfs.server.datanode.DeleteMe.main(DeleteMe.java:26)
{noformat}
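
The client-side frames at the bottom of the trace come from a plain FileSystem.append call. A minimal sketch of the kind of driver that would hit this path is below; the actual DeleteMe.java is not attached, so the class name, target path, and data are assumptions, and the target file is assumed to already exist with append support enabled.

{noformat}
// Hypothetical driver, assuming a running HDFS cluster reachable through the
// default configuration and an existing file at the (made-up) path /tmp/append-test.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path target = new Path("/tmp/append-test");
    // FileSystem.append(Path) is the call at the bottom of the stack trace
    // (FileSystem.java:981); the failure is raised on the NameNode side in
    // FSNamesystem.recoverLeaseInternal and propagated back to the client.
    try (FSDataOutputStream out = fs.append(target)) {
      out.writeBytes("appended data\n");
    }
  }
}
{noformat}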

  was:
When I try to append to a file, I get:

2012-05-08 18:13:40,506 WARN  util.KerberosName (KerberosName.java:<clinit>(87)) - Kerberos krb5 configuration not found, setting default realm to empty
Exception in thread "main" java.lang.ClassCastException: org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo cannot be cast to org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoUnderConstruction
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:1787)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1584)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:1824)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:425)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:217)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:42592)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)

	at org.apache.hadoop.ipc.Client.call(Client.java:1159)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:184)
	at $Proxy9.append(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
	at java.lang.reflect.Method.invoke(Unknown Source)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
	at $Proxy9.append(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:204)
	at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1150)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1189)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1177)
	at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:221)
	at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:1)
	at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:981)
	at org.apache.hadoop.hdfs.server.datanode.DeleteMe.main(DeleteMe.java:26)



It seems that the ClassCastException does not show up in the existing unit tests.  How can this be reproduced exactly?  Could you give more details?

(Let's remove the rpc stack trace in the description.)
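
For reference, a hypothetical starting point for a reproduction using the MiniDFSCluster test harness is sketched below. It is only a guess at the create-then-append sequence the reporter used; it is not confirmed to trigger the ClassCastException, and the path and data are made up for illustration.

{noformat}
// Sketch of a reproduction attempt, assuming the MiniDFSCluster test harness.
// Not confirmed to trigger the reported ClassCastException.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class AppendRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    try {
      cluster.waitActive();
      FileSystem fs = cluster.getFileSystem();
      Path p = new Path("/append-repro");            // made-up path
      try (FSDataOutputStream out = fs.create(p)) {  // create and close the file first
        out.writeBytes("first write\n");
      }
      try (FSDataOutputStream out = fs.append(p)) {  // then re-open it for append
        out.writeBytes("appended data\n");
      }
    } finally {
      cluster.shutdown();
    }
  }
}
{noformat}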
                
> ClassCastException when trying to append a file
> -----------------------------------------------
>
>                 Key: HDFS-3385
>                 URL: https://issues.apache.org/jira/browse/HDFS-3385
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: name-node
>    Affects Versions: 3.0.0
>         Environment: HDFS
>            Reporter: Brahma Reddy Battula
>            Priority: Minor
>             Fix For: 3.0.0
>
>
> When I try to append to a file, I get:
> {noformat}
> 2012-05-08 18:13:40,506 WARN  util.KerberosName (KerberosName.java:<clinit>(87)) - Kerberos krb5 configuration not found, setting default realm to empty
> Exception in thread "main" java.lang.ClassCastException: org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo cannot be cast to org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoUnderConstruction
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:1787)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1584)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:1824)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:425)
>         ...
> 	at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1150)
> 	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1189)
> 	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1177)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:221)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:1)
> 	at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:981)
> 	at org.apache.hadoop.hdfs.server.datanode.DeleteMe.main(DeleteMe.java:26)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
