ambari-issues mailing list archives

From "Andrew Onischuk (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (AMBARI-21544) HiveServer2 fails to start with webhdfs call to create /hdp/apps/..jar files fails with org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException
Date Fri, 21 Jul 2017 09:36:00 GMT

     [ https://issues.apache.org/jira/browse/AMBARI-21544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Onischuk updated AMBARI-21544:
-------------------------------------
    Status: Patch Available  (was: Open)

> HiveServer2 fails to start with webhdfs call to create /hdp/apps/..jar files fails with org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException
> ------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: AMBARI-21544
>                 URL: https://issues.apache.org/jira/browse/AMBARI-21544
>             Project: Ambari
>          Issue Type: Bug
>            Reporter: Andrew Onischuk
>            Assignee: Andrew Onischuk
>             Fix For: 2.5.2
>
>         Attachments: AMBARI-21544.patch
>
>
> HiveServer2 fails to start because the webhdfs call that creates the /hdp/apps/..jar
> files fails with org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException.
> This is seen specifically on an HA cluster where one instance of HiveServer2
> fails to start.
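For context, the WebHDFS interaction visible in the log below boils down to two REST calls made by Ambari's HdfsResource provider: a GET with op=GETFILESTATUS to check whether the target exists, then a PUT with op=CREATE&overwrite=True to upload the jar. A minimal sketch of how such request URLs are assembled (the helper function, host, and port are illustrative assumptions, not Ambari's actual API):

```python
# Illustrative helper for building WebHDFS v1 URLs like the ones in the log.
# The function name and parameters are assumptions for this sketch only.
from urllib.parse import urlencode

def webhdfs_url(host, port, path, op, **params):
    """Assemble a WebHDFS REST URL for the given HDFS path and operation."""
    query = urlencode(dict(op=op, **params))
    return "http://%s:%s/webhdfs/v1%s?%s" % (host, port, path, query)

nn = "ctr-e134-1499953498516-16356-01-000005.hwx.site"
jar = "/hdp/apps/2.6.3.0-61/mapreduce/hadoop-streaming.jar"

# Step 1: does the file already exist?
status_url = webhdfs_url(nn, 20070, jar, "GETFILESTATUS")

# Step 2: create (and overwrite) it via PUT.
create_url = webhdfs_url(nn, 20070, jar, "CREATE",
                         overwrite="True", permission=444)
```

The overwrite=True in the second call matters here: it lets a second concurrent writer silently replace a file another client is still writing.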
> HiveServer2 start error...
>     
>     
>     
>     2017-07-18 05:27:36,795 - NameNode HA states: active_namenodes = [(u'nn2', 'ctr-e134-1499953498516-16356-01-000005.hwx.site:20070')], standby_namenodes = [(u'nn1', 'ctr-e134-1499953498516-16356-01-000004.hwx.site:20070')], unknown_namenodes = []
>     2017-07-18 05:27:36,797 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : '"'"'http://ctr-e134-1499953498516-16356-01-000005.hwx.site:20070/webhdfs/v1/hdp/apps/2.6.3.0-61/mapreduce/hadoop-streaming.jar?op=GETFILESTATUS'"'"' 1>/tmp/tmpvtBOI9 2>/tmp/tmpMbcTp1''] {'logoutput': None, 'quiet': False}
>     2017-07-18 05:27:36,885 - call returned (0, '')
>     2017-07-18 05:27:36,886 - Creating new file /hdp/apps/2.6.3.0-61/mapreduce/hadoop-streaming.jar in DFS
>     2017-07-18 05:27:36,887 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT --data-binary @/usr/hdp/2.6.3.0-61/hadoop-mapreduce/hadoop-streaming.jar -H '"'"'Content-Type: application/octet-stream'"'"' --negotiate -u : '"'"'http://ctr-e134-1499953498516-16356-01-000005.hwx.site:20070/webhdfs/v1/hdp/apps/2.6.3.0-61/mapreduce/hadoop-streaming.jar?op=CREATE&overwrite=True&permission=444'"'"' 1>/tmp/tmpqYkC_P 2>/tmp/tmpT30u8x''] {'logoutput': None, 'quiet': False}
>     2017-07-18 05:27:37,135 - call returned (0, '')
>     ....
>     self._create_file(self.main_resource.resource.target, source=self.main_resource.resource.source, mode=self.mode)
>       File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 423, in _create_file
>         self.util.run_command(target, 'CREATE', method='PUT', overwrite=True, assertable_result=False, file_to_put=source, **kwargs)
>       File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 204, in run_command
>         raise Fail(err_msg)
>     resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w '%{http_code}' -X PUT --data-binary @/usr/hdp/2.6.3.0-61/hadoop-mapreduce/hadoop-streaming.jar -H 'Content-Type: application/octet-stream' --negotiate -u : 'http://ctr-e134-1499953498516-16356-01-000005.hwx.site:20070/webhdfs/v1/hdp/apps/2.6.3.0-61/mapreduce/hadoop-streaming.jar?op=CREATE&overwrite=True&permission=444'' returned status_code=403.
>     {
>       "RemoteException": {
>         "exception": "LeaseExpiredException",
>         "javaClassName": "org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException",
>         "message": "No lease on /hdp/apps/2.6.3.0-61/mapreduce/hadoop-streaming.jar (inode 16566): File does not exist. Holder DFSClient_NONMAPREDUCE_1130121686_152 does not have any open files.\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3660)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:3463)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3301)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3261)\n\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:850)\n\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:503)\n\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)\n\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)\n\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)\n\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)\n\tat java.security.AccessController.doPrivileged(Native Method)\n\tat javax.security.auth.Subject.doAs(Subject.java:422)\n\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)\n\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2345)\n"
>      
>     
> NameNode log throws LeaseExpiredException...
>     
>     
>     
>     2017-07-18 05:27:36,980 INFO  delegation.AbstractDelegationTokenSecretManager (AbstractDelegationTokenSecretManager.java:createPassword(385)) - Creating password for identifier: HDFS_DELEGATION_TOKEN token 8 for hdfs, currentKey: 2
>     2017-07-18 05:27:37,054 INFO  delegation.AbstractDelegationTokenSecretManager (AbstractDelegationTokenSecretManager.java:createPassword(385)) - Creating password for identifier: HDFS_DELEGATION_TOKEN token 9 for hdfs, currentKey: 2
>     2017-07-18 05:27:37,118 INFO  ipc.Server (Server.java:logException(2428)) - IPC Server handler 32 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 172.27.9.200:45817 Call#2119 Retry#0: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /hdp/apps/2.6.3.0-61/mapreduce/hadoop-streaming.jar (inode 16566): File does not exist. Holder DFSClient_NONMAPREDUCE_1130121686_152 does not have any open files.
>     2017-07-18 05:27:37,152 INFO  hdfs.StateChange (FSNamesystem.java:logAllocatedBlock(3831)) - BLOCK* allocate blk_1073741851_1027, replicas=172.27.9.200:1019, 172.27.12.200:1019, 172.27.24.212:1019 for /hdp/apps/2.6.3.0-61/mapreduce/hadoop-streaming.jar
>     2017-07-18 05:27:37,227 INFO  hdfs.StateChange (FSNamesystem.java:completeFile(3724)) - DIR* completeFile: /hdp/apps/2.6.3.0-61/mapreduce/hadoop-streaming.jar is closed by DFSClient_NONMAPREDUCE_-1879489015_153
>     2017-07-18 05:27:39,523 INFO  BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1648)) - BLOCK* neededReplications = 0, pendingReplications = 0.
>     
> This is not specific to the hadoop-streaming.jar file creation; on another
> cluster the failure occurs while creating the /hdp/apps/2.6.3.0-61/pig/pig.tar.gz file...
>     
>     
>     
>     2017-07-18 05:31:50,608 INFO  BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1648)) - BLOCK* neededReplications = 0, pendingReplications = 0.
>     2017-07-18 05:31:50,685 INFO  delegation.AbstractDelegationTokenSecretManager (AbstractDelegationTokenSecretManager.java:createPassword(385)) - Creating password for identifier: HDFS_DELEGATION_TOKEN token 5 for hdfs, currentKey: 2
>     2017-07-18 05:31:50,690 INFO  hdfs.StateChange (FSNamesystem.java:logAllocatedBlock(3831)) - BLOCK* allocate blk_1073741848_1024, replicas=172.27.18.201:1019, 172.27.19.4:1019, 172.27.52.76:1019 for /hdp/apps/2.6.3.0-61/pig/pig.tar.gz
>     2017-07-18 05:31:51,228 INFO  hdfs.StateChange (FSNamesystem.java:logAllocatedBlock(3831)) - BLOCK* allocate blk_1073741849_1025, replicas=172.27.19.4:1019, 172.27.17.134:1019, 172.27.52.76:1019 for /hdp/apps/2.6.3.0-61/pig/pig.tar.gz
>     2017-07-18 05:31:51,298 INFO  ipc.Server (Server.java:logException(2428)) - IPC Server handler 23 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 172.27.18.201:36652 Call#1959 Retry#0: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /hdp/apps/2.6.3.0-61/pig/pig.tar.gz (inode 16561): File does not exist. Holder DFSClient_NONMAPREDUCE_1849462310_141 does not have any open files.
>     2017-07-18 05:31:51,800 INFO  hdfs.StateChange (FSNamesystem.java:logAllocatedBlock(3831)) - BLOCK* allocate blk_1073741850_1026, replicas=172.27.19.4:1019, 172.27.52.76:1019, 172.27.18.201:1019 for /hdp/apps/2.6.3.0-61/pig/pig.tar.gz
>     2017-07-18 05:31:51,823 INFO  hdfs.StateChange (FSNamesystem.java:completeFile(3724)) - DIR* completeFile: /hdp/apps/2.6.3.0-61/pig/pig.tar.gz is closed by DFSClient_NONMAPREDUCE_307451118_147
>     
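The two delegation tokens and the two distinct DFSClient holder names in the NameNode logs above suggest two concurrent writers racing on the same path: a second CREATE with overwrite=True recreates the file under a new lease holder, so the first writer's next addBlock fails FSNamesystem.checkLease. A toy model of that NameNode-side lease check (class and method names are illustrative, not HDFS's implementation):

```python
# Toy model of per-file lease bookkeeping on the NameNode, illustrating the
# race seen in the logs. All names here are illustrative, not real HDFS code.
class LeaseExpiredException(Exception):
    pass

class ToyNameNode:
    def __init__(self):
        self.leases = {}  # path -> current lease holder

    def create(self, path, holder, overwrite=False):
        # CREATE with overwrite=True silently replaces the existing file,
        # handing the lease to the new holder.
        if path in self.leases and not overwrite:
            raise IOError("file exists: %s" % path)
        self.leases[path] = holder

    def add_block(self, path, holder):
        # Before allocating a block, check the caller still holds the lease
        # (cf. FSNamesystem.checkLease in the stack trace above).
        if self.leases.get(path) != holder:
            raise LeaseExpiredException(
                "No lease on %s: Holder %s does not have any open files."
                % (path, holder))
        return "blk_allocated_for_%s" % holder

nn = ToyNameNode()
path = "/hdp/apps/x/pig.tar.gz"  # illustrative path

nn.create(path, "DFSClient_A")                  # first HiveServer2 instance
nn.create(path, "DFSClient_B", overwrite=True)  # second instance overwrites

try:
    nn.add_block(path, "DFSClient_A")           # first writer's lease is gone
    raced = False
except LeaseExpiredException:
    raced = True
```

Under this reading, only one of the two concurrent writers fails, which matches the logs: one addBlock hits LeaseExpiredException while the other client's completeFile succeeds moments later.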



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
