hadoop-common-issues mailing list archives

From "Colin Patrick McCabe (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HADOOP-10780) hadoop_user_info_alloc fails on FreeBSD due to incorrect sysconf use
Date Mon, 14 Jul 2014 17:50:04 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-10780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-10780:
------------------------------------------

    Summary: hadoop_user_info_alloc fails on FreeBSD due to incorrect sysconf use  (was: namenode throws java.lang.OutOfMemoryError upon DatanodeProtocol.versionRequest from datanode)

> hadoop_user_info_alloc fails on FreeBSD due to incorrect sysconf use
> --------------------------------------------------------------------
>
>                 Key: HADOOP-10780
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10780
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 2.4.1
>         Environment: FreeBSD-10/stable
> openjdk version "1.7.0_60"
> OpenJDK Runtime Environment (build 1.7.0_60-b19)
> OpenJDK 64-Bit Server VM (build 24.60-b09, mixed mode)
>            Reporter: Dmitry Sivachenko
>         Attachments: buf_sz.patch
>
>
> I am trying hadoop-2.4.1 on FreeBSD-10/stable.
> The namenode starts up, but after the first datanode contacts it, it throws an exception.
> All limits seem to be high enough:
> % limits -a
> Resource limits (current):
>   cputime              infinity secs
>   filesize             infinity kB
>   datasize             33554432 kB
>   stacksize              524288 kB
>   coredumpsize         infinity kB
>   memoryuse            infinity kB
>   memorylocked         infinity kB
>   maxprocesses           122778
>   openfiles              140000
>   sbsize               infinity bytes
>   vmemoryuse           infinity kB
>   pseudo-terminals     infinity
>   swapuse              infinity kB
> 14944  1  S        0:06.59 /usr/local/openjdk7/bin/java -Dproc_namenode -Xmx1000m -Dhadoop.log.dir=/var/log/hadoop
> -Dhadoop.log.file=hadoop-hdfs-namenode-nezabudka3-00.log -Dhadoop.home.dir=/usr/local -Dhadoop.id.str=hdfs
> -Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true
> -Xmx32768m -Xms32768m -Djava.library.path=/usr/local/lib -Xmx32768m -Xms32768m -Djava.library.path=/usr/local/lib
> -Xmx32768m -Xms32768m -Djava.library.path=/usr/local/lib -Dhadoop.security.logger=INFO,RFAS
> org.apache.hadoop.hdfs.server.namenode.NameNode
> From the namenode's log:
> 2014-07-03 23:28:15,070 WARN  [IPC Server handler 5 on 8020] ipc.Server (Server.java:run(2032))
> - IPC Server handler 5 on 8020, call org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol.versionRequest from 5.255.231.209:57749 Call#842 Retry#0
> java.lang.OutOfMemoryError
>         at org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupsForUser(Native Method)
>         at org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroups(JniBasedUnixGroupsMapping.java:80)
>         at org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
>         at org.apache.hadoop.security.Groups.getGroups(Groups.java:139)
>         at org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1417)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.<init>(FSPermissionChecker.java:81)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getPermissionChecker(FSNamesystem.java:3331)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkSuperuserPrivilege(FSNamesystem.java:5491)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.versionRequest(NameNodeRpcServer.java:1082)
>         at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.versionRequest(DatanodeProtocolServerSideTranslatorPB.java:234)
>         at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:28069)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
> I did not have such an issue with hadoop-1.2.1.
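
The updated summary points at the root cause: the native helper hadoop_user_info_alloc sizes its getpwnam_r buffer from sysconf(_SC_GETPW_R_SIZE_MAX), and on FreeBSD that call can return -1 (meaning "no fixed limit"), so the buffer allocation fails and surfaces in Java as the OutOfMemoryError in the stack trace above. A minimal C sketch of the defensive pattern, assuming a fallback size plus a grow-on-ERANGE retry loop (the function name lookup_uid and the DEFAULT_PWBUF_SIZE constant are illustrative, not taken from the attached buf_sz.patch):

```c
#include <errno.h>
#include <pwd.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

/* Fallback when sysconf() reports no fixed limit. */
#define DEFAULT_PWBUF_SIZE 1024

/* Resolve a user name to a uid, sizing the getpwnam_r buffer safely. */
int lookup_uid(const char *name, uid_t *uid_out)
{
    long conf = sysconf(_SC_GETPW_R_SIZE_MAX);
    /* On FreeBSD sysconf(_SC_GETPW_R_SIZE_MAX) may return -1 ("no fixed
     * limit"); using that value directly as a buffer size is the bug. */
    size_t buf_sz = (conf > 0) ? (size_t)conf : DEFAULT_PWBUF_SIZE;
    char *buf = malloc(buf_sz);
    if (buf == NULL)
        return ENOMEM;
    for (;;) {
        struct passwd pwd;
        struct passwd *result = NULL;
        int err = getpwnam_r(name, &pwd, buf, buf_sz, &result);
        if (err == ERANGE) {               /* buffer too small: double and retry */
            size_t new_sz = buf_sz * 2;
            char *nbuf = realloc(buf, new_sz);
            if (nbuf == NULL) {
                free(buf);
                return ENOMEM;
            }
            buf = nbuf;
            buf_sz = new_sz;
            continue;
        }
        if (err != 0 || result == NULL) {  /* lookup error or unknown user */
            free(buf);
            return (err != 0) ? err : ENOENT;
        }
        *uid_out = pwd.pw_uid;
        free(buf);
        return 0;
    }
}
```

The retry loop matters even where sysconf returns a value, since that value is only a suggested starting size and a user with many groups or long GECOS data can still overflow it.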



--
This message was sent by Atlassian JIRA
(v6.2#6252)
