Date: Mon, 14 Jul 2014 17:50:04 +0000 (UTC)
From: "Colin Patrick McCabe (JIRA)"
To: common-issues@hadoop.apache.org
Reply-To: common-issues@hadoop.apache.org
Subject: [jira] [Updated] (HADOOP-10780) hadoop_user_info_alloc fails on FreeBSD due to incorrect sysconf use

     [ https://issues.apache.org/jira/browse/HADOOP-10780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-10780:
------------------------------------------
    Summary: hadoop_user_info_alloc fails on FreeBSD due to incorrect sysconf use  (was: namenode throws java.lang.OutOfMemoryError upon DatanodeProtocol.versionRequest from datanode)

> hadoop_user_info_alloc fails on FreeBSD due to incorrect sysconf use
> --------------------------------------------------------------------
>
>                 Key: HADOOP-10780
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10780
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 2.4.1
>         Environment: FreeBSD-10/stable
>                      openjdk version "1.7.0_60"
>                      OpenJDK Runtime Environment (build 1.7.0_60-b19)
>                      OpenJDK 64-Bit Server VM (build 24.60-b09, mixed mode)
>            Reporter: Dmitry Sivachenko
>         Attachments: buf_sz.patch
>
> I am trying hadoop-2.4.1 on FreeBSD-10/stable.
> The namenode starts up, but after the first datanode contacts it, it throws an exception.
> All limits seem to be high enough:
> % limits -a
> Resource limits (current):
>   cputime              infinity secs
>   filesize             infinity kB
>   datasize             33554432 kB
>   stacksize            524288 kB
>   coredumpsize         infinity kB
>   memoryuse            infinity kB
>   memorylocked         infinity kB
>   maxprocesses         122778
>   openfiles            140000
>   sbsize               infinity bytes
>   vmemoryuse           infinity kB
>   pseudo-terminals     infinity
>   swapuse              infinity kB
>
> 14944 1 S 0:06.59 /usr/local/openjdk7/bin/java -Dproc_namenode -Xmx1000m -Dhadoop.log.dir=/var/log/hadoop -Dhadoop.log.file=hadoop-hdfs-namenode-nezabudka3-00.log -Dhadoop.home.dir=/usr/local -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx32768m -Xms32768m -Djava.library.path=/usr/local/lib -Xmx32768m -Xms32768m -Djava.library.path=/usr/local/lib -Xmx32768m -Xms32768m -Djava.library.path=/usr/local/lib -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.namenode.NameNode
>
> From the namenode's log:
> 2014-07-03 23:28:15,070 WARN [IPC Server handler 5 on 8020] ipc.Server (Server.java:run(2032)) - IPC Server handler 5 on 8020, call org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol.versionRequest from 5.255.231.209:57749 Call#842 Retry#0
> java.lang.OutOfMemoryError
>         at org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupsForUser(Native Method)
>         at org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroups(JniBasedUnixGroupsMapping.java:80)
>         at org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
>         at org.apache.hadoop.security.Groups.getGroups(Groups.java:139)
>         at org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1417)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.<init>(FSPermissionChecker.java:81)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getPermissionChecker(FSNamesystem.java:3331)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkSuperuserPrivilege(FSNamesystem.java:5491)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.versionRequest(NameNodeRpcServer.java:1082)
>         at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.versionRequest(DatanodeProtocolServerSideTranslatorPB.java:234)
>         at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:28069)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
>
> I did not have such an issue with hadoop-1.2.1.

--
This message was sent by Atlassian JIRA
(v6.2#6252)