Date: Sun, 3 Nov 2013 11:12:17 +0000 (UTC)
From: "Hudson (JIRA)"
To: common-issues@hadoop.apache.org
Subject: [jira] [Commented] (HADOOP-9478) Fix race conditions during the initialization of Configuration related to deprecatedKeyMap

    [ https://issues.apache.org/jira/browse/HADOOP-9478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13812331#comment-13812331 ]

Hudson commented on HADOOP-9478:
--------------------------------

SUCCESS: Integrated in Hadoop-Yarn-trunk #381 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/381/])
HADOOP-9478.
Fix race conditions during the initialization of Configuration related to deprecatedKeyMap (cmccabe) (cmccabe: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1538248)

* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigurationDeprecation.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HdfsConfiguration.java
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/ConfigUtil.java
* /hadoop/common/trunk/hadoop-tools/hadoop-extras/src/main/java/org/apache/hadoop/tools/Logalyzer.java
* /hadoop/common/trunk/hadoop-tools/hadoop-gridmix/src/main/java/org/apache/hadoop/mapred/gridmix/DistributedCacheEmulator.java

> Fix race conditions during the initialization of Configuration related to deprecatedKeyMap
> ------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-9478
>                 URL: https://issues.apache.org/jira/browse/HADOOP-9478
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: conf
>    Affects Versions: 2.0.0-alpha
>        Environment: OS: CentOS release 6.3 (Final)
>                     JDK: java version "1.6.0_27"
>                     Java(TM) SE Runtime Environment (build 1.6.0_27-b07)
>                     Java HotSpot(TM) 64-Bit Server VM (build 20.2-b06, mixed mode)
>                     Hadoop: hadoop-2.0.0-cdh4.1.3/hadoop-2.0.0-cdh4.2.0
>                     Security: Kerberos
>            Reporter: Dongyong Wang
>            Assignee: Colin Patrick McCabe
>             Fix For: 2.2.1
>
>         Attachments: HADOOP-9478.001.patch, HADOOP-9478.002.patch, HADOOP-9478.003.patch, HADOOP-9478.004.patch, HADOOP-9478.005.patch, hadoop-9478-1.patch, hadoop-9478-2.patch
>
>
> When we launch a client application that uses Kerberos security, the FileSystem cannot be created
because 'java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.security.SecurityUtil' is thrown.
> I checked the exception stack trace; it appears to be caused by an unsafe get operation on the deprecatedKeyMap used by org.apache.hadoop.conf.Configuration.
> So I wrote a simple test case:
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.hdfs.HdfsConfiguration;
>
> public class HTest {
>     public static void main(String[] args) throws Exception {
>         Configuration conf = new Configuration();
>         conf.addResource("core-site.xml");
>         conf.addResource("hdfs-site.xml");
>         FileSystem fileSystem = FileSystem.get(conf);
>         System.out.println(fileSystem);
>         System.exit(0);
>     }
> }
>
> Then I launched this test case many times, and the following exceptions were thrown:
>
> Exception in thread "TGT Renewer for XXX" java.lang.ExceptionInInitializerError
>     at org.apache.hadoop.security.UserGroupInformation.getTGT(UserGroupInformation.java:719)
>     at org.apache.hadoop.security.UserGroupInformation.access$1100(UserGroupInformation.java:77)
>     at org.apache.hadoop.security.UserGroupInformation$1.run(UserGroupInformation.java:746)
>     at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 16
>     at java.util.HashMap.getEntry(HashMap.java:345)
>     at java.util.HashMap.containsKey(HashMap.java:335)
>     at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1989)
>     at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1867)
>     at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1785)
>     at org.apache.hadoop.conf.Configuration.get(Configuration.java:712)
>     at org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:731)
>     at org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1047)
>     at org.apache.hadoop.security.SecurityUtil.<clinit>(SecurityUtil.java:76)
>     ... 4 more
>
> Exception in thread "main" java.io.IOException: Couldn't create proxy provider class org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
>     at org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:453)
>     at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:133)
>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:436)
>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:403)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:125)
>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2262)
>     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:86)
>     at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2296)
>     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2278)
>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:316)
>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:162)
>     at HTest.main(HTest.java:11)
> Caused by: java.lang.reflect.InvocationTargetException
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>     at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>     at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>     at org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:442)
>     ... 11 more
> Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.security.SecurityUtil
>     at org.apache.hadoop.net.NetUtils.createSocketAddrForHost(NetUtils.java:231)
>     at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:211)
>     at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:159)
>     at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:148)
>     at org.apache.hadoop.hdfs.DFSUtil.getAddressesForNameserviceId(DFSUtil.java:452)
>     at org.apache.hadoop.hdfs.DFSUtil.getAddresses(DFSUtil.java:434)
>     at org.apache.hadoop.hdfs.DFSUtil.getHaNnRpcAddresses(DFSUtil.java:496)
>     at org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.<init>(ConfiguredFailoverProxyProvider.java:88)
>     ... 16 more
>
> When a HashMap is used in a multi-threaded environment, it is not enough to synchronize only the put operations; the get operations (e.g. containsKey) must be synchronized too.
> A simple workaround is to trigger the initialization of SecurityUtil before creating the FileSystem, but I think reads of deprecatedKeyMap should be synchronized as well.
> Thanks.

--
This message was sent by Atlassian JIRA
(v6.1#6144)
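Editor's note: the hazard described above comes from reading a plain java.util.HashMap (containsKey) while another thread may still be populating it, which can observe the table mid-resize and throw ArrayIndexOutOfBoundsException. One way to make such a deprecation registry safe is to back it with java.util.concurrent.ConcurrentHashMap, whose reads are safe to interleave with writes. The sketch below is a minimal, hypothetical illustration of that approach; the class and method names are invented, and this is not the actual HADOOP-9478 patch (which restructured the map's initialization in Configuration itself).

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a registry like Configuration's deprecatedKeyMap,
// backed by ConcurrentHashMap so concurrent reads cannot observe a HashMap
// in the middle of a resize.
public class DeprecationRegistry {
    // ConcurrentHashMap allows lock-free reads to run alongside writes.
    private static final Map<String, String> deprecatedKeyMap =
            new ConcurrentHashMap<>();

    public static void addDeprecation(String oldKey, String newKey) {
        deprecatedKeyMap.put(oldKey, newKey);
    }

    public static boolean isDeprecated(String key) {
        // Safe even while another thread is adding entries; with a plain
        // HashMap this read could crash during a concurrent resize.
        return deprecatedKeyMap.containsKey(key);
    }

    public static String lookup(String key) {
        String mapped = deprecatedKeyMap.get(key);
        return mapped != null ? mapped : key;
    }

    public static void main(String[] args) throws Exception {
        // One thread populates the map while the main thread reads it,
        // mimicking the initialization race in the report above.
        Thread writer = new Thread(() -> {
            for (int i = 0; i < 10_000; i++) {
                addDeprecation("old.key." + i, "new.key." + i);
            }
        });
        writer.start();
        for (int i = 0; i < 10_000; i++) {
            isDeprecated("old.key." + i);  // concurrent reads, no crash
        }
        writer.join();
        System.out.println(lookup("old.key.42"));  // prints "new.key.42"
    }
}
```

With the original HashMap, the same interleaving intermittently reproduces the ArrayIndexOutOfBoundsException from the stack trace; synchronizing both puts and gets (or using a concurrent map, as here) removes the race.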