hadoop-hive-dev mailing list archives

From "Carl Steinbach (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HIVE-1411) DataNucleus throws NucleusException if core-3.1.1 JAR appears more than once on CLASSPATH
Date Thu, 22 Jul 2010 23:02:52 GMT

    [ https://issues.apache.org/jira/browse/HIVE-1411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12891385#action_12891385 ]

Carl Steinbach commented on HIVE-1411:
--------------------------------------

Here's some more background on what is happening:

When DataNucleus starts up it [scans every class that it finds on the CLASSPATH|http://www.datanucleus.org/extensions/plugins.html]
looking for [OSGi format plugins|http://www.ibm.com/developerworks/opensource/library/os-ecl-osgi/].
Hadoop core happens to depend on Eclipse's core-3.1.1.jar (which contains OSGi format plugins),
so this JAR gets included in Hive's CLASSPATH since bin/hive delegates execution to bin/hadoop.
If a user installed Hadoop from a tarball and set HADOOP_HOME to the install directory we
are all good, but if the user also ran 'ant' in $HADOOP_HOME we're in a world of pain, since
that leaves two copies of core-3.1.1.jar (the original copy in $HADOOP_HOME/lib and another
copy in $HADOOP_HOME/build/ivy/lib/Hadoop/common/) and bin/hadoop stupidly adds both JARs
to the CLASSPATH.
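For anyone who wants to check whether they are in this situation, duplicate JAR basenames on a colon-separated CLASSPATH are easy to spot from the shell. This is just an illustrative sketch (the sample CLASSPATH value below is made up; substitute whatever CLASSPATH your bin/hadoop actually constructs):

```shell
# Sketch: detect duplicate JAR basenames on a colon-separated CLASSPATH.
# The CP value here is illustrative, not from a real installation.
CP="/opt/hadoop/lib/core-3.1.1.jar:/opt/hadoop/build/ivy/lib/Hadoop/common/core-3.1.1.jar:/opt/hadoop/lib/log4j-1.2.15.jar"

# Split on ':', reduce each entry to its basename, print names that occur
# more than once.
echo "$CP" | tr ':' '\n' | xargs -n1 basename | sort | uniq -d
# prints: core-3.1.1.jar
```

Any JAR name this prints appears at least twice on the CLASSPATH; with the tarball-plus-ant layout described above, core-3.1.1.jar shows up in the output.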

I want to highlight the following points:

 # Duplicate CLASSPATH entries are a fact of life with Hadoop and Hive due to the screwed
up CLASSPATH construction code in bin/hadoop and bin/hive.
 # The duplicate CLASSPATH check that DataNucleus enforces by default only applies to JARs
that contain OSGi plugins, which in our case seems to mean only the core-3.1.1 JAR.
 # The patch that I supplied causes this harmless condition to result in a LOG message instead
of a fatal exception. I think this is preferable.
 # This exception is easy to reproduce on 0.5.0, but the upgrade to datanucleus-2.3 in HIVE-1176
seems to have made it unreproducible on trunk. I think this indicates that there is a bug
in datanucleus-2.3, since it no longer honors the contract set by datanucleus.plugin.pluginRegistryBundleCheck
(the default value is still EXCEPTION). I recommend that we commit this patch anyway lest
we encounter this bug in the future after upgrading to a newer version of datanucleus that
fixes this problem.
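As a possible workaround until a fix ships, the check can be relaxed through configuration. This is a sketch that assumes the metastore passes datanucleus.* properties from hive-site.xml through to the PersistenceManagerFactory; the property name and its documented values (EXCEPTION, LOG, NONE) come from DataNucleus:

{code}
<!-- hive-site.xml (sketch): downgrade DataNucleus's duplicate-bundle check
     from the default EXCEPTION to a logged warning -->
<property>
  <name>datanucleus.plugin.pluginRegistryBundleCheck</name>
  <value>LOG</value>
</property>
{code}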



> DataNucleus throws NucleusException if core-3.1.1 JAR appears more than once on CLASSPATH
> -----------------------------------------------------------------------------------------
>
>                 Key: HIVE-1411
>                 URL: https://issues.apache.org/jira/browse/HIVE-1411
>             Project: Hadoop Hive
>          Issue Type: Bug
>          Components: Metastore
>    Affects Versions: 0.4.0, 0.4.1, 0.5.0
>            Reporter: Carl Steinbach
>            Assignee: Carl Steinbach
>             Fix For: 0.6.0, 0.7.0
>
>         Attachments: HIVE-1411.patch.txt
>
>
> DataNucleus barfs when the core-3.1.1 JAR file appears more than once on the CLASSPATH:
> {code}
> 2010-03-06 12:33:25,565 ERROR exec.DDLTask (SessionState.java:printError(279)) - FAILED: Error in metadata: javax.jdo.JDOFatalInternalException: Unexpected exception caught. 
> NestedThrowables: 
> java.lang.reflect.InvocationTargetException 
> org.apache.hadoop.hive.ql.metadata.HiveException: javax.jdo.JDOFatalInternalException: Unexpected exception caught. 
> NestedThrowables: 
> java.lang.reflect.InvocationTargetException 
> at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:258) 
> at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:879) 
> at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:103) 
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:379) 
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:285) 
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123) 
> at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181) 
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:287) 
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) 
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597) 
> at org.apache.hadoop.util.RunJar.main(RunJar.java:156) 
> Caused by: javax.jdo.JDOFatalInternalException: Unexpected exception caught. 
> NestedThrowables: 
> java.lang.reflect.InvocationTargetException 
> at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1186)
> at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803) 
> at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698) 
> at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:164) 
> at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:181)
> at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:125) 
> at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:104) 
> at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62) 
> at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117) 
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:130)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:146)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:118)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:100)
> at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:74)
> at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:783) 
> at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:794) 
> at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:252) 
> ... 12 more 
> Caused by: java.lang.reflect.InvocationTargetException 
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) 
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597) 
> at javax.jdo.JDOHelper$16.run(JDOHelper.java:1956) 
> at java.security.AccessController.doPrivileged(Native Method) 
> at javax.jdo.JDOHelper.invoke(JDOHelper.java:1951) 
> at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
> ... 28 more 
> Caused by: org.datanucleus.exceptions.NucleusException: Plugin (Bundle) "org.eclipse.jdt.core" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/Users/hadop/hadoop-0.20.1+152/build/ivy/lib/Hadoop/common/core-3.1.1.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/Users/hadop/hadoop-0.20.1+152/lib/core-3.1.1.jar." 
> at org.datanucleus.plugin.NonManagedPluginRegistry.registerBundle(NonManagedPluginRegistry.java:437)
> at org.datanucleus.plugin.NonManagedPluginRegistry.registerBundle(NonManagedPluginRegistry.java:343)
> at org.datanucleus.plugin.NonManagedPluginRegistry.registerExtensions(NonManagedPluginRegistry.java:227)
> at org.datanucleus.plugin.NonManagedPluginRegistry.registerExtensionPoints(NonManagedPluginRegistry.java:159)
> at org.datanucleus.plugin.PluginManager.registerExtensionPoints(PluginManager.java:82)
> at org.datanucleus.OMFContext.<init>(OMFContext.java:164) 
> at org.datanucleus.OMFContext.<init>(OMFContext.java:145) 
> at org.datanucleus.ObjectManagerFactoryImpl.initialiseOMFContext(ObjectManagerFactoryImpl.java:143)
> at org.datanucleus.jdo.JDOPersistenceManagerFactory.initialiseProperties(JDOPersistenceManagerFactory.java:317)
> at org.datanucleus.jdo.JDOPersistenceManagerFactory.(JDOPersistenceManagerFactory.java:261)
> at org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:174)
> ... 36 more 
> 2010-03-06 12:33:25,575 ERROR ql.Driver (SessionState.java:printError(279)) - FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask 
> 2010-03-06 12:42:30,457 ERROR exec.DDLTask (SessionState.java:printError(279)) - FAILED: Error in metadata: javax.jdo.JDOFatalInternalException: Unexpected exception caught. 
> NestedThrowables: 
> java.lang.reflect.InvocationTargetException 
> org.apache.hadoop.hive.ql.metadata.HiveException: javax.jdo.JDOFatalInternalException: Unexpected exception caught. 
> NestedThrowables:
> {code}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

