hadoop-common-user mailing list archives

From: Steve Loughran <ste...@apache.org>
Subject: Re: JNI and calling Hadoop jar files
Date: Mon, 30 Mar 2009 11:24:12 GMT
jason hadoop wrote:
> The exception's reference to *org.apache.hadoop.hdfs.DistributedFileSystem*
> strongly implies that a hadoop-default.xml file, or at least a job.xml file,
> is present.
> Since hadoop-default.xml is bundled into the hadoop-0.X.Y-core.jar, the
> assumption is that the core jar is available.
> The ClassNotFoundException, on the other hand, implies that the
> hadoop-0.X.Y-core.jar is not available to JNI.
> 
> Given the above constraints, the two likely possibilities are that the -core
> jar is unavailable or damaged, or that somehow the classloader being used
> does not have access to the -core jar.
> 
> A possible reason for the jar not being available is that the application is
> running on a different machine, or as a different user, and the jar is not
> actually present, or perhaps not readable, in the expected location.
> 
> Which direction is your JNI: a Java application calling into a native shared
> library, or a native application calling into a JVM that it instantiates via
> libjvm calls?
> 
> Could you dump the classpath that is in effect before your failing JNI call?
> Use System.getProperty("java.class.path"), and for that matter
> "java.library.path", or getenv("CLASSPATH"),
> and provide an ls -l of the core jar from the classpath, run as the user
> that owns the process, on the machine that the process is running on.
> 
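
A minimal sketch of the checks Jason describes, assuming the equivalent lines
can be run in the same JVM just before the failing JNI call (the class name
ClasspathDump is a placeholder, not something from the thread):

  import java.net.URL;

  public class ClasspathDump {
      public static void main(String[] args) {
          // The classpath and native library path this JVM is actually using.
          System.out.println("java.class.path   = " + System.getProperty("java.class.path"));
          System.out.println("java.library.path = " + System.getProperty("java.library.path"));
          // The environment variable can differ from the effective classpath,
          // particularly when the JVM was created from native code via libjvm.
          System.out.println("CLASSPATH (env)   = " + System.getenv("CLASSPATH"));

          // Ask the classloader where (if anywhere) it finds the core jar's
          // contents; null means the jar is not visible to this loader.
          ClassLoader cl = ClasspathDump.class.getClassLoader();
          URL conf = cl.getResource("hadoop-default.xml");
          URL cls  = cl.getResource("org/apache/hadoop/hdfs/DistributedFileSystem.class");
          System.out.println("hadoop-default.xml -> " + conf);
          System.out.println("DistributedFileSystem.class -> " + cls);
      }
  }

If both resource lookups come back null while the jar appears in the CLASSPATH
environment variable, that points at the classloader-visibility possibility
rather than a damaged jar.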

Or something bad is happening with a dependent library of the filesystem
that is causing the reflection-based load to fail, with the root
cause being lost in the process. Sometimes putting an explicit reference
to the class you are trying to load is a good way to force the problem
to surface earlier and fail with better error messages.
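
A minimal sketch of that explicit-reference trick (the surrounding class name
is a placeholder; only the Hadoop class name comes from the thread):

  public class ExplicitLoadCheck {
      public static void main(String[] args) throws Exception {
          // A compile-time reference: if the class cannot be loaded, this
          // line throws NoClassDefFoundError with a full stack trace at the
          // point of failure.
          Class<?> direct = org.apache.hadoop.hdfs.DistributedFileSystem.class;
          System.out.println("Loaded directly: " + direct);

          // An explicit Class.forName: this also initializes the class, so a
          // failure surfaces here as ClassNotFoundException or
          // ExceptionInInitializerError with the real root cause attached,
          // rather than being wrapped and lost inside a reflective factory
          // deeper in the stack.
          Class<?> byName = Class.forName("org.apache.hadoop.hdfs.DistributedFileSystem");
          System.out.println("Loaded by name: " + byName);
      }
  }

Either line fails fast, before any Hadoop configuration machinery runs, so a
broken or missing dependency shows up as the first error rather than as a lost
root cause.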
