From: dlmarion <dlmarion@hotmail.com>
To: user@hadoop.apache.org
Subject: RE: Which Hadoop 2.x .jars are necessary for Apache Commons VFS HDFS access?
Date: Fri, 11 Apr 2014 21:02:54 -0400

If memory serves me, it's in the hadoop-hdfs.jar file.
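
If you want to confirm that from the client side, a quick sanity check (nothing to do with VFS itself; the class name is where DistributedFileSystem lives in the Hadoop 2.x sources) would be something like:

public class HdfsJarCheck {
    public static void main(String[] args) throws Exception {
        // Throws ClassNotFoundException when hadoop-hdfs is missing from the classpath.
        Class.forName("org.apache.hadoop.hdfs.DistributedFileSystem");
        System.out.println("hadoop-hdfs is on the classpath");
    }
}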


Sent via the Samsung GALAXY S®4, an AT&T 4G LTE smartphone


-------- Original message --------
From: Roger Whitcomb
Date: 04/11/2014 8:37 PM (GMT-05:00)
To: user@hadoop.apache.org
Subject: RE: Which Hadoop 2.x .jars are necessary for Apache Commons VFS HDFS access?

Hi Dave,

Thanks for the responses. I guess I have a small question then: what exact class(es) would it be looking for that it can't find? I have all the .jar files I mentioned below on the classpath, and it is loading and executing stuff in the "org.apache.hadoop.fs.FileSystem" class (according to the stack trace below), so .... there are implementing classes I would guess, so what .jar file would they be in?


Thanks,

~Roger



From: david marion <dlmarion@hotmail.com>
Sent: Friday, April 11, 2014 4:55 PM
To: user@hadoop.apache.org
Subject: RE: Which Hadoop 2.x .jars are necessary for Apache Commons VFS HDFS access?

Also, make sure that the jars on the classpath actually contain the HDFS file system. I'm looking at:

No FileSystem for scheme: hdfs

which is an indicator for this condition.
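
A quick way to test that, assuming you can run a small snippet against the exact same classpath (this is only a sketch; getFileSystemClass is the same lookup that is failing in your stack trace):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class SchemeCheck {
    public static void main(String[] args) throws Exception {
        // Prints the FileSystem implementation registered for "hdfs",
        // or throws "No FileSystem for scheme: hdfs" if none is visible.
        System.out.println(FileSystem.getFileSystemClass("hdfs", new Configuration()));
    }
}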

Dave


From: dlmarion@hotmail.com
To: user@hadoop.apache.org
Subject: RE: Which Hadoop 2.x .jars are necessary for Apache Commons VFS HDFS access?
Date: Fri, 11 Apr 2014 23:48:48 +0000

Hi Roger,

I wrote the HDFS provider for Commons VFS. I went back and looked at the source and tests, and I don't see anything wrong with what you are doing. I did develop it against Hadoop 1.1.2 at the time, so there might be an issue that is not accounted for with Hadoop 2. It was also not tested with security turned on. Are you using security?

Dave

> From: Roger.Whitcomb@actian.com
> To: user@hadoop.apache.org
> Subject: Which Hadoop 2.x .jars are necessary for Apache Commons VFS HDFS access?
> Date: Fri, 11 Apr 2014 20:20:06 +0000
>
> Hi,
> I'm fairly new to Hadoop, but not to Apache, and I'm having a newbie kind of issue browsing HDFS files. I have written an Apache Commons VFS (Virtual File System) browser for the Apache Pivot GUI framework (I'm the PMC Chair for Pivot: full disclosure). And now I'm trying to get this browser to work with HDFS to do HDFS browsing from our application. I'm running into a problem, which seems sort of basic, so I thought I'd ask here...
>
> So, I downloaded Hadoop 2.3.0 from one of the mirrors, and was able to track down sort of the minimum set of .jars necessary to at least (try to) connect using Commons VFS 2.1:
> commons-collections-3.2.1.jar
> commons-configuration-1.6.jar
> commons-lang-2.6.jar
> commons-vfs2-2.1-SNAPSHOT.jar
> guava-11.0.2.jar
> hadoop-auth-2.3.0.jar
> hadoop-common-2.3.0.jar
> log4j-1.2.17.jar
> slf4j-api-1.7.5.jar
> slf4j-log4j12-1.7.5.jar
>
> What's happening now is that I instantiated the HdfsProvider this way:
> private static DefaultFileSystemManager manager = null;
>
> static
> {
>     manager = new DefaultFileSystemManager();
>     try {
>         manager.setFilesCache(new DefaultFilesCache());
>         manager.addProvider("hdfs", new HdfsFileProvider());
>         manager.setFileContentInfoFactory(new FileContentInfoFilenameFactory());
>         manager.setFilesCache(new SoftRefFilesCache());
>         manager.setReplicator(new DefaultFileReplicator());
>         manager.setCacheStrategy(CacheStrategy.ON_RESOLVE);
>         manager.init();
>     }
>     catch (final FileSystemException e) {
>         throw new RuntimeException(Intl.getString("object#manager.setupError"), e);
>     }
> }
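
Side note on that block: both setFilesCache() calls happen before init(), so the SoftRefFilesCache simply replaces the DefaultFilesCache set a few lines earlier, and the first call looks redundant. A trimmed-down equivalent, strictly as a sketch:

import org.apache.commons.vfs2.CacheStrategy;
import org.apache.commons.vfs2.FileSystemException;
import org.apache.commons.vfs2.cache.SoftRefFilesCache;
import org.apache.commons.vfs2.impl.DefaultFileReplicator;
import org.apache.commons.vfs2.impl.DefaultFileSystemManager;
import org.apache.commons.vfs2.impl.FileContentInfoFilenameFactory;
import org.apache.commons.vfs2.provider.hdfs.HdfsFileProvider;

public final class VfsSetup {
    // Builds a VFS manager that can resolve hdfs:// URLs.
    static DefaultFileSystemManager newManager() throws FileSystemException {
        DefaultFileSystemManager manager = new DefaultFileSystemManager();
        manager.addProvider("hdfs", new HdfsFileProvider());
        manager.setFileContentInfoFactory(new FileContentInfoFilenameFactory());
        manager.setFilesCache(new SoftRefFilesCache()); // a single cache is enough
        manager.setReplicator(new DefaultFileReplicator());
        manager.setCacheStrategy(CacheStrategy.ON_RESOLVE);
        manager.init();
        return manager;
    }
}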
>
> Then, I try to browse into an HDFS system this way:
> String url = String.format("hdfs://%1$s:%2$d/%3$s", "hadoop-master ", 50070, hdfsPath);
> return manager.resolveFile(url);
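
Two things worth double-checking in that format call, while you're at it: the host argument "hadoop-master " has a trailing space, and 50070 is the NameNode's web UI port, whereas an hdfs:// URI normally points at the NameNode RPC port (commonly 8020, or 9000 in some setups; check fs.defaultFS on the cluster). Something along these lines, with the host and port being examples only:

String url = String.format("hdfs://%1$s:%2$d/%3$s", "hadoop-master", 8020, hdfsPath);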
>
> Note: the client is running on Windows 7 (but could be any system that runs Java), and the target has been one of several Hadoop clusters on Ubuntu VMs (basically the same thing happens no matter which Hadoop installation I try to hit). So I'm guessing the problem is in my client configuration.
>
> This attempt to basically just connect to HDFS results in a bunch of error messages in the log file, which looks like it is trying to do user validation on the local machine instead of against the Hadoop (remote) cluster.
> Apr 11,2014 18:27:38.640 GMT T[AWT-EventQueue-0](26) DEBUG FileObjectManager: Trying to resolve file reference 'hdfs://hadoop-master:50070/'
> Apr 11,2014 18:27:38.953 GMT T[AWT-EventQueue-0](26) INFO org.apache.hadoop.conf.Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
> Apr 11,2014 18:27:39.078 GMT T[AWT-EventQueue-0](26) DEBUG MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Rate of successful kerberos logins and latency (milliseconds)], about=, type=DEFAULT, always=false, sampleName=Ops)
> Apr 11,2014 18:27:39.094 GMT T[AWT-EventQueue-0](26) DEBUG MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Rate of failed kerberos logins and latency (milliseconds)], about=, type=DEFAULT, always=false, sampleName=Ops)
> Apr 11,2014 18:27:39.094 GMT T[AWT-EventQueue-0](26) DEBUG MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[GetGroups], about=, type=DEFAULT, always=false, sampleName=Ops)
> Apr 11,2014 18:27:39.094 GMT T[AWT-EventQueue-0](26) DEBUG MetricsSystemImpl: UgiMetrics, User and group related metrics
> Apr 11,2014 18:27:39.344 GMT T[AWT-EventQueue-0](26) DEBUG Groups: Creating new Groups object
> Apr 11,2014 18:27:39.344 GMT T[AWT-EventQueue-0](26) DEBUG NativeCodeLoader: Trying to load the custom-built native-hadoop library...
> Apr 11,2014 18:27:39.360 GMT T[AWT-EventQueue-0](26) DEBUG NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: no hadoop in java.library.path
> Apr 11,2014 18:27:39.360 GMT T[AWT-EventQueue-0](26) DEBUG NativeCodeLoader: java.library.path=.... <bunch of stuff>
> Apr 11,2014 18:27:39.360 GMT T[AWT-EventQueue-0](26) WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> Apr 11,2014 18:27:39.375 GMT T[AWT-EventQueue-0](26) DEBUG JniBasedUnixGroupsMappingWithFallback: Falling back to shell based
> Apr 11,2014 18:27:39.375 GMT T[AWT-EventQueue-0](26) DEBUG JniBasedUnixGroupsMappingWithFallback: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
> Apr 11,2014 18:27:39.375 GMT T[AWT-EventQueue-0](26) ERROR Shell: Failed to detect a valid hadoop home directory: HADOOP_HOME or hadoop.home.dir are not set.
> java.io.IOException: HADOOP_HOME or hadoop.home.dir are not set.
> at org.apache.hadoop.util.Shell.checkHadoopHome(Shell.java:265)
> at org.apache.hadoop.util.Shell.<clinit>(Shell.java:290)
> at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
> at org.apache.hadoop.security.Groups.parseStaticMapping(Groups.java:92)
> at org.apache.hadoop.security.Groups.<init>(Groups.java:76)
> at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:239)
> at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:255)
> at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:232)
> at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:718)
> at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:703)
> at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:605)
> at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2473)
> at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2465)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2331)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:369)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:168)
> at org.apache.commons.vfs2.provider.hdfs.HdfsFileSystem.resolveFile(HdfsFileSystem.java:115)
> at org.apache.commons.vfs2.provider.AbstractOriginatingFileProvider.findFile(AbstractOriginatingFileProvider.java:84)
> at org.apache.commons.vfs2.provider.AbstractOriginatingFileProvider.findFile(AbstractOriginatingFileProvider.java:64)
> at org.apache.commons.vfs2.impl.DefaultFileSystemManager.resolveFile(DefaultFileSystemManager.java:700)
> at org.apache.commons.vfs2.impl.DefaultFileSystemManager.resolveFile(DefaultFileSystemManager.java:656)
> at org.apache.commons.vfs2.impl.DefaultFileSystemManager.resolveFile(DefaultFileSystemManager.java:609)
>
> Apr 11,2014 18:27:39.391 GMT T[AWT-EventQueue-0](26) ERROR Shell: Failed to locate the winutils binary in the hadoop binary path: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
> java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
>
> Apr 11,2014 18:27:39.391 GMT T[AWT-EventQueue-0](26) DEBUG Groups: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
> Apr 11,2014 18:27:39.469 GMT T[AWT-EventQueue-0](26) DEBUG UserGroupInformation: hadoop login
> Apr 11,2014 18:27:39.469 GMT T[AWT-EventQueue-0](26) DEBUG UserGroupInformation: hadoop login commit
> Apr 11,2014 18:27:39.751 GMT T[AWT-EventQueue-0](26) DEBUG UserGroupInformation: using local user:NTUserPrincipal: <user_name>
> Apr 11,2014 18:27:39.751 GMT T[AWT-EventQueue-0](26) DEBUG UserGroupInformation: UGI loginUser:whiro01 (auth:SIMPLE)
> Apr 11,2014 18:27:39.813 GMT T[AWT-EventQueue-0](26) ERROR HdfsFileSystem: Error connecting to filesystem hdfs://hadoop-master:50070/: No FileSystem for scheme: hdfs
> java.io.IOException: No FileSystem for scheme: hdfs
> at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2304)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2311)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:90)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2350)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2332)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:369)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:168)
> at org.apache.commons.vfs2.provider.hdfs.HdfsFileSystem.resolveFile(HdfsFileSystem.java:115)
> at org.apache.commons.vfs2.provider.AbstractOriginatingFileProvider.findFile(AbstractOriginatingFileProvider.java:84)
> at org.apache.commons.vfs2.provider.AbstractOriginatingFileProvider.findFile(AbstractOriginatingFileProvider.java:64)
> at org.apache.commons.vfs2.impl.DefaultFileSystemManager.resolveFile(DefaultFileSystemManager.java:700)
> at org.apache.commons.vfs2.impl.DefaultFileSystemManager.resolveFile(DefaultFileSystemManager.java:656)
> at org.apache.commons.vfs2.impl.DefaultFileSystemManager.resolveFile(DefaultFileSystemManager.java:609)
>
> So, my guess is that I don't have enough configuration set up on my client machine to tell Hadoop that the authentication is to be done at the remote end ....?? So, I'm trying to track down what the configuration info might be.
>
> Hoping that someone here can see past the Commons VFS stuff (which you probably don't care about) and tell me what other Hadoop/HDFS files / configuration I need to get this working.
>
> Note: I want to build a GUI component that can browse to arbitrary HDFS installations, so I can't really be setting up a hard-coded XML file for each potential Hadoop cluster I might connect to ....
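
On that last note: the *-site.xml files aren't actually required on the client; a Hadoop Configuration can be built programmatically per connection. A bare-bones sketch against the plain Hadoop API, leaving VFS aside (the host and port below are placeholders):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class DynamicHdfs {
    public static void main(String[] args) throws Exception {
        // The URI itself carries the cluster address, so no per-cluster XML is needed.
        FileSystem fs = FileSystem.get(URI.create("hdfs://hadoop-master:8020/"), new Configuration());
        System.out.println(fs.getUri());
    }
}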
>
> Thanks,
> ~Roger Whitcomb
>