hadoop-common-dev mailing list archives

From Daryn Sharp <da...@yahoo-inc.com>
Subject Re: Help with error
Date Tue, 10 Apr 2012 13:03:23 GMT
The copyFromLocalFile method has a void return, but internally it calls FileUtil methods that return a boolean for success.  False appears to be returned if the source file cannot be deleted, or if the dest directory cannot be created.  copyFromLocalFile ignores that boolean result, leading the caller to believe the copy was successful.

I'm not sure if this bug is aggravating your situation, so I'd try to manually create the
dest dir and remove the src file.
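
If it helps, here's a minimal sketch of doing that check and verifying the copy yourself, since copyFromLocalFile itself reports nothing; the paths below are placeholders, not your actual setup:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CopyCheck {
        public static void main(String[] args) throws IOException {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            Path src = new Path("file:///tmp/example.tar.gz");        // placeholder
            Path dst = new Path(fs.getHomeDirectory(), "example/1");  // placeholder

            // mkdirs does return a boolean, so a failure to create the dest dir shows up here.
            if (!fs.mkdirs(dst.getParent())) {
                throw new IOException("could not create " + dst.getParent());
            }

            fs.copyFromLocalFile(false, true, src, dst);

            // The copy reports nothing, so confirm the destination really exists.
            if (!fs.exists(dst)) {
                throw new IOException("copy returned but " + dst + " does not exist");
            }
            System.out.println("copied to " + fs.getFileStatus(dst).getPath());
        }
    }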

Daryn


On Apr 9, 2012, at 4:51 PM, Ralph Castain wrote:

> 
> On Apr 9, 2012, at 3:50 PM, Ralph Castain wrote:
> 
>> 
>> On Apr 9, 2012, at 2:45 PM, Kihwal Lee wrote:
>> 
>>> The path, "file:/Users/rhc/yarnrun/13", indicates that your copy operation's destination was the local file system, instead of hdfs.
>> 
>> Yeah, I realized that too after I sent the note. Sure enough - the files are there.
> 
> 
> Quick correction: the path exists, but as a file instead of a directory, and therefore the files to be moved there don't exist.
> 
> 
>> 
>>> What is the value of "fs.default.name" set to in core-site.xml?
>> 
>> <configuration>
>> <property>
>> <name>fs.default.name</name>
>> <value>hdfs://localhost:9000</value>
>> </property>
>> </configuration>
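
That config looks right, so it's worth confirming which filesystem FileSystem.get(conf) actually hands back in your client.  A rough sketch of that check (the hard-coded URI is only for illustration):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class WhichFs {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();

            // If this prints file:///, core-site.xml isn't on the client's classpath
            // and copies will silently land on the local filesystem.
            FileSystem fs = FileSystem.get(conf);
            System.out.println("default filesystem: " + fs.getUri());

            // Forcing the namenode URI directly (hard-coded here only for illustration):
            FileSystem hdfs = FileSystem.get(URI.create("hdfs://localhost:9000"), conf);
            System.out.println("explicit filesystem: " + hdfs.getUri());
        }
    }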
>> 
>> 
>>> 
>>> Kihwal
>>> 
>>> 
>>> On 4/9/12 3:26 PM, "Ralph Castain" <rhc@open-mpi.org> wrote:
>>> 
>>> Finally managed to chase down the 0.23 API docs and get the FileStatus definition. No real joy here - I output the path and got:
>>> 
>>> code:               LOG.info("destination path " + destStatus.getPath());
>>> 
>>> 2012-04-09 14:22:48,359 INFO  Hamster (Hamster.java:getApplication(265)) - destination path file:/Users/rhc/yarnrun/13
>>> 
>>> However, when I attempt to list it:
>>> 
>>> Ralphs-iMac:bin rhc$ ./hdfs dfs -ls /Users/rhc/yarnrun
>>> 2012-04-09 14:22:57.640 java[14292:1903] Unable to load realm info from SCDynamicStore
>>> 2012-04-09 14:22:57.686 java[14292:1903] Unable to load realm info from SCDynamicStore
>>> ls: `/Users/rhc/yarnrun': No such file or directory
>>> 
>>> I've been unable to track down the "realm" warnings, so I don't know if that is pertinent or not. It appears the files are not getting copied across, though the location looks okay to my eyes.
>>> 
>>> 
>>> On Apr 9, 2012, at 1:27 PM, Kihwal Lee wrote:
>>> 
>>>> It looks like the home directory does not exist but the copy went through.
>>>> Can you try to LOG the key fields in destStatus including path? It might be ending up in an unexpected place.
>>>> 
>>>> Kihwal
>>>> 
>>>> 
>>>> 
>>>> On 4/9/12 12:45 PM, "Ralph Castain" <rhc@open-mpi.org> wrote:
>>>> 
>>>> Hi Bobby
>>>> 
>>>> On Apr 9, 2012, at 11:40 AM, Robert Evans wrote:
>>>> 
>>>>> What do you mean by relocated some supporting files to HDFS?  How do you relocate them?  What API do you use?
>>>> 
>>>> I use the LocalResource and FileSystem classes to do the relocation, per the Hadoop example:
>>>> 
>>>>     // set local resources for the application master
>>>>     // local files or archives as needed
>>>>     // In this scenario, the jar file for the application master is part of the local resources
>>>>     Map<String, LocalResource> localResources = new HashMap<String, LocalResource>();
>>>> 
>>>>     LOG.info("Copy openmpi tarball from local filesystem and add to local environment");
>>>>     // Copy the application master jar to the filesystem
>>>>     // Create a local resource to point to the destination jar path
>>>>     FileSystem fs;
>>>>     FileStatus destStatus;
>>>>     try {
>>>>         fs = FileSystem.get(conf);
>>>>         Path src = new Path(pathOMPItarball);
>>>>         String pathSuffix = appName + "/" + appId.getId();
>>>>         Path dst = new Path(fs.getHomeDirectory(), pathSuffix);
>>>>         try {
>>>>             fs.copyFromLocalFile(false, true, src, dst);
>>>>             try {
>>>>                 destStatus = fs.getFileStatus(dst);
>>>>                 LocalResource amJarRsrc = Records.newRecord(LocalResource.class);
>>>> 
>>>>                 // Set the type of resource - file or archive
>>>>                 // archives are untarred at destination
>>>>                 amJarRsrc.setType(LocalResourceType.ARCHIVE);
>>>>                 // Set visibility of the resource
>>>>                 // Setting to most private option
>>>>                 amJarRsrc.setVisibility(LocalResourceVisibility.APPLICATION);
>>>>                 // Set the resource to be copied over
>>>>                 amJarRsrc.setResource(ConverterUtils.getYarnUrlFromPath(dst));
>>>>                 // Set timestamp and length of file so that the framework
>>>>                 // can do basic sanity checks for the local resource
>>>>                 // after it has been copied over to ensure it is the same
>>>>                 // resource the client intended to use with the application
>>>>                 amJarRsrc.setTimestamp(destStatus.getModificationTime());
>>>>                 amJarRsrc.setSize(destStatus.getLen());
>>>>                 localResources.put("openmpi",  amJarRsrc);
>>>>             } catch (Throwable t) {
>>>>                 LOG.fatal("Error on file status", t);
>>>>                 System.exit(1);
>>>>             }
>>>>         } catch (Throwable t) {
>>>>             LOG.fatal("Error on copy from local file", t);
>>>>             System.exit(1);
>>>>         }
>>>>     } catch (Throwable t) {
>>>>         LOG.fatal("Error getting filesystem configuration", t);
>>>>         System.exit(1);
>>>>     }
>>>> 
>>>> Note that this appears to work fine when the local resource type is "file" - at least, I was able to make a simple program work that way. The problem I'm having is when I move an archive, which is why I was hoping to look at the HDFS end to see what files are present, and in what locations, so I can set the paths accordingly.
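
One way to see exactly what ended up on the HDFS side is to list the destination directory from the client code.  A short sketch (the "openmpi" suffix is just a placeholder for whatever appName/appId path your copy used):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListDest {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // Placeholder: substitute the same appName/appId suffix used in the copy.
            Path destDir = new Path(fs.getHomeDirectory(), "openmpi");

            // Walk the destination and print what actually got copied, and where.
            for (FileStatus status : fs.listStatus(destDir)) {
                System.out.println(status.getPath() + "  " + status.getLen() + " bytes"
                        + (status.isDirectory() ? "  (dir)" : ""));
            }
        }
    }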
>>>> 
>>>> Thanks
>>>> Ralph
>>>> 
>>>> 
>>>>> 
>>>>> --Bobby Evans
>>>>> 
>>>>> 
>>>>> On 4/9/12 11:10 AM, "Ralph Castain" <rhc@open-mpi.org> wrote:
>>>>> 
>>>>> Hi folks
>>>>> 
>>>>> I'm trying to develop an AM for the 0.23 branch and running into a problem that I'm having difficulty debugging. My client relocates some supporting files to HDFS, creates the application object for the AM, and submits it to the RM.
>>>>> 
>>>>> The file relocation request doesn't generate an error, so I must assume it succeeded. It would be nice if there was some obvious way to verify that, but I haven't discovered it. Can anyone give me a hint? I tried asking hdfs to -ls, but all I get is that "." doesn't exist. I have no idea where the file would be placed, if it would persist once the job fails, etc.
>>>>> 
>>>>> When the job is submitted, all I get is an "Error 500", which tells me nothing. It reminds me of the old days, 40 years ago, when you'd get the dreaded "error 11", which meant anything from a divide by zero to a memory violation. Are there any debug flags I could set that might provide more info?
>>>>> 
>>>>> Thanks
>>>>> Ralph
>>>>> 
>>>>> 
>>>> 
>>>> 
>>> 
>>> 
>> 
> 

