hadoop-hdfs-user mailing list archives

From "Brian C. Huffman" <bhuff...@etinternational.com>
Subject Cleanup after Yarn Job
Date Fri, 21 Feb 2014 14:03:14 GMT
All,

I'm trying to model a Yarn Client after the Distributed Shell example.
However, I'd like to add a method to clean up the job's files after
completion.

I've defined a cleanup routine:
   private void cleanup(ApplicationId appId, FileSystem fs)
       throws IOException {
     // Remove the per-application directory, e.g. <home>/<appName>/<appId>
     String pathSuffix = appName + "/" + appId.getId();
     Path dst = new Path(fs.getHomeDirectory(), pathSuffix);
     fs.delete(dst, true);  // recursive delete
   }

The problem is that I'd like to call it after monitorApplication exits,
but when the time limit is exceeded and killApplication is called, both
the appId and the FileSystem objects are gone. I could work around the
appId issue, since I really only need a String or integer representation
of it, but the Yarn Client seems to manage the filesystem object itself
(the example uses FileSystem.get(conf)), so I don't see a way around
that short of creating my own FileSystem object.
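
For what it's worth, here is a minimal sketch of the workaround I'm
considering: capture the path suffix as a plain String while the
ApplicationId is still alive, then have cleanup open its own FileSystem
from the Configuration. The class and helper names below are hypothetical,
and the Hadoop-specific part is shown in a comment since it assumes the
appName/conf fields from the Distributed Shell-style client:

```java
// Sketch only: names (YarnClientCleanupSketch, pathSuffix) are hypothetical.
public class YarnClientCleanupSketch {

    // Build the per-application suffix from plain values, so cleanup no
    // longer needs the ApplicationId object after killApplication.
    static String pathSuffix(String appName, long appId) {
        return appName + "/" + appId;
    }

    /* In the real client, cleanup could then obtain its own handle from
     * the Configuration instead of relying on the one Yarn Client manages:
     *
     *   private void cleanup(String pathSuffix, Configuration conf)
     *       throws IOException {
     *     // Fresh handle; note FileSystem.get(conf) returns a cached
     *     // instance, so it is not closed here.
     *     FileSystem fs = FileSystem.get(conf);
     *     Path dst = new Path(fs.getHomeDirectory(), pathSuffix);
     *     fs.delete(dst, true);
     *   }
     */

    public static void main(String[] args) {
        // Demonstrate the String-only representation surviving on its own.
        System.out.println(pathSuffix("DistributedShell", 7));
    }
}
```

That way the only state cleanup depends on after the kill is a String and
the Configuration, both of which outlive the ApplicationId.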

Any suggestions?

Thanks,
Brian

