hadoop-common-user mailing list archives

From Rekha Joshi <rekha...@yahoo-inc.com>
Subject Re: rmr: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /op. Name node is in safe mode.
Date Tue, 19 Jan 2010 05:20:50 GMT
They are only alternatives; hadoop fs -rmr works well for me. I do not know exactly what error
it gives you or how the call is invoked. In a batch job, let's say in Perl, the code below should work fine:
$cmd = "hadoop fs -rmr /op";


On 1/19/10 10:31 AM, "prasenjit mukherjee" <prasen.bea@gmail.com> wrote:

Hmmm. I am actually running it from a batch file. Is "hadoop fs -rmr"
not as stable as pig's rm or Hadoop's FileSystem?

Let me try your suggestion by writing a cleanup script in pig.
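(In Pig that cleanup is a one-liner; assuming the /op path from this thread,
putting

  rmf /op

at the top of the script force-removes the directory and, unlike rm, does not
fail when the path is absent.)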


On Tue, Jan 19, 2010 at 10:25 AM, Rekha Joshi <rekhajos@yahoo-inc.com> wrote:
> Can you try with dfs / without quotes? If using pig to run jobs you can use rmf within
> your script (again w/o quotes) to force-remove and avoid the error if the file/dir is not
> present. Or if doing this inside a hadoop job, you can use FileSystem/FileStatus to delete
> directories. HTH.
> Cheers,
> /R
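
A minimal sketch of the FileSystem approach mentioned above, assuming Hadoop
0.20's API and the /op path from this thread (the Cleanup class name is
illustrative, not from the thread):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class Cleanup {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      // Connect to the default FileSystem (HDFS here).
      FileSystem fs = FileSystem.get(conf);
      Path out = new Path("/op");
      // Recursive delete, the programmatic equivalent of -rmr;
      // the exists() check avoids an error when the dir is missing.
      if (fs.exists(out)) {
        fs.delete(out, true);
      }
    }
  }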
> On 1/19/10 10:15 AM, "prasenjit mukherjee" <prasen.bea@gmail.com> wrote:
> "hadoop fs -rmr /op"
> That command always fails. I am trying to run sequential hadoop jobs.
> After the first run all subsequent runs fail while cleaning up (i.e.
> removing the hadoop dir created by the previous run). What can I do to
> avoid this?
> Here is my hadoop version:
> # hadoop version
> Hadoop 0.20.0
> Subversion https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.20
> -r 763504
> Compiled by ndaley on Thu Apr  9 05:18:40 UTC 2009
> Any help is greatly appreciated.
> -Prasen
