hadoop-hdfs-user mailing list archives

From zangxiangyu <zangxian...@qiyi.com>
Subject RE: recovery accidentally deleted pig script
Date Thu, 13 Jun 2013 09:55:21 GMT
Hi,

If the pig process has not exited yet, lsof may help.

The files a process has open are visible under /proc/<pid>/fd/. Use lsof to find the full path of the deleted script, then copy the contents of the still-open file descriptor back to disk.

God bless you.
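
A minimal sketch of that recovery, simulated end to end (the demo file, the `tail -f` stand-in process, and all paths are made up for illustration; in the real case the PID and fd number come from `lsof -p <pid>`, which marks such entries "(deleted)"):

```shell
# Simulate the situation: a process still holds the deleted script open.
echo 'the pig script contents' > /tmp/demo.pig   # stand-in for the lost script
tail -f /tmp/demo.pig > /dev/null &              # keeps an fd open on the file
PID=$!
sleep 1                                          # let tail open the file
rm /tmp/demo.pig                                 # the "accidental" delete

# lsof -p "$PID" would now show the entry marked "(deleted)".
# Find the matching fd under /proc and copy its contents back to disk:
FD=$(ls -l /proc/$PID/fd | awk '/demo\.pig/ {print $9; exit}')
cp "/proc/$PID/fd/$FD" /tmp/recovered.pig
kill $PID
```

This only works while the process is alive; once it exits, the kernel frees the inode and the data is gone.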

From: feng jiang [mailto:jiangfutian@gmail.com] 
Sent: Thursday, June 13, 2013 5:33 PM
To: user@hadoop.apache.org; chris@embree.us
Subject: Re: recovery accidentally deleted pig script

 

The script is on the local file system, on a Linux box.

 

I totally agree we need version control for the source code. This is a good example of why
version control matters.

 

Thank you, Michael and Chris, for your input anyway.

 

 

On Thu, Jun 13, 2013 at 10:35 AM, Chris Embree <cembree@gmail.com> wrote:

This is not a Hadoop question (IMHO).

2 words:  Version Control

Did the advent of Hadoop somehow circumvent all IT convention?

Sorry folks, it's been a rough day.


On 6/12/13, Michael Segel <michael_segel@hotmail.com> wrote:
> Where was the pig script? On HDFS?
>
> How often does your cluster clean up the trash?
>
> (Deleted files aren't purged the moment they're removed; they sit in the trash first... ) It's a
> configurable setting, so YMMV.
>
> On Jun 12, 2013, at 8:58 PM, feng jiang <jiangfutian@gmail.com> wrote:
>
>> Hi everyone,
>>
>> We have a pig script scheduled running every 4 hours. Someone accidentally
>> deleted the pig script(rm). Is there any way to recover the script?
>>
>> I am guessing Hadoop copy the program to every nodes before running. Just
>> in case it has any copy in the nodes.
>>
>>
>> Best regards,
>> Feng Jiang
>
>
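
The trash retention Michael refers to is controlled by `fs.trash.interval` in core-site.xml; a sketch (the value shown is an example, in minutes):

```xml
<property>
  <name>fs.trash.interval</name>
  <value>1440</value> <!-- minutes to keep deleted files in .Trash; 0 disables trash -->
</property>
```

With trash enabled, files removed via `hadoop fs -rm` land under the user's `.Trash` directory on HDFS until the interval expires; note this would not have helped here, since the script was deleted from the local file system, not HDFS.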





 

-- 
Best regards,
Feng Jiang 

