hadoop-pig-dev mailing list archives

From "Pi Song (JIRA)" <j...@apache.org>
Subject [jira] Commented: (PIG-166) Disk Full
Date Tue, 01 Apr 2008 01:00:27 GMT

    [ https://issues.apache.org/jira/browse/PIG-166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12583958#action_12583958 ]

Pi Song commented on PIG-166:
-----------------------------

Alan,

You're right that temp files are already taken care of, but deleteOnExit() only cleans up
temp files when the JVM terminates normally (not on a crash). If the life cycle of those files
is known, you can delete them in the middle of processing to reclaim space sooner, so execution
can proceed further before hitting the disk limit.

My earlier point was also that createTempFile() is not good enough on its own because you
cannot control the amount of space you use, so it would be good to have a temp file manager
that manages space usage.
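The idea above can be sketched roughly as follows. This is only an illustration, not Pig's actual implementation; the class name `SpillFileManager`, its methods, and the byte-budget scheme are all hypothetical. It wraps `File.createTempFile()` with a space budget so a task fails early instead of filling the disk, and it supports explicit mid-run release once a file's life cycle ends:

```java
import java.io.File;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a temp-file manager that enforces a space budget.
// Plain File.createTempFile() + deleteOnExit() enforces no limit and only
// cleans up on normal JVM exit, not on a crash.
public class SpillFileManager {
    private final long limitBytes;
    private long usedBytes = 0;
    private final Map<File, Long> reserved = new HashMap<File, Long>();

    public SpillFileManager(long limitBytes) {
        this.limitBytes = limitBytes;
    }

    // Reserve space and create a temp file; fail the task early
    // rather than filling the disk and crashing the Task Tracker.
    public synchronized File create(String prefix, long expectedBytes)
            throws IOException {
        if (usedBytes + expectedBytes > limitBytes) {
            throw new IOException("spill space limit of " + limitBytes
                    + " bytes exceeded");
        }
        File f = File.createTempFile(prefix, ".tmp");
        f.deleteOnExit(); // safety net for normal JVM termination only
        reserved.put(f, expectedBytes);
        usedBytes += expectedBytes;
        return f;
    }

    // Delete mid-run once the file's life cycle is over, reclaiming budget
    // so later spills can go further before hitting the limit.
    public synchronized void release(File f) {
        Long bytes = reserved.remove(f);
        if (bytes != null) {
            usedBytes -= bytes;
            f.delete();
        }
    }

    public synchronized long used() {
        return usedBytes;
    }
}
```

A caller would `create()` a spill file with an expected size, write to it, and `release()` it as soon as the data has been consumed, instead of waiting for JVM exit.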

> Disk Full
> ---------
>
>                 Key: PIG-166
>                 URL: https://issues.apache.org/jira/browse/PIG-166
>             Project: Pig
>          Issue Type: Bug
>            Reporter: Amir Youssefi
>
> Occasionally spilling fills up (all) hard drive(s) on a Data Node and crashes the Task Tracker
> (and other processes) on that node. We need a safety net that fails the task before the
> crash happens (and worse).
> In the Pig + Hadoop setting, Task Trackers get blacklisted, and the Pig console gets stuck at
> a percentage without the nodes being returned to the cluster. I talked to the Hadoop team to
> explore a Max Percentage idea. Nodes running into this problem end up in a permanently bad
> state, and manual cleaning by an administrator is necessary.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

