hadoop-hive-dev mailing list archives

From "Ning Zhang (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HIVE-1651) ScriptOperator should not forward any output to downstream operators if an exception is happened
Date Tue, 21 Sep 2010 22:57:33 GMT

    [ https://issues.apache.org/jira/browse/HIVE-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12913301#action_12913301 ]

Ning Zhang commented on HIVE-1651:
----------------------------------

Discussed with Joydeep offline. The side effects of a failed task should be cleaned up after
the job finishes. The _tmp* files are already taken care of in the current code base. The only
side effect that still needs to be handled is the empty directories created by failed dynamic
partition inserts. That issue is addressed in HIVE-1655.
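
As an illustration only (this is not the HIVE-1655 patch), such a post-job cleanup could look
roughly like the sketch below, assuming a Hadoop FileSystem handle and a hypothetical list of
candidate partition paths collected during the dynamic partition insert:

    import java.io.IOException;
    import java.util.List;

    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class EmptyPartitionCleaner {
      /**
       * Delete any candidate partition directories that a failed task left behind
       * empty. The candidate list is assumed to come from the dynamic partition
       * insert bookkeeping (hypothetical here).
       */
      public static void removeEmptyDirs(FileSystem fs, List<Path> candidates)
          throws IOException {
        for (Path dir : candidates) {
          if (fs.exists(dir) && fs.getFileStatus(dir).isDir()
              && fs.listStatus(dir).length == 0) {
            // Non-recursive delete: only removes the directory if it is empty.
            fs.delete(dir, false);
          }
        }
      }
    }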


> ScriptOperator should not forward any output to downstream operators if an exception is happened
> ------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-1651
>                 URL: https://issues.apache.org/jira/browse/HIVE-1651
>             Project: Hadoop Hive
>          Issue Type: Bug
>            Reporter: Ning Zhang
>            Assignee: Ning Zhang
>         Attachments: HIVE-1651.patch
>
>
> ScriptOperator spawns two threads to read the script's stdout and stderr, and then forwards
> the output from stdout to downstream operators. If the script fails (e.g., it gets killed),
> the ScriptOperator gets an exception and throws it to upstream operators until MapOperator
> catches it and calls close(abort). Before ScriptOperator.close() is called, the script's
> output stream can still forward output to downstream operators. We should terminate the
> forwarding immediately.
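
As an illustration only (not the actual ScriptOperator code or the attached patch), the fix idea
can be sketched as a stdout-reader thread that checks a volatile abort flag before forwarding
each row, so nothing reaches downstream operators once an exception has been seen. Here
forwardToDownstream is a hypothetical stand-in for handing rows to child operators:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;

    public class StreamForwarder implements Runnable {
      // Set by the operator's error path when the script fails (e.g., it is killed).
      private volatile boolean aborted = false;
      private final Process scriptProcess;

      public StreamForwarder(Process scriptProcess) {
        this.scriptProcess = scriptProcess;
      }

      public void abort() {
        aborted = true;
      }

      @Override
      public void run() {
        try (BufferedReader reader = new BufferedReader(
            new InputStreamReader(scriptProcess.getInputStream()))) {
          String line;
          // Stop forwarding as soon as the abort flag is raised, even if the
          // script process still has buffered output.
          while (!aborted && (line = reader.readLine()) != null) {
            forwardToDownstream(line);
          }
        } catch (IOException e) {
          abort();  // stop forwarding if reading the script output fails
        }
      }

      private void forwardToDownstream(String row) {
        // Placeholder: in Hive this would hand the row to child operators.
      }
    }

In this sketch the operator would call abort() from its error path before close(abort), so the
reader loop stops handing rows downstream immediately rather than draining the remaining output.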

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

