hadoop-hive-dev mailing list archives

From "Zheng Shao (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HIVE-480) allow option to retry map-reduce tasks
Date Thu, 11 Jun 2009 20:04:07 GMT

    [ https://issues.apache.org/jira/browse/HIVE-480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12718623#action_12718623
] 

Zheng Shao commented on HIVE-480:
---------------------------------

As a side note, the conf in Hadoop is "mapred.max.tracker.failures", which controls the maximum
number of permitted failures for each task.
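Such retry limits are set in the Hadoop job configuration itself. A minimal sketch of how the property named above might appear in mapred-site.xml (the value here is illustrative, not a recommendation):

```xml
<!-- mapred-site.xml: illustrative value only -->
<property>
  <name>mapred.max.tracker.failures</name>
  <value>4</value>
</property>
```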

> allow option to retry map-reduce tasks
> --------------------------------------
>
>                 Key: HIVE-480
>                 URL: https://issues.apache.org/jira/browse/HIVE-480
>             Project: Hadoop Hive
>          Issue Type: New Feature
>          Components: Query Processor
>            Reporter: Joydeep Sen Sarma
>
> for long running queries with multiple map-reduce jobs - this should help in dealing
> with any transient cluster failures without having to re-run all the tasks.
> ideally - the entire plan can be serialized out and the actual process of executing the
> workflow can be left to a pluggable workflow execution engine (since this is a problem that
> has been solved many times already).

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

