hadoop-common-dev mailing list archives

From "Amar Kamat (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-4189) HADOOP-3245 is incomplete
Date Wed, 17 Sep 2008 07:21:46 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-4189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Amar Kamat updated HADOOP-4189:

    Attachment: HADOOP-4189-v2.patch

There was a corner case in the previous patch: if the history is accessed after the history
file is created but before the job status is logged, the cached result is never refreshed.
The check now makes sure that the analysis is redone whenever the job is incomplete.
Tested this on 20 nodes; the issue with stale values after the job is complete is
addressed. The corner case itself is difficult to test.
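The refresh check described above can be sketched as follows. This is a minimal illustration of the caching pattern, not the actual Hadoop code; the class and method names are hypothetical:

```java
// Sketch of the cache-refresh check: names are hypothetical, not Hadoop's.
import java.util.HashMap;
import java.util.Map;

public class HistoryCache {
    static class Analysis {
        final boolean jobComplete;
        final String summary;
        Analysis(boolean jobComplete, String summary) {
            this.jobComplete = jobComplete;
            this.summary = summary;
        }
    }

    private final Map<String, Analysis> cache = new HashMap<>();

    // Reuse a cached result only if it was computed after the job finished;
    // while the job is incomplete, always redo the analysis.
    Analysis getAnalysis(String jobId, boolean jobNowComplete) {
        Analysis cached = cache.get(jobId);
        if (cached == null || !cached.jobComplete) {
            cached = analyze(jobId, jobNowComplete);
            cache.put(jobId, cached);
        }
        return cached;
    }

    private Analysis analyze(String jobId, boolean complete) {
        // Stand-in for parsing the history file.
        return new Analysis(complete, jobId + (complete ? ":final" : ":partial"));
    }
}
```

Without the `!cached.jobComplete` test, a lookup that lands between history-file creation and the final status log would pin a partial result in the cache forever, which is the corner case the patch addresses.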

> HADOOP-3245 is incomplete
> -------------------------
>                 Key: HADOOP-4189
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4189
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>            Reporter: Amar Kamat
>            Assignee: Amar Kamat
>            Priority: Blocker
>             Fix For: 0.19.0
>         Attachments: HADOOP-4189-v1.patch, HADOOP-4189-v2.patch, HADOOP-4189.patch
> There are three issues with HADOOP-3245:
> - The default block size for the history files in hadoop-default.conf is set to 0, so the
> history file stays empty. The default should be null (unset) instead.
> - The same applies to the buffer size.
> - The InterTrackerProtocol version needs to be bumped.
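The fix for the first two bullets amounts to leaving the values empty in hadoop-default.xml so the filesystem defaults apply. An illustrative fragment; the property names here are assumptions, not verified against the actual patch:

```xml
<!-- Illustrative hadoop-default.xml entries; property names are assumptions. -->
<property>
  <name>mapred.jobtracker.job.history.block.size</name>
  <value></value> <!-- left null/unset so the FileSystem default is used, not 0 -->
</property>
<property>
  <name>mapred.jobtracker.job.history.buffer.size</name>
  <value></value> <!-- likewise null, instead of a 0 that yields an empty file -->
</property>
```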

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
