hadoop-common-dev mailing list archives

From "Chris Douglas (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-3603) Setting spill threshold to 100% fails to detect spill for records
Date Fri, 20 Jun 2008 21:29:45 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-3603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Douglas updated HADOOP-3603:
----------------------------------

    Attachment: 3603-2.patch

Changed the collector to ensure at least one call to write per record. Though this does not
pacify findbugs, it ensures that kvstart is accessed consistently and makes the spill logic
easier to reason about.
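
As a standalone illustration (not the attached patch, and not Hadoop's MapOutputBuffer), the
toy model below shows why a 100% threshold has to be detected with a per-record check: when the
soft limit equals the buffer size, the modular index arithmetic makes a completely full offset
buffer look empty, so the threshold comparison alone never fires. The field names kvoffsets,
kvindex, and kvstart only echo the collector; the class and its methods are hypothetical.

public class RecordSpillModel {
    private final int[] kvoffsets;   // one slot per collected record
    private final int softLimit;     // slot count at which a spill should start
    private int kvindex = 0;         // next free slot
    private int kvstart = 0;         // first slot not yet spilled

    RecordSpillModel(int slots, float spillThreshold) {
        kvoffsets = new int[slots];
        softLimit = (int) (slots * spillThreshold);
    }

    /** Returns true if collecting this record should trigger a spill. */
    boolean collect(int recordOffset) {
        kvoffsets[kvindex] = recordOffset;
        kvindex = (kvindex + 1) % kvoffsets.length;
        int used = (kvindex - kvstart + kvoffsets.length) % kvoffsets.length;
        // With spillThreshold == 1.0, softLimit equals the slot count, but the modular
        // arithmetic above reports a completely full buffer as zero slots used (kvindex
        // has wrapped back onto kvstart). The threshold comparison alone therefore never
        // fires; "full" must be treated as a spill condition in its own right.
        return used >= softLimit || kvindex == kvstart;
    }

    public static void main(String[] args) {
        RecordSpillModel m = new RecordSpillModel(4, 1.0f);
        for (int r = 0; r < 4; r++) {
            System.out.println("record " + r + " triggers spill: " + m.collect(r));
        }
    }
}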

> Setting spill threshold to 100% fails to detect spill for records
> -----------------------------------------------------------------
>
>                 Key: HADOOP-3603
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3603
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>    Affects Versions: 0.17.0
>            Reporter: Chris Douglas
>            Assignee: Chris Douglas
>            Priority: Blocker
>             Fix For: 0.18.0
>
>         Attachments: 3603-0.patch, 3603-1.patch, 3603-2.patch
>
>
> If io.sort.record.percent is set to 1.0, the simultaneous collection and spill is disabled.
> However, if one exhausts the offset allocation before the serialization allocation, the limit
> will not be detected and the write will block forever.
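
For context, a hypothetical job configuration matching the scenario described above might look
like the sketch below. Only io.sort.record.percent is essential to the report; the class name and
the io.sort.mb value are illustrative, not part of the issue.

import org.apache.hadoop.mapred.JobConf;

public class SpillThresholdExample {
    public static void main(String[] args) {
        JobConf conf = new JobConf();
        conf.setInt("io.sort.mb", 100);             // total map-side collection buffer, in MB
        conf.set("io.sort.record.percent", "1.0");  // the setting described in this issue
        // Remaining job setup (mapper, reducer, input/output paths) omitted.
    }
}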

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

