phoenix-dev mailing list archives

From "maghamravikiran (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (PHOENIX-2649) GC/OOM during BulkLoad
Date Wed, 03 Feb 2016 22:21:39 GMT

    [ https://issues.apache.org/jira/browse/PHOENIX-2649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15131259#comment-15131259 ]

maghamravikiran commented on PHOENIX-2649:
------------------------------------------

I pushed the patch to the 4.x and master branches.

> GC/OOM during BulkLoad
> ----------------------
>
>                 Key: PHOENIX-2649
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-2649
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.7.0
>         Environment: Mac OS, Hadoop 2.7.2, HBase 1.1.2
>            Reporter: Sergey Soldatov
>            Priority: Critical
>         Attachments: PHOENIX-2649-1.patch, PHOENIX-2649.patch
>
>
> Phoenix fails to complete a bulk load of 40 MB of CSV data, hitting a GC heap error during the
> Reduce phase. The problem is in the comparator for TableRowkeyPair. It expects the serialized
> value to have been written using zero-compressed encoding, but at least in my case it was written
> the regular way. So, when trying to obtain the lengths of the table name and row key, it always
> gets zero and reports that those byte arrays are equal. As a result, the reducer receives all the
> data produced by the mappers in one reduce call and fails with OOM.
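
A minimal sketch of the encoding mismatch described above, assuming a fixed 4-byte length prefix
on the write side and Hadoop's zero-compressed (vint) decoding on the comparator side. This is
hypothetical illustration code, not the actual TableRowkeyPair implementation:

    import java.io.ByteArrayOutputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;

    import org.apache.hadoop.io.WritableComparator;

    public class VIntMismatchDemo {
        public static void main(String[] args) throws IOException {
            // Write a byte[] the "regular" way: a fixed 4-byte length prefix,
            // as DataOutput.writeInt() produces.
            byte[] tableName = "MY_TABLE".getBytes(StandardCharsets.UTF_8);
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(baos);
            out.writeInt(tableName.length); // serialized as 0x00 0x00 0x00 0x08
            out.write(tableName);
            byte[] serialized = baos.toByteArray();

            // A raw comparator that assumes zero-compressed (vint) encoding
            // reads the first byte (0x00) as a one-byte vint with value 0,
            // so the table name appears to be zero bytes long.
            int misreadLength = WritableComparator.readVInt(serialized, 0);
            System.out.println("length read as vint: " + misreadLength); // prints 0

            // With every length misread as 0, the compared prefixes are all
            // empty and therefore "equal", which is how all mapper output can
            // collapse into a single reduce call and trigger the OOM.
        }
    }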



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
