hadoop-hive-dev mailing list archives

From "Namit Jain (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HIVE-457) ScriptOperator should NOT cache all data in stderr
Date Sat, 19 Dec 2009 17:37:18 GMT

    [ https://issues.apache.org/jira/browse/HIVE-457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12792872#action_12792872 ]

Namit Jain commented on HIVE-457:
---------------------------------

Yes, it is possible to do so via the configuration variable:

hive.script.recordreader

In fact, there is another reader that we support: TypedBytesRecordReader.
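For example, the reader can be switched like this (the class names below are the two readers mentioned above; treat the exact package paths as an assumption on my part):

```sql
-- default text reader
set hive.script.recordreader=org.apache.hadoop.hive.ql.exec.TextRecordReader;
-- or the typed-bytes reader
set hive.script.recordreader=org.apache.hadoop.hive.contrib.util.typedbytes.TypedBytesRecordReader;
```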

The new parameter that you are adding should not be specific to TextRecordReader, but should
be general to any record reader.

Maybe it should be named:

set hive.record.reader.max.length=10485760;


and all record readers should be modified to honor it.
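As a rough sketch of what honoring such a limit could look like, here is a minimal bounded line reader in plain Java. This is a hypothetical illustration, not Hive's actual RecordReader interface: the class name, method shape, and the 10485760-byte default are assumptions taken from the proposed setting above.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical sketch: a line reader that enforces a maximum record length,
// as a hive.record.reader.max.length setting might require of every reader.
public class BoundedLineReader {
    private final InputStream in;
    private final int maxLength; // e.g. 10485760 (10 MB), per the proposal

    public BoundedLineReader(InputStream in, int maxLength) {
        this.in = in;
        this.maxLength = maxLength;
    }

    /**
     * Reads one record into sb, stopping at '\n' or after maxLength bytes,
     * so a newline-free stream can never buffer unbounded data in memory.
     * Returns the number of bytes consumed, or -1 at end of stream.
     */
    public int next(StringBuilder sb) throws IOException {
        sb.setLength(0);
        int consumed = 0;
        int b;
        while (consumed < maxLength && (b = in.read()) != -1) {
            consumed++;
            if (b == '\n') {
                break; // record terminated normally
            }
            sb.append((char) b);
        }
        return (consumed == 0) ? -1 : consumed;
    }

    public static void main(String[] args) throws IOException {
        // A "record" with no newline at all: the cap forces it to be split.
        byte[] data = "abcdefghij".getBytes("US-ASCII");
        BoundedLineReader reader =
                new BoundedLineReader(new ByteArrayInputStream(data), 4);
        StringBuilder sb = new StringBuilder();
        reader.next(sb);
        System.out.println(sb); // only the first 4 bytes are buffered
    }
}
```

The key point is that the cap is checked inside the read loop, before appending, so memory use stays bounded regardless of what the user script writes.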

> ScriptOperator should NOT cache all data in stderr
> --------------------------------------------------
>
>                 Key: HIVE-457
>                 URL: https://issues.apache.org/jira/browse/HIVE-457
>             Project: Hadoop Hive
>          Issue Type: Bug
>            Reporter: Zheng Shao
>            Assignee: Paul Yang
>            Priority: Blocker
>             Fix For: 0.5.0
>
>         Attachments: err.sh, HIVE-457.1.patch, HIVE-457.2.patch
>
>
> Sometimes user scripts output a lot of data to stderr without a newline, and this causes Hive to go out of memory.
> We should directly output the data from stderr without caching it.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

