[ https://issues.apache.org/jira/browse/HADOOP-15928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Pranay Singh reassigned HADOOP-15928:
-------------------------------------
Assignee: Pranay Singh
> Excessive error logging when using HDFS in S3 environment
> ---------------------------------------------------------
>
> Key: HADOOP-15928
> URL: https://issues.apache.org/jira/browse/HADOOP-15928
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Pranay Singh
> Assignee: Pranay Singh
> Priority: Major
>
> Problem:
> ------------
> There is excessive error logging when Impala uses HDFS in an S3 environment. This
> is caused by defect HADOOP-14603, "S3A input stream to support ByteBufferReadable".
> The excessive error logging results in defect IMPALA-5256, "ERROR log files can get
> very large", since the repeated message bloats the error log files.
> The following message is printed repeatedly in the error log:
> java.lang.UnsupportedOperationException: Byte-buffer read unsupported by input stream
>         at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
> Root cause
> ----------------
> After investigating the issue, it appears that the above exception is printed because
> opening a file via hdfsOpenFileImpl() (in libhdfs) triggers a call to readDirect(),
> which hits this exception when the underlying stream does not support byte-buffer
> reads.
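> For reference, FSDataInputStream.read(ByteBuffer) only delegates to the wrapped stream
> when it implements ByteBufferReadable and otherwise throws; the method in the stack
> trace above looks roughly like this (paraphrased from hadoop-common, not verbatim):
> {code:java}
> // FSDataInputStream.read(ByteBuffer), paraphrased. "in" is the wrapped
> // java.io.InputStream passed to the FSDataInputStream constructor.
> public int read(ByteBuffer buf) throws IOException {
>   if (in instanceof ByteBufferReadable) {
>     // The underlying stream (e.g. DFSInputStream) supports direct reads.
>     return ((ByteBufferReadable) in).read(buf);
>   }
>   // Streams without ByteBufferReadable (e.g. S3A before HADOOP-14603)
>   // land here, which is the message flooding the Impala error log.
>   throw new UnsupportedOperationException(
>       "Byte-buffer read unsupported by input stream");
> }
> {code}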
> Fix:
> ----
> Since the hdfs client does not explicitly initiate the byte-buffer read, and it instead
> happens implicitly while opening the file, we should not generate an error log entry
> when a file is opened.
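> One way to do that (a minimal sketch, not the actual patch; the helper name is
> illustrative, and the real fix may instead use the StreamCapabilities API) is to probe
> the wrapped stream once at open time and skip readDirect() entirely, with no error
> log, when byte-buffer reads are unsupported:
> {code:java}
> import java.io.InputStream;
> import org.apache.hadoop.fs.ByteBufferReadable;
> import org.apache.hadoop.fs.FSDataInputStream;
>
> public final class ByteBufferReadProbe {
>   // Hypothetical helper for the libhdfs/JNI layer: returns true only if
>   // read(ByteBuffer) would succeed, so the caller can fall back to the
>   // plain byte[] read path silently instead of catching the exception.
>   public static boolean supportsByteBufferRead(FSDataInputStream in) {
>     InputStream wrapped = in.getWrappedStream();
>     return wrapped instanceof ByteBufferReadable;
>   }
> }
> {code}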