hadoop-common-dev mailing list archives

From "Pranay Singh (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HADOOP-15928) Excessive error logging when using HDFS in S3 environment
Date Tue, 13 Nov 2018 19:40:00 GMT
Pranay Singh created HADOOP-15928:

             Summary: Excessive error logging when using HDFS in S3 environment
                 Key: HADOOP-15928
                 URL: https://issues.apache.org/jira/browse/HADOOP-15928
             Project: Hadoop Common
          Issue Type: Improvement
            Reporter: Pranay Singh

There is excessive error logging when Impala uses HDFS in an S3 environment. The issue is caused
by defect HADOOP-14603, "S3A input stream to support ByteBufferReadable".

The excessive error logging results in defect IMPALA-5256, "ERROR log files can get very large":
the error log files grow to a huge size.

The following message is printed repeatedly in the error log:

java.lang.UnsupportedOperationException: Byte-buffer read unsupported by input stream
        at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
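For context, the exception comes from the fallback path in FSDataInputStream.read(ByteBuffer):
the wrapper delegates only when the underlying stream implements ByteBufferReadable. A simplified
sketch of that path, paraphrased from the Hadoop source (exact text may differ by version):

public int read(ByteBuffer buf) throws IOException {
  if (in instanceof ByteBufferReadable) {
    // Delegate when the wrapped stream supports byte-buffer reads.
    return ((ByteBufferReadable) in).read(buf);
  }
  // Streams without ByteBufferReadable land here, producing the
  // message quoted above.
  throw new UnsupportedOperationException(
      "Byte-buffer read unsupported by input stream");
}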

Root cause
After investigating the issue, it appears that the above exception is printed because
hdfsOpenFileImpl(), which runs when a file is opened, calls readDirect(), and readDirect()
hits this code path.

Since the HDFS client does not explicitly initiate the byte-buffer read (it happens implicitly
as part of opening the file), we should not generate the error log when a file is opened.
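As a minimal sketch of the suggested behavior (a hypothetical Java helper, not the actual
libhdfs code, which is native C): the open path can probe once for byte-buffer read support,
treat the UnsupportedOperationException as an expected negative answer, and fall back to
regular reads without writing to the error log.

import java.io.IOException;
import java.nio.ByteBuffer;
import org.apache.hadoop.fs.FSDataInputStream;

class ByteBufferReadProbe {
  // Hypothetical helper illustrating the suggested fix: probe once at
  // open time and swallow the expected UnsupportedOperationException
  // instead of logging it as an error.
  static boolean supportsByteBufferRead(FSDataInputStream in)
      throws IOException {
    try {
      // Zero-byte probe; assumes an empty-buffer read is a no-op on
      // streams that do support ByteBufferReadable.
      in.read(ByteBuffer.allocate(0));
      return true;
    } catch (UnsupportedOperationException e) {
      // Expected when the wrapped stream does not implement
      // ByteBufferReadable (e.g. S3A at the time of HADOOP-14603);
      // fall back to byte[] reads silently.
      return false;
    }
  }
}

Newer Hadoop releases also expose a StreamCapabilities interface that may allow querying for
this capability without a probe read.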

This message was sent by Atlassian JIRA

