hadoop-hdfs-issues mailing list archives

From "Yiqun Lin (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-10691) FileDistribution fails in hdfs oiv command due to ArrayIndexOutOfBoundsException
Date Wed, 27 Jul 2016 05:38:20 GMT

     [ https://issues.apache.org/jira/browse/HDFS-10691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yiqun Lin updated HDFS-10691:
-----------------------------
    Description: 
I used the hdfs oiv -p FileDistribution command to do a file analysis, but an {{ArrayIndexOutOfBoundsException}} occurred and terminated the process. The stack trace:
{code}
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 103
	at org.apache.hadoop.hdfs.tools.offlineImageViewer.FileDistributionCalculator.run(FileDistributionCalculator.java:243)
	at org.apache.hadoop.hdfs.tools.offlineImageViewer.FileDistributionCalculator.visit(FileDistributionCalculator.java:176)
	at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(OfflineImageViewerPB.java:176)
	at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.main(OfflineImageViewerPB.java:129)
{code}
I looked into the code and found that the exception is thrown while incrementing a count in {{distribution}}. The cause is that the computed bucket number can exceed the last valid index of {{distribution}}.

Here are my steps:
1) The input command params:
{code}
hdfs oiv -p FileDistribution -maxSize 104857600 -step 1024000
{code}
The {{numIntervals}} in the code is 104857600/1024000 = 102 (the exact value is 102.4, truncated by integer division), so the length of {{distribution}} is {{numIntervals}} + 1 = 103.
2) The {{ArrayIndexOutOfBoundsException}} happens when the file size is in the range ((maxSize/step)*step, maxSize]. For example, a file of size 104800000 falls in this range. Its bucket number is 104800000/1024000, about 102.3, and the code applies {{Math.ceil}} to this, giving 103. But the length of {{distribution}} is also 103, so the valid indexes are 0 to 102, and the {{ArrayIndexOutOfBoundsException}} happens (see the sketch after these steps).
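
A minimal standalone sketch of this arithmetic (plain Java, not the actual Hadoop code; the values come from the steps above and the variable names merely mirror those in {{FileDistributionCalculator}}):
{code}
// Standalone sketch of the failing arithmetic, using the assumed values above.
long maxSize = 104857600L;   // -maxSize
int steps = 1024000;         // -step
int numIntervals = (int) (maxSize / steps);        // 102 (integer division truncates 102.4)
long[] distribution = new long[numIntervals + 1];  // valid indexes: 0..102

long fileSize = 104800000L;  // falls in ((maxSize/steps)*steps, maxSize]
int bucket = fileSize > maxSize ? distribution.length - 1
    : (int) Math.ceil((double) fileSize / steps);  // ceil(102.34...) = 103
++distribution[bucket];      // ArrayIndexOutOfBoundsException: 103
{code}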

In short, the exception happens when {{maxSize}} is not evenly divisible by {{step}} and the file size falls in the range ((maxSize/step)*step, maxSize]. The related logic should be changed from
{code}
int bucket = fileSize > maxSize ? distribution.length - 1 : (int) Math
            .ceil((double)fileSize / steps);
{code}
to 
{code}
int bucket =
            fileSize >= maxSize || fileSize > (maxSize / steps) * steps ?
                distribution.length - 1 : (int) Math.ceil((double) fileSize / steps);
{code}
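
For illustration, a small sketch comparing the old and the proposed expressions for the boundary case above (standalone Java with assumed values, not a patch against the real class):
{code}
// Sketch: old vs. proposed bucket computation for fileSize = 104800000.
long maxSize = 104857600L, fileSize = 104800000L;
int steps = 1024000;
long[] distribution = new long[(int) (maxSize / steps) + 1];      // length 103

int oldBucket = fileSize > maxSize ? distribution.length - 1
    : (int) Math.ceil((double) fileSize / steps);                 // 103, out of bounds

int newBucket = fileSize >= maxSize || fileSize > (maxSize / steps) * steps
    ? distribution.length - 1
    : (int) Math.ceil((double) fileSize / steps);                 // 102, the last bucket

System.out.println(oldBucket + " vs " + newBucket);               // prints "103 vs 102"
{code}
With the proposed condition, any file whose size lies in ((maxSize/step)*step, maxSize] is counted in the last bucket together with files of exactly maxSize, so the index never exceeds distribution.length - 1.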

  was:
I used the hdfs oiv -p FileDistribution command to do a file analysis, but an {{ArrayIndexOutOfBoundsException}} occurred and terminated the process. The stack trace:
{code}
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 103
	at org.apache.hadoop.hdfs.tools.offlineImageViewer.FileDistributionCalculator.run(FileDistributionCalculator.java:243)
	at org.apache.hadoop.hdfs.tools.offlineImageViewer.FileDistributionCalculator.visit(FileDistributionCalculator.java:176)
	at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(OfflineImageViewerPB.java:176)
	at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.main(OfflineImageViewerPB.java:129)
{code}
I looked into the code and found that the exception is thrown while incrementing a count in {{distribution}}. The cause is that the computed bucket number can exceed the last valid index of {{distribution}}.

Here are my steps:
1) The input command params:
{code}
hdfs oiv -p FileDistribution -maxSize 104857600 -step 1024000
{code}
The {{numIntervals}} in the code is 104857600/1024000 = 102 (the exact value is 102.4, truncated by integer division), so the length of {{distribution}} is {{numIntervals}} + 1 = 103.
2) The {{ArrayIndexOutOfBoundsException}} happens when the file size is in the range (maxSize-steps, maxSize]. For example, a file of size 104800000 falls in this range. Its bucket number is 104800000/1024000, about 102.3, and the code applies {{Math.ceil}} to this, giving 103. But the length of {{distribution}} is also 103, so the valid indexes are 0 to 102, and the {{ArrayIndexOutOfBoundsException}} happens.

In short, the exception happens when {{maxSize}} is not evenly divisible by {{step}} and the file size falls in the range (maxSize-step, maxSize]. The related logic should be changed from
{code}
int bucket = fileSize > maxSize ? distribution.length - 1 : (int) Math
            .ceil((double)fileSize / steps);
{code}
to 
{code}
int bucket =
    fileSize >= maxSize || (fileSize + steps) > maxSize ? distribution.length - 1
        : (int) Math.ceil((double) fileSize / steps);
{code}


> FileDistribution fails in hdfs oiv command due to ArrayIndexOutOfBoundsException
> --------------------------------------------------------------------------------
>
>                 Key: HDFS-10691
>                 URL: https://issues.apache.org/jira/browse/HDFS-10691
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.7.1
>            Reporter: Yiqun Lin
>            Assignee: Yiqun Lin
>         Attachments: HDFS-10691.001.patch
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

