hadoop-hdfs-issues mailing list archives

From "Ayappan (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-6903) Crc32 checksum errors in Big-Endian Architecture
Date Thu, 21 Aug 2014 08:53:12 GMT

     [ https://issues.apache.org/jira/browse/HDFS-6903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ayappan updated HDFS-6903:
--------------------------

    Description: 
Native Crc32 checksum calculation is not handled on big-endian architectures. In this case, the platform is ppc64. Due to this, several test cases in the HDFS module fail.
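The failure pattern above is the classic symptom of byte-order-dependent checksum parsing: HDFS writes each CRC32 as four big-endian bytes (via `DataOutputStream.writeInt`), so native code that reinterprets those bytes in host byte order sees a byte-swapped value on one side. A minimal sketch of the underlying issue (hypothetical class and method names, not Hadoop's actual NativeCrc32 code):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class CrcEndianDemo {
    // A stored CRC32 is four big-endian bytes on disk. Reading it back with
    // an assumed host byte order gives the wrong value on one endianness --
    // a host-order assumption written for x86 breaks on ppc64, and vice versa.
    static int readStoredCrc(byte[] stored, ByteOrder order) {
        return ByteBuffer.wrap(stored).order(order).getInt();
    }

    public static void main(String[] args) {
        byte[] stored = {0x12, 0x34, 0x56, 0x78};          // big-endian on disk
        int be = readStoredCrc(stored, ByteOrder.BIG_ENDIAN);
        int le = readStoredCrc(stored, ByteOrder.LITTLE_ENDIAN);
        System.out.println(be == 0x12345678);              // correct on any host
        System.out.println(le == 0x78563412);              // byte-swapped value
        // Fix: parse with an explicit ByteOrder (or swap with
        // Integer.reverseBytes on the mismatched side) instead of
        // assuming host endianness.
    }
}
```

Parsing with an explicit `ByteOrder` keeps the comparison correct on both little-endian x86 and big-endian ppc64 hosts.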

Running org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Tests run: 3, Failures: 0, Errors: 2, Skipped: 1, Time elapsed: 13.274 sec <<< FAILURE! - in org.apache.hadoop.hdfs.TestAppendDifferentChecksum
testAlgoSwitchRandomized(org.apache.hadoop.hdfs.TestAppendDifferentChecksum)  Time elapsed: 7.141 sec  <<< ERROR!
java.io.IOException: p=/testAlgoSwitchRandomized, length=28691, i=12288
        at org.apache.hadoop.util.NativeCrc32.nativeVerifyChunkedSums(Native Method)
        at org.apache.hadoop.util.NativeCrc32.verifyChunkedSums(NativeCrc32.java:57)
        at org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:291)
        at org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket(RemoteBlockReader2.java:202)
        at org.apache.hadoop.hdfs.RemoteBlockReader2.read(RemoteBlockReader2.java:137)
        at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:682)
        at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:738)
        at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:795)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:836)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:644)
        at java.io.FilterInputStream.read(FilterInputStream.java:83)
        at org.apache.hadoop.hdfs.AppendTestUtil.check(AppendTestUtil.java:129)
        at org.apache.hadoop.hdfs.TestAppendDifferentChecksum.testAlgoSwitchRandomized(TestAppendDifferentChecksum.java:130)

testSwitchAlgorithms(org.apache.hadoop.hdfs.TestAppendDifferentChecksum)  Time elapsed: 1.394 sec  <<< ERROR!
java.io.IOException: p=/testSwitchAlgorithms, length=3000, i=0
        at org.apache.hadoop.util.NativeCrc32.nativeVerifyChunkedSums(Native Method)
        at org.apache.hadoop.util.NativeCrc32.verifyChunkedSums(NativeCrc32.java:57)
        at org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:291)
        at org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket(RemoteBlockReader2.java:202)
        at org.apache.hadoop.hdfs.RemoteBlockReader2.read(RemoteBlockReader2.java:137)
        at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:682)
        at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:738)
        at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:795)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:836)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:644)
        at java.io.FilterInputStream.read(FilterInputStream.java:83)
        at org.apache.hadoop.hdfs.AppendTestUtil.check(AppendTestUtil.java:129)
        at org.apache.hadoop.hdfs.TestAppendDifferentChecksum.testSwitchAlgorithms(TestAppendDifferentChecksum.java:94)



  was:
Native Crc32 checksum calculation is not handled on big-endian architectures. In this case, the platform is ppc64. Due to this, several test cases in the HDFS module fail. The attached

Running org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Tests run: 3, Failures: 0, Errors: 2, Skipped: 1, Time elapsed: 13.274 sec <<< FAILURE! - in org.apache.hadoop.hdfs.TestAppendDifferentChecksum
testAlgoSwitchRandomized(org.apache.hadoop.hdfs.TestAppendDifferentChecksum)  Time elapsed: 7.141 sec  <<< ERROR!
java.io.IOException: p=/testAlgoSwitchRandomized, length=28691, i=12288
        at org.apache.hadoop.util.NativeCrc32.nativeVerifyChunkedSums(Native Method)
        at org.apache.hadoop.util.NativeCrc32.verifyChunkedSums(NativeCrc32.java:57)
        at org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:291)
        at org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket(RemoteBlockReader2.java:202)
        at org.apache.hadoop.hdfs.RemoteBlockReader2.read(RemoteBlockReader2.java:137)
        at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:682)
        at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:738)
        at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:795)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:836)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:644)
        at java.io.FilterInputStream.read(FilterInputStream.java:83)
        at org.apache.hadoop.hdfs.AppendTestUtil.check(AppendTestUtil.java:129)
        at org.apache.hadoop.hdfs.TestAppendDifferentChecksum.testAlgoSwitchRandomized(TestAppendDifferentChecksum.java:130)

testSwitchAlgorithms(org.apache.hadoop.hdfs.TestAppendDifferentChecksum)  Time elapsed: 1.394 sec  <<< ERROR!
java.io.IOException: p=/testSwitchAlgorithms, length=3000, i=0
        at org.apache.hadoop.util.NativeCrc32.nativeVerifyChunkedSums(Native Method)
        at org.apache.hadoop.util.NativeCrc32.verifyChunkedSums(NativeCrc32.java:57)
        at org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:291)
        at org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket(RemoteBlockReader2.java:202)
        at org.apache.hadoop.hdfs.RemoteBlockReader2.read(RemoteBlockReader2.java:137)
        at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:682)
        at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:738)
        at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:795)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:836)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:644)
        at java.io.FilterInputStream.read(FilterInputStream.java:83)
        at org.apache.hadoop.hdfs.AppendTestUtil.check(AppendTestUtil.java:129)
        at org.apache.hadoop.hdfs.TestAppendDifferentChecksum.testSwitchAlgorithms(TestAppendDifferentChecksum.java:94)




> Crc32 checksum errors in Big-Endian Architecture
> ------------------------------------------------
>
>                 Key: HDFS-6903
>                 URL: https://issues.apache.org/jira/browse/HDFS-6903
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: test
>    Affects Versions: 3.0.0, 2.4.1
>         Environment: PowerPC RHEL 7 & 6.5 ( ppc64 - Big-Endian )
>            Reporter: Ayappan
>             Fix For: 2.4.1
>
>         Attachments: HDFS-6903.patch
>
>
> Native Crc32 checksum calculation is not handled on big-endian architectures. In this case, the platform is ppc64. Due to this, several test cases in the HDFS module fail.
> Running org.apache.hadoop.hdfs.TestAppendDifferentChecksum
> Tests run: 3, Failures: 0, Errors: 2, Skipped: 1, Time elapsed: 13.274 sec <<< FAILURE! - in org.apache.hadoop.hdfs.TestAppendDifferentChecksum
> testAlgoSwitchRandomized(org.apache.hadoop.hdfs.TestAppendDifferentChecksum)  Time elapsed: 7.141 sec  <<< ERROR!
> java.io.IOException: p=/testAlgoSwitchRandomized, length=28691, i=12288
>         at org.apache.hadoop.util.NativeCrc32.nativeVerifyChunkedSums(Native Method)
>         at org.apache.hadoop.util.NativeCrc32.verifyChunkedSums(NativeCrc32.java:57)
>         at org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:291)
>         at org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket(RemoteBlockReader2.java:202)
>         at org.apache.hadoop.hdfs.RemoteBlockReader2.read(RemoteBlockReader2.java:137)
>         at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:682)
>         at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:738)
>         at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:795)
>         at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:836)
>         at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:644)
>         at java.io.FilterInputStream.read(FilterInputStream.java:83)
>         at org.apache.hadoop.hdfs.AppendTestUtil.check(AppendTestUtil.java:129)
>         at org.apache.hadoop.hdfs.TestAppendDifferentChecksum.testAlgoSwitchRandomized(TestAppendDifferentChecksum.java:130)
> testSwitchAlgorithms(org.apache.hadoop.hdfs.TestAppendDifferentChecksum)  Time elapsed: 1.394 sec  <<< ERROR!
> java.io.IOException: p=/testSwitchAlgorithms, length=3000, i=0
>         at org.apache.hadoop.util.NativeCrc32.nativeVerifyChunkedSums(Native Method)
>         at org.apache.hadoop.util.NativeCrc32.verifyChunkedSums(NativeCrc32.java:57)
>         at org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:291)
>         at org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket(RemoteBlockReader2.java:202)
>         at org.apache.hadoop.hdfs.RemoteBlockReader2.read(RemoteBlockReader2.java:137)
>         at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:682)
>         at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:738)
>         at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:795)
>         at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:836)
>         at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:644)
>         at java.io.FilterInputStream.read(FilterInputStream.java:83)
>         at org.apache.hadoop.hdfs.AppendTestUtil.check(AppendTestUtil.java:129)
>         at org.apache.hadoop.hdfs.TestAppendDifferentChecksum.testSwitchAlgorithms(TestAppendDifferentChecksum.java:94)



--
This message was sent by Atlassian JIRA
(v6.2#6252)
