hadoop-hdfs-user mailing list archives

From Milind Vaidya <kava...@gmail.com>
Subject Checksum Exception : Why is it happening and how to avoid it ?
Date Wed, 10 Aug 2016 13:16:46 GMT
I am trying to upload a file to S3.

Locally the file is generated using :
The corresponding .crc file is generated too.
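For context on what that .crc file is checking, here is a minimal self-contained sketch of how sidecar chunk checksums work, loosely modeled on Hadoop's ChecksumFileSystem: data is split into fixed-size chunks (Hadoop's default is 512 bytes, the old `io.bytes.per.checksum` setting) and one CRC is stored per chunk. The class and method names below are illustrative, not Hadoop's actual API or on-disk format:

```java
import java.util.zip.CRC32;

public class SidecarChecksum {
    // Assumed default chunk size; Hadoop's actual value comes from config.
    static final int BYTES_PER_CHECKSUM = 512;

    // Compute one CRC32 per chunk, as the writer would when producing
    // the .crc sidecar file.
    public static long[] chunkChecksums(byte[] data) {
        int chunks = (data.length + BYTES_PER_CHECKSUM - 1) / BYTES_PER_CHECKSUM;
        long[] crcs = new long[chunks];
        for (int i = 0; i < chunks; i++) {
            int from = i * BYTES_PER_CHECKSUM;
            int to = Math.min(from + BYTES_PER_CHECKSUM, data.length);
            CRC32 crc = new CRC32();
            crc.update(data, from, to - from);
            crcs[i] = crc.getValue();
        }
        return crcs;
    }

    // On read, recompute each chunk's CRC and compare with the stored one.
    // A mismatch at chunk i is what surfaces as a ChecksumException with a
    // "Checksum error: at <offset>" position. Returns -1 if all chunks match.
    public static int firstBadChunk(byte[] data, long[] storedCrcs) {
        long[] actual = chunkChecksums(data);
        for (int i = 0; i < storedCrcs.length; i++) {
            if (i >= actual.length || actual[i] != storedCrcs[i]) return i;
        }
        return -1;
    }
}
```

The important consequence: if the data file is modified or truncated after the .crc was written, verification fails even though neither file is "corrupt" on its own.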

There are two scenarios in which the file is read, and hence in which the exception can occur:

1. While uploading
2. While trimming (extracting only the required part from a bigger file)

I avoided the exception in scenario 1 by bypassing the Hadoop libraries
altogether, which I could not do in scenario 2. The exception trace is
as follows:

Caused by: org.apache.hadoop.fs.ChecksumException: Checksum error: at 2096128
        at ...
        at java.io.DataInputStream.read(DataInputStream.java:149)
        at ...
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
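One observation on the reported offset: assuming Hadoop's default of 512 bytes per checksum chunk (an assumption; the actual value is configurable), 2096128 falls exactly on a chunk boundary, chunk 4094, which is consistent with verification failing at the start of a checksum chunk. A trivial check of that arithmetic:

```java
// Map a byte offset from a ChecksumException to a checksum-chunk index,
// assuming the default 512 bytes per checksum. Helper names are illustrative.
public class ChecksumOffset {
    static final int BYTES_PER_CHECKSUM = 512; // assumed default

    public static long chunkIndex(long byteOffset) {
        return byteOffset / BYTES_PER_CHECKSUM;
    }

    public static boolean onChunkBoundary(long byteOffset) {
        return byteOffset % BYTES_PER_CHECKSUM == 0;
    }
}
```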

Hadoop native library version: 2.7.0

This does not happen all the time. It is encountered when the load is
higher and the file is several MBs in size. When tested in the QA or
staging environment, where the load is lower, it works fine.

What is going wrong here?
