hbase-issues mailing list archives

From "Andrew Purtell (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-13400) HBase Snapshot export to S3 fails with Content-MD5 errors.
Date Fri, 03 Apr 2015 18:29:53 GMT

    [ https://issues.apache.org/jira/browse/HBASE-13400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394850#comment-14394850 ]

Andrew Purtell commented on HBASE-13400:
----------------------------------------

Surely we do not want to claim ownership of every dependency that comes in from Hadoop or
ZooKeeper and officially state that all of the APIs of Hadoop, Jackson, jets3t, ZooKeeper,
etc etc etc are part of _our_ API too. That's lunacy.

> HBase Snapshot export to S3 fails with Content-MD5 errors.
> ----------------------------------------------------------
>
>                 Key: HBASE-13400
>                 URL: https://issues.apache.org/jira/browse/HBASE-13400
>             Project: HBase
>          Issue Type: Bug
>          Components: Filesystem Integration, hadoop2
>    Affects Versions: 0.98.0
>         Environment: CentOS 6.5, Hortonworks Data Platform 2.1.2, Hadoop 2.4.0
>            Reporter: Joseph Reid
>
> We're running into issues exporting snapshots of large tables to Amazon S3.
> The snapshot completes successfully, but the snapshot export job runs into jets3t errors when we attempt to export to S3 (a sketch of a typical export invocation follows after the quoted report).
> Error snippet, from job log:
> {code}
> 2015-04-03 16:59:16,425 INFO  [main] mapreduce.Job: Task Id : attempt_1426532296228_55454_m_000008_1, Status : FAILED
> Error: org.apache.hadoop.fs.s3.S3Exception: org.jets3t.service.S3ServiceException: S3 Error Message. -- ResponseCode: 400, ResponseStatus: Bad Request, XML Error Message: <?xml version="1.0" encoding="UTF-8"?><Error><Code>BadDigest</Code><Message>The Content-MD5 you specified did not match what we received.</Message><ExpectedDigest>CWiSsgzVAJyzPy2oT8u4Ag==</ExpectedDigest><CalculatedDigest>2DIsv6jZJ8FuGtalOO8SPA==</CalculatedDigest><RequestId>CA325C738970C313</RequestId><HostId>tnE+O1zPZovaQWMhCuM4lkX0h/wN9173FQ7omxZzLb6eH0OCHASyan+mb8WBJkNn</HostId></Error>
> 	at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.handleS3ServiceException(Jets3tNativeFileSystemStore.java:405)
> 	at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.storeFile(Jets3tNativeFileSystemStore.java:115)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:606)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
> 	at org.apache.hadoop.fs.s3native.$Proxy19.storeFile(Unknown Source)
> 	at org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsOutputStream.close(NativeS3FileSystem.java:221)
> 	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:70)
> 	at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:103)
> 	at org.apache.hadoop.hbase.snapshot.ExportSnapshot$ExportMapper.copyFile(ExportSnapshot.java:200)
> 	at org.apache.hadoop.hbase.snapshot.ExportSnapshot$ExportMapper.map(ExportSnapshot.java:140)
> 	at org.apache.hadoop.hbase.snapshot.ExportSnapshot$ExportMapper.map(ExportSnapshot.java:89)
> 	at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
> 	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
> 	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
> 	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557)
> 	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
> Caused by: org.jets3t.service.S3ServiceException: S3 Error Message. -- ResponseCode: 400, ResponseStatus: Bad Request, XML Error Message: <?xml version="1.0" encoding="UTF-8"?><Error><Code>BadDigest</Code><Message>The Content-MD5 you specified did not match what we received.</Message><ExpectedDigest>CWiSsgzVAJyzPy2oT8u4Ag==</ExpectedDigest><CalculatedDigest>2DIsv6jZJ8FuGtalOO8SPA==</CalculatedDigest><RequestId>CA325C738970C313</RequestId><HostId>tnE+O1zPZovaQWMhCuM4lkX0h/wN9173FQ7omxZzLb6eH0OCHASyan+mb8WBJkNn</HostId></Error>
> 	at org.jets3t.service.S3Service.putObject(S3Service.java:2267)
> 	at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.storeFile(Jets3tNativeFileSystemStore.java:113)
> 	... 21 more
> 2015-04-03 17:03:50,613 INFO  [main] mapreduce.Job: Task Id : attempt_1426532296228_55454_m_000010_1, Status : FAILED
> AttemptID:attempt_1426532296228_55454_m_000010_1 Timed out after 300 secs
> {code}
> We've verified that exports to other clusters from these same snapshots work fine, so the problem appears to lie in the interaction between the snapshot export utility, jets3t, and S3.
> "The Content-MD5 you specified did not match what we received" seems to indicate that the snapshot data changed between the start of the upload and the point where the error was raised. Can that be? (A sketch of how the Content-MD5 check works follows after the quoted report.)
> Related to:
> [Discussion on the jets3t users group|https://groups.google.com/forum/#!topic/jets3t-users/Bg2qh7OdE2U]
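
The report does not include the exact export command, so here is a minimal sketch, assuming a typical invocation of the ExportSnapshot tool against an s3n:// destination; the snapshot name, bucket, and mapper count below are hypothetical, and s3n:// is the NativeS3FileSystem/jets3t path that appears in the stack trace above:

{code}
// Minimal sketch: driving the snapshot export programmatically via ToolRunner,
// equivalent to running `hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot ...` on the CLI.
// Snapshot name and bucket are placeholders, not taken from the report.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.snapshot.ExportSnapshot;
import org.apache.hadoop.util.ToolRunner;

public class ExportSnapshotToS3 {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    int rc = ToolRunner.run(conf, new ExportSnapshot(), new String[] {
        "-snapshot", "my_table_snapshot",     // hypothetical snapshot name
        "-copy-to",  "s3n://my-bucket/hbase", // s3n:// routes the copy through jets3t
        "-mappers",  "16"
    });
    System.exit(rc);
  }
}
{code}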
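
For context on the BadDigest error itself, here is a minimal, self-contained sketch (plain Java, not HBase or jets3t code) of what the Content-MD5 check amounts to: the client declares the base64-encoded MD5 of the bytes it intends to upload, the server recomputes the digest over the bytes it actually receives, and a 400 BadDigest with an ExpectedDigest/CalculatedDigest pair comes back when the two differ, for example because the source bytes changed mid-upload:

{code}
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;
import java.util.Base64;

public class ContentMd5Check {
  // Base64-encoded MD5: the format used for the Content-MD5 header and the digests in the log.
  static String base64Md5(byte[] data) throws Exception {
    return Base64.getEncoder().encodeToString(MessageDigest.getInstance("MD5").digest(data));
  }

  public static void main(String[] args) throws Exception {
    // Digest the client would declare at the start of the upload.
    String declared = base64Md5(Files.readAllBytes(Paths.get(args[0])));
    // Digest the server would compute over the bytes it actually received;
    // re-reading the file here simulates that. If the file changed in between, they differ.
    String calculated = base64Md5(Files.readAllBytes(Paths.get(args[0])));

    System.out.println("Content-MD5 declared:  " + declared);
    System.out.println("Digest on receipt:     " + calculated);
    System.out.println("BadDigest would occur: " + !declared.equals(calculated));
  }
}
{code}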



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
