hbase-dev mailing list archives

From John Roberts <johnroberts...@yahoo.com>
Subject Re: HBase: using S3 for storage
Date Tue, 15 Dec 2009 16:32:52 GMT
Never mind - I set my hbase.rootdir to s3://net.montrix.test.s3.amazonaws.com:80/ and it
worked; I can see files being written to my net.montrix.test.s3.amazonaws.com bucket in S3.

- John
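
For reference, the working setting described above would look like this in hbase-site.xml. This is a minimal sketch showing only the one property John mentions; any other properties in his file are unknown.

```xml
<!-- hbase-site.xml: minimal sketch of the working setting from this message -->
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>s3://net.montrix.test.s3.amazonaws.com:80/</value>
  </property>
</configuration>
```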

From: John Roberts <johnroberts239@yahoo.com>
To: hbase-dev@hadoop.apache.org
Sent: Tue, December 15, 2009 6:39:53 AM
Subject: Re: HBase: using S3 for storage

There were additional error messages in my master log file indicating that I was missing
some jars.  I downloaded jets3t-0.7.1.jar and commons-codec-1.4.jar and set the JETS3T_HOME
variable in my hbase-env.sh file.  That got me to the point where HBase is now trying to use
S3.  Now I get the errors below in my master log file.  At this point the only question seems
to be exactly what to set my hbase.rootdir property to.  My S3 account has buckets "net.montrix.test"
as well as "net.montrix.test.s3.amazonaws.com".  I tried setting my hbase.rootdir value to


The location of my hbase root dir on my local file system is /tmp/hbase-jroberts/hbase, and that
resulted in the error below.  So either my hbase.rootdir value is wrong, or perhaps the fs.default.name
property in my hadoop-site.xml is wrong?  I have it set to s3://hbase.


2009-12-15 06:18:52,917 INFO org.apache.hadoop.hbase.master.HMaster: My address is localhost.localdomain:60000
2009-12-15 06:18:54,696 ERROR org.apache.hadoop.hbase.master.HMaster: Can not start master
org.apache.hadoop.fs.s3.S3Exception: org.jets3t.service.S3ServiceException: S3 GET failed for '/%2Ftmp%2Fhbase-jroberts%2Fhbase' XML Error Message: <?xml version="1.0" encoding="UTF-8"?><Error><Code>NoSuchBucket</Code><Message>The specified bucket does not exist</Message><BucketName>net.montrix.test.s3.amazonaws.com</BucketName><RequestId>E7E72017C69AB6DF</RequestId><HostId>LHSezOrfx3LrWI+IWQ1Icbz0/FRndFDsyQWIn3Oaru1ui6JXfq9Zfz1tgfUET7TG</HostId></Error>
        at org.apache.hadoop.fs.s3.Jets3tFileSystemStore.get(Jets3tFileSystemStore.java:156)
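
The jar setup described at the top of this message might look like the following hbase-env.sh fragment. The jar names come from the thread; the install paths, and the use of HBASE_CLASSPATH to put the jars on HBase's classpath, are assumptions.

```shell
# hbase-env.sh fragment - a sketch of the jets3t/commons-codec setup above.
# The /opt paths are assumptions; the jar names are the ones from the thread.
export JETS3T_HOME=/opt/jets3t-0.7.1
export HBASE_CLASSPATH="$JETS3T_HOME/jars/jets3t-0.7.1.jar:/opt/libs/commons-codec-1.4.jar"
```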

From: John Roberts <johnroberts239@yahoo.com>
To: hbase-dev@hadoop.apache.org
Sent: Tue, December 15, 2009 1:33:45 AM
Subject: Re: HBase: using S3 for storage

The stack trace is here: http://pastebin.ca/1715521

I set my hbase.rootdir value to the following:


Note that the net.montrix.test bucket exists in my S3 account.  Thanks for looking at this.


From: Andrew Purtell <apurtell@apache.org>
To: hbase-dev@hadoop.apache.org
Sent: Mon, December 14, 2009 11:27:41 PM
Subject: Re: HBase: using S3 for storage

Hi John,

Can you pastebin that stack trace?

   - Andy

From: John Roberts <johnroberts239@yahoo.com>
To: hbase-dev@hadoop.apache.org
Sent: Mon, December 14, 2009 6:49:50 PM
Subject: HBase: using S3 for storage

I'm running HBase version 0.20.2 and am trying to get my HBase server
to use S3 for storage instead of the local file system.  I tried
following the instructions here but could not get it to work:


My HBase distribution does not include a hadoop-site.xml file, so I created one in the conf
directory with the following parameters:
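
(The XML itself appears to have been stripped by the archive. As a hedged sketch, the standard S3 filesystem properties in Hadoop 0.20-era configs were the ones below; the bucket name is taken from later in the thread, and the credential values are placeholders, not John's actual settings.)

```xml
<!-- hadoop-site.xml: sketch of typical Hadoop 0.20 S3 settings.
     Bucket name from this thread; credentials are placeholders. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>s3://net.montrix.test</value>
  </property>
  <property>
    <name>fs.s3.awsAccessKeyId</name>
    <value>YOUR_AWS_ACCESS_KEY_ID</value>
  </property>
  <property>
    <name>fs.s3.awsSecretAccessKey</name>
    <value>YOUR_AWS_SECRET_ACCESS_KEY</value>
  </property>
</configuration>
```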





I also updated the hbase.rootdir property with the S3 URL, as per the reference above.  When
I ran the hbase shell and tried to put a value into a table, I got a deep stack trace with
no mention of S3.

Has anyone gotten HBase to use S3?  If so, could you send me the config changes you made
to get it to work?  Thanks!

