From: Andrzej Bialecki
To: hadoop-user@lucene.apache.org
Date: Mon, 22 Oct 2007 17:12:12 +0200
Subject: Re: How to Setup Hbase in 10 mintues
Message-ID: <471CBDCC.5070207@getopt.org>
In-Reply-To: <471C228A.9000006@apache.org>
Dennis Kubes wrote:
> I had a somewhat difficult time figuring out how to get hbase started.
> In the end, it was pretty simple. Here are the steps:
>
> 1. Download hadoop from svn, untar to a directory, say ~/hadooptrunk,
>    and compile through ant.
> 2. Move the built hadoop-xx directory to where you want to run it,
>    say ~/hadoop.
> 3. Set the hadoop tmp directory in hadoop-site.xml (by default all
>    other variables should be fine).
> 4. Copy the scripts from ~/hadooptrunk/src/contrib/hbase/bin to
>    ~/hadoop/src/contrib/hbase/bin.
> 5. Format the hadoop dfs through ~/hadoop/bin/hadoop namenode -format.
> 6. Start the dfs through ~/hadoop/bin/start-dfs.sh (logs are viewable
>    in ~/hadoop/logs by default; you don't need mapreduce for hbase).
> 7. Go to the hbase directory ~/hadoop/src/contrib/hbase.
> 8. The Hbase default values are fine for now; start hbase with
>    ~/hadoop/src/contrib/hbase/bin/start-hbase.sh (logs are viewable
>    in ~/hadoop/logs by default).
> 9. Enter the hbase shell with ~/hadoop/src/contrib/hbase/bin/hbase shell.
> 10. Have fun with Hbase.
> 11. Stop the hbase servers with
>     ~/hadoop/src/contrib/hbase/bin/stop-hbase.sh. Wait until the
>     servers are finished stopping.
> 12. Stop the hadoop dfs with ~/hadoop/bin/stop-dfs.sh.
>
> Hope this helps.

Did you try to run it with LocalFS / Cygwin, and if so, did you notice
any peculiarities? I tried this once, and at first the start-hbase.sh
script wouldn't work (missing log files? it looked like some variables
in the paths were expanded the wrong way), and then, when I started the
master and a regionserver by hand, it would complain about missing map
files and all requests would time out ... I gave up after that and
moved to HDFS.

-- 
Best regards,
Andrzej Bialecki     <><
 ___. ___ ___ ___ _ _   __________________________________
[__ || __|__/|__||\/|  Information Retrieval, Semantic Web
___|||__||  \|  ||  |  Embedded Unix, System Integration
http://www.sigram.com  Contact: info at sigram dot com
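[Editor's note: the twelve steps quoted above boil down to the shell
session sketched below. This is a non-authoritative sketch: the svn URL,
the tmp-dir path, and the exact build directory name are assumptions
based on the 2007-era Hadoop contrib layout the message describes, and
may not match other versions.]

```shell
#!/bin/sh
# Steps 1-2: check out Hadoop trunk, build it with ant, move the build
# output to the run directory. (URL and build dir name are assumptions.)
svn checkout http://svn.apache.org/repos/asf/lucene/hadoop/trunk ~/hadooptrunk
cd ~/hadooptrunk && ant
mv ~/hadooptrunk/build/hadoop-* ~/hadoop   # exact hadoop-xx name varies

# Step 3: set the Hadoop tmp directory in conf/hadoop-site.xml;
# the value path here is purely illustrative.
cat > ~/hadoop/conf/hadoop-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/user/hadoop-tmp</value>
  </property>
</configuration>
EOF

# Step 4: copy the HBase scripts into the runtime tree.
mkdir -p ~/hadoop/src/contrib/hbase
cp -r ~/hadooptrunk/src/contrib/hbase/bin ~/hadoop/src/contrib/hbase/

# Steps 5-6: format and start HDFS (MapReduce is not needed for HBase).
~/hadoop/bin/hadoop namenode -format
~/hadoop/bin/start-dfs.sh

# Steps 7-9: start HBase with its default settings, then open the shell.
cd ~/hadoop/src/contrib/hbase
bin/start-hbase.sh
bin/hbase shell

# Steps 11-12: stop HBase first, wait for it to finish, then stop HDFS.
bin/stop-hbase.sh
~/hadoop/bin/stop-dfs.sh
```

Note the shutdown order mirrors the startup order in reverse: HBase
writes through to the dfs, so stopping HDFS while region servers are
still flushing would risk losing data.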