hbase-user mailing list archives

From Rahul Mehta <rahul23134...@gmail.com>
Subject hbase master is not starting @ 60010 on new Ubuntu 11.04 system
Date Mon, 05 Dec 2011 07:08:41 GMT
I am trying to install HBase on my local Ubuntu 11.04 system.

Here is what I have done so far:

   1. SSH Configuration (a passwordless-login sketch follows this list)
      1. ssh-keygen -t rsa
      2. press Enter at each prompt
      3. cd ~/.ssh
      4. sudo apt-get install openssh-server
      5. ssh localhost
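
For reference, a minimal passwordless-SSH sketch. Note that the
authorized_keys step is an assumption on my part and is not in the list
above:

   sudo apt-get install openssh-server
   ssh-keygen -t rsa                                # press Enter at each prompt
   cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys  # authorize the key for localhost
   chmod 600 ~/.ssh/authorized_keys
   ssh localhost                                    # should log in without a password
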
   2. Installing and configuring ZooKeeper (a sample zoo1.cfg sketch follows
   this list)
      1. wget http://archive.cloudera.com/cdh/3/zookeeper-3.3.3-cdh3u1.tar.gz
      2. tar xzvf zookeeper-3.3.3-cdh3u1.tar.gz
      3. mv zoo.cfg zoo1.cfg (in the conf directory of ZooKeeper)
      4. cp zoo1.cfg zoo2.cfg
      5. cp zoo1.cfg zoo3.cfg
      6. Make three data directories, one per config file. I made
      /home/rahul/oodebesetup/data/zookeeper/data1, data2, and data3, and
      set dataDir in each config file accordingly.
      7. Create a myid file in each data directory containing the server id
      (1, 2, and 3 respectively).
      8. Add the following at the bottom of all three config files:
         1. server.1=localhost:2878:3878
         2. server.2=localhost:2879:3879
         3. server.3=localhost:2880:3880
      9. Set clientPort to 2181, 2182, and 2183 in zoo1.cfg, zoo2.cfg, and
      zoo3.cfg respectively.
      10. ./bin/zkServer.sh start <ZooKeeper configuration file in the conf
      directory, e.g. zoo1.cfg>
      11. Check with the jps command; there should be three QuorumPeerMain
      processes:
         1. 3466 QuorumPeerMain
         2. 3399 QuorumPeerMain
         3. 3426 QuorumPeerMain
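
For reference, a sketch of what zoo1.cfg ends up looking like with the layout
above (the tickTime/initLimit/syncLimit values are the stock sample defaults,
not something from my files; zoo2.cfg and zoo3.cfg differ only in dataDir and
clientPort):

   tickTime=2000
   initLimit=10
   syncLimit=5
   dataDir=/home/rahul/oodebesetup/data/zookeeper/data1
   clientPort=2181
   server.1=localhost:2878:3878
   server.2=localhost:2879:3879
   server.3=localhost:2880:3880

and the myid files / server startup:

   echo 1 > /home/rahul/oodebesetup/data/zookeeper/data1/myid   # 2 and 3 for the other dirs
   ./bin/zkServer.sh start zoo1.cfg                             # repeat for zoo2.cfg and zoo3.cfg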



   3. Installing and configuring Hadoop
      1. wget http://archive.cloudera.com/cdh/3/hadoop-0.20.2-cdh3u1.tar.gz
      2. tar xzvf hadoop-0.20.2-cdh3u1.tar.gz
      3. Create a data directory for Hadoop with an hdfs folder inside it.
      4. edit <HADOOP_HOME>/conf/core-site.xml


<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
  <description>This is the namenode URI</description>
</property>

<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/rahul/oodebesetup/data/hadoop/hdfs</value>
  <description>Base directory for Hadoop's temporary data</description>
</property>

      5. edit <HADOOP_HOME>/conf/hdfs-site.xml


<property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication. The actual number of replications
  can be specified when the file is created. The default is used if
  replication is not specified at create time.</description>
</property>

      6. edit <HADOOP_HOME>/conf/mapred-site.xml


<property>
  <name>mapred.job.tracker</name>
  <value>localhost:9001</value>
  <description>The host and port that the MapReduce job tracker runs at. If
  "local", then jobs are run in-process as a single map and reduce
  task.</description>
</property>

      7. edit <HADOOP_HOME>/conf/hadoop-env.sh
          1. export JAVA_HOME=/usr/lib/jvm/java-6-openjdk
      8. Format the namenode
          1. <HADOOP_HOME>/bin/hadoop namenode -format
      9. Start the Hadoop server
          1. <HADOOP_HOME>/bin/start-all.sh
      10. Check with jps; there should be these processes (a quick HDFS
      check follows this list):
          1. 10119 DataNode
          2. 10413 JobTracker
          3. 10338 SecondaryNameNode
          4. 10625 TaskTracker
          5. 9897 NameNode
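
As a quick sanity check (a sketch; the /test path is just an example), the
namenode can be exercised before moving on to HBase:

   <HADOOP_HOME>/bin/hadoop fs -mkdir /test
   <HADOOP_HOME>/bin/hadoop fs -ls /
   curl -I http://localhost:50070/    # namenode web UI, default port
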
   4. Installing and configuring HBase
      1. wget http://archive.cloudera.com/cdh/3/hbase-0.90.3-cdh3u1.tar.gz
      2. tar xzvf hbase-0.90.3-cdh3u1.tar.gz
      3. edit <HBASE_HOME>/conf/hbase-site.xml


<property>
  <name>hbase.master</name>
  <value>localhost:60000</value>
  <description>The host and port that the HBase master runs at. A value of
  'local' runs the master and a regionserver in a single
  process.</description>
</property>

<property>
  <name>hbase.rootdir</name>
  <value>hdfs://localhost:9000/hdfs</value>
  <description>The directory shared by region servers.</description>
</property>

<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
  <description>The mode the cluster will be in. Possible values are false:
  standalone and pseudo-distributed setups with managed ZooKeeper; true:
  fully-distributed with an unmanaged ZooKeeper quorum (see
  hbase-env.sh).</description>
</property>

<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
  <description>Property from ZooKeeper's config zoo.cfg. The port at which
  the clients will connect.</description>
</property>

<property>
  <name>hbase.zookeeper.quorum</name>
  <value>localhost</value>
  <description>Comma-separated list of servers in the ZooKeeper quorum. For
  example, "host1.mydomain.com,host2.mydomain.com". By default this is set
  to localhost for local and pseudo-distributed modes of operation. For a
  fully-distributed setup, this should be set to a full list of ZooKeeper
  quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh, this is the
  list of servers on which we will start/stop ZooKeeper.</description>
</property>
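
Since HBase is pointed at an externally managed quorum here, it may be worth
confirming that a ZooKeeper server really answers on the configured client
port. A sketch, using the zkCli.sh that ships with ZooKeeper:

   echo ruok | nc localhost 2181                        # should answer "imok"
   <ZOOKEEPER_HOME>/bin/zkCli.sh -server localhost:2181
   # at the zkCli prompt:  ls /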

      4. edit <HBASE_HOME>/conf/hbase-env.sh
          1. export HBASE_MANAGES_ZK=false
          2. export JAVA_HOME=/usr/lib/jvm/java-6-openjdk
      5. Start the HBase server
          1. <HBASE_HOME>/bin/start-hbase.sh
      6. Verify with the jps command; there should be (a log check sketch
      follows this list):
          1. HMaster
          2. HRegionServer
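
To see whether the master actually bound its info port, a sketch of what can
be checked (the exact log file name depends on the user and hostname):

   jps                                                # is HMaster still running?
   ls <HBASE_HOME>/logs/
   tail -100 <HBASE_HOME>/logs/hbase-*-master-*.log   # look for bind or ZooKeeper errors
   curl -I http://localhost:60010/                    # the master info server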

But when I open the HBase web console at http://localhost:60010 it does not
load. Please suggest why.


-- 
Thanks & Regards

Rahul Mehta
