hbase-user mailing list archives

From Vamshi Krishna <vamshi2...@gmail.com>
Subject Re: regions are not getting distributed
Date Mon, 26 Aug 2013 12:12:42 GMT
Hi all,
 The problem was solved by changing the value of the property below from a
local directory path to an hdfs:// path, and by making sure Hadoop is running
before starting HBase.

<property>
        <name>hbase.rootdir</name>
        <value>hdfs://vamshi:54310/home/biginfolabs/BILSftwrs/hbase-0.94.10/data/</value>
        <!-- was: <value>/home/biginfolabs/BILSftwrs/hbase-0.94.10/hbstmp/</value> -->
    </property>
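For completeness: the hdfs:// authority in hbase.rootdir has to match the
NameNode address configured in Hadoop, and HDFS must already be running when
HBase starts. A sketch of the matching Hadoop core-site.xml entry, assuming
Hadoop 1.x (contemporary with HBase 0.94; on Hadoop 2.x the key is
fs.defaultFS) and the same host/port as above:

```xml
<!-- core-site.xml (Hadoop): must agree with the authority in hbase.rootdir -->
<property>
        <name>fs.default.name</name>
        <value>hdfs://vamshi:54310</value>
</property>
```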

Now I see the data getting distributed across the servers as I wanted.
But is there any way to control which rows are stored on which region
servers? (Of course, I have started a separate thread for that question.)
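For reference, a common way to influence which region (and hence, roughly,
which region server) a row lands on is to design the row key with a stable
salt prefix and pre-split the table on those prefixes. The sketch below is a
generic illustration of the salting idea, not HBase API code; the bucket
count of 4 is an arbitrary assumption you would tune to your cluster:

```python
import hashlib

NUM_BUCKETS = 4  # assumed number of pre-split regions; pick to fit your cluster


def salted_key(row_key: str) -> str:
    """Prefix the key with a hash-derived bucket so rows spread across
    pre-split regions instead of piling onto one server."""
    bucket = int(hashlib.md5(row_key.encode()).hexdigest(), 16) % NUM_BUCKETS
    return f"{bucket:02d}-{row_key}"


# Split points you would pass when creating the table, e.g. in the HBase
# shell: create 't1', 'cf', SPLITS => ['01', '02', '03']
split_points = [f"{b:02d}" for b in range(1, NUM_BUCKETS)]
```

The trade-off is that simple range scans over the original key now have to
fan out across all buckets; you can also nudge placement manually with the
balancer or the `move` command in the HBase shell, but those moves are not
sticky.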



On Mon, Aug 26, 2013 at 3:01 PM, Vamshi Krishna <vamshi2105@gmail.com> wrote:

> Ted, I guessed the problem could be due to having only a single ZooKeeper
> server in hbase.zookeeper.quorum. So I added the region server machine as
> well, apart from the master. Now I don't see any FAIL cases like the one
> below (which was the case earlier):
>
> Handling transition=RS_ZK_REGION_FAILED_OPEN,
> server=vamshi_RS,60020,1377265388499,
> region=2dc3d895b55455dd06880fb0f15bb80d
>
> But I still don't see the regions moving to the other server, i.e. the
> region server machine.
> I have created 2 tables on the master machine and stored a few million rows
> in each. Through the HBase master web UI (http://<host>:60010), I see:
>
> Region Servers
>
> ServerName                                           Start time                    Load
> vamshi,60020,1377502384119 <http://vamshi:60030/>       Mon Aug 26 13:03:04 IST 2013  requestsPerSecond=0, numberOfOnlineRegions=4, usedHeapMB=170, maxHeapMB=991
> vamshi_RS,60020,1377502387395 <http://vamshi_RS:60030/> Mon Aug 26 13:03:07 IST 2013  requestsPerSecond=0, numberOfOnlineRegions=0, usedHeapMB=24, maxHeapMB=991
>
> Total: servers: 2, requestsPerSecond=0, numberOfOnlineRegions=4
>
>
> Why is that? I have set dfs.replication to 3.
> Is the statement below not true?
>
> Irrespective of the cluster size, the regions (data) should be spread
> uniformly among all the nodes in the cluster.
>
> Which implies that, in the case above,
> "4 regions on server 'vamshi' and zero regions on server 'vamshi_RS'"
> SHOULD CHANGE automatically to
> "two regions on server 'vamshi' and two regions on server 'vamshi_RS'".
> Is my understanding right here?
>
> hbase-site.xml:
>
> <property>
>         <name>hbase.rootdir</name>
>
> <!--value>hdfs://vamshi:54310/home/biginfolabs/BILSftwrs/hbase-0.94.10/data/</value-->
>     <value>/home/biginfolabs/BILSftwrs/hbase-0.94.10/hbstmp/</value>
>     </property>
>
>     <property>
>         <name>hbase.cluster.distributed</name>
>         <value>true</value>
>     </property>
>     <property>
>         <name>hbase.master</name>
>         <value>vamshi</value>
>     </property>
>  <property>
>         <name>hbase.zookeeper.quorum</name>
>         <value>vamshi,vamshi_RS</value>
>     </property>
>     <property>
>         <name>hbase.zookeeper.property.dataDir</name>
>         <value>/home/biginfolabs/BILSftwrs/hbase-0.94.10/zkptmp</value>
>     </property>
>
>
>
>
>
> On Mon, Aug 26, 2013 at 2:12 PM, Andrew Purtell <apurtell@apache.org> wrote:
>
>> Two nodes is insufficient. Default DFS replication is 3. That would be
>> the bare minimum just for kicking the tires IMO, but is still a degenerate
>> case. In my opinion, 5 is the lowest you should go. You shouldn't draw
>> conclusions from inadequate deploys.
>>
>> On Friday, August 23, 2013, Vamshi Krishna wrote:
>>
>> > Hello all,
>> >             I set up a 2-node HBase cluster and inserted a million rows
>> > into an HBase table. Machine 1 is the master and a region server;
>> > machine 2 is only a region server.
>> > After the insertion completed, I still see that the 4 regions formed for
>> > the inserted table reside on machine 1 only. I waited more than 5
>> > minutes, since the default balancer period is 5 minutes per the HBase
>> > documentation, but all 4 regions of the table are still on machine 1 and
>> > not distributed automatically.
>> > I even enabled the balancer from the shell using "balance_switch true",
>> > but no result even after 5 minutes.
>> >
>> > Could anyone tell me why this is happening?
>> >
>> >
>> > --
>> > Regards,
>> > Vamshi Krishna
>> >
>>
>>
>> --
>> Best regards,
>>
>>    - Andy
>>
>> Problems worthy of attack prove their worth by hitting back. - Piet Hein
>> (via Tom White)
>>
>
>
>
> --
> Regards,
> Vamshi Krishna
>



-- 
Regards,
Vamshi Krishna
