hadoop-common-user mailing list archives

From "Bin YANG" <yangbinism...@gmail.com>
Subject Re: A basic question on HBase
Date Fri, 19 Oct 2007 06:33:45 GMT
Dear edward yoon & Michael Stack,

After using the hadoop branch-0.15, hbase runs correctly.

Thank you very much!

Best wishes,
Bin YANG

On 10/19/07, Bin YANG <yangbinisme82@gmail.com> wrote:
> Thank you! I can download it now!
>
> On 10/19/07, edward yoon <webmaster@udanax.org> wrote:
> >
> > Run the following on the command-line:
> >
> >       $ svn co http://svn.apache.org/repos/asf/lucene/hadoop/trunk hadoop
> >
> > See also the following for more information about Hbase and the Hbase Shell client program:
> >
> > - http://wiki.apache.org/lucene-hadoop/Hbase
> > - http://wiki.apache.org/lucene-hadoop/Hbase/HbaseShell
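
For completeness, a minimal sketch of going from that checkout to a build (hadoop of this era built with Ant; 'package' as the target that also builds the hbase contrib is an assumption here):

      $ svn co http://svn.apache.org/repos/asf/lucene/hadoop/trunk hadoop
      $ cd hadoop
      $ ant package      # assumed target; builds the distribution, including contrib/hbase
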
> >
> >
> > Edward.
> > ----
> > B. Regards,
> > Edward yoon (Assistant Manager/R&D Center/NHN, corp.)
> > +82-31-600-6183, +82-10-7149-7856
> >
> >
> > > Date: Fri, 19 Oct 2007 13:46:51 +0800
> > > From: yangbinisme82@gmail.com
> > > To: hadoop-user@lucene.apache.org
> > > Subject: Re: A basic question on HBase
> > >
> > > Dear Michael Stack:
> > >
> > > I am afraid that I cannot connect to the svn,
> > >
> > > Error: PROPFIND request failed on '/viewvc/lucene/hadoop/trunk'
> > > Error: PROPFIND of '/viewvc/lucene/hadoop/trunk': 302 Found
> > > (http://svn.apache.org)
> > >
> > > and
> > >
> > > Error: PROPFIND request failed on '/viewvc/lucene/hadoop/branches/branch-0.15'
> > > Error: PROPFIND of '/viewvc/lucene/hadoop/branches/branch-0.15': 302
> > > Found (http://svn.apache.org)
> > >
> > > Would you please send me a 0.15 version of hadoop, or give some
> > > information on how to connect to the svn successfully?
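
The PROPFIND errors above come from pointing svn at the '/viewvc/...' browse URLs; checkouts go through the '/repos/asf/...' path instead, as in the trunk command earlier in this thread. By analogy (the exact branch path is an assumption), the 0.15 branch checkout would be:

      $ svn co http://svn.apache.org/repos/asf/lucene/hadoop/branches/branch-0.15 hadoop-0.15
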
> > >
> > > Best wishes,
> > > Bin YANG
> > >
> > >
> > >
> > >
> > >
> > > On 10/19/07, Michael Stack  wrote:
> > >> (Ignore my last message. I had missed your back and forth with Edward).
> > >>
> > >> Regarding step 3 below: you are starting both the mapreduce and dfs
> > >> daemons. You only need the dfs daemons running for hbase, so you could
> > >> do ./bin/start-dfs.sh instead.
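
Put differently, a minimal start sequence for a local hbase setup (the hbase start script name is taken from step 5 of the quoted mail below and may differ between versions):

      $ bin/start-dfs.sh        # namenode + datanode only; no jobtracker/tasktrackers needed
      $ bin/hbase-start.sh      # then bring up the hbase master and region server
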
> > >>
> > >> Are you using hadoop 0.14.x? (It looks like it, going by the commands
> > >> and log excerpt below.) If so, please use TRUNK or the 0.15.0 candidate
> > >> (the branch is here:
> > >> http://svn.apache.org/viewvc/lucene/hadoop/branches/branch-0.15/).
> > >> There is a big difference between the hbase in hadoop 0.14.0 and in
> > >> 0.15.0 (the 0.15.0 candidate contains the first hbase release). For
> > >> example, leftover log files are properly split and distributed in later
> > >> hbases, where before they caused the "Can not start region server
> > >> because..." exception.
> > >>
> > >> As Edward points out, the master does not seem to be getting the region
> > >> server's 'report-for-duty' message (which doesn't jibe with the region
> > >> server log saying -ROOT- has been deployed, because it is the master
> > >> that assigns regions).
> > >>
> > >> Regarding your not being able to reformat -- presuming there is no
> > >> valuable data in your hdfs, that all is running on localhost, and that
> > >> you are moving from hadoop 0.14.0 to 0.15.0 -- just remove the
> > >> /tmp/hadoop-hadoop dir.
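
A sketch of that cleanup for a localhost-only setup (paths assume the default hadoop.tmp.dir of /tmp/hadoop-${user.name}; adjust if yours differs):

      $ bin/stop-all.sh                  # stop any running hadoop daemons first
      $ rm -rf /tmp/hadoop-hadoop        # discard the old 0.14 data, including the hbase dirs
      $ bin/hadoop namenode -format      # reformat hdfs under the 0.15 build
      $ bin/start-dfs.sh                 # hbase only needs the dfs daemons (see above)
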
> > >>
> > >> St.Ack
> > >>
> > >>
> > >>
> > >>
> > >> Bin YANG wrote:
> > >>> Dear edward,
> > >>>
> > >>> I will show you the steps that I have taken:
> > >>>
> > >>> 1. hadoop-site.xml
> > >>>
> > >>> <configuration>
> > >>>   <property>
> > >>>     <name>fs.default.name</name>
> > >>>     <value>localhost:9000</value>
> > >>>     <description>Namenode</description>
> > >>>   </property>
> > >>>   <property>
> > >>>     <name>mapred.job.tracker</name>
> > >>>     <value>localhost:9001</value>
> > >>>     <description>JobTracker</description>
> > >>>   </property>
> > >>>   <property>
> > >>>     <name>dfs.replication</name>
> > >>>     <value>1</value>
> > >>>   </property>
> > >>> </configuration>
> > >>>
> > >>> 2. /hadoop-0.14.2$ bin/hadoop namenode -format
> > >>> 3. bin/start-all.sh
> > >>> 4. hbase-site.xml
> > >>>
> > >>> <configuration>
> > >>>   <property>
> > >>>     <name>hbase.master</name>
> > >>>     <value>localhost:60000</value>
> > >>>     <description>The host and port that the HBase master runs at.
> > >>>     TODO: Support 'local' (All running in single context).</description>
> > >>>   </property>
> > >>>   <property>
> > >>>     <name>hbase.regionserver</name>
> > >>>     <value>localhost:60010</value>
> > >>>     <description>The host and port a HBase region server runs at.</description>
> > >>>   </property>
> > >>> </configuration>
> > >>>
> > >>> 5. bin/hbase-start.sh
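
Between steps 3 and 5 it can help to confirm that hdfs actually came up before starting hbase; a quick check (commands as in hadoop 0.14/0.15, exact output will vary):

      $ bin/hadoop dfsadmin -report      # should list one live datanode
      $ bin/hadoop dfs -ls /             # should succeed without connection retries
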
> > >>>
> > >>> The log:
> > >>> 1. hbase-hadoop-regionserver-yangbin.log
> > >>>
> > >>> 2007-10-18 15:40:58,588 INFO org.apache.hadoop.util.NativeCodeLoader:
> > >>> Loaded the native-hadoop library
> > >>> 2007-10-18 15:40:58,592 INFO
> > >>> org.apache.hadoop.io.compress.zlib.ZlibFactory: Successfully loaded &
> > >>> initialized native-zlib library
> > >>> 2007-10-18 15:40:58,690 INFO org.apache.hadoop.ipc.Server: IPC Server
> > >>> listener on 60010: starting
> > >>> 2007-10-18 15:40:58,692 INFO org.apache.hadoop.ipc.Server: IPC Server
> > >>> handler 3 on 60010: starting
> > >>> 2007-10-18 15:40:58,694 INFO org.apache.hadoop.ipc.Server: IPC Server
> > >>> handler 4 on 60010: starting
> > >>> 2007-10-18 15:40:58,692 INFO org.apache.hadoop.ipc.Server: IPC Server
> > >>> handler 2 on 60010: starting
> > >>> 2007-10-18 15:40:58,691 INFO org.apache.hadoop.ipc.Server: IPC Server
> > >>> handler 1 on 60010: starting
> > >>> 2007-10-18 15:40:58,696 INFO org.apache.hadoop.ipc.Server: IPC Server
> > >>> handler 5 on 60010: starting
> > >>> 2007-10-18 15:40:58,691 INFO org.apache.hadoop.ipc.Server: IPC Server
> > >>> handler 0 on 60010: starting
> > >>> 2007-10-18 15:40:58,696 INFO org.apache.hadoop.ipc.Server: IPC Server
> > >>> handler 6 on 60010: starting
> > >>> 2007-10-18 15:40:58,697 INFO org.apache.hadoop.ipc.Server: IPC Server
> > >>> handler 7 on 60010: starting
> > >>> 2007-10-18 15:40:58,698 INFO org.apache.hadoop.ipc.Server: IPC Server
> > >>> handler 8 on 60010: starting
> > >>> 2007-10-18 15:40:58,699 INFO org.apache.hadoop.hbase.HRegionServer:
> > >>> HRegionServer started at: 127.0.1.1:60010
> > >>> 2007-10-18 15:40:58,709 INFO org.apache.hadoop.ipc.Server: IPC Server
> > >>> handler 9 on 60010: starting
> > >>> 2007-10-18 15:40:58,867 INFO org.apache.hadoop.hbase.HStore: HStore
> > >>> online for --ROOT--,,0/info
> > >>> 2007-10-18 15:40:58,872 INFO org.apache.hadoop.hbase.HRegion: region
> > >>> --ROOT--,,0 available
> > >>> 2007-10-18 18:21:55,558 INFO org.apache.hadoop.ipc.Client: Retrying
> > >>> connect to server: localhost/127.0.0.1:60000. Already tried 1 time(s).
> > >>> 2007-10-18 18:21:56,577 INFO org.apache.hadoop.ipc.Client: Retrying
> > >>> connect to server: localhost/127.0.0.1:60000. Already tried 2 time(s).
> > >>> 2007-10-18 18:21:57,585 INFO org.apache.hadoop.ipc.Client: Retrying
> > >>> connect to server: localhost/127.0.0.1:60000. Already tried 3 time(s).
> > >>> 2007-10-18 18:21:58,593 INFO org.apache.hadoop.ipc.Client: Retrying
> > >>> connect to server: localhost/127.0.0.1:60000. Already tried 4 time(s).
> > >>> 2007-10-18 18:22:05,874 ERROR org.apache.hadoop.hbase.HRegionServer:
> > >>> Can not start region server because
> > >>> org.apache.hadoop.hbase.RegionServerRunningException: region server
> > >>> already running at 127.0.1.1:60010 because logdir
> > >>> /tmp/hadoop-hadoop/hbase/log_yangbin_60010 exists
> > >>> at org.apache.hadoop.hbase.HRegionServer.(HRegionServer.java:482)
> > >>> at org.apache.hadoop.hbase.HRegionServer.(HRegionServer.java:407)
> > >>> at org.apache.hadoop.hbase.HRegionServer.main(HRegionServer.java:1357)
> > >>>
> > >>> 2007-10-18 19:57:40,243 INFO org.apache.hadoop.util.NativeCodeLoader:
> > >>> Loaded the native-hadoop library
> > >>> 2007-10-18 19:57:40,274 INFO
> > >>> org.apache.hadoop.io.compress.zlib.ZlibFactory: Successfully loaded &
> > >>> initialized native-zlib library
> > >>> 2007-10-18 19:57:40,364 INFO org.apache.hadoop.ipc.Server: IPC Server
> > >>> listener on 60010: starting
> > >>> 2007-10-18 19:57:40,366 INFO org.apache.hadoop.ipc.Server: IPC Server
> > >>> handler 0 on 60010: starting
> > >>> 2007-10-18 19:57:40,367 INFO org.apache.hadoop.ipc.Server: IPC Server
> > >>> handler 1 on 60010: starting
> > >>> 2007-10-18 19:57:40,368 INFO org.apache.hadoop.ipc.Server: IPC Server
> > >>> handler 2 on 60010: starting
> > >>> 2007-10-18 19:57:40,368 INFO org.apache.hadoop.ipc.Server: IPC Server
> > >>> handler 3 on 60010: starting
> > >>> 2007-10-18 19:57:40,369 INFO org.apache.hadoop.ipc.Server: IPC Server
> > >>> handler 4 on 60010: starting
> > >>> 2007-10-18 19:57:40,370 INFO org.apache.hadoop.ipc.Server: IPC Server
> > >>> handler 5 on 60010: starting
> > >>> 2007-10-18 19:57:40,371 INFO org.apache.hadoop.ipc.Server: IPC Server
> > >>> handler 6 on 60010: starting
> > >>> 2007-10-18 19:57:40,371 INFO org.apache.hadoop.ipc.Server: IPC Server
> > >>> handler 7 on 60010: starting
> > >>> 2007-10-18 19:57:40,372 INFO org.apache.hadoop.ipc.Server: IPC Server
> > >>> handler 8 on 60010: starting
> > >>> 2007-10-18 19:57:40,373 INFO org.apache.hadoop.hbase.HRegionServer:
> > >>> HRegionServer started at: 127.0.1.1:60010
> > >>> 2007-10-18 19:57:40,384 INFO org.apache.hadoop.ipc.Server: IPC Server
> > >>> handler 9 on 60010: starting
> > >>> 2007-10-18 19:57:41,118 INFO org.apache.hadoop.hbase.HStore: HStore
> > >>> online for --ROOT--,,0/info
> > >>> 2007-10-18 19:57:41,125 INFO org.apache.hadoop.hbase.HRegion: region
> > >>> --ROOT--,,0 available
> > >>>
> > >>> 2. hbase-hadoop-master-yangbin.log
> > >>>
> > >>> There are many repetitions of statements like the below:
> > >>>
> > >>> 2007-10-18 15:52:52,885 INFO org.apache.hadoop.ipc.Client: Retrying
> > >>> connect to server: /127.0.1.1:60010. Already tried 1 time(s).
> > >>> 2007-10-18 15:52:53,892 INFO org.apache.hadoop.ipc.Client: Retrying
> > >>> connect to server: /127.0.1.1:60010. Already tried 2 time(s).
> > >>> 2007-10-18 15:52:54,900 INFO org.apache.hadoop.ipc.Client: Retrying
> > >>> connect to server: /127.0.1.1:60010. Already tried 3 time(s).
> > >>> 2007-10-18 15:52:55,904 INFO org.apache.hadoop.ipc.Client: Retrying
> > >>> connect to server: /127.0.1.1:60010. Already tried 4 time(s).
> > >>> 2007-10-18 15:52:56,912 INFO org.apache.hadoop.ipc.Client: Retrying
> > >>> connect to server: /127.0.1.1:60010. Already tried 5 time(s).
> > >>> 2007-10-18 15:52:57,924 INFO org.apache.hadoop.ipc.Client: Retrying
> > >>> connect to server: /127.0.1.1:60010. Already tried 6 time(s).
> > >>> 2007-10-18 15:52:58,928 INFO org.apache.hadoop.ipc.Client: Retrying
> > >>> connect to server: /127.0.1.1:60010. Already tried 7 time(s).
> > >>> 2007-10-18 15:52:59,932 INFO org.apache.hadoop.ipc.Client: Retrying
> > >>> connect to server: /127.0.1.1:60010. Already tried 8 time(s).
> > >>> 2007-10-18 15:53:00,936 INFO org.apache.hadoop.ipc.Client: Retrying
> > >>> connect to server: /127.0.1.1:60010. Already tried 9 time(s).
> > >>> 2007-10-18 15:53:01,939 INFO org.apache.hadoop.ipc.Client: Retrying
> > >>> connect to server: /127.0.1.1:60010. Already tried 10 time(s).
> > >>> 2007-10-18 15:53:02,943 INFO org.apache.hadoop.ipc.RPC: Server at
> > >>> /127.0.1.1:60010 not available yet, Zzzzz...
> > >>>
> > >>>
> > >>
> > >>
> > >
> > >
> > > --
> > > Bin YANG
> > > Department of Computer Science and Engineering
> > > Fudan University
> > > Shanghai, P. R. China
> > > EMail: yangbinisme82@gmail.com
> >
>
>
> --
> Bin YANG
> Department of Computer Science and Engineering
> Fudan University
> Shanghai, P. R. China
> EMail: yangbinisme82@gmail.com
>


-- 
Bin YANG
Department of Computer Science and Engineering
Fudan University
Shanghai, P. R. China
EMail: yangbinisme82@gmail.com
