hbase-user mailing list archives

From Ryan Rawson <ryano...@gmail.com>
Subject Re: Connection problem during data import into hbase
Date Sat, 21 Feb 2009 09:54:49 GMT
I run into that a lot - disabling a table doesn't seem to work all the time.

I think the ZK support in 0.20 will help fix many of these 'can't find
regionserver' and other sync issues.

On Sat, Feb 21, 2009 at 1:44 AM, Amandeep Khurana <amansk@gmail.com> wrote:

> Here's another thing that's happening. I was trying to truncate the table.
>
> hbase(main):001:0> truncate 'in_table'
> Truncating in_table; it may take a while
> Disabling table...
> NativeException: org.apache.hadoop.hbase.RegionException: Retries exhausted,
> it took too long to wait for the table in_table to be disabled.
>    from org/apache/hadoop/hbase/client/HBaseAdmin.java:387:in `disableTable'
>    from org/apache/hadoop/hbase/client/HBaseAdmin.java:348:in `disableTable'
>    from sun.reflect.NativeMethodAccessorImpl:-2:in `invoke0'
>    from sun.reflect.NativeMethodAccessorImpl:-1:in `invoke'
>    from sun.reflect.DelegatingMethodAccessorImpl:-1:in `invoke'
>    from java.lang.reflect.Method:-1:in `invoke'
>    from org/jruby/javasupport/JavaMethod.java:250:in `invokeWithExceptionHandling'
>    from org/jruby/javasupport/JavaMethod.java:219:in `invoke'
>    from org/jruby/javasupport/JavaClass.java:416:in `execute'
>    from org/jruby/internal/runtime/methods/SimpleCallbackMethod.java:67:in `call'
>    from org/jruby/internal/runtime/methods/DynamicMethod.java:78:in `call'
>    from org/jruby/runtime/CallSite.java:155:in `cacheAndCall'
>    from org/jruby/runtime/CallSite.java:332:in `call'
>    from org/jruby/evaluator/ASTInterpreter.java:649:in `callNode'
>    from org/jruby/evaluator/ASTInterpreter.java:324:in `evalInternal'
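For anyone hitting the same "Retries exhausted" error on disableTable, a
minimal sketch of retrying the disable from a small client program against the
0.19-era HBaseAdmin API is below; the class name, retry count, and sleep
interval are illustrative, not something from this thread.

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.RegionException;
    import org.apache.hadoop.hbase.client.HBaseAdmin;

    public class DisableWithRetry {
      // Retry disableTable a few times instead of giving up on the first
      // "Retries exhausted" RegionException, mirroring the manual
      // wait-and-retry described in this thread.
      public static void disable(String tableName) throws IOException {
        HBaseAdmin admin = new HBaseAdmin(new HBaseConfiguration());
        for (int attempt = 0; attempt < 5; attempt++) {
          try {
            admin.disableTable(tableName);
            return;                       // disabled successfully
          } catch (RegionException e) {
            // table not yet disabled; pause and try again
            try {
              Thread.sleep(30 * 1000);
            } catch (InterruptedException ie) {
              // ignore and retry
            }
          }
        }
        throw new IOException("Could not disable " + tableName);
      }
    }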
>
> I left it for a few minutes and tried again. It worked. There was no load on
> the cluster at all. I changed the config (both) and added the
> dfs.datanode.socket.write.timeout property with value 0. I also defined the
> property in the job config.
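A minimal sketch of how that property might be set on the job configuration,
assuming the standard Hadoop JobConf API; the helper class below is
illustrative, and the same property would also go into the site configuration
files on each node.

    import org.apache.hadoop.mapred.JobConf;

    public class TimeoutConfigExample {
      public static JobConf buildConf() {
        JobConf conf = new JobConf(TimeoutConfigExample.class);
        // 0 disables the DataNode socket write timeout, per the workaround
        // described in the message above.
        conf.set("dfs.datanode.socket.write.timeout", "0");
        return conf;
      }
    }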
>
> Amandeep
>
>
> Amandeep Khurana
> Computer Science Graduate Student
> University of California, Santa Cruz
>
>
> On Sat, Feb 21, 2009 at 1:23 AM, Amandeep Khurana <amansk@gmail.com>
> wrote:
>
> > I have 1 master + 2 slaves.
> > I am using 0.19.0 for both Hadoop and HBase.
> > I didn't change any config from the default except the hbase.rootdir and
> > hbase.master.
> >
> > I have gone through the FAQs but couldn't find anything. What exactly are
> > you pointing to?
> >
> >
> > Amandeep Khurana
> > Computer Science Graduate Student
> > University of California, Santa Cruz
> >
> >
> > On Sat, Feb 21, 2009 at 1:14 AM, stack <stack@duboce.net> wrote:
> >
> >> It looks like the regionserver hosting root crashed:
> >>
> >> org.apache.hadoop.hbase.client.NoServerForRegionException: Timed out trying
> >> to locate root region
> >>
> >> How many servers are you running?
> >>
> >> Did you make a similar config. to the one reported by Larry Compton in a
> >> mail from earlier today?  (See the FAQ and Troubleshooting page for more on
> >> his listed configs.)
> >>
> >> St.Ack
> >>
> >>
> >> On Sat, Feb 21, 2009 at 1:01 AM, Amandeep Khurana <amansk@gmail.com>
> >> wrote:
> >>
> >> > Yes, the table exists before I start the job.
> >> >
> >> > I am not using TableOutputFormat. I picked up the sample code from the
> >> > docs and am using it.
> >> >
> >> > Here's the job conf:
> >> >
> >> > JobConf conf = new JobConf(getConf(), IN_TABLE_IMPORT.class);
> >> > FileInputFormat.setInputPaths(conf, new Path("import_data"));
> >> > conf.setMapperClass(MapClass.class);
> >> > conf.setNumReduceTasks(0);
> >> > conf.setOutputFormat(NullOutputFormat.class);
> >> > JobClient.runJob(conf);
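For context, a rough sketch of what such a MapClass might look like against
the HBase 0.19 client API; the input layout (tab-separated rows) and the
column name "data:value" are assumptions, not taken from this thread. The
table is opened in configure(), which is where the "table is null" failures
reported elsewhere in this thread would originate if the client cannot locate
the cluster.

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.io.BatchUpdate;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    public class MapClass extends MapReduceBase
        implements Mapper<LongWritable, Text, NullWritable, NullWritable> {

      private HTable table;

      public void configure(JobConf job) {
        try {
          // Open the table once per task. If the cluster (or the root region)
          // cannot be located, the connection errors surface here and the
          // table field stays null.
          table = new HTable(new HBaseConfiguration(job), "in_table");
        } catch (IOException e) {
          // Swallowing this exception instead would later lead to the
          // "java.io.IOException: table is null" seen in map().
          throw new RuntimeException("Could not open table", e);
        }
      }

      public void map(LongWritable key, Text value,
          OutputCollector<NullWritable, NullWritable> output, Reporter reporter)
          throws IOException {
        if (table == null) {
          throw new IOException("table is null");
        }
        // Hypothetical record layout: tab-separated "rowkey<TAB>value".
        String[] fields = value.toString().split("\t", 2);
        BatchUpdate update = new BatchUpdate(fields[0]);
        update.put("data:value", Bytes.toBytes(fields[1]));
        table.commit(update);
      }
    }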
> >> >
> >> > Interestingly, the hbase shell isn't working now either. It's giving
> >> > errors even when I give the command "list"...
> >> >
> >> >
> >> >
> >> > Amandeep Khurana
> >> > Computer Science Graduate Student
> >> > University of California, Santa Cruz
> >> >
> >> >
> >> > On Sat, Feb 21, 2009 at 12:10 AM, stack <stack@duboce.net> wrote:
> >> >
> >> > > The table exists before you start the MR job?
> >> > >
> >> > > When you say 'midway through the job', are you using TableOutputFormat
> >> > > to insert into your table?
> >> > >
> >> > > Which version of hbase?
> >> > >
> >> > > St.Ack
> >> > >
> >> > > On Fri, Feb 20, 2009 at 9:55 PM, Amandeep Khurana <amansk@gmail.com>
> >> > > wrote:
> >> > >
> >> > > > I don't know if this is related or not, but it seems to be. After this
> >> > > > map reduce job, I tried to count the number of entries in the table in
> >> > > > hbase through the shell. It failed with the following error:
> >> > > >
> >> > > > hbase(main):002:0> count 'in_table'
> >> > > > NativeException: java.lang.NullPointerException: null
> >> > > >    from java.lang.String:-1:in `<init>'
> >> > > >    from org/apache/hadoop/hbase/util/Bytes.java:92:in `toString'
> >> > > >    from org/apache/hadoop/hbase/client/RetriesExhaustedException.java:50:in `getMessage'
> >> > > >    from org/apache/hadoop/hbase/client/RetriesExhaustedException.java:40:in `<init>'
> >> > > >    from org/apache/hadoop/hbase/client/HConnectionManager.java:841:in `getRegionServerWithRetries'
> >> > > >    from org/apache/hadoop/hbase/client/MetaScanner.java:56:in `metaScan'
> >> > > >    from org/apache/hadoop/hbase/client/MetaScanner.java:30:in `metaScan'
> >> > > >    from org/apache/hadoop/hbase/client/HConnectionManager.java:411:in `getHTableDescriptor'
> >> > > >    from org/apache/hadoop/hbase/client/HTable.java:219:in `getTableDescriptor'
> >> > > >    from sun.reflect.NativeMethodAccessorImpl:-2:in `invoke0'
> >> > > >    from sun.reflect.NativeMethodAccessorImpl:-1:in `invoke'
> >> > > >    from sun.reflect.DelegatingMethodAccessorImpl:-1:in `invoke'
> >> > > >    from java.lang.reflect.Method:-1:in `invoke'
> >> > > >    from org/jruby/javasupport/JavaMethod.java:250:in
> >> > > > `invokeWithExceptionHandling'
> >> > > >    from org/jruby/javasupport/JavaMethod.java:219:in `invoke'
> >> > > >    from org/jruby/javasupport/JavaClass.java:416:in `execute'
> >> > > > ... 145 levels...
> >> > > >    from org/jruby/internal/runtime/methods/DynamicMethod.java:74:in `call'
> >> > > >    from org/jruby/internal/runtime/methods/CompiledMethod.java:48:in `call'
> >> > > >    from org/jruby/runtime/CallSite.java:123:in `cacheAndCall'
> >> > > >    from org/jruby/runtime/CallSite.java:298:in `call'
> >> > > >    from ruby/hadoop/install/hbase_minus_0_dot_19_dot_0/bin//hadoop/install/hbase/bin/../bin/hirb.rb:429:in `__file__'
> >> > > >    from ruby/hadoop/install/hbase_minus_0_dot_19_dot_0/bin//hadoop/install/hbase/bin/../bin/hirb.rb:-1:in `__file__'
> >> > > >    from ruby/hadoop/install/hbase_minus_0_dot_19_dot_0/bin//hadoop/install/hbase/bin/../bin/hirb.rb:-1:in `load'
> >> > > >    from org/jruby/Ruby.java:512:in `runScript'
> >> > > >    from org/jruby/Ruby.java:432:in `runNormally'
> >> > > >    from org/jruby/Ruby.java:312:in `runFromMain'
> >> > > >    from org/jruby/Main.java:144:in `run'
> >> > > >    from org/jruby/Main.java:89:in `run'
> >> > > >    from org/jruby/Main.java:80:in `main'
> >> > > >    from /hadoop/install/hbase/bin/../bin/HBase.rb:444:in `count'
> >> > > >    from /hadoop/install/hbase/bin/../bin/hirb.rb:348:in `count'
> >> > > >    from (hbase):3:in `binding'
> >> > > >
> >> > > >
> >> > > > Amandeep Khurana
> >> > > > Computer Science Graduate Student
> >> > > > University of California, Santa Cruz
> >> > > >
> >> > > >
> >> > > > On Fri, Feb 20, 2009 at 9:46 PM, Amandeep Khurana <amansk@gmail.com>
> >> > > > wrote:
> >> > > >
> >> > > > > Here's what it throws on the console:
> >> > > > >
> >> > > > > 09/02/20 21:45:29 INFO mapred.JobClient: Task Id : attempt_200902201300_0019_m_000006_0, Status : FAILED
> >> > > > > java.io.IOException: table is null
> >> > > > >         at IN_TABLE_IMPORT$MapClass.map(IN_TABLE_IMPORT.java:33)
> >> > > > >         at IN_TABLE_IMPORT$MapClass.map(IN_TABLE_IMPORT.java:1)
> >> > > > >         at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
> >> > > > >         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
> >> > > > >         at org.apache.hadoop.mapred.Child.main(Child.java:155)
> >> > > > >
> >> > > > > attempt_200902201300_0019_m_000006_0: org.apache.hadoop.hbase.client.NoServerForRegionException: Timed out trying to locate root region
> >> > > > > attempt_200902201300_0019_m_000006_0:   at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRootRegion(HConnectionManager.java:768)
> >> > > > > attempt_200902201300_0019_m_000006_0:   at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:448)
> >> > > > > attempt_200902201300_0019_m_000006_0:   at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.relocateRegion(HConnectionManager.java:430)
> >> > > > > attempt_200902201300_0019_m_000006_0:   at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:557)
> >> > > > > attempt_200902201300_0019_m_000006_0:   at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:457)
> >> > > > > attempt_200902201300_0019_m_000006_0:   at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.relocateRegion(HConnectionManager.java:430)
> >> > > > > attempt_200902201300_0019_m_000006_0:   at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:557)
> >> > > > > attempt_200902201300_0019_m_000006_0:   at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:461)
> >> > > > > attempt_200902201300_0019_m_000006_0:   at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:423)
> >> > > > > attempt_200902201300_0019_m_000006_0:   at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:114)
> >> > > > > attempt_200902201300_0019_m_000006_0:   at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:97)
> >> > > > > attempt_200902201300_0019_m_000006_0:   at IN_TABLE_IMPORT$MapClass.configure(IN_TABLE_IMPORT.java:120)
> >> > > > > attempt_200902201300_0019_m_000006_0:   at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:58)
> >> > > > > attempt_200902201300_0019_m_000006_0:   at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:83)
> >> > > > > attempt_200902201300_0019_m_000006_0:   at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:34)
> >> > > > > attempt_200902201300_0019_m_000006_0:   at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:58)
> >> > > > > attempt_200902201300_0019_m_000006_0:   at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:83)
> >> > > > > attempt_200902201300_0019_m_000006_0:   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:328)
> >> > > > > attempt_200902201300_0019_m_000006_0:   at org.apache.hadoop.mapred.Child.main(Child.java:155)
> >> > > > >
> >> > > > >
> >> > > > >
> >> > > > >
> >> > > > >
> >> > > > > Amandeep Khurana
> >> > > > > Computer Science Graduate Student
> >> > > > > University of California, Santa Cruz
> >> > > > >
> >> > > > >
> >> > > > > On Fri, Feb 20, 2009 at 9:43 PM, Amandeep Khurana <amansk@gmail.com>
> >> > > > > wrote:
> >> > > > >
> >> > > > >> I am trying to import data from a flat file into HBase using a
> >> > > > >> MapReduce job. There are close to 2 million rows. Midway into the
> >> > > > >> job, it starts giving me connection problems and eventually kills
> >> > > > >> the job. When the error comes, the hbase shell also stops working.
> >> > > > >>
> >> > > > >> This is what I get:
> >> > > > >>
> >> > > > >> 2009-02-20 21:37:14,407 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
> >> > > > >> connect to server: /171.69.102.52:60020. Already tried 0 time(s).
> >> > > > >>
> >> > > > >> What could be going wrong?
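One knob worth checking in this situation is the client-side retry settings.
A hedged sketch of raising them when opening the table, assuming the standard
HBase client properties hbase.client.retries.number and hbase.client.pause;
the values and class name below are illustrative only.

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;

    public class ClientRetryExample {
      public static HTable openWithMoreRetries() throws IOException {
        HBaseConfiguration conf = new HBaseConfiguration();
        // More retries with a longer pause may ride over a briefly
        // unreachable regionserver instead of failing the task outright.
        conf.set("hbase.client.retries.number", "20");
        conf.set("hbase.client.pause", "5000");
        return new HTable(conf, "in_table");
      }
    }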
> >> > > > >>
> >> > > > >> Amandeep
> >> > > > >>
> >> > > > >>
> >> > > > >> Amandeep Khurana
> >> > > > >> Computer Science Graduate Student
> >> > > > >> University of California, Santa Cruz
> >> > > > >>
> >> > > > >
> >> > > > >
> >> > > >
> >> > >
> >> >
> >>
> >
> >
>
