From: Ryan Rawson <ryanobjc@gmail.com>
To: hbase-user@hadoop.apache.org
Date: Sat, 21 Feb 2009 01:54:49 -0800
Subject: Re: Connection problem during data import into hbase

I run into that a lot - disabling a table doesn't seem to work all the
time. I think the ZK support in 0.20 will help fix many of these "can't
find regionserver" and other sync issues.

On Sat, Feb 21, 2009 at 1:44 AM, Amandeep Khurana wrote:

> Here's another thing that's happening. I was trying to truncate the table.
>
> hbase(main):001:0> truncate 'in_table'
> Truncating in_table; it may take a while
> Disabling table...
> NativeException: org.apache.hadoop.hbase.RegionException: Retries
> exhausted, it took too long to wait for the table in_table to be disabled.
>   from org/apache/hadoop/hbase/client/HBaseAdmin.java:387:in `disableTable'
>   from org/apache/hadoop/hbase/client/HBaseAdmin.java:348:in `disableTable'
>   from sun.reflect.NativeMethodAccessorImpl:-2:in `invoke0'
>   from sun.reflect.NativeMethodAccessorImpl:-1:in `invoke'
>   from sun.reflect.DelegatingMethodAccessorImpl:-1:in `invoke'
>   from java.lang.reflect.Method:-1:in `invoke'
>   from org/jruby/javasupport/JavaMethod.java:250:in `invokeWithExceptionHandling'
>   from org/jruby/javasupport/JavaMethod.java:219:in `invoke'
>   from org/jruby/javasupport/JavaClass.java:416:in `execute'
>   from org/jruby/internal/runtime/methods/SimpleCallbackMethod.java:67:in `call'
>   from org/jruby/internal/runtime/methods/DynamicMethod.java:78:in `call'
>   from org/jruby/runtime/CallSite.java:155:in `cacheAndCall'
>   from org/jruby/runtime/CallSite.java:332:in `call'
>   from org/jruby/evaluator/ASTInterpreter.java:649:in `callNode'
>   from org/jruby/evaluator/ASTInterpreter.java:324:in `evalInternal'
>
> I left it for a few minutes and tried again. It worked. There was no load
> on the cluster at all. I changed the config (both) and added the
> dfs.datanode.socket.write.timeout property with value 0. I also defined
> the property in the job config.
>
> Amandeep
>
> Amandeep Khurana
> Computer Science Graduate Student
> University of California, Santa Cruz
>
> On Sat, Feb 21, 2009 at 1:23 AM, Amandeep Khurana wrote:
>
>> I have 1 master + 2 slaves.
>> Am using 0.19.0 for both Hadoop and HBase.
>> I didn't change any config from the default except the hbase.rootdir
>> and the hbase.master.
>>
>> I have gone through the FAQs but couldn't find anything. What exactly
>> are you pointing to?
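[Editor's note: the workaround mentioned above - setting dfs.datanode.socket.write.timeout to 0, which disables the datanode's socket write timeout entirely - would go into the Hadoop site configuration on the cluster nodes. A sketch of the property block in the standard Hadoop 0.19 config format; whether 0 is the right value for a given cluster is a judgment call, but it was a commonly suggested workaround for write-timeout symptoms at the time:]

```xml
<!-- hadoop-site.xml (sketch): disable the datanode socket write
     timeout; a value of 0 means "never time out" -->
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <value>0</value>
</property>
```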
>>
>> On Sat, Feb 21, 2009 at 1:14 AM, stack wrote:
>>
>>> It looks like the regionserver hosting root crashed:
>>>
>>> org.apache.hadoop.hbase.client.NoServerForRegionException: Timed out
>>> trying to locate root region
>>>
>>> How many servers are you running?
>>>
>>> You made similar config. to that reported by Larry Compton in a mail
>>> from earlier today? (See FAQ and Troubleshooting page for more on his
>>> listed configs.)
>>>
>>> St.Ack
>>>
>>> On Sat, Feb 21, 2009 at 1:01 AM, Amandeep Khurana wrote:
>>>
>>>> Yes, the table exists before I start the job.
>>>>
>>>> I am not using TableOutputFormat. I picked up the sample code from
>>>> the docs and am using it.
>>>>
>>>> Here's the job conf:
>>>>
>>>> JobConf conf = new JobConf(getConf(), IN_TABLE_IMPORT.class);
>>>> FileInputFormat.setInputPaths(conf, new Path("import_data"));
>>>> conf.setMapperClass(MapClass.class);
>>>> conf.setNumReduceTasks(0);
>>>> conf.setOutputFormat(NullOutputFormat.class);
>>>> JobClient.runJob(conf);
>>>>
>>>> Interestingly, the hbase shell isn't working now either. It's giving
>>>> errors even when I give the command "list"...
>>>>
>>>> On Sat, Feb 21, 2009 at 12:10 AM, stack wrote:
>>>>
>>>>> The table exists before you start the MR job?
>>>>>
>>>>> When you say 'midway through the job', are you using
>>>>> TableOutputFormat to insert into your table?
>>>>>
>>>>> Which version of hbase?
>>>>>
>>>>> St.Ack
>>>>>
>>>>> On Fri, Feb 20, 2009 at 9:55 PM, Amandeep Khurana wrote:
>>>>>
>>>>>> I don't know if this is related or not, but it seems to be.
>>>>>> After this map reduce job, I tried to count the number of entries
>>>>>> in the table in hbase through the shell. It failed with the
>>>>>> following error:
>>>>>>
>>>>>> hbase(main):002:0> count 'in_table'
>>>>>> NativeException: java.lang.NullPointerException: null
>>>>>>   from java.lang.String:-1:in `'
>>>>>>   from org/apache/hadoop/hbase/util/Bytes.java:92:in `toString'
>>>>>>   from org/apache/hadoop/hbase/client/RetriesExhaustedException.java:50:in `getMessage'
>>>>>>   from org/apache/hadoop/hbase/client/RetriesExhaustedException.java:40:in `'
>>>>>>   from org/apache/hadoop/hbase/client/HConnectionManager.java:841:in `getRegionServerWithRetries'
>>>>>>   from org/apache/hadoop/hbase/client/MetaScanner.java:56:in `metaScan'
>>>>>>   from org/apache/hadoop/hbase/client/MetaScanner.java:30:in `metaScan'
>>>>>>   from org/apache/hadoop/hbase/client/HConnectionManager.java:411:in `getHTableDescriptor'
>>>>>>   from org/apache/hadoop/hbase/client/HTable.java:219:in `getTableDescriptor'
>>>>>>   from sun.reflect.NativeMethodAccessorImpl:-2:in `invoke0'
>>>>>>   from sun.reflect.NativeMethodAccessorImpl:-1:in `invoke'
>>>>>>   from sun.reflect.DelegatingMethodAccessorImpl:-1:in `invoke'
>>>>>>   from java.lang.reflect.Method:-1:in `invoke'
>>>>>>   from org/jruby/javasupport/JavaMethod.java:250:in `invokeWithExceptionHandling'
>>>>>>   from org/jruby/javasupport/JavaMethod.java:219:in `invoke'
>>>>>>   from org/jruby/javasupport/JavaClass.java:416:in `execute'
>>>>>>   ... 145 levels...
>>>>>>   from org/jruby/internal/runtime/methods/DynamicMethod.java:74:in `call'
>>>>>>   from org/jruby/internal/runtime/methods/CompiledMethod.java:48:in `call'
>>>>>>   from org/jruby/runtime/CallSite.java:123:in `cacheAndCall'
>>>>>>   from org/jruby/runtime/CallSite.java:298:in `call'
>>>>>>   from ruby/hadoop/install/hbase_minus_0_dot_19_dot_0/bin//hadoop/install/hbase/bin/../bin/hirb.rb:429:in `__file__'
>>>>>>   from ruby/hadoop/install/hbase_minus_0_dot_19_dot_0/bin//hadoop/install/hbase/bin/../bin/hirb.rb:-1:in `__file__'
>>>>>>   from ruby/hadoop/install/hbase_minus_0_dot_19_dot_0/bin//hadoop/install/hbase/bin/../bin/hirb.rb:-1:in `load'
>>>>>>   from org/jruby/Ruby.java:512:in `runScript'
>>>>>>   from org/jruby/Ruby.java:432:in `runNormally'
>>>>>>   from org/jruby/Ruby.java:312:in `runFromMain'
>>>>>>   from org/jruby/Main.java:144:in `run'
>>>>>>   from org/jruby/Main.java:89:in `run'
>>>>>>   from org/jruby/Main.java:80:in `main'
>>>>>>   from /hadoop/install/hbase/bin/../bin/HBase.rb:444:in `count'
>>>>>>   from /hadoop/install/hbase/bin/../bin/hirb.rb:348:in `count'
>>>>>>   from (hbase):3:in `binding'
>>>>>>
>>>>>> On Fri, Feb 20, 2009 at 9:46 PM, Amandeep Khurana <amansk@gmail.com> wrote:
>>>>>>
>>>>>>> Here's what it throws on the console:
>>>>>>>
>>>>>>> 09/02/20 21:45:29 INFO mapred.JobClient: Task Id :
>>>>>>> attempt_200902201300_0019_m_000006_0, Status : FAILED
>>>>>>> java.io.IOException: table is null
>>>>>>>   at IN_TABLE_IMPORT$MapClass.map(IN_TABLE_IMPORT.java:33)
>>>>>>>   at IN_TABLE_IMPORT$MapClass.map(IN_TABLE_IMPORT.java:1)
>>>>>>>   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
>>>>>>>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
>>>>>>>   at org.apache.hadoop.mapred.Child.main(Child.java:155)
>>>>>>>
>>>>>>> attempt_200902201300_0019_m_000006_0:
>>>>>>> org.apache.hadoop.hbase.client.NoServerForRegionException: Timed out
>>>>>>> trying to locate root region
>>>>>>>   at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRootRegion(HConnectionManager.java:768)
>>>>>>>   at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:448)
>>>>>>>   at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.relocateRegion(HConnectionManager.java:430)
>>>>>>>   at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:557)
>>>>>>>   at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:457)
>>>>>>>   at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.relocateRegion(HConnectionManager.java:430)
>>>>>>>   at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:557)
>>>>>>>   at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:461)
>>>>>>>   at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:423)
>>>>>>>   at org.apache.hadoop.hbase.client.HTable.(HTable.java:114)
>>>>>>>   at org.apache.hadoop.hbase.client.HTable.(HTable.java:97)
>>>>>>>   at IN_TABLE_IMPORT$MapClass.configure(IN_TABLE_IMPORT.java:120)
>>>>>>>   at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:58)
>>>>>>>   at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:83)
>>>>>>>   at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:34)
>>>>>>>   at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:58)
>>>>>>>   at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:83)
>>>>>>>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:328)
>>>>>>>   at org.apache.hadoop.mapred.Child.main(Child.java:155)
>>>>>>>
>>>>>>> On Fri, Feb 20, 2009 at 9:43 PM, Amandeep Khurana <amansk@gmail.com> wrote:
>>>>>>>
>>>>>>>> I am trying to import data from a flat file into HBase using a Map
>>>>>>>> Reduce job. There are close to 2 million rows. Midway into the job,
>>>>>>>> it starts giving me connection problems and eventually kills the
>>>>>>>> job. When the error comes, the hbase shell also stops working.
>>>>>>>>
>>>>>>>> This is what I get:
>>>>>>>>
>>>>>>>> 2009-02-20 21:37:14,407 INFO org.apache.hadoop.ipc.HBaseClass:
>>>>>>>> Retrying connect to server: /171.69.102.52:60020. Already tried 0
>>>>>>>> time(s).
>>>>>>>>
>>>>>>>> What could be going wrong?
>>>>>>>>
>>>>>>>> Amandeep
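[Editor's note: the "java.io.IOException: table is null" failure in the thread fits a common pattern: the map task builds its HTable once in configure(), the constructor fails (here with NoServerForRegionException because root could not be located), the exception is swallowed, and every later map() call trips over the null field. A minimal pure-Java sketch of that pattern - the class, field, and method names are hypothetical stand-ins, not the actual IN_TABLE_IMPORT code:]

```java
import java.io.IOException;

// Hypothetical stand-in for the connection failure seen in the thread.
class ConnectionException extends IOException {
    ConnectionException(String msg) { super(msg); }
}

class TableImportMapper {
    private Object table; // would be an HTable in the real job

    // configure() runs once per task. If table creation fails and the
    // exception is swallowed, 'table' stays null, and every map() call
    // afterwards fails with "table is null" -- the symptom in the thread.
    void configure(boolean regionServerReachable) {
        try {
            if (!regionServerReachable) {
                throw new ConnectionException("Timed out trying to locate root region");
            }
            table = new Object(); // new HTable(conf, "in_table") in the real code
        } catch (IOException e) {
            // Swallowing the error here is the bug: better to rethrow
            // (fail fast) so the task log shows the real cause.
        }
    }

    void map(String row) throws IOException {
        if (table == null) {
            throw new IOException("table is null"); // matches the task log above
        }
        // ... write the row to the table ...
    }
}
```

[A fail-fast configure() that rethrows would have surfaced the root-region timeout directly instead of the misleading "table is null".]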
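[Editor's note: the "Retrying connect to server ... Already tried 0 time(s)" log line and the RetriesExhaustedException earlier in the thread are two sides of the same client pattern: a bounded retry loop that logs each failed attempt and gives up after a fixed number of tries. A rough pure-Java sketch of the idea - the names and retry count are illustrative, not HBase's actual implementation:]

```java
import java.io.IOException;
import java.util.concurrent.Callable;

// Illustrative analogue of HBase's RetriesExhaustedException.
class RetriesExhausted extends IOException {
    RetriesExhausted(String msg) { super(msg); }
}

class RetryingCaller {
    // Retry a flaky call up to maxAttempts times, logging each failure
    // like the client in the thread, then give up with RetriesExhausted.
    static <T> T callWithRetries(Callable<T> call, int maxAttempts) throws IOException {
        for (int tried = 0; tried < maxAttempts; tried++) {
            try {
                return call.call();
            } catch (Exception e) {
                System.out.println("Retrying connect to server. Already tried "
                        + tried + " time(s).");
                // a real client would sleep with backoff between attempts
            }
        }
        throw new RetriesExhausted("Retries exhausted after " + maxAttempts + " attempts");
    }
}
```

[This is why the shell kept failing for several minutes and then worked: once the region came back, an attempt inside the retry window succeeded.]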