hbase-user mailing list archives

From Arno Strittmatter <...@venevita.net>
Subject Re: hbase shell >list results in NativeException: java.lang.NullPointerException: null
Date Tue, 23 Dec 2008 20:46:32 GMT
The hadoop namenode and the hadoop jobtracker are on the same machine
as the hbase server.
4 additional machines are configured as datanodes, each with a slightly
different disk layout. All machines are on CentOS 5.2 and have two
network interfaces. (I cleaned the hosts file according to the HBase
FAQ on each one of them.)
Java:
java version "1.6.0_10"
Java(TM) SE Runtime Environment (build 1.6.0_10-b33)
Java HotSpot(TM) 64-Bit Server VM (build 11.0-b15, mixed mode)

I put hadoop into debug mode, but cannot find anything that looks
like an error either.

On the port issue: I am confused - I did not change the port
configuration and assumed 8020 was the default port for dfs.
What else could change the port? My /etc/services file contains
"intu-ec-svcdisc 8020/tcp".
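As a side note on where 8020 comes from: Hadoop does not consult /etc/services at all. When the hdfs:// URI in fs.default.name carries no explicit port (as in the config below), clients fall back to the NameNode's built-in default RPC port, 8020. A small shell sketch of that fallback (the host name is the one from this thread; the helper function is hypothetical, for illustration only):

```shell
# Hypothetical helper mimicking Hadoop's behaviour: when the hdfs:// URI in
# fs.default.name names no explicit port, the client falls back to the
# NameNode's built-in default RPC port, 8020.
namenode_port() {
  authority="${1#hdfs://}"          # drop the scheme
  case "$authority" in
    *:*) echo "${authority##*:}" ;; # explicit port after the last ':'
    *)   echo 8020 ;;               # Hadoop's default NameNode RPC port
  esac
}

namenode_port "hdfs://yowb0i0.mydomain.com"       # no port in the URI -> 8020
namenode_port "hdfs://yowb0i0.mydomain.com:9000"  # explicit port -> 9000
```

So a port-less fs.default.name and an hbase.rootdir ending in :8020 actually agree with each other.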

Attached is my hadoop configuration file.

- Arno

[root@yowb0 conf]# cat hadoop-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
   <name>fs.default.name</name>
   <value>hdfs://yowb0i0.mydomain.com</value>
   <description>The name of the default file system.  A URI whose
   scheme and authority determine the FileSystem implementation.  The
   uri's scheme determines the config property (fs.SCHEME.impl) naming
   the FileSystem implementation class.  The uri's authority is used to
   determine the host, port, etc. for a filesystem.</description>
</property>

<property>
   <name>mapred.job.tracker</name>
   <value>yowb0i0.mydomain.com:7070</value>
   <description>The host and port that the MapReduce job tracker runs
   at.  If "local", then jobs are run in-process as a single map
   and reduce task.
   </description>
</property>

<property>
   <name>dfs.name.dir</name>
   <value>/storage_sdb/dfs/name</value>
   <description>Determines where on the local filesystem the DFS name node
   should store the name table (fsimage).  If this is a comma-delimited list
   of directories then the name table is replicated in all of the
   directories, for redundancy.</description>
</property>

<property>
   <name>dfs.data.dir</name>
   <value>/storage_sdb/dfs/data</value>
   <description>Determines where on the local filesystem a DFS data node
   should store its blocks.  If this is a comma-delimited
   list of directories, then data will be stored in all named
   directories, typically on different devices.
   Directories that do not exist are ignored.
   </description>
</property>

<property>
   <name>mapred.system.dir</name>
   <value>/storage_sdb/mapred/system</value>
   <description>The shared directory where MapReduce stores control files.
   </description>
</property>

<property>
   <name>mapred.local.dir</name>
   <value>/storage_sdb/mapred/local</value>
   <description>The local directory where MapReduce stores intermediate
   data files.  May be a comma-separated list of
   directories on different devices in order to spread disk i/o.
   Directories that do not exist are ignored.
   </description>
</property>



</configuration>
[root@yowb0 conf]#
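For completeness, an hbase-site.xml sketch matching the hadoop-site.xml above (host and paths are the ones from this thread; treat it as an illustration, not the poster's actual config): since fs.default.name above names no port, hbase.rootdir should either omit the port as well or spell out 8020 explicitly.

```xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <!-- Must match the host:port the NameNode actually serves; with no
         port in fs.default.name, that is Hadoop's default, 8020. -->
    <value>hdfs://yowb0i0.mydomain.com:8020/hbase4test</value>
    <description>The directory shared by region servers.</description>
  </property>
</configuration>
```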




On Dec 22, 2008, at 2:08 PM, Jean-Daniel Cryans wrote:

> This is weird because I don't see anywhere in your config where you change
> your port to 8020 for the Namenode...
>
> How did you set up Hadoop? Are the datanodes on the same server as your
> region servers? What does your machine setup look like?
>
> J-D
>
> On Mon, Dec 22, 2008 at 2:58 PM, Arno Strittmatter <stm@venevita.net> wrote:
>
>> I changed the port to 9000. As a result the hadoop server could not be
>> reached any more, according to the log.
>> Furthermore, using my original port hbase created a directory structure
>> under the dir I specify (attached).
>> On the hadoop namenode nothing is listening on 9000 (netstat -an | grep
>> 9000 comes back empty).
>> Seems like the problem is somewhere else.
>>
>> Thx - Arno
>>
>> [root@yowb0 hbase-0.18.1]# /usr/local/hadoop/hadoop-0.18.2/bin/hadoop fs -lsr hdfs://yowb0i0:8020/hbase4test/
>> drwxr-xr-x   - root supergroup          0 2008-12-22 01:16 /hbase4test/-ROOT-
>> drwxr-xr-x   - root supergroup          0 2008-12-22 01:16 /hbase4test/-ROOT-/70236052
>> drwxr-xr-x   - root supergroup          0 2008-12-22 01:16 /hbase4test/-ROOT-/70236052/info
>> drwxr-xr-x   - root supergroup          0 2008-12-22 01:16 /hbase4test/-ROOT-/70236052/info/info
>> drwxr-xr-x   - root supergroup          0 2008-12-22 01:16 /hbase4test/-ROOT-/70236052/info/mapfiles
>> drwxr-xr-x   - root supergroup          0 2008-12-22 01:16 /hbase4test/.META.
>> drwxr-xr-x   - root supergroup          0 2008-12-22 01:16 /hbase4test/.META./1028785192
>> drwxr-xr-x   - root supergroup          0 2008-12-22 01:16 /hbase4test/.META./1028785192/historian
>> drwxr-xr-x   - root supergroup          0 2008-12-22 01:16 /hbase4test/.META./1028785192/historian/info
>> drwxr-xr-x   - root supergroup          0 2008-12-22 01:16 /hbase4test/.META./1028785192/historian/mapfiles
>> drwxr-xr-x   - root supergroup          0 2008-12-22 01:16 /hbase4test/.META./1028785192/info
>> drwxr-xr-x   - root supergroup          0 2008-12-22 01:16 /hbase4test/.META./1028785192/info/info
>> drwxr-xr-x   - root supergroup          0 2008-12-22 01:16 /hbase4test/.META./1028785192/info/mapfiles
>> -rw-r--r--   3 root supergroup          3 2008-12-22 01:16 /hbase4test/hbase.version
>> [root@yowb0 hbase-0.18.1]#
>>
>>
>> On Dec 22, 2008, at 11:53 AM, Jean-Daniel Cryans wrote:
>>
>> Arno,
>>>
>>> The hbase.rootdir should look something like this for you:
>>> hdfs://yowb0i0.mydomain.com:9000/tmparno/
>>>
>>>
>>> From what I see in your logs it is able to contact the Namenode but unable
>>> to retrieve the file; I think this is because you are using the wrong
>>> port.
>>>
>>> J-D
>>>
>>> On Mon, Dec 22, 2008 at 10:56 AM, Arno Strittmatter <stm@venevita.net>
>>> wrote:
>>>
>>> The quick response is much appreciated, thank you Jean-Daniel.
>>>>
>>>>
>>>> After configuring debug I deleted the old logs, configured a new hbase
>>>> directory (to be clean) and started in distributed mode.
>>>> The attached tarball contains configuration files, log files, and a
>>>> screen trace.
>>>> There is an error in the region server log files about a possible data
>>>> loss, but the reason is not evident.
>>>> I think the debug mode resulted in the shell showing some internal tables
>>>> upon the list command; still the error appears.
>>>>
>>>> Thank you
>>>>
>>>>
>>>> On Dec 22, 2008, at 7:25 AM, Jean-Daniel Cryans wrote:
>>>>
>>>> Arno,
>>>>
>>>>>
>>>>> Could you enable DEBUG and see what it tells you? See
>>>>> http://wiki.apache.org/hadoop/Hbase/FAQ#5
>>>>>
>>>>> Also, I may recognize this trace; this NPE is thrown after all the
>>>>> retries were done to get a single row but, for a dumb reason, we do not
>>>>> check if something is null. (Even if it's not that, I'm going to open a
>>>>> bug in jira right away since I've already seen that.)
>>>>>
>>>>> What this would hide is that it cannot contact your region server.
>>>>> Maybe something is wrong with your configuration; please have a look at
>>>>> the logs and look for any exception.
>>>>>
>>>>> Thx,
>>>>>
>>>>> J-D
>>>>>
>>>>> On Mon, Dec 22, 2008 at 5:43 AM, Arno Strittmatter <stm@venevita.net>
>>>>> wrote:
>>>>>
>>>>> Hello
>>>>>
>>>>>>
>>>>>> fresh install of hbase 0.18.1 & hadoop 0.18.2
>>>>>> hbase in non distributed mode w/o hadoop works fine
>>>>>> as soon as I add the configuration:
>>>>>> <property>
>>>>>> <name>hbase.rootdir</name>
>>>>>> <value>hdfs://yowb0i0.mydomain.com:8020/tmparno/</value>
>>>>>> <description>The directory shared by region servers.
>>>>>> </description>
>>>>>> </property>
>>>>>> and run bin/hbase shell
>>>>>>
>>>>>> list
>>>>>>>
>>>>>> I get an error stack. However the hbase directory does get created on
>>>>>> hdfs.
>>>>>> The error stack is attached.
>>>>>>
>>>>>> Thank you,
>>>>>> Arno
>>>>>>
>>>>>> [root@yowb0 conf]# ../bin/hbase shell
>>>>>> HBase Shell; enter 'help<RETURN>' for list of supported commands.
>>>>>> Version: 0.18.1, r707159, Wed Oct 22 12:43:06 PDT 2008
>>>>>> hbase(main):001:0> list
>>>>>> NativeException: java.lang.NullPointerException: null
>>>>>>   from org/apache/hadoop/hbase/client/ServerCallable.java:71:in
>>>>>> `getRegionName'
>>>>>>   from org/apache/hadoop/hbase/client/HConnectionManager.java:863:in
>>>>>> `getRegionServerWithRetries'
>>>>>>   from org/apache/hadoop/hbase/client/MetaScanner.java:56:in `metaScan'
>>>>>>   from org/apache/hadoop/hbase/client/MetaScanner.java:30:in `metaScan'
>>>>>>   from org/apache/hadoop/hbase/client/HConnectionManager.java:297:in
>>>>>> `listTables'
>>>>>>   from org/apache/hadoop/hbase/client/HBaseAdmin.java:117:in `listTables'
>>>>>>   from sun.reflect.NativeMethodAccessorImpl:-2:in `invoke0'
>>>>>>   from sun.reflect.NativeMethodAccessorImpl:-1:in `invoke'
>>>>>>   from sun.reflect.DelegatingMethodAccessorImpl:-1:in `invoke'
>>>>>>   from java.lang.reflect.Method:-1:in `invoke'
>>>>>>   from org/jruby/javasupport/JavaMethod.java:250:in
>>>>>> `invokeWithExceptionHandling'
>>>>>>   from org/jruby/javasupport/JavaMethod.java:219:in `invoke'
>>>>>>   from org/jruby/javasupport/JavaClass.java:416:in `execute'
>>>>>>   from org/jruby/internal/runtime/methods/SimpleCallbackMethod.java:67:in
>>>>>> `call'
>>>>>>   from org/jruby/internal/runtime/methods/DynamicMethod.java:70:in
>>>>>> `call'
>>>>>>   from org/jruby/runtime/CallSite.java:123:in `cacheAndCall'
>>>>>> ... 131 levels...
>>>>>>   from
>>>>>> ruby.usr.local.hbase.hbase_minus_0_dot_18_dot_1.bin.hirbInvokermethod__23$RUBY$startOpt:-1:in `call'
>>>>>>   from org/jruby/internal/runtime/methods/DynamicMethod.java:74:in
>>>>>> `call'
>>>>>>   from org/jruby/internal/runtime/methods/CompiledMethod.java:48:in
>>>>>> `call'
>>>>>>   from org/jruby/runtime/CallSite.java:123:in `cacheAndCall'
>>>>>>   from org/jruby/runtime/CallSite.java:298:in `call'
>>>>>>   from
>>>>>> ruby/usr/local/hbase/hbase_minus_0_dot_18_dot_1/bin//usr/local/hbase/hbase-0.18.1/bin/../bin/hirb.rb:351:in `__file__'
>>>>>>   from
>>>>>> ruby/usr/local/hbase/hbase_minus_0_dot_18_dot_1/bin//usr/local/hbase/hbase-0.18.1/bin/../bin/hirb.rb:-1:in `__file__'
>>>>>>   from
>>>>>> ruby/usr/local/hbase/hbase_minus_0_dot_18_dot_1/bin//usr/local/hbase/hbase-0.18.1/bin/../bin/hirb.rb:-1:in `load'
>>>>>>   from org/jruby/Ruby.java:512:in `runScript'
>>>>>>   from org/jruby/Ruby.java:432:in `runNormally'
>>>>>>   from org/jruby/Ruby.java:312:in `runFromMain'
>>>>>>   from org/jruby/Main.java:144:in `run'
>>>>>>   from org/jruby/Main.java:89:in `run'
>>>>>>   from org/jruby/Main.java:80:in `main'
>>>>>>   from /usr/local/hbase/hbase-0.18.1/bin/../bin/hirb.rb:242:in `list'
>>>>>>   from (hbase):2:in `binding'
>>>>>> hbase(main):002:0>
>>>>>> hbase(main):003:0*
>>>>>>
>>>>>>
>>>>>>
>>>>
>>>>
>>

