hbase-user mailing list archives

From Ted Yu <yuzhih...@gmail.com>
Subject Re: Terminal initialization failed; falling back to unsupported java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but interface was expected
Date Sun, 27 Aug 2017 22:57:56 GMT
If you used the downloaded 2.0.0-alpha2 release, it was built with hadoop 2.7.1.

You should build with the hadoop-3.0 profile since you're using hadoop-3.
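A build from source along these lines should do it (the `-Dhadoop.profile=3.0` property is how the hadoop-3.0 profile is activated in the HBase 2.0 poms; verify against the pom of your checkout):

```shell
# Build HBase against Hadoop 3 instead of the default Hadoop 2.7.x.
# The property name below is an assumption based on the 2.0 branch poms.
mvn clean install -DskipTests -Dhadoop.profile=3.0
```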

The jline version is 2.11 (see lib/ruby/jruby-complete-9.1.10.0.jar)

Is there any other version of jline in the classpath ?
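One quick way to check (assuming `bin/hbase classpath` works in your install): older jline 0.9.x, which ZooKeeper 3.4.x bundles, defines jline.Terminal as a class, while jline 2.x defines it as an interface, which matches the error you're seeing.

```shell
# Print every jline jar on HBase's effective classpath.
# More than one line here usually means a version conflict.
bin/hbase classpath | tr ':' '\n' | grep -i jline
```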

Cheers

On Sun, Aug 27, 2017 at 1:10 PM, Alexandr Porunov <
alexandr.porunov@gmail.com> wrote:

> Hello,
>
> I have been trying to install HBase for about four days, without success.
> I am getting a strange error:
> Terminal initialization failed; falling back to unsupported
> java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but
> interface was expected
>
> What does it mean? How can I fix it?
> I am using hadoop 3.0.0-alpha4, hbase 2.0.0-alpha2 and zookeeper 3.4.10.
>
> Is it possible to get them to work together?
>
> I am trying to execute "bin/hbase shell" to open the HBase shell, but it
> still doesn't work. What am I doing wrong?
> It seems that HBase is able to create its files in HDFS, so why is the
> shell not working?
>
> Here is hbase-site.xml:
>
> <configuration>
> <property>
>   <name>hbase.rootdir</name>
>   <value>hdfs://127.0.0.1:8020/hbase</value>
> </property>
> <property>
>   <name>hbase.zookeeper.quorum</name>
>   <value>127.0.0.1</value>
> </property>
> </configuration>
>
> Here is hdfs-site.xml:
>
> <configuration>
>     <property>
>         <name>dfs.namenode.name.dir</name>
>         <value>file:///srv/hadoop/hdfs/nn</value>
>         <final>true</final>
>     </property>
>
>     <property>
>         <name>dfs.datanode.data.dir</name>
>         <value>file:///srv/hadoop/hdfs/dn</value>
>         <final>true</final>
>     </property>
>     <property>
>         <name>dfs.namenode.http-address</name>
>         <value>127.0.0.1:50070</value>
>         <final>true</final>
>     </property>
>
>     <property>
>         <name>dfs.secondary.namenode.http-address</name>
>         <value>127.0.0.1:50090</value>
>         <final>true</final>
>     </property>
>
>     <property>
>         <name>dfs.hosts</name>
>         <value>/etc/hadoop/conf/dfs.hosts</value>
>     </property>
>
>     <property>
>         <name>dfs.hosts.exclude</name>
>         <value>/etc/hadoop/conf/dfs.hosts.exclude</value>
>     </property>
>
>     <property>
>         <name>dfs.replication</name>
>         <value>1</value>
>     </property>
> </configuration>
>
>
> Here is core-site.xml:
>
> <configuration>
>     <property>
>         <name>fs.defaultFS</name>
>         <value>hdfs://127.0.0.1:8020</value>
>     </property>
>     <property>
>         <name>io.native.lib.available</name>
>         <value>true</value>
>     </property>
>     <property>
>         <name>fs.trash.interval</name>
>         <value>60</value>
>     </property>
>     <property>
>         <name>io.file.buffer.size</name>
>         <value>65536</value>
>     </property>
> </configuration>
>
> Here is the output when I am trying to execute "bin/hbase shell":
> http://paste.openstack.org/show/619575/
>
> In /etc/environment I have the next option:
> export HADOOP_USER_CLASSPATH_FIRST=true
>
> I don't know what else I can do to fix this issue.
> Please, help with any ideas.
>
> Best regards
>
