hbase-user mailing list archives

From Lars George <lars.geo...@gmail.com>
Subject Re: Question about Zookeeper quorum
Date Mon, 22 Nov 2010 12:44:53 GMT
Glad to hear that solved it :)

HBaseConfiguration actually reads both as it simply extends the Hadoop
Configuration class. Best of both worlds.

Lars
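A minimal sketch of what "reads both" means in practice (assuming the Hadoop and HBase conf directories are on the classpath; the class name and print statements are just illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ConfCheck {
    public static void main(String[] args) {
        // HBaseConfiguration extends Hadoop's Configuration, so create()
        // returns a config carrying the usual Hadoop resources plus
        // hbase-default.xml and hbase-site.xml layered on top.
        Configuration conf = HBaseConfiguration.create();
        System.out.println("hbase.zookeeper.quorum = " + conf.get("hbase.zookeeper.quorum"));
        System.out.println("dfs.replication = " + conf.get("dfs.replication"));
    }
}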

On Mon, Nov 22, 2010 at 1:02 PM, Hari Sreekumar
<hsreekumar@clickable.com> wrote:
> Hi Lars,
>
>         Aaah, thanks man, it works now! I was initiating the default config
> all the while! Printing out the config values was really helpful. So
> getConf() reads the Hadoop defaults and core-site.xml, and
> HBaseConfiguration() reads the HBase properties.
>
> Thanks a lot,
> Hari
>
> On Mon, Nov 22, 2010 at 4:41 PM, Lars George <lars.george@gmail.com> wrote:
>
>> Hi Hari,
>>
>> getConf() returns a plain Hadoop Configuration instance that does not
>> include HBase's settings. You need to call
>>
>> Configuration conf = HBaseConfiguration.create();
>> Job job = new Job(conf);
>>
>> for it to be read.
>>
>> Lars
>>
>>
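For context, a sketch of how that change might look inside the run() method posted below, keeping the rest of the job setup as-is (BulkUpload, BulkUploadMapper and the "CustomerData" table are the names from that posting):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class BulkUpload extends Configured implements Tool {

    public int run(String[] args) throws Exception {
        // Use an HBase-aware configuration instead of the bare getConf(),
        // so hbase-site.xml (and with it hbase.zookeeper.quorum) is read.
        Configuration conf = HBaseConfiguration.create();

        Job job = new Job(conf);
        job.setJobName("BulkUpload_" + args[0]);
        job.setJarByClass(BulkUpload.class);

        FileInputFormat.setInputPaths(job, new Path(args[0]));
        job.setMapperClass(BulkUploadMapper.class);      // mapper class as posted below
        job.setInputFormatClass(TextInputFormat.class);

        TableMapReduceUtil.initTableReducerJob("CustomerData", null, job);
        job.setNumReduceTasks(0);

        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new BulkUpload(), args));
    }
}

One trade-off of creating the configuration this way is that generic -D options collected by ToolRunner into getConf() are not merged in automatically.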
>> On Mon, Nov 22, 2010 at 11:32 AM, Hari Sreekumar
>> <hsreekumar@clickable.com> wrote:
>> > Hi,
>> >   I tried using this code to read the configuration (I am using ToolRunner):
>> >
>> > public int run(String[] args) throws Exception    {
>> >        String fileName = args[0];
>> >        Path inputPath = new Path(args[0]);
>> >
>> >        Job job = new Job(getConf());
>> >        System.out.println("Conf_hbase: " +
>> > getConf().get("hbase.zookeeper.quorum"));
>> >        System.out.println("Conf_hadoop: " +
>> > getConf().get("dfs.replication"));
>> >
>> >        job.setJarByClass(BulkUpload.class);
>> >        FileInputFormat.setInputPaths(job, inputPath);
>> >        job.setJobName(NAME + "_" + fileName);
>> >
>> >        job.setMapperClass(BulkUploadMapper.class);
>> >        job.setInputFormatClass(TextInputFormat.class);
>> >
>> >        TableMapReduceUtil.initTableReducerJob("CustomerData", null, job);
>> >        //System.out.println("")
>> >        job.setNumReduceTasks(0);
>> >
>> >        boolean success = job.waitForCompletion(true);
>> >        return success ? 0 : 1;
>> >    }
>> >
>> > Is this code right to get the configuration? Because I am getting both
>> > dfs.replication and hbase.zookeeper.quorum as null this way. But when I
>> > change dfs.replication in my config file, I do see a change in replication
>> > after upload into HDFS.
>> >
>> >>> On Mon, Nov 22, 2010 at 2:36 PM, Hari Sreekumar <hsreekumar@clickable.com> wrote:
>> >
>> >> Hi Lars,
>> >>
>> >>       I start them through HBase implicitly. I'll try printing the config
>> >> values and post.
>> >>
>> >> thanks,
>> >> Hari
>> >>
>> >>
>> >> On Mon, Nov 22, 2010 at 1:58 PM, Lars George <lars.george@gmail.com> wrote:
>> >>
>> >>> Hi Hari,
>> >>>
>> >>> Are you starting them yourself or have HBase start them for you
>> >>> implicitly?
>> >>>
>> >>> Lars
>> >>>
>> >>> On Nov 22, 2010, at 6:19, Hari Sreekumar <hsreekumar@clickable.com>
>> >>> wrote:
>> >>>
>> >>> > Hey Lars,
>> >>> >
>> >>> >          I have HQuorumPeer running on all nodes that I specify in my
>> >>> > hbase-site file. One thing I wanted to clarify: what is the default value
>> >>> > of HBASE_MANAGES_ZK? Because I have not explicitly set it to true in my
>> >>> > hbase-env.sh file.
>> >>> >
>> >>> > thanks,
>> >>> > hari
>> >>> >
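For reference, a sketch of the relevant hbase-env.sh line; when HBASE_MANAGES_ZK is left unset, HBase treats it as true and starts the HQuorumPeer daemons itself, which is consistent with HQuorumPeer running on the nodes:

# Tell HBase whether it should manage its own instance of ZooKeeper or not.
export HBASE_MANAGES_ZK=true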
>> >>> > On Mon, Nov 22, 2010 at 10:39 AM, Lars George <lars.george@gmail.com> wrote:
>> >>> >
>> >>> >> Hi Hari,
>> >>> >>
>> >>> >> On which of these four machines do you have a ZooKeeper daemon running as
>> >>> >> well?
>> >>> >>
>> >>> >> Lars
>> >>> >>
>> >>> >> On Mon, Nov 22, 2010 at 5:51 AM, Hari Sreekumar
>> >>> >> <hsreekumar@clickable.com> wrote:
>> >>> >>> Hi,
>> >>> >>>
>> >>> >>>   But it is reading settings from hbase-site.xml. If it was not reading my
>> >>> >>> changes, the problem wouldn't have gotten fixed when I added ejabber to the
>> >>> >>> quorum, right? After all, it is responding to changes I make in my xml file.
>> >>> >>> What else can be the issue here?
>> >>> >>>
>> >>> >>> hari
>> >>> >>>
>> >>> >>> On Mon, Nov 22, 2010 at 12:54 AM, Lars George <lars.george@gmail.com> wrote:
>> >>> >>>
>> >>> >>>> Hi Hari,
>> >>> >>>>
>> >>> >>>> You are missing the quorum setting. It seems the hbase-site.xml is missing
>> >>> >>>> from the classpath on the clients. Did you pack it into the jar?
>> >>> >>>>
>> >>> >>>> And yes, even one ZK server is fine in such a small cluster.
>> >>> >>>>
>> >>> >>>> You can see it is trying to connect to localhost, which is the default if
>> >>> >>>> the site file is missing.
>> >>> >>>>
>> >>> >>>> Regards,
>> >>> >>>> Lars
>> >>> >>>>
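For concreteness, a sketch of the hbase-site.xml stanza the client side needs to see, whether from a conf directory on the classpath or packed into the job jar; the hostnames are the ones discussed in this thread:

<property>
  <name>hbase.zookeeper.quorum</name>
  <value>hadoop1,hadoop2,hadoop3</value>
</property>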
>> >>> >>>> On Nov 21, 2010, at 19:22, Hari Sreekumar <hsreekumar@clickable.com> wrote:
>> >>> >>>>
>> >>> >>>>> Hi,
>> >>> >>>>>   Is it necessary that all RegionServers must also be part of the ZK
>> >>> >>>>> Quorum? I have a 4 node cluster, with node hadoop1 being master and hadoop2,
>> >>> >>>>> hadoop3 and ejabber being the slaves (both in case of Hadoop and HBase).
>> >>> >>>>>
>> >>> >>>>> When I keep only 3 nodes in the zookeeper.quorum property:
>> >>> >>>>> <name>hbase.zookeeper.quorum</name>
>> >>> >>>>> <value>hadoop1,hadoop2,hadoop3</value>
>> >>> >>>>>
>> >>> >>>>> I get this exception for all tasks that run on ejabber (the 4th node):
>> >>> >>>>>
>> >>> >>>>> 2010-11-21 23:35:47,785 INFO org.apache.zookeeper.ClientCnxn: Attempting connection to server localhost/127.0.0.1:2181
>> >>> >>>>> 2010-11-21 23:35:47,790 WARN org.apache.zookeeper.ClientCnxn: Exception closing session 0x0 to sun.nio.ch.SelectionKeyImpl@7c2e1f1f
>> >>> >>>>> java.net.ConnectException: Connection refused
>> >>> >>>>>        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>> >>> >>>>>        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
>> >>> >>>>>        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
>> >>> >>>>> 2010-11-21 23:35:47,791 WARN org.apache.zookeeper.ClientCnxn: Ignoring exception during shutdown input
>> >>> >>>>> java.nio.channels.ClosedChannelException
>> >>> >>>>>        at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
>> >>> >>>>>        at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
>> >>> >>>>>        at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
>> >>> >>>>>        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
>> >>> >>>>> 2010-11-21 23:35:47,791 WARN org.apache.zookeeper.ClientCnxn: Ignoring exception during shutdown output
>> >>> >>>>> java.nio.channels.ClosedChannelException
>> >>> >>>>>        at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
>> >>> >>>>>        at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
>> >>> >>>>>        at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
>> >>> >>>>>        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
>> >>> >>>>> 2010-11-21 23:35:47,925 WARN org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper: Failed to create /hbase -- check quorum servers, currently=localhost:2181
>> >>> >>>>> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
>> >>> >>>>>        at org.apache.zookeeper.KeeperException.create(KeeperException.java:90)
>> >>> >>>>>        at org.apache.zookeeper.KeeperException.create(KeeperException.java:42)
>> >>> >>>>>        at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:780)
>> >>> >>>>>        at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:808)
>> >>> >>>>>        at org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper.ensureExists(ZooKeeperWrapper.java:405)
>> >>> >>>>>        at org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper.ensureParentExists(ZooKeeperWrapper.java:432)
>> >>> >>>>>        at org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper.checkOutOfSafeMode(ZooKeeperWrapper.java:545)
>> >>> >>>>>
>> >>> >>>>> When I add ejabber also to the ZK quorum and restart HBase, I don't get this
>> >>> >>>>> exception. My understanding was that a small cluster like mine should only
>> >>> >>>>> need one ZK machine.
>> >>> >>>>>
>> >>> >>>>> Thanks,
>> >>> >>>>> Hari
>> >>> >>>>
>> >>> >>>
>> >>> >>
>> >>>
>> >>
>> >>
>> >
>>
>
