hbase-commits mailing list archives

From bus...@apache.org
Subject [01/15] hbase git commit: HBASE-14066 clean out old docbook docs from branch-1.
Date Tue, 14 Jul 2015 02:50:17 GMT
Repository: hbase
Updated Branches:
  refs/heads/branch-1.2 4b6f0149c -> 09a422e80


http://git-wip-us.apache.org/repos/asf/hbase/blob/09a422e8/src/main/docbkx/zookeeper.xml
----------------------------------------------------------------------
diff --git a/src/main/docbkx/zookeeper.xml b/src/main/docbkx/zookeeper.xml
deleted file mode 100644
index 206ccf5..0000000
--- a/src/main/docbkx/zookeeper.xml
+++ /dev/null
@@ -1,513 +0,0 @@
-<?xml version="1.0"?>
-<chapter
-  xml:id="zookeeper"
-  version="5.0"
-  xmlns="http://docbook.org/ns/docbook"
-  xmlns:xlink="http://www.w3.org/1999/xlink"
-  xmlns:xi="http://www.w3.org/2001/XInclude"
-  xmlns:svg="http://www.w3.org/2000/svg"
-  xmlns:m="http://www.w3.org/1998/Math/MathML"
-  xmlns:html="http://www.w3.org/1999/xhtml"
-  xmlns:db="http://docbook.org/ns/docbook">
-  <!--
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
--->
-
-  <title>ZooKeeper<indexterm>
-      <primary>ZooKeeper</primary>
-    </indexterm></title>
-
-  <para>A distributed Apache HBase installation depends on a running ZooKeeper cluster. All
-    participating nodes and clients need to be able to access the running ZooKeeper ensemble.
-    Apache HBase by default manages a ZooKeeper "cluster" for you. It will start and stop the
-    ZooKeeper ensemble as part of the HBase start/stop process. You can also manage the ZooKeeper
-    ensemble independent of HBase and just point HBase at the cluster it should use. To toggle
-    HBase management of ZooKeeper, use the <varname>HBASE_MANAGES_ZK</varname> variable in
-    <filename>conf/hbase-env.sh</filename>. This variable, which defaults to
-    <varname>true</varname>, tells HBase whether to start/stop the ZooKeeper ensemble servers as
-    part of HBase start/stop.</para>
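-  <para>As a minimal sketch (the rest of your <filename>conf/hbase-env.sh</filename> is left
-    untouched), letting HBase manage ZooKeeper looks like this:</para>
-  <programlisting language="bourne">
-# Tell HBase whether it should manage its own instance of ZooKeeper or not.
-export HBASE_MANAGES_ZK=true</programlisting>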
-
-  <para>When HBase manages the ZooKeeper ensemble, you can specify ZooKeeper configuration using
-    its native <filename>zoo.cfg</filename> file or, more simply, specify ZooKeeper options
-    directly in <filename>conf/hbase-site.xml</filename>. A ZooKeeper configuration option can be
-    set as a property in the HBase <filename>hbase-site.xml</filename> XML configuration file by
-    prefacing the ZooKeeper option name with <varname>hbase.zookeeper.property</varname>. For
-    example, the <varname>clientPort</varname> setting in ZooKeeper can be changed by setting the
-    <varname>hbase.zookeeper.property.clientPort</varname> property. For all default values used
-    by HBase, including ZooKeeper configuration, see <xref
-      linkend="hbase_default_configurations" /> and look for the
-    <varname>hbase.zookeeper.property</varname> prefix. For the full list of ZooKeeper
-    configurations, see ZooKeeper's <filename>zoo.cfg</filename>. HBase does not ship with a
-    <filename>zoo.cfg</filename>, so you will need to browse the <filename>conf</filename>
-    directory in an appropriate ZooKeeper download.</para>
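-  <para>To illustrate the prefixing rule, the ZooKeeper <varname>tickTime</varname> option would
-    be set in <filename>hbase-site.xml</filename> as below. The value shown, 2000 milliseconds,
-    is ZooKeeper's usual default; check your ZooKeeper release for the authoritative
-    value.</para>
-  <programlisting language="xml"><![CDATA[
-<property>
-  <name>hbase.zookeeper.property.tickTime</name>
-  <value>2000</value>
-</property>]]></programlisting>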
-
-  <para>You must at least list the ensemble servers in <filename>hbase-site.xml</filename> using
-    the <varname>hbase.zookeeper.quorum</varname> property. This property defaults to a single
-    ensemble member at <varname>localhost</varname>, which is not suitable for a fully
-    distributed HBase. (It binds to the local machine only and remote clients will not be able
-    to connect.)</para>
-  <note
-    xml:id="how_many_zks">
-    <title>How many ZooKeepers should I run?</title>
-
-    <para>You can run a ZooKeeper ensemble that comprises 1 node only, but in production it is
-      recommended that you run a ZooKeeper ensemble of 3, 5 or 7 machines; the more members an
-      ensemble has, the more tolerant the ensemble is of host failures. Also, run an odd number
-      of machines. A quorum is a strict majority of the ensemble (floor(n/2) + 1 of n peers), so
-      while an even number of peers is supported, it is normally not used: an even-sized ensemble
-      requires, proportionally, more peers to form a quorum than an odd-sized ensemble requires.
-      For example, an ensemble with 4 peers requires 3 to form a quorum, while an ensemble with 5
-      also requires 3 to form a quorum. Thus, an ensemble of 5 allows 2 peers to fail, and so is
-      more fault tolerant than the ensemble of 4, which allows only 1 down peer.</para>
-    <para>Give each ZooKeeper server around 1GB of RAM, and if possible, its own dedicated disk
-      (a dedicated disk is the best thing you can do to ensure a performant ZooKeeper ensemble).
-      For very heavily loaded clusters, run ZooKeeper servers on separate machines from
-      RegionServers (DataNodes and TaskTrackers).</para>
-  </note>
-
-  <para>For example, to have HBase manage a ZooKeeper quorum on nodes
-    <emphasis>rs{1,2,3,4,5}.example.com</emphasis>, bound to port 2222 (the default is 2181),
-    ensure <varname>HBASE_MANAGES_ZK</varname> is commented out or set to
-    <varname>true</varname> in <filename>conf/hbase-env.sh</filename>, and then edit
-    <filename>conf/hbase-site.xml</filename> and set
-    <varname>hbase.zookeeper.property.clientPort</varname> and
-    <varname>hbase.zookeeper.quorum</varname>. You should also set
-    <varname>hbase.zookeeper.property.dataDir</varname> to something other than the default, as
-    the default has ZooKeeper persist data under <filename>/tmp</filename>, which is often
-    cleared on system restart. In the example below we have ZooKeeper persist to
-    <filename>/usr/local/zookeeper</filename>.</para>
-  <programlisting language="java"><![CDATA[
-  <configuration>
-    ...
-    <property>
-      <name>hbase.zookeeper.property.clientPort</name>
-      <value>2222</value>
-      <description>Property from ZooKeeper's config zoo.cfg.
-      The port at which the clients will connect.
-      </description>
-    </property>
-    <property>
-      <name>hbase.zookeeper.quorum</name>
-      <value>rs1.example.com,rs2.example.com,rs3.example.com,rs4.example.com,rs5.example.com</value>
-      <description>Comma separated list of servers in the ZooKeeper Quorum.
-      For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
-      By default this is set to localhost for local and pseudo-distributed modes
-      of operation. For a fully-distributed setup, this should be set to a full
-      list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh
-      this is the list of servers which we will start/stop ZooKeeper on.
-      </description>
-    </property>
-    <property>
-      <name>hbase.zookeeper.property.dataDir</name>
-      <value>/usr/local/zookeeper</value>
-      <description>Property from ZooKeeper's config zoo.cfg.
-      The directory where the snapshot is stored.
-      </description>
-    </property>
-    ...
-  </configuration>]]></programlisting>
-  <caution
-    xml:id="zk.version">
-    <title>What version of ZooKeeper should I use?</title>
-    <para>The newer version, the better. For example, some folks have been bitten by <link
-        xlink:href="https://issues.apache.org/jira/browse/ZOOKEEPER-1277">ZOOKEEPER-1277</link>.
-      If running ZooKeeper 3.5+, you can ask HBase to make use of the new multi operation by
-      enabling <xref linkend="hbase.zookeeper.useMulti" /> in your
-      <filename>hbase-site.xml</filename>.</para>
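-    <para>A sketch of what enabling it looks like in <filename>hbase-site.xml</filename>; only
-      do this once every member of the ensemble runs a ZooKeeper release that supports the multi
-      operation:</para>
-    <programlisting language="xml"><![CDATA[
-<property>
-  <name>hbase.zookeeper.useMulti</name>
-  <value>true</value>
-</property>]]></programlisting>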
-  </caution>
-  <caution>
-    <title>ZooKeeper Maintenance</title>
-    <para>Be sure to set up the data dir cleaner described under <link
-        xlink:href="http://zookeeper.apache.org/doc/r3.1.2/zookeeperAdmin.html#sc_maintenance">ZooKeeper
-        Maintenance</link>, or else you could have 'interesting' problems a couple of months in:
-      ZooKeeper can start dropping sessions if it has to run through a directory of hundreds of
-      thousands of logs, which it is wont to do around leader reelection time -- a rare process,
-      but one that runs on occasion, whether because a machine was dropped or merely
-      hiccuped.</para>
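-    <para>One way to set up the cleaner, assuming ZooKeeper 3.4 or later where the autopurge
-      options are available (on earlier releases, run ZooKeeper's
-      <filename>zkCleanup.sh</filename> script from cron instead, as the maintenance page
-      describes), is to add the following to <filename>zoo.cfg</filename>:</para>
-    <programlisting>
-# keep the 3 most recent snapshots (and their logs), purging the rest every 24 hours
-autopurge.snapRetainCount=3
-autopurge.purgeInterval=24</programlisting>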
-  </caution>
-
-  <section>
-    <title>Using existing ZooKeeper ensemble</title>
-
-    <para>To point HBase at an existing ZooKeeper cluster, one that is not managed by HBase, set
-      <varname>HBASE_MANAGES_ZK</varname> in <filename>conf/hbase-env.sh</filename> to
-      <varname>false</varname>.</para>
-    <screen language="bourne">
-  ...
-  # Tell HBase whether it should manage its own instance of Zookeeper or not.
-  export HBASE_MANAGES_ZK=false</screen>
-    <para>Next set ensemble locations and client port, if non-standard, in
-      <filename>hbase-site.xml</filename>, or add a suitably configured
-      <filename>zoo.cfg</filename> to HBase's <filename>CLASSPATH</filename>. HBase will prefer
-      the configuration found in <filename>zoo.cfg</filename> over any settings in
-      <filename>hbase-site.xml</filename>.</para>
-
-    <para>When HBase manages ZooKeeper, it will start/stop the ZooKeeper servers as a part of the
-      regular start/stop scripts. If you would like to run ZooKeeper yourself, independent of
-      HBase start/stop, you would do the following:</para>
-
-    <screen language="bourne">
-${HBASE_HOME}/bin/hbase-daemons.sh {start,stop} zookeeper
-</screen>
-
-    <para>Note that you can use HBase in this manner to spin up a ZooKeeper cluster unrelated to
-      HBase. Just make sure to set <varname>HBASE_MANAGES_ZK</varname> to
-      <varname>false</varname> if you want it to stay up across HBase restarts, so that when
-      HBase shuts down, it doesn't take ZooKeeper down with it.</para>
-
-    <para>For more information about running a distinct ZooKeeper cluster, see the ZooKeeper
-      <link
-        xlink:href="http://hadoop.apache.org/zookeeper/docs/current/zookeeperStarted.html">Getting
-        Started Guide</link>. Additionally, see the <link
-        xlink:href="http://wiki.apache.org/hadoop/ZooKeeper/FAQ#A7">ZooKeeper Wiki</link> or the
-      <link
-        xlink:href="http://zookeeper.apache.org/doc/r3.3.3/zookeeperAdmin.html#sc_zkMulitServerSetup">ZooKeeper
-        documentation</link> for more information on ZooKeeper sizing.</para>
-  </section>
-
-
-  <section
-    xml:id="zk.sasl.auth">
-    <title>SASL Authentication with ZooKeeper</title>
-    <para>Newer releases of Apache HBase (&gt;= 0.92) will support connecting to a ZooKeeper
-      Quorum that supports SASL authentication (which is available in ZooKeeper versions 3.4.0
-      or later).</para>
-
-    <para>This describes how to set up HBase to mutually authenticate with a ZooKeeper Quorum.
-      ZooKeeper/HBase mutual authentication (<link
-        xlink:href="https://issues.apache.org/jira/browse/HBASE-2418">HBASE-2418</link>) is
-      required as part of a complete secure HBase configuration (<link
-        xlink:href="https://issues.apache.org/jira/browse/HBASE-3025">HBASE-3025</link>). For
-      simplicity of explication, this section ignores additional configuration required (secure
-      HDFS and Coprocessor configuration). It's recommended to begin with an HBase-managed
-      ZooKeeper configuration (as opposed to a standalone ZooKeeper quorum) for ease of
-      learning.</para>
-
-    <section>
-      <title>Operating System Prerequisites</title>
-
-      <para>You need to have a working Kerberos KDC setup. For each <code>$HOST</code> that will
-        run a ZooKeeper server, you should have a principal <code>zookeeper/$HOST</code>. For
-        each such host, add a service key (using the <code>kadmin</code> or
-        <code>kadmin.local</code> tool's <code>ktadd</code> command) for
-        <code>zookeeper/$HOST</code> and copy this file to <code>$HOST</code>, making it readable
-        only to the user that will run ZooKeeper on <code>$HOST</code>. Note the location of this
-        file, which we will use below as <filename>$PATH_TO_ZOOKEEPER_KEYTAB</filename>. A sketch
-        of these commands is shown below.</para>
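-      <para>A sketch of those steps using <code>kadmin.local</code>; the realm
-        <code>EXAMPLE.COM</code>, the hostname <code>host1.example.com</code>, the keytab path,
-        and the <code>zookeeper</code> OS user are placeholders for your own values (the
-        <code>hbase/$HOST</code> principals described next follow the same pattern):</para>
-      <programlisting language="bourne">
-# create the service principal for this ZooKeeper host
-kadmin.local -q "addprinc -randkey zookeeper/host1.example.com@EXAMPLE.COM"
-# export its key to a keytab; this is $PATH_TO_ZOOKEEPER_KEYTAB below
-kadmin.local -q "ktadd -k /etc/zookeeper/zookeeper.keytab zookeeper/host1.example.com@EXAMPLE.COM"
-# after copying the keytab to the host, make it readable only by the ZooKeeper user
-chown zookeeper /etc/zookeeper/zookeeper.keytab
-chmod 400 /etc/zookeeper/zookeeper.keytab</programlisting>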
-
-      <para>Similarly, for each <code>$HOST</code> that will run an HBase server (master or
-        regionserver), you should have a principal <code>hbase/$HOST</code>. For each host, add a
-        keytab file called <filename>hbase.keytab</filename> containing a service key for
-        <code>hbase/$HOST</code>, copy this file to <code>$HOST</code>, and make it readable only
-        to the user that will run an HBase service on <code>$HOST</code>. Note the location of
-        this file, which we will use below as
-        <filename>$PATH_TO_HBASE_KEYTAB</filename>.</para>
-
-      <para>Each user who will be an HBase client should also be given a Kerberos principal. This
-        principal should usually have a password assigned to it (as opposed to, as with the HBase
-        servers, a keytab file) which only this user knows. The client's principal's
-        <code>maxrenewlife</code> should be set high enough that the ticket can be renewed for
-        the duration of the user's HBase client processes. For example, if a user runs a
-        long-running HBase client process that takes at most 3 days, we might create this user's
-        principal within <code>kadmin</code> with: <code>addprinc -maxrenewlife 3days</code>. The
-        ZooKeeper client and server libraries manage their own ticket refreshment by running
-        threads that wake up periodically to do the refreshment.</para>
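-      <para>For instance, such a client user might be created and might authenticate before
-        starting the shell roughly as follows; the user name <code>alice</code> and realm
-        <code>EXAMPLE.COM</code> are placeholders:</para>
-      <programlisting language="bourne">
-# within kadmin: create the user principal with a renewable lifetime of 3 days
-addprinc -maxrenewlife 3days alice@EXAMPLE.COM
-# as the user: obtain a renewable ticket, then start the client
-kinit -r 3d alice@EXAMPLE.COM
-hbase shell</programlisting>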
-
-      <para>On each host that will run an HBase client (e.g. <code>hbase shell</code>), add the
-        following file to the HBase home directory's <filename>conf</filename>
-        directory:</para>
-
-      <programlisting language="java">
-Client {
-  com.sun.security.auth.module.Krb5LoginModule required
-  useKeyTab=false
-  useTicketCache=true;
-};
-                </programlisting>
-
-      <para>We'll refer to this JAAS configuration file as <filename>$CLIENT_CONF</filename>
-        below.</para>
-    </section>
-    <section>
-      <title>HBase-managed ZooKeeper Configuration</title>
-
-      <para>On each node that will run a ZooKeeper server, a master, or a regionserver, create
-        a <link
-          xlink:href="http://docs.oracle.com/javase/1.4.2/docs/guide/security/jgss/tutorials/LoginConfigFile.html">JAAS</link>
-        configuration file in the <filename>conf</filename> directory of the node's
-        <filename>HBASE_HOME</filename> directory that looks like the following:</para>
-
-      <programlisting language="java">
-Server {
-  com.sun.security.auth.module.Krb5LoginModule required
-  useKeyTab=true
-  keyTab="$PATH_TO_ZOOKEEPER_KEYTAB"
-  storeKey=true
-  useTicketCache=false
-  principal="zookeeper/$HOST";
-};
-Client {
-  com.sun.security.auth.module.Krb5LoginModule required
-  useKeyTab=true
-  useTicketCache=false
-  keyTab="$PATH_TO_HBASE_KEYTAB"
-  principal="hbase/$HOST";
-};
-                </programlisting>
-
-      <para>where the <filename>$PATH_TO_HBASE_KEYTAB</filename> and
-        <filename>$PATH_TO_ZOOKEEPER_KEYTAB</filename> files are what you created above, and
-        <code>$HOST</code> is the hostname for that node.</para>
-
-      <para>The <code>Server</code> section will be used by the ZooKeeper quorum server, while
-        the <code>Client</code> section will be used by the HBase master and regionservers. The
-        path to this file should be substituted for the text
-        <filename>$HBASE_SERVER_CONF</filename> in the <filename>hbase-env.sh</filename> listing
-        below.</para>
-
-      <para>The path to the client JAAS configuration file created in the Operating System
-        Prerequisites section above should be substituted for the text
-        <filename>$CLIENT_CONF</filename> in the <filename>hbase-env.sh</filename> listing
-        below.</para>
-
-      <para>Modify your <filename>hbase-env.sh</filename> to include the following:</para>
-
-      <programlisting language="bourne">
-export HBASE_OPTS="-Djava.security.auth.login.config=$CLIENT_CONF"
-export HBASE_MANAGES_ZK=true
-export HBASE_ZOOKEEPER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_CONF"
-export HBASE_MASTER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_CONF"
-export HBASE_REGIONSERVER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_CONF"
-                </programlisting>
-
-      <para>where <filename>$HBASE_SERVER_CONF</filename> and <filename>$CLIENT_CONF</filename>
-        are the full paths to the JAAS configuration files created above.</para>
-
-      <para>Modify your <filename>hbase-site.xml</filename> on each node that will run a
-        ZooKeeper server, master, or regionserver to contain:</para>
-
-      <programlisting language="java"><![CDATA[
-<configuration>
-  <property>
-    <name>hbase.zookeeper.quorum</name>
-    <value>$ZK_NODES</value>
-  </property>
-  <property>
-    <name>hbase.cluster.distributed</name>
-    <value>true</value>
-  </property>
-  <property>
-    <name>hbase.zookeeper.property.authProvider.1</name>
-    <value>org.apache.zookeeper.server.auth.SASLAuthenticationProvider</value>
-  </property>
-  <property>
-    <name>hbase.zookeeper.property.kerberos.removeHostFromPrincipal</name>
-    <value>true</value>
-  </property>
-  <property>
-    <name>hbase.zookeeper.property.kerberos.removeRealmFromPrincipal</name>
-    <value>true</value>
-  </property>
-</configuration>
-                  ]]></programlisting>
-
-      <para>where <code>$ZK_NODES</code> is the comma-separated list of hostnames of the
-        ZooKeeper Quorum hosts.</para>
-
-      <para>Start your HBase cluster by running one or more of the following set of commands on
-        the appropriate hosts:</para>
-
-      <screen>
-bin/hbase zookeeper start
-bin/hbase master start
-bin/hbase regionserver start
-                </screen>
-
-    </section>
-
-    <section>
-      <title>External ZooKeeper Configuration</title>
-      <para>Add a JAAS configuration file that looks like:</para>
-      <programlisting language="java">
-Client {
-  com.sun.security.auth.module.Krb5LoginModule required
-  useKeyTab=true
-  useTicketCache=false
-  keyTab="$PATH_TO_HBASE_KEYTAB"
-  principal="hbase/$HOST";
-};
-                </programlisting>
-      <para>where the <filename>$PATH_TO_HBASE_KEYTAB</filename> is the keytab created above for
-        HBase services to run on this host, and <code>$HOST</code> is the hostname for that
-        node. Put this in the HBase home's configuration directory. We'll refer to this file's
-        full pathname as <filename>$HBASE_SERVER_CONF</filename> below.</para>
-
-      <para>Modify your <filename>hbase-env.sh</filename> to include the following:</para>
-
-      <programlisting language="bourne">
-export HBASE_OPTS="-Djava.security.auth.login.config=$CLIENT_CONF"
-export HBASE_MANAGES_ZK=false
-export HBASE_MASTER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_CONF"
-export HBASE_REGIONSERVER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_CONF"
-                </programlisting>
-
-
-      <para>Modify your <filename>hbase-site.xml</filename> on each node that will run a master
-        or regionserver to contain:</para>
-
-      <programlisting language="xml"><![CDATA[
-<configuration>
-  <property>
-    <name>hbase.zookeeper.quorum</name>
-    <value>$ZK_NODES</value>
-  </property>
-  <property>
-    <name>hbase.cluster.distributed</name>
-    <value>true</value>
-  </property>
-</configuration>
-                  ]]>
-                </programlisting>
-
-      <para>where <code>$ZK_NODES</code> is the comma-separated list of hostnames of the
-        ZooKeeper Quorum hosts.</para>
-
-      <para>Add a <filename>zoo.cfg</filename> for each ZooKeeper Quorum host
-        containing:</para>
-      <programlisting language="java">
-authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
-kerberos.removeHostFromPrincipal=true
-kerberos.removeRealmFromPrincipal=true
-                  </programlisting>
-      <para>Also on each of these hosts, create a JAAS configuration file containing:</para>
-      <programlisting language="java">
-Server {
-  com.sun.security.auth.module.Krb5LoginModule required
-  useKeyTab=true
-  keyTab="$PATH_TO_ZOOKEEPER_KEYTAB"
-  storeKey=true
-  useTicketCache=false
-  principal="zookeeper/$HOST";
-};
-                  </programlisting>
-      <para>where <code>$HOST</code> is the hostname of each Quorum host. We will refer to the
-        full pathname of this file as <filename>$ZK_SERVER_CONF</filename> below.</para>
-
-      <para>Start your ZooKeeper servers on each ZooKeeper Quorum host with:</para>
-      <programlisting language="bourne">
-SERVER_JVMFLAGS="-Djava.security.auth.login.config=$ZK_SERVER_CONF" bin/zkServer.sh start
-                  </programlisting>
-
-      <para>Start your HBase cluster by running one or more of the following set of commands on
-        the appropriate nodes:</para>
-
-      <screen>
-bin/hbase master start
-bin/hbase regionserver start
-                </screen>
-
-
-    </section>
-
-    <section>
-      <title>ZooKeeper Server Authentication Log Output</title>
-      <para>If the configuration above is successful, you should see something similar to the
-        following in your ZooKeeper server logs:</para>
-      <screen>
-11/12/05 22:43:39 INFO zookeeper.Login: successfully logged in.
-11/12/05 22:43:39 INFO server.NIOServerCnxnFactory: binding to port 0.0.0.0/0.0.0.0:2181
-11/12/05 22:43:39 INFO zookeeper.Login: TGT refresh thread started.
-11/12/05 22:43:39 INFO zookeeper.Login: TGT valid starting at:        Mon Dec 05 22:43:39 UTC 2011
-11/12/05 22:43:39 INFO zookeeper.Login: TGT expires:                  Tue Dec 06 22:43:39 UTC 2011
-11/12/05 22:43:39 INFO zookeeper.Login: TGT refresh sleeping until: Tue Dec 06 18:36:42 UTC 2011
-..
-11/12/05 22:43:59 INFO auth.SaslServerCallbackHandler:
-  Successfully authenticated client: authenticationID=hbase/ip-10-166-175-249.us-west-1.compute.internal@HADOOP.LOCALDOMAIN;
-  authorizationID=hbase/ip-10-166-175-249.us-west-1.compute.internal@HADOOP.LOCALDOMAIN.
-11/12/05 22:43:59 INFO auth.SaslServerCallbackHandler: Setting authorizedID: hbase
-11/12/05 22:43:59 INFO server.ZooKeeperServer: adding SASL authorization for authorizationID: hbase
-                </screen>
-
-    </section>
-
-    <section>
-      <title>ZooKeeper Client Authentication Log Output</title>
-      <para>On the ZooKeeper client side (HBase master or regionserver), you should see
-        something similar to the following:</para>
-      <screen>
-11/12/05 22:43:59 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=ip-10-166-175-249.us-west-1.compute.internal:2181 sessionTimeout=180000 watcher=master:60000
-11/12/05 22:43:59 INFO zookeeper.ClientCnxn: Opening socket connection to server /10.166.175.249:2181
-11/12/05 22:43:59 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 14851@ip-10-166-175-249
-11/12/05 22:43:59 INFO zookeeper.Login: successfully logged in.
-11/12/05 22:43:59 INFO client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
-11/12/05 22:43:59 INFO zookeeper.Login: TGT refresh thread started.
-11/12/05 22:43:59 INFO zookeeper.ClientCnxn: Socket connection established to ip-10-166-175-249.us-west-1.compute.internal/10.166.175.249:2181, initiating session
-11/12/05 22:43:59 INFO zookeeper.Login: TGT valid starting at:        Mon Dec 05 22:43:59 UTC 2011
-11/12/05 22:43:59 INFO zookeeper.Login: TGT expires:                  Tue Dec 06 22:43:59 UTC 2011
-11/12/05 22:43:59 INFO zookeeper.Login: TGT refresh sleeping until: Tue Dec 06 18:30:37 UTC 2011
-11/12/05 22:43:59 INFO zookeeper.ClientCnxn: Session establishment complete on server ip-10-166-175-249.us-west-1.compute.internal/10.166.175.249:2181, sessionid = 0x134106594320000, negotiated timeout = 180000
-                </screen>
-    </section>
-
-    <section>
-      <title>Configuration from Scratch</title>
-
-      <para>This has been tested on the current standard Amazon Linux AMI. First set up the KDC
-        and principals as described above. Next, check out the code and run a sanity
-        check.</para>
-
-      <screen>
-git clone git://git.apache.org/hbase.git
-cd hbase
-mvn clean test -Dtest=TestZooKeeperACL
-                </screen>
-
-      <para>Then configure HBase as described above. Manually edit
-        <filename>target/cached_classpath.txt</filename> (see below), then start the
-        daemons:</para>
-      <screen>
-bin/hbase zookeeper &amp;
-bin/hbase master &amp;
-bin/hbase regionserver &amp;
-                </screen>
-    </section>
-
-
-    <section>
-      <title>Future improvements</title>
-
-      <section>
-        <title>Fix target/cached_classpath.txt</title>
-        <para>You must override the standard hadoop-core jar file from the
-          <code>target/cached_classpath.txt</code> file with the version containing the
-          HADOOP-7070 fix. You can use the following script to do this:</para>
-        <screen language="bourne">
-echo `find ~/.m2 -name "*hadoop-core*7070*SNAPSHOT.jar"` ':' `cat target/cached_classpath.txt`
| sed 's/ //g' > target/tmp.txt
-mv target/tmp.txt target/cached_classpath.txt
-                </screen>
-      </section>
-
-      <section>
-        <title>Set JAAS configuration programmatically</title>
-
-
-        <para>This would avoid the need for a separate Hadoop jar that fixes <link
-            xlink:href="https://issues.apache.org/jira/browse/HADOOP-7070">HADOOP-7070</link>.</para>
-      </section>
-
-      <section>
-        <title>Elimination of <code>kerberos.removeHostFromPrincipal</code> and
-          <code>kerberos.removeRealmFromPrincipal</code></title>
-        <para />
-      </section>
-
-    </section>
-
-
-  </section>
-  <!-- SASL Authentication with ZooKeeper -->
-
-
-
-
-</chapter>

