eagle-commits mailing list archives

From m.@apache.org
Subject svn commit: r1778394 [1/2] - in /eagle/site: ./ docs/ docs/tutorial/ post/2015/10/27/ sup/
Date Thu, 12 Jan 2017 07:44:47 GMT
Author: mw
Date: Thu Jan 12 07:44:47 2017
New Revision: 1778394

URL: http://svn.apache.org/viewvc?rev=1778394&view=rev
Log:
remove incubator from downloading urls
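The bulk URL cleanup this log message describes could be sketched as the following one-pass substitution. This is purely illustrative: the `eagle.incubator.apache.org` to `eagle.apache.org` host mapping is an assumption inferred from the log message, and the real edit may have been made by other means. The sketch operates on a scratch copy rather than the live site tree.

```shell
# Hypothetical sketch of "remove incubator from downloading urls".
# Works on a scratch tree under /tmp; GNU sed's in-place flag is assumed.
mkdir -p /tmp/eagle-site/docs
printf '<a href="http://eagle.incubator.apache.org/docs/download.html">download</a>\n' \
  > /tmp/eagle-site/docs/download.html

# Rewrite every file that still mentions the incubator hostname.
grep -rl 'eagle\.incubator\.apache\.org' /tmp/eagle-site/docs | while read -r f; do
  sed -i 's|eagle\.incubator\.apache\.org|eagle.apache.org|g' "$f"
done
cat /tmp/eagle-site/docs/download.html
```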

Modified:
    eagle/site/docs/FAQ.html
    eagle/site/docs/ambari-plugin-install.html
    eagle/site/docs/cloudera-integration.html
    eagle/site/docs/community.html
    eagle/site/docs/configuration.html
    eagle/site/docs/deployment-env.html
    eagle/site/docs/deployment-in-docker.html
    eagle/site/docs/deployment-in-production.html
    eagle/site/docs/deployment-in-sandbox.html
    eagle/site/docs/development-in-intellij.html
    eagle/site/docs/development-in-macosx.html
    eagle/site/docs/download-latest.html
    eagle/site/docs/download.html
    eagle/site/docs/hbase-auth-activity-monitoring.html
    eagle/site/docs/hbase-data-activity-monitoring.html
    eagle/site/docs/hdfs-auth-activity-monitoring.html
    eagle/site/docs/hdfs-data-activity-monitoring.html
    eagle/site/docs/hive-query-activity-monitoring.html
    eagle/site/docs/import-hdfs-auditLog.html
    eagle/site/docs/installation.html
    eagle/site/docs/jmx-metric-monitoring.html
    eagle/site/docs/mapr-integration.html
    eagle/site/docs/quick-start-0.3.0.html
    eagle/site/docs/quick-start.html
    eagle/site/docs/serviceconfiguration.html
    eagle/site/docs/tutorial/classification.html
    eagle/site/docs/tutorial/ldap.html
    eagle/site/docs/tutorial/notificationplugin.html
    eagle/site/docs/tutorial/policy.html
    eagle/site/docs/tutorial/site-0.3.0.html
    eagle/site/docs/tutorial/topologymanagement.html
    eagle/site/docs/tutorial/userprofile.html
    eagle/site/feed.xml
    eagle/site/index.html
    eagle/site/post/2015/10/27/apache-eagle-announce-cn.html
    eagle/site/sup/index.html

Modified: eagle/site/docs/FAQ.html
URL: http://svn.apache.org/viewvc/eagle/site/docs/FAQ.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/docs/FAQ.html (original)
+++ eagle/site/docs/FAQ.html Thu Jan 12 07:44:47 2017
@@ -223,14 +223,16 @@
       <p>Add the following line in host machine’s hosts file</p>
     </blockquote>
 
-    <pre><code>127.0.0.1 sandbox.hortonworks.com
+    <div class="highlighter-rouge"><pre class="highlight"><code>127.0.0.1 sandbox.hortonworks.com
 </code></pre>
+    </div>
   </li>
   <li>
     <p><strong>Q2. Not able to send data into kafka using kafka console producer</strong>:</p>
 
-    <pre><code>/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list localhost:6667 --topic sandbox_hdfs_audit_log
+    <div class="highlighter-rouge"><pre class="highlight"><code>/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list localhost:6667 --topic sandbox_hdfs_audit_log
 </code></pre>
+    </div>
 
     <blockquote>
       <p>Apache Kafka broker are binding to host sandbox.hortonworks.com</p>

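The FAQ hunk above tells readers to add a sandbox entry to the host machine's hosts file. A guarded, idempotent way to do that might look like the sketch below; it writes to a scratch file, since touching the real `/etc/hosts` needs root and should not be repeated blindly.

```shell
# Append the sandbox hosts entry only if it is not already present.
# /tmp/demo-hosts stands in for /etc/hosts here.
HOSTS_FILE=/tmp/demo-hosts
ENTRY='127.0.0.1 sandbox.hortonworks.com'
touch "$HOSTS_FILE"
grep -qxF "$ENTRY" "$HOSTS_FILE" || echo "$ENTRY" >> "$HOSTS_FILE"
grep -qxF "$ENTRY" "$HOSTS_FILE" || echo "$ENTRY" >> "$HOSTS_FILE"  # second run is a no-op
cat "$HOSTS_FILE"
```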
Modified: eagle/site/docs/ambari-plugin-install.html
URL: http://svn.apache.org/viewvc/eagle/site/docs/ambari-plugin-install.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/docs/ambari-plugin-install.html (original)
+++ eagle/site/docs/ambari-plugin-install.html Thu Jan 12 07:44:47 2017
@@ -227,8 +227,9 @@
   <li>
     <p>Create a Kafka<sup id="fnref:KAFKA"><a href="#fn:KAFKA" class="footnote">1</a></sup> topic if you have not. Here is an example command.</p>
 
-    <pre><code>$ /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic sandbox_hdfs_audit_log
+    <div class="highlighter-rouge"><pre class="highlight"><code>$ /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic sandbox_hdfs_audit_log
 </code></pre>
+    </div>
   </li>
   <li>
     <p>Stream HDFS log data to Kafka, and refer to <a href="/docs/import-hdfs-auditLog.html">here</a> on how to do it .</p>
@@ -244,8 +245,9 @@
   <li>
     <p>Install Eagle Ambari plugin</p>
 
-    <pre><code>$ /usr/hdp/current/eagle/bin/eagle-ambari.sh install
+    <div class="highlighter-rouge"><pre class="highlight"><code>$ /usr/hdp/current/eagle/bin/eagle-ambari.sh install
 </code></pre>
+    </div>
   </li>
   <li>
     <p>Restart <a href="http://127.0.0.1:8000/">Ambari</a> click on disable and enable Ambari back.</p>
@@ -259,10 +261,11 @@
   <li>
     <p>Add Policies and meta data required by running the below script.</p>
 
-    <pre><code>$ cd &lt;eagle-home&gt;
+    <div class="highlighter-rouge"><pre class="highlight"><code>$ cd &lt;eagle-home&gt;
 $ examples/sample-sensitivity-resource-create.sh
 $ examples/sample-policy-create.sh
 </code></pre>
+    </div>
   </li>
 </ol>
 

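The ambari-plugin hunks above wrap a Kafka topic-creation command. Parameterizing it, as sketched below, makes the HDP path, ZooKeeper address, and topic name easy to swap; the command is only echoed here, since no broker is running in this context, and the paths are the sandbox defaults from the diff, not universal.

```shell
# Build the topic-creation command from the hunk above out of variables.
# KAFKA_HOME/ZK values are the sandbox defaults shown in the diff.
KAFKA_HOME=/usr/hdp/current/kafka-broker
ZK=localhost:2181
TOPIC=sandbox_hdfs_audit_log
CMD="$KAFKA_HOME/bin/kafka-topics.sh --create --zookeeper $ZK --replication-factor 1 --partitions 1 --topic $TOPIC"
echo "$CMD"   # run this on a host with a reachable broker/zookeeper
```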
Modified: eagle/site/docs/cloudera-integration.html
URL: http://svn.apache.org/viewvc/eagle/site/docs/cloudera-integration.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/docs/cloudera-integration.html (original)
+++ eagle/site/docs/cloudera-integration.html Thu Jan 12 07:44:47 2017
@@ -227,8 +227,8 @@ This tutorial is to address these issues
 <ul>
   <li>Zookeeper (installed through Cloudera Manager)</li>
   <li>Kafka (installed through Cloudera Manager)</li>
-  <li>Storm (<code>0.9.x</code> or <code>0.10.x</code>, installed manually)</li>
-  <li>Logstash (<code>2.X</code>, installed manually on NameNode)</li>
+  <li>Storm (<code class="highlighter-rouge">0.9.x</code> or <code class="highlighter-rouge">0.10.x</code>, installed manually)</li>
+  <li>Logstash (<code class="highlighter-rouge">2.X</code>, installed manually on NameNode)</li>
 </ul>
 
 <h3 id="kafka">Kafka</h3>
@@ -239,17 +239,18 @@ This tutorial is to address these issues
 
 <ul>
   <li>
-    <p>Open Cloudera Manager and open “kafka” configuration, then set <code>“zookeeper Root”</code> to <code>“/”</code>.</p>
+    <p>Open Cloudera Manager and open “kafka” configuration, then set <code class="highlighter-rouge">“zookeeper Root”</code> to <code class="highlighter-rouge">“/”</code>.</p>
   </li>
   <li>
-    <p>If Kafka cannot be started successfully, check kafka’s log. If stack trace shows: <code>“java.lang.OutOfMemoryError: Java heap space”</code>. Increase heap size by setting <code>"KAFKA_HEAP_OPTS"</code>in <code>/bin/kafka-server-start.sh</code>.</p>
+    <p>If Kafka cannot be started successfully, check kafka’s log. If stack trace shows: <code class="highlighter-rouge">“java.lang.OutOfMemoryError: Java heap space”</code>. Increase heap size by setting <code class="highlighter-rouge">"KAFKA_HEAP_OPTS"</code>in <code class="highlighter-rouge">/bin/kafka-server-start.sh</code>.</p>
   </li>
 </ul>
 
 <p>Example:</p>
 
-<pre><code>                  export KAFKA_HEAP_OPTS="-Xmx2G -Xms2G"
+<div class="highlighter-rouge"><pre class="highlight"><code>                  export KAFKA_HEAP_OPTS="-Xmx2G -Xms2G"
 </code></pre>
+</div>
 
 <h4 id="verification">Verification</h4>
 
@@ -257,15 +258,17 @@ This tutorial is to address these issues
   <li>Step1: create a kafka topic (here I created a topic called “test”, which will be used in  logstash configuration file to receive hdfsAudit log messages from Cloudera.</li>
 </ul>
 
-<pre><code>bin/kafka-topics.sh --create --zookeeper 127.0.0.1:2181 --replication-factor 1 --partitions 1 --topic test
+<div class="highlighter-rouge"><pre class="highlight"><code>bin/kafka-topics.sh --create --zookeeper 127.0.0.1:2181 --replication-factor 1 --partitions 1 --topic test
 </code></pre>
+</div>
 
 <ul>
   <li>Step2: check if topic has been created successfully.</li>
 </ul>
 
-<pre><code>bin/kafka-topics.sh --list --zookeeper 127.0.0.1:2181
+<div class="highlighter-rouge"><pre class="highlight"><code>bin/kafka-topics.sh --list --zookeeper 127.0.0.1:2181
 </code></pre>
+</div>
 
 <p>this command will show all created topics.</p>
 
@@ -273,9 +276,10 @@ This tutorial is to address these issues
   <li>Step3: open two terminals, start “producer” and “consumer” separately.</li>
 </ul>
 
-<pre><code>/usr/bin/kafka-console-producer --broker-list hostname:9092 --topic test
+<div class="highlighter-rouge"><pre class="highlight"><code>/usr/bin/kafka-console-producer --broker-list hostname:9092 --topic test
 /usr/bin/kafka-console-consumer --zookeeper hostname:2181 --topic test
 </code></pre>
+</div>
 
 <ul>
   <li>Step4: type in some message in producer. If consumer can receive the messages sent from producer, then kafka is working fine. Otherwise please check the configuration and logs to identify the root cause of issues.</li>
@@ -287,33 +291,36 @@ This tutorial is to address these issues
 
 <p>You can follow <a href="https://www.elastic.co/downloads/logstash">logstash online doc</a> to download and install logstash on your machine:</p>
 
-<p>Or you can install it through <code>yum</code> if you are using centos:</p>
+<p>Or you can install it through <code class="highlighter-rouge">yum</code> if you are using centos:</p>
 
 <ul>
   <li>download and install the public signing key:</li>
 </ul>
 
-<pre><code>rpm --import  https://packages.elastic.co/GPG-KEY-elasticsearch
+<div class="highlighter-rouge"><pre class="highlight"><code>rpm --import  https://packages.elastic.co/GPG-KEY-elasticsearch
 </code></pre>
+</div>
 
 <ul>
-  <li>Add the following lines in <code>/etc/yum.repos.d/</code> directory in a file with a <code>.repo</code> suffix, for example <code>logstash.repo</code>.</li>
+  <li>Add the following lines in <code class="highlighter-rouge">/etc/yum.repos.d/</code> directory in a file with a <code class="highlighter-rouge">.repo</code> suffix, for example <code class="highlighter-rouge">logstash.repo</code>.</li>
 </ul>
 
-<pre><code>[logstash-2.3]
+<div class="highlighter-rouge"><pre class="highlight"><code>[logstash-2.3]
 name=Logstash repository for 2.3.x packages
 baseurl=https://packages.elastic.co/logstash/2.3/centos
 gpgcheck=1
 gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
 enabled=1
 </code></pre>
+</div>
 
 <ul>
-  <li>Then install it using <code>yum</code>:</li>
+  <li>Then install it using <code class="highlighter-rouge">yum</code>:</li>
 </ul>
 
-<pre><code>yum install logstash
+<div class="highlighter-rouge"><pre class="highlight"><code>yum install logstash
 </code></pre>
+</div>
 
 <h4 id="create-conf-file">Create conf file</h4>
 
@@ -321,8 +328,9 @@ enabled=1
 
 <h4 id="start-logstash">Start logstash</h4>
 
-<pre><code>bin/logstash -f conf/first-pipeline.conf
+<div class="highlighter-rouge"><pre class="highlight"><code>bin/logstash -f conf/first-pipeline.conf
 </code></pre>
+</div>
 
 <h4 id="verification-1">Verification</h4>
 
@@ -333,51 +341,55 @@ enabled=1
 
 <h4 id="installation-1">Installation</h4>
 
-<p>Download Apache Storm from <a href="http://storm.apache.org/downloads.html">here</a>, the version you choose should be <code>0.10.x</code> or <code>0.9.x</code> release.
+<p>Download Apache Storm from <a href="http://storm.apache.org/downloads.html">here</a>, the version you choose should be <code class="highlighter-rouge">0.10.x</code> or <code class="highlighter-rouge">0.9.x</code> release.
 Then follow <a href="http://storm.apache.org/releases/0.10.0/Setting-up-a-Storm-cluster.html">Apache Storm online doc</a>) to install Apache Storm on your cluster.</p>
 
-<p>In <code>/etc/profile</code>, add this:</p>
+<p>In <code class="highlighter-rouge">/etc/profile</code>, add this:</p>
 
-<pre><code>export PATH=$PATH:/opt/apache-storm-0.10.1/bin/
+<div class="highlighter-rouge"><pre class="highlight"><code>export PATH=$PATH:/opt/apache-storm-0.10.1/bin/
 </code></pre>
+</div>
 
 <p>save the profile and then type:</p>
 
-<pre><code>source /etc/profile 
+<div class="highlighter-rouge"><pre class="highlight"><code>source /etc/profile 
 </code></pre>
+</div>
 
 <p>to make it work.</p>
 
 <h4 id="configuration-1">Configuration</h4>
 
-<p>In <code>storm/conf/storm.yaml</code>, change the hostname to your own host.</p>
+<p>In <code class="highlighter-rouge">storm/conf/storm.yaml</code>, change the hostname to your own host.</p>
 
 <h4 id="start-apache-storm">Start Apache Storm</h4>
 
 <p>In Termial, type:</p>
 
-<pre><code>$: storm nimbus
+<div class="highlighter-rouge"><pre class="highlight"><code>$: storm nimbus
 $: storm supervisor
 $: storm UI
 </code></pre>
+</div>
 
 <h4 id="verification-2">Verification</h4>
 
-<p>Open storm UI in your browser, default URL is : <code>http://hostname:8080/index.html</code>.</p>
+<p>Open storm UI in your browser, default URL is : <code class="highlighter-rouge">http://hostname:8080/index.html</code>.</p>
 
 <h3 id="apache-eagle">Apache Eagle</h3>
 
 <p>To download and install Apache Eagle, please refer to  <a href="http://eagle.incubator.apache.org/docs/quick-start.html">Get Started with Sandbox.</a> .</p>
 
-<p>One thing need to mention is: in <code>“/bin/eagle-topology.sh”</code>, line 102:</p>
+<p>One thing need to mention is: in <code class="highlighter-rouge">“/bin/eagle-topology.sh”</code>, line 102:</p>
 
-<pre><code>			storm_ui=http://localhost:8080
+<div class="highlighter-rouge"><pre class="highlight"><code>			storm_ui=http://localhost:8080
 </code></pre>
+</div>
 
 <p>If you are not using the default port number, change this to your own Storm UI url.</p>
 
 <p>I know it takes time to finish these configuration, but now it is time to have fun! 
-Just try <code>HDFS Data Activity Monitoring</code> with <code>Demo</code> listed in <a href="http://eagle.incubator.apache.org/docs/hdfs-data-activity-monitoring.html">HDFS Data Activity Monitoring.</a></p>
+Just try <code class="highlighter-rouge">HDFS Data Activity Monitoring</code> with <code class="highlighter-rouge">Demo</code> listed in <a href="http://eagle.incubator.apache.org/docs/hdfs-data-activity-monitoring.html">HDFS Data Activity Monitoring.</a></p>
 
 
       </div><!--end of loadcontent-->  

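The cloudera-integration hunks above include the contents of the `logstash.repo` yum definition. Recreating it with a quoted heredoc, as below, avoids copy-paste whitespace errors; the file is written to `/tmp` here, while on a real CentOS host it would go in `/etc/yum.repos.d/`.

```shell
# Recreate logstash.repo from the hunk above with a heredoc.
cat > /tmp/logstash.repo <<'EOF'
[logstash-2.3]
name=Logstash repository for 2.3.x packages
baseurl=https://packages.elastic.co/logstash/2.3/centos
gpgcheck=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
EOF
grep '^baseurl=' /tmp/logstash.repo
```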
Modified: eagle/site/docs/community.html
URL: http://svn.apache.org/viewvc/eagle/site/docs/community.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/docs/community.html (original)
+++ eagle/site/docs/community.html Thu Jan 12 07:44:47 2017
@@ -242,13 +242,13 @@
       <td> </td>
       <td> </td>
       <td> </td>
-      <td><a href="&#109;&#097;&#105;&#108;&#116;&#111;:&#117;&#115;&#101;&#114;&#064;&#101;&#097;&#103;&#108;&#101;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;">&#117;&#115;&#101;&#114;&#064;&#101;&#097;&#103;&#108;&#101;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;</a></td>
+      <td><a href="mailto:user@eagle.apache.org">user@eagle.apache.org</a></td>
       <td> </td>
       <td> </td>
       <td> </td>
       <td> </td>
-      <td><a href="&#109;&#097;&#105;&#108;&#116;&#111;:&#117;&#115;&#101;&#114;&#045;&#115;&#117;&#098;&#115;&#099;&#114;&#105;&#098;&#101;&#064;&#101;&#097;&#103;&#108;&#101;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;">subscribe</a></td>
-      <td><a href="&#109;&#097;&#105;&#108;&#116;&#111;:&#117;&#115;&#101;&#114;&#045;&#117;&#110;&#115;&#117;&#098;&#115;&#099;&#114;&#105;&#098;&#101;&#064;&#101;&#097;&#103;&#108;&#101;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;">unsubscribe</a></td>
+      <td><a href="mailto:user-subscribe@eagle.apache.org">subscribe</a></td>
+      <td><a href="mailto:user-unsubscribe@eagle.apache.org">unsubscribe</a></td>
       <td><a href="http://mail-archives.apache.org/mod_mbox/eagle-user/">eagle-user</a></td>
     </tr>
     <tr>
@@ -257,13 +257,13 @@
       <td> </td>
       <td> </td>
       <td> </td>
-      <td><a href="&#109;&#097;&#105;&#108;&#116;&#111;:&#100;&#101;&#118;&#064;&#101;&#097;&#103;&#108;&#101;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;">&#100;&#101;&#118;&#064;&#101;&#097;&#103;&#108;&#101;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;</a></td>
+      <td><a href="mailto:dev@eagle.apache.org">dev@eagle.apache.org</a></td>
       <td> </td>
       <td> </td>
       <td> </td>
       <td> </td>
-      <td><a href="&#109;&#097;&#105;&#108;&#116;&#111;:&#100;&#101;&#118;&#045;&#115;&#117;&#098;&#115;&#099;&#114;&#105;&#098;&#101;&#064;&#101;&#097;&#103;&#108;&#101;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;">subscribe</a></td>
-      <td><a href="&#109;&#097;&#105;&#108;&#116;&#111;:&#100;&#101;&#118;&#045;&#117;&#110;&#115;&#117;&#098;&#115;&#099;&#114;&#105;&#098;&#101;&#064;&#101;&#097;&#103;&#108;&#101;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;">unsubscribe</a></td>
+      <td><a href="mailto:dev-subscribe@eagle.apache.org">subscribe</a></td>
+      <td><a href="mailto:dev-unsubscribe@eagle.apache.org">unsubscribe</a></td>
       <td><a href="http://mail-archives.apache.org/mod_mbox/eagle-dev/">eagle-dev</a></td>
     </tr>
     <tr>
@@ -272,13 +272,13 @@
       <td> </td>
       <td> </td>
       <td> </td>
-      <td><a href="&#109;&#097;&#105;&#108;&#116;&#111;:&#105;&#115;&#115;&#117;&#101;&#115;&#064;&#101;&#097;&#103;&#108;&#101;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;">&#105;&#115;&#115;&#117;&#101;&#115;&#064;&#101;&#097;&#103;&#108;&#101;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;</a></td>
+      <td><a href="mailto:issues@eagle.apache.org">issues@eagle.apache.org</a></td>
       <td> </td>
       <td> </td>
       <td> </td>
       <td> </td>
-      <td><a href="&#109;&#097;&#105;&#108;&#116;&#111;:&#105;&#115;&#115;&#117;&#101;&#115;&#045;&#115;&#117;&#098;&#115;&#099;&#114;&#105;&#098;&#101;&#064;&#101;&#097;&#103;&#108;&#101;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;">subscribe</a></td>
-      <td><a href="&#109;&#097;&#105;&#108;&#116;&#111;:&#105;&#115;&#115;&#117;&#101;&#115;&#045;&#117;&#110;&#115;&#117;&#098;&#115;&#099;&#114;&#105;&#098;&#101;&#064;&#101;&#097;&#103;&#108;&#101;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;">unsubscribe</a></td>
+      <td><a href="mailto:issues-subscribe@eagle.apache.org">subscribe</a></td>
+      <td><a href="mailto:issues-unsubscribe@eagle.apache.org">unsubscribe</a></td>
       <td><a href="http://mail-archives.apache.org/mod_mbox/eagle-issues/">eagle-issues</a></td>
     </tr>
     <tr>
@@ -287,13 +287,13 @@
       <td> </td>
       <td> </td>
       <td> </td>
-      <td><a href="&#109;&#097;&#105;&#108;&#116;&#111;:&#099;&#111;&#109;&#109;&#105;&#116;&#115;&#064;&#101;&#097;&#103;&#108;&#101;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;">&#099;&#111;&#109;&#109;&#105;&#116;&#115;&#064;&#101;&#097;&#103;&#108;&#101;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;</a></td>
+      <td><a href="mailto:commits@eagle.apache.org">commits@eagle.apache.org</a></td>
       <td> </td>
       <td> </td>
       <td> </td>
       <td> </td>
-      <td><a href="&#109;&#097;&#105;&#108;&#116;&#111;:&#099;&#111;&#109;&#109;&#105;&#116;&#115;&#045;&#115;&#117;&#098;&#115;&#099;&#114;&#105;&#098;&#101;&#064;&#101;&#097;&#103;&#108;&#101;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;">subscribe</a></td>
-      <td><a href="&#109;&#097;&#105;&#108;&#116;&#111;:&#099;&#111;&#109;&#109;&#105;&#116;&#115;&#045;&#117;&#110;&#115;&#117;&#098;&#115;&#099;&#114;&#105;&#098;&#101;&#064;&#101;&#097;&#103;&#108;&#101;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;">unsubscribe</a></td>
+      <td><a href="mailto:commits-subscribe@eagle.apache.org">subscribe</a></td>
+      <td><a href="mailto:commits-unsubscribe@eagle.apache.org">unsubscribe</a></td>
       <td><a href="http://mail-archives.apache.org/mod_mbox/eagle-commits/">eagle-commits</a></td>
     </tr>
   </tbody>
@@ -376,7 +376,7 @@
   <li><strong>Wechat</strong>: Apache_Eagle</li>
 </ul>
 
-<h3 id="events-and-meetupshadoop">Events and Meetups<sup id="fnref:HADOOP"><a href="#fn:HADOOP" class="footnote">1</a></sup></h3>
+<h3 id="events-and-meetups">Events and Meetups<sup id="fnref:HADOOP"><a href="#fn:HADOOP" class="footnote">1</a></sup></h3>
 
 <p><strong>Conferences</strong></p>
 

Modified: eagle/site/docs/configuration.html
URL: http://svn.apache.org/viewvc/eagle/site/docs/configuration.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/docs/configuration.html (original)
+++ eagle/site/docs/configuration.html Thu Jan 12 07:44:47 2017
@@ -215,7 +215,7 @@
       </div>
       <div class="col-xs-6 col-sm-9 page-main-content" style="margin-left: -15px" id="loadcontent">
         <h1 class="page-header" style="margin-top: 0px">Application Configuration</h1>
-        <p>Apache Eagle (called Eagle in the following) requires you to create a configuration file under <code>$EAGLE_HOME/conf/</code> for each application. Basically, there are some common properties shared, e.g., envContextConfig, eagleProps, and dynamicConfigSource. While dataSourceConfig differs from application to application.</p>
+        <p>Apache Eagle (called Eagle in the following) requires you to create a configuration file under <code class="highlighter-rouge">$EAGLE_HOME/conf/</code> for each application. Basically, there are some common properties shared, e.g., envContextConfig, eagleProps, and dynamicConfigSource. While dataSourceConfig differs from application to application.</p>
 
 <p>In this page we take the following two application as examples</p>
 

Modified: eagle/site/docs/deployment-env.html
URL: http://svn.apache.org/viewvc/eagle/site/docs/deployment-env.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/docs/deployment-env.html (original)
+++ eagle/site/docs/deployment-env.html Thu Jan 12 07:44:47 2017
@@ -217,7 +217,7 @@
         <h1 class="page-header" style="margin-top: 0px">Deploy Environment</h1>
         <h3 id="setup-environment">Setup Environment</h3>
 
-<p>Apache Eagle (called Eagle in the following) as an analytics solution for identifying security and performance issues instantly, relies on streaming platform <code>Storm</code><sup id="fnref:STORM"><a href="#fn:STORM" class="footnote">1</a></sup> + <code>Kafka</code><sup id="fnref:KAFKA"><a href="#fn:KAFKA" class="footnote">2</a></sup> to meet the realtime criteria, and persistence storage to store metadata and some metrics. As for the persistence storage, it supports three types of database: <code>HBase</code><sup id="fnref:HBASE"><a href="#fn:HBASE" class="footnote">3</a></sup>, <code>Derby</code><sup id="fnref:DERBY"><a href="#fn:DERBY" class="footnote">4</a></sup>, and <code>Mysql</code></p>
+<p>Apache Eagle (called Eagle in the following) as an analytics solution for identifying security and performance issues instantly, relies on streaming platform <code class="highlighter-rouge">Storm</code><sup id="fnref:STORM"><a href="#fn:STORM" class="footnote">1</a></sup> + <code class="highlighter-rouge">Kafka</code><sup id="fnref:KAFKA"><a href="#fn:KAFKA" class="footnote">2</a></sup> to meet the realtime criteria, and persistence storage to store metadata and some metrics. As for the persistence storage, it supports three types of database: <code class="highlighter-rouge">HBase</code><sup id="fnref:HBASE"><a href="#fn:HBASE" class="footnote">3</a></sup>, <code class="highlighter-rouge">Derby</code><sup id="fnref:DERBY"><a href="#fn:DERBY" class="footnote">4</a></sup>, and <code class="highlighter-rouge">Mysql</code></p>
 
 <p>To run monitoring applications, Eagle requires the following dependencies.</p>
 

Modified: eagle/site/docs/deployment-in-docker.html
URL: http://svn.apache.org/viewvc/eagle/site/docs/deployment-in-docker.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/docs/deployment-in-docker.html (original)
+++ eagle/site/docs/deployment-in-docker.html Thu Jan 12 07:44:47 2017
@@ -225,13 +225,14 @@
       <li>
         <p>Pull latest eagle docker image from <a href="https://hub.docker.com/r/apacheeagle/sandbox/">docker hub</a> directly:</p>
 
-        <pre><code>docker pull apacheeagle/sandbox
+        <div class="highlighter-rouge"><pre class="highlight"><code>docker pull apacheeagle/sandbox
 </code></pre>
+        </div>
       </li>
       <li>
         <p>Then run eagle docker image:</p>
 
-        <pre><code>docker run -p 9099:9099 -p 8080:8080 -p 8744:8744 -p 2181:2181 -p 2888:2888 \
+        <div class="highlighter-rouge"><pre class="highlight"><code>docker run -p 9099:9099 -p 8080:8080 -p 8744:8744 -p 2181:2181 -p 2888:2888 \
   -p 6667:6667 -p 60020:60020 -p 60030:60030 -p 60010:60010 -d --dns 127.0.0.1 \
   --entrypoint /usr/local/serf/bin/start-serf-agent.sh -e KEYCHAIN= \
   --env EAGLE_SERVER_HOST=sandbox.eagle.apache.org --name sandbox \
@@ -241,6 +242,7 @@ docker run -it --rm -e EXPECTED_HOST_COU
   --link sandbox:ambariserver --entrypoint /bin/sh apacheeagle/sandbox:latest \
   -c /tmp/install-cluster.sh
 </code></pre>
+        </div>
       </li>
     </ul>
   </li>
@@ -251,14 +253,16 @@ docker run -it --rm -e EXPECTED_HOST_COU
       <li>
         <p>Get latest source code of eagle.</p>
 
-        <pre><code>git clone https://github.com/apache/eagle.git
+        <div class="highlighter-rouge"><pre class="highlight"><code>git clone https://github.com/apache/eagle.git
 </code></pre>
+        </div>
       </li>
       <li>
         <p>Then run eagle docker command.</p>
 
-        <pre><code>cd eagle &amp;&amp; ./eagle-docker boot
+        <div class="highlighter-rouge"><pre class="highlight"><code>cd eagle &amp;&amp; ./eagle-docker boot
 </code></pre>
+        </div>
       </li>
     </ul>
   </li>

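The docker hunks above pass nine `-p` port mappings inline. Generating them from a list, as sketched below, makes the mapping easy to audit before running the container; the final command is only echoed (docker is not invoked here), and the image/name arguments are abbreviated from the diff for illustration.

```shell
# Assemble the -p port mappings from the docker run hunk above.
PORTS="9099 8080 8744 2181 2888 6667 60020 60030 60010"
ARGS=""
for p in $PORTS; do
  ARGS="$ARGS -p $p:$p"
done
echo "docker run$ARGS -d --name sandbox apacheeagle/sandbox"
```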
Modified: eagle/site/docs/deployment-in-production.html
URL: http://svn.apache.org/viewvc/eagle/site/docs/deployment-in-production.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/docs/deployment-in-production.html (original)
+++ eagle/site/docs/deployment-in-production.html Thu Jan 12 07:44:47 2017
@@ -243,9 +243,9 @@
 
     <ul>
       <li>
-        <p>Edit <code>bin/eagle-env.sh</code></p>
+        <p>Edit <code class="highlighter-rouge">bin/eagle-env.sh</code></p>
 
-        <pre><code>  # TODO: make sure java version is 1.7.x
+        <div class="highlighter-rouge"><pre class="highlight"><code>  # TODO: make sure java version is 1.7.x
   export JAVA_HOME=
 
   # TODO: Apache Storm nimbus host. Default is localhost
@@ -254,11 +254,12 @@
   # TODO: EAGLE_SERVICE_HOST, default is `hostname -f`
   export EAGLE_SERVICE_HOST=localhost
 </code></pre>
+        </div>
       </li>
       <li>
-        <p>Edit <code>conf/eagle-service.conf</code> to configure the database to use (for example: HBase<sup id="fnref:HBASE"><a href="#fn:HBASE" class="footnote">1</a></sup>)</p>
+        <p>Edit <code class="highlighter-rouge">conf/eagle-service.conf</code> to configure the database to use (for example: HBase<sup id="fnref:HBASE"><a href="#fn:HBASE" class="footnote">1</a></sup>)</p>
 
-        <pre><code>  # TODO: hbase.zookeeper.quorum in the format host1,host2,host3,...
+        <div class="highlighter-rouge"><pre class="highlight"><code>  # TODO: hbase.zookeeper.quorum in the format host1,host2,host3,...
   # default is "localhost"
   hbase-zookeeper-quorum="localhost"
 
@@ -270,13 +271,14 @@
   # default is "/hbase"
   zookeeper-znode-parent="/hbase"
 </code></pre>
+        </div>
       </li>
     </ul>
   </li>
   <li>
     <p>Step 2: Install metadata for policies</p>
 
-    <pre><code>  $ cd &lt;eagle-home&gt;
+    <div class="highlighter-rouge"><pre class="highlight"><code>  $ cd &lt;eagle-home&gt;
 
   # start Eagle web service
   $ bin/eagle-service.sh start
@@ -284,6 +286,7 @@
   # import metadata after Eagle service is successfully started
   $ bin/eagle-topology-init.sh
 </code></pre>
+    </div>
   </li>
 </ul>
 
@@ -319,8 +322,9 @@
   <li>
     <p>Stop eagle service</p>
 
-    <pre><code>$ bin/eagle-service.sh stop
+    <div class="highlighter-rouge"><pre class="highlight"><code>$ bin/eagle-service.sh stop
 </code></pre>
+    </div>
   </li>
 </ul>
 

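The `eagle-env.sh` hunk above asks the operator to verify that the Java version is 1.7.x before setting `JAVA_HOME`. A small parsing guard for that check is sketched below; the sample version line is a stand-in for real `java -version` output, so the check runs without a JVM present.

```shell
# Guard for the "make sure java version is 1.7.x" TODO in eagle-env.sh.
# $sample stands in for the first line of `java -version 2>&1`.
sample='java version "1.7.0_80"'
ver=$(printf '%s\n' "$sample" | awk -F '"' '/version/ {print $2}')
case "$ver" in
  1.7.*) status=ok ;;
  *)     status=unsupported ;;
esac
echo "$ver -> $status"
```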
Modified: eagle/site/docs/deployment-in-sandbox.html
URL: http://svn.apache.org/viewvc/eagle/site/docs/deployment-in-sandbox.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/docs/deployment-in-sandbox.html (original)
+++ eagle/site/docs/deployment-in-sandbox.html Thu Jan 12 07:44:47 2017
@@ -260,21 +260,23 @@
         <p><strong>Option 1</strong>: Download eagle jar from <a href="http://66.211.190.194/eagle-0.1.0.tar.gz">here</a>.</p>
       </li>
       <li>
-        <p><strong>Option 2</strong>: Build form source code <a href="https://github.com/apache/eagle">eagle github</a>. After successful build, ‘eagle-xxx-bin.tar.gz’ will be generated under <code>./eagle-assembly/target</code></p>
+        <p><strong>Option 2</strong>: Build form source code <a href="https://github.com/apache/eagle">eagle github</a>. After successful build, ‘eagle-xxx-bin.tar.gz’ will be generated under <code class="highlighter-rouge">./eagle-assembly/target</code></p>
 
-        <pre><code># installed npm is required before compiling
+        <div class="highlighter-rouge"><pre class="highlight"><code># installed npm is required before compiling
 $ mvn clean install -DskipTests=true
 </code></pre>
+        </div>
       </li>
     </ul>
   </li>
   <li>
     <p><strong>Copy and extract the package to sandbox</strong></p>
 
-    <pre><code>#extract
+    <div class="highlighter-rouge"><pre class="highlight"><code>#extract
 $ tar -zxvf eagle-0.1.0-bin.tar.gz
 $ mv eagle-0.1.0 /usr/hdp/current/eagle
 </code></pre>
+    </div>
   </li>
 </ul>
 
@@ -288,9 +290,10 @@ $ mv eagle-0.1.0 /usr/hdp/current/eagle
   <li>
     <p><strong>Option 1</strong>: Install Eagle using command line</p>
 
-    <pre><code>$ cd /usr/hdp/current/eagle
+    <div class="highlighter-rouge"><pre class="highlight"><code>$ cd /usr/hdp/current/eagle
 $ examples/eagle-sandbox-starter.sh
 </code></pre>
+    </div>
   </li>
   <li>
     <p><strong>Option 2</strong>: Install Eagle using <a href="/docs/ambari-plugin-install.html">Eagle Ambari plugin</a></p>
@@ -305,7 +308,7 @@ $ examples/eagle-sandbox-starter.sh
   <li>
     <p><strong>Step 1</strong>: Configure Advanced hadoop-log4j via <a href="http://localhost:8080/#/main/services/HDFS/configs" target="_blank">Ambari UI</a>, and add below “KAFKA_HDFS_AUDIT” log4j appender to hdfs audit logging.</p>
 
-    <pre><code>log4j.appender.KAFKA_HDFS_AUDIT=org.apache.eagle.log4j.kafka.KafkaLog4jAppender
+    <div class="highlighter-rouge"><pre class="highlight"><code>log4j.appender.KAFKA_HDFS_AUDIT=org.apache.eagle.log4j.kafka.KafkaLog4jAppender
 log4j.appender.KAFKA_HDFS_AUDIT.Topic=sandbox_hdfs_audit_log
 log4j.appender.KAFKA_HDFS_AUDIT.BrokerList=sandbox.hortonworks.com:6667
 log4j.appender.KAFKA_HDFS_AUDIT.KeyClass=org.apache.eagle.log4j.kafka.hadoop.AuditLogKeyer
@@ -315,22 +318,25 @@ log4j.appender.KAFKA_HDFS_AUDIT.Producer
 #log4j.appender.KAFKA_HDFS_AUDIT.BatchSize=1
 #log4j.appender.KAFKA_HDFS_AUDIT.QueueSize=1
 </code></pre>
+    </div>
 
     <p><img src="/images/docs/hdfs-log4j-conf.png" alt="HDFS LOG4J Configuration" title="hdfslog4jconf" /></p>
   </li>
   <li>
     <p><strong>Step 3</strong>: Edit Advanced hadoop-env via <a href="http://localhost:8080/#/main/services/HDFS/configs" target="_blank">Ambari UI</a>, and add the reference to KAFKA_HDFS_AUDIT to HADOOP_NAMENODE_OPTS.</p>
 
-    <pre><code>-Dhdfs.audit.logger=INFO,DRFAAUDIT,KAFKA_HDFS_AUDIT
+    <div class="highlighter-rouge"><pre class="highlight"><code>-Dhdfs.audit.logger=INFO,DRFAAUDIT,KAFKA_HDFS_AUDIT
 </code></pre>
+    </div>
 
     <p><img src="/images/docs/hdfs-env-conf.png" alt="HDFS Environment Configuration" title="hdfsenvconf" /></p>
   </li>
   <li>
     <p><strong>Step 4</strong>: Edit Advanced hadoop-env via <a href="http://localhost:8080/#/main/services/HDFS/configs" target="_blank">Ambari UI</a>, and append the following command to it.</p>
 
-    <pre><code>export HADOOP_CLASSPATH=${HADOOP_CLASSPATH}:/usr/hdp/current/eagle/lib/log4jkafka/lib/*
+    <div class="highlighter-rouge"><pre class="highlight"><code>export HADOOP_CLASSPATH=${HADOOP_CLASSPATH}:/usr/hdp/current/eagle/lib/log4jkafka/lib/*
 </code></pre>
+    </div>
 
     <p><img src="/images/docs/hdfs-env-conf2.png" alt="HDFS Environment Configuration" title="hdfsenvconf2" /></p>
   </li>
@@ -338,14 +344,15 @@ log4j.appender.KAFKA_HDFS_AUDIT.Producer
    <p><strong>Step 5</strong>: Save the changes and restart the namenode.</p>
   </li>
   <li>
-    <p><strong>Step 6</strong>: Check whether logs are flowing into topic <code>sandbox_hdfs_audit_log</code></p>
+    <p><strong>Step 6</strong>: Check whether logs are flowing into topic <code class="highlighter-rouge">sandbox_hdfs_audit_log</code></p>
 
-    <pre><code>$ /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic sandbox_hdfs_audit_log
+    <div class="highlighter-rouge"><pre class="highlight"><code>$ /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic sandbox_hdfs_audit_log
 </code></pre>
+    </div>
   </li>
 </ul>
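The appender above sets a <code>KeyClass</code> so that each audit message is keyed by user, which keeps one user's events ordered within a single Kafka partition. A minimal sketch of extracting such a key from an HDFS audit line — the regex is illustrative only, not Eagle's actual <code>AuditLogKeyer</code> implementation:

```python
import re

# A line in the shape the NameNode audit logger emits (fields are
# tab- or space-separated depending on configuration).
line = ("2016-06-08 02:55:07,742 INFO FSNamesystem.audit: allowed=true "
        "ugi=hdfs (auth:SIMPLE) ip=/10.0.2.15 cmd=listStatus "
        "src=/tmp dst=null perm=null")

# Hypothetical keying logic: use the ugi user as the Kafka message key so
# one user's events land in (and stay ordered within) one partition.
match = re.search(r"ugi=(\w+)", line)
key = match.group(1) if match else None
print(key)
```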
 
-<p>Now please login to Eagle web http://localhost:9099/eagle-service with account <code>admin/secret</code>, and try the sample demos on
+<p>Now please log in to the Eagle web UI at http://localhost:9099/eagle-service with account <code class="highlighter-rouge">admin/secret</code>, and try the sample demos in
 <a href="/docs/quick-start.html">Quick Start</a></p>
 
 <p>(If the NAT network is used in a virtual machine, you need to add port 9099 to the forwarded ports)

Modified: eagle/site/docs/development-in-intellij.html
URL: http://svn.apache.org/viewvc/eagle/site/docs/development-in-intellij.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/docs/development-in-intellij.html (original)
+++ eagle/site/docs/development-in-intellij.html Thu Jan 12 07:44:47 2017
@@ -217,7 +217,7 @@
         <h1 class="page-header" style="margin-top: 0px">Development in Intellij</h1>
         <p>Apache Eagle (called Eagle in the following) can be developed in popular IDEs, e.g. Intellij and Eclipse. Here we focus on development in Intellij.</p>
 
-<h3 id="prepare-hadoophadoop-environment">1. Prepare Hadoop<sup id="fnref:HADOOP"><a href="#fn:HADOOP" class="footnote">1</a></sup> environment</h3>
+<h3 id="1-prepare-hadoop-environment">1. Prepare Hadoop<sup id="fnref:HADOOP"><a href="#fn:HADOOP" class="footnote">1</a></sup> environment</h3>
 
 <p>Normally HDP sandbox is needed for testing Hadoop monitoring. Please reference <a href="/docs/quick-start.html">Quick Start</a> for setting up HDP sandbox.</p>
 
@@ -244,7 +244,7 @@
   </li>
 </ul>
 
-<h3 id="start-eagle-web-service-in-intellij">2. Start Eagle web service in Intellij</h3>
+<h3 id="2-start-eagle-web-service-in-intellij">2. Start Eagle web service in Intellij</h3>
 
 <p>Import the source code into Intellij, and find the eagle-webservice project. Intellij Ultimate supports launching a J2EE server within Intellij. If you don’t have the
 Intellij Ultimate version, Eclipse is another option.</p>
@@ -267,7 +267,7 @@ Intellij Ultimate version, Eclipse is an
 
 <p>Configure Intellij for running Apache Tomcat server with eagle-service artifacts</p>
 
-<h3 id="start-topology-in-intellij">3. Start topology in Intellij</h3>
+<h3 id="3-start-topology-in-intellij">3. Start topology in Intellij</h3>
 
 <ul>
   <li><strong>Check topology configuration</strong></li>

Modified: eagle/site/docs/development-in-macosx.html
URL: http://svn.apache.org/viewvc/eagle/site/docs/development-in-macosx.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/docs/development-in-macosx.html (original)
+++ eagle/site/docs/development-in-macosx.html Thu Jan 12 07:44:47 2017
@@ -218,7 +218,7 @@
         <h2 id="how-to-setup-apache-eagle-development-environment-on-mac-osx">How to Setup Apache Eagle Development Environment on Mac OSX</h2>
 
 <p><em>Apache Eagle will be called Eagle in the following.</em><br />
-This tutorial is based <code>Mac OS X</code>. It can be used as a reference guide for other OS like Linux or Windows as well.  To save your time of jumping back and forth between different web pages, all necessary references will be pointed out.</p>
+This tutorial is based on <code class="highlighter-rouge">Mac OS X</code>. It can be used as a reference guide for other operating systems such as Linux or Windows as well. To save you time jumping back and forth between different web pages, all necessary references will be pointed out.</p>
 
 <h3 id="prerequisite">Prerequisite</h3>
 
@@ -228,8 +228,9 @@ This tutorial is based <code>Mac OS X</c
 
 <p>Make sure you have HomeBrew installed on your Mac. If not, please run:</p>
 
-<pre><code>$ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
+<div class="highlighter-rouge"><pre class="highlight"><code>$ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
 </code></pre>
+</div>
 
 <p>you can find more information about HomeBrew at http://brew.sh/.</p>
 
@@ -239,9 +240,10 @@ This tutorial is based <code>Mac OS X</c
 
 <p>Some core eagle modules are written in Scala. To install Scala and SBT, just run:</p>
 
-<pre><code> $ brew install scala
+<div class="highlighter-rouge"><pre class="highlight"><code> $ brew install scala
  $ brew install sbt
 </code></pre>
+</div>
 
 <ul>
   <li><strong>NPM</strong></li>
@@ -249,8 +251,9 @@ This tutorial is based <code>Mac OS X</c
 
 <p>The eagle-webservice module uses npm. To install it, run:</p>
 
-<pre><code> $ brew install npm
+<div class="highlighter-rouge"><pre class="highlight"><code> $ brew install npm
 </code></pre>
+</div>
 
 <ul>
   <li><strong>Apache Maven</strong></li>
@@ -258,8 +261,9 @@ This tutorial is based <code>Mac OS X</c
 
 <p>Eagle is built with Maven:</p>
 
-<pre><code> $ brew install maven
+<div class="highlighter-rouge"><pre class="highlight"><code> $ brew install maven
 </code></pre>
+</div>
 
 <ul>
   <li>
@@ -269,8 +273,9 @@ This tutorial is based <code>Mac OS X</c
       <li>
         <p>Install HomeBrew Cask:</p>
 
-        <pre><code>$ brew install caskroom/cask/brew-cask
+        <div class="highlighter-rouge"><pre class="highlight"><code>$ brew install caskroom/cask/brew-cask
 </code></pre>
+        </div>
       </li>
       <li>
         <p>Next, install JDK via HomeBrew:</p>
@@ -283,15 +288,18 @@ This tutorial is based <code>Mac OS X</c
 
 <p>you will see all available JDK versions, and you can install multiple JDK versions this way. For eagle, please choose java7 to install:</p>
 
-<pre><code> $ brew cask install java7
+<div class="highlighter-rouge"><pre class="highlight"><code> $ brew cask install java7
 </code></pre>
+</div>
 
-<p><strong>Note:</strong>
-- During this writing SBT has issue with JDK 8. This issue has been tested confirmed by using: 
-- Java 1.8.0_66
-- Maven 3.3.9
-- Scala 2.11.7
-- Sbt 0.13.9</p>
+<p><strong>Note:</strong></p>
+<ul>
+  <li>At the time of writing, SBT has a known issue with JDK 8. The issue has been confirmed with:</li>
+  <li>Java 1.8.0_66</li>
+  <li>Maven 3.3.9</li>
+  <li>Scala 2.11.7</li>
+  <li>Sbt 0.13.9</li>
+</ul>
 
 <p>you can find more information about HomeBrew Cask at <a href="http://caskroom.io">http://caskroom.io</a></p>
 
@@ -301,44 +309,53 @@ This tutorial is based <code>Mac OS X</c
 
 <p>you can use Jenv to manage multiple installed Java versions. To install it:</p>
 
-<pre><code>$ brew install https://raw.githubusercontent.com/entrypass/jenv/homebrew/homebrew/jenv.rb
+<div class="highlighter-rouge"><pre class="highlight"><code>$ brew install https://raw.githubusercontent.com/entrypass/jenv/homebrew/homebrew/jenv.rb
 </code></pre>
+</div>
 
 <p>and make sure it is activated automatically:</p>
 
-<pre><code>$ echo 'eval "$(jenv init -)"' &gt;&gt; ~/.bash_profile
+<div class="highlighter-rouge"><pre class="highlight"><code>$ echo 'eval "$(jenv init -)"' &gt;&gt; ~/.bash_profile
 </code></pre>
+</div>
 
-<p><strong>Note:</strong>
-- There is a known issue at this writing: https://github.com/gcuisinier/jenv/wiki/Trouble-Shooting
-- Please make sure JENV_ROOT has been set before jenv init:
-- $ export JENV_ROOT=/usr/local/opt/jenv</p>
+<p><strong>Note:</strong></p>
+<ul>
+  <li>There is a known issue at the time of writing: https://github.com/gcuisinier/jenv/wiki/Trouble-Shooting</li>
+  <li>Please make sure JENV_ROOT has been set before running jenv init:</li>
+  <li>$ export JENV_ROOT=/usr/local/opt/jenv</li>
+</ul>
 
 <p>Now let Jenv manage JDK versions (remember: in OS X all JVMs are located at /Library/Java/JavaVirtualMachines):</p>
 
-<pre><code>$ jenv add /Library/Java/JavaVirtualMachines/jdk1.8.0_66.jdk/Contents/Home/
+<div class="highlighter-rouge"><pre class="highlight"><code>$ jenv add /Library/Java/JavaVirtualMachines/jdk1.8.0_66.jdk/Contents/Home/
 $ jenv add /Library/Java/JavaVirtualMachines/jdk1.7.0_80.jdk/Contents/Home/
 </code></pre>
+</div>
 
 <p>and</p>
 
-<pre><code>$ jenv rehash
+<div class="highlighter-rouge"><pre class="highlight"><code>$ jenv rehash
 </code></pre>
+</div>
 
 <p>You can see all managed JDK versions:</p>
 
-<pre><code>$ jenv versions
+<div class="highlighter-rouge"><pre class="highlight"><code>$ jenv versions
 </code></pre>
+</div>
 
 <p>set the global Java version:</p>
 
-<pre><code>$ jenv global oracle64-1.8.0.66
+<div class="highlighter-rouge"><pre class="highlight"><code>$ jenv global oracle64-1.8.0.66
 </code></pre>
+</div>
 
 <p>switch to your eagle home directory and set the local JDK version for eagle:</p>
 
-<pre><code>$ jenv local oracle64-1.7.0.80
+<div class="highlighter-rouge"><pre class="highlight"><code>$ jenv local oracle64-1.7.0.80
 </code></pre>
+</div>
 
 <p>you can find more information about Jenv at https://github.com/rbenv/rbenv and http://hanxue-it.blogspot.com/2014/05/installing-java-8-managing-multiple.html.</p>
 
@@ -346,8 +363,9 @@ $ jenv add /Library/Java/JavaVirtualMach
 
 <p>Go to the Eagle home directory and run:</p>
 
-<pre><code>mvn -DskipTests clean package
+<div class="highlighter-rouge"><pre class="highlight"><code>mvn -DskipTests clean package
 </code></pre>
+</div>
 
 <p>That’s all. Now you have a runnable Eagle on your Mac. Have fun. :-)</p>
 

Modified: eagle/site/docs/download-latest.html
URL: http://svn.apache.org/viewvc/eagle/site/docs/download-latest.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/docs/download-latest.html (original)
+++ eagle/site/docs/download-latest.html Thu Jan 12 07:44:47 2017
@@ -221,7 +221,7 @@
   <p>You can verify your download by following these <a href="https://www.apache.org/info/verification.html">procedures</a> and using these <a href="https://dist.apache.org/repos/dist/release/eagle/KEYS">KEYS</a>.</p>
 </blockquote>
 
-<h1 id="incubating">0.4.0-incubating</h1>
+<h1 id="040-incubating">0.4.0-incubating</h1>
 
 <p><a href="http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.eagle%22%20AND%20a%3A%22eagle-parent%22"><img src="https://maven-badges.herokuapp.com/maven-central/org.apache.eagle/eagle-parent/badge.svg" alt="Eagle Latest Maven Release" /></a></p>
 
@@ -233,7 +233,7 @@
   </li>
   <li>Source download:
     <ul>
-      <li><a href="http://www.apache.org/dyn/closer.cgi?path=/incubator/eagle/apache-eagle-0.4.0-incubating">apache-eagle-0.4.0-incubating-src.tar.gz</a></li>
+      <li><a href="http://www.apache.org/dyn/closer.cgi?path=/eagle/apache-eagle-0.4.0-incubating">apache-eagle-0.4.0-incubating-src.tar.gz</a></li>
       <li><a href="https://dist.apache.org/repos/dist/release/eagle/apache-eagle-0.4.0-incubating/apache-eagle-0.4.0-incubating-src.tar.gz.md5">apache-eagle-0.4.0-incubating-src.tar.gz.md5</a></li>
       <li><a href="https://dist.apache.org/repos/dist/release/eagle/apache-eagle-0.4.0-incubating/apache-eagle-0.4.0-incubating-src.tar.gz.sha1">apache-eagle-0.4.0-incubating-src.tar.gz.sha1</a></li>
       <li><a href="https://dist.apache.org/repos/dist/release/eagle/apache-eagle-0.4.0-incubating/apache-eagle-0.4.0-incubating-src.tar.gz.asc">apache-eagle-0.4.0-incubating-src.tar.gz.asc</a></li>

Modified: eagle/site/docs/download.html
URL: http://svn.apache.org/viewvc/eagle/site/docs/download.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/docs/download.html (original)
+++ eagle/site/docs/download.html Thu Jan 12 07:44:47 2017
@@ -219,7 +219,7 @@
   <p>You can verify your download by following these <a href="https://www.apache.org/info/verification.html">procedures</a> and using these <a href="https://dist.apache.org/repos/dist/release/eagle/KEYS">KEYS</a>.</p>
 </blockquote>
 
-<h1 id="incubating">0.4.0-incubating</h1>
+<h1 id="040-incubating">0.4.0-incubating</h1>
 <ul>
   <li>Release notes:
     <ul>
@@ -228,7 +228,7 @@
   </li>
   <li>Source download:
     <ul>
-      <li><a href="http://www.apache.org/dyn/closer.cgi?path=/incubator/eagle/apache-eagle-0.4.0-incubating">apache-eagle-0.4.0-incubating-src.tar.gz</a></li>
+      <li><a href="http://www.apache.org/dyn/closer.cgi?path=/eagle/apache-eagle-0.4.0-incubating">apache-eagle-0.4.0-incubating-src.tar.gz</a></li>
       <li><a href="https://dist.apache.org/repos/dist/release/eagle/apache-eagle-0.4.0-incubating/apache-eagle-0.4.0-incubating-src.tar.gz.md5">apache-eagle-0.4.0-incubating-src.tar.gz.md5</a></li>
       <li><a href="https://dist.apache.org/repos/dist/release/eagle/apache-eagle-0.4.0-incubating/apache-eagle-0.4.0-incubating-src.tar.gz.sha1">apache-eagle-0.4.0-incubating-src.tar.gz.sha1</a></li>
       <li><a href="https://dist.apache.org/repos/dist/release/eagle/apache-eagle-0.4.0-incubating/apache-eagle-0.4.0-incubating-src.tar.gz.asc">apache-eagle-0.4.0-incubating-src.tar.gz.asc</a></li>
@@ -242,7 +242,7 @@
   </li>
 </ul>
 
-<h1 id="incubating-1">0.3.0-incubating</h1>
+<h1 id="030-incubating">0.3.0-incubating</h1>
 
 <ul>
   <li>Release notes:
@@ -252,7 +252,7 @@
   </li>
   <li>Source download:
     <ul>
-      <li><a href="http://www.apache.org/dyn/closer.cgi?path=/incubator/eagle/apache-eagle-0.3.0-incubating">apache-eagle-0.3.0-incubating-src.tar.gz</a></li>
+      <li><a href="http://www.apache.org/dyn/closer.cgi?path=/eagle/apache-eagle-0.3.0-incubating">apache-eagle-0.3.0-incubating-src.tar.gz</a></li>
       <li><a href="https://dist.apache.org/repos/dist/release/eagle/apache-eagle-0.3.0-incubating/apache-eagle-0.3.0-incubating-src.tar.gz.md5">apache-eagle-0.3.0-incubating-src.tar.gz.md5</a></li>
       <li><a href="https://dist.apache.org/repos/dist/release/eagle/apache-eagle-0.3.0-incubating/apache-eagle-0.3.0-incubating-src.tar.gz.sha1">apache-eagle-0.3.0-incubating-src.tar.gz.sha1</a></li>
       <li><a href="https://dist.apache.org/repos/dist/release/eagle/apache-eagle-0.3.0-incubating/apache-eagle-0.3.0-incubating-src.tar.gz.asc">apache-eagle-0.3.0-incubating-src.tar.gz.asc</a></li>

Modified: eagle/site/docs/hbase-auth-activity-monitoring.html
URL: http://svn.apache.org/viewvc/eagle/site/docs/hbase-auth-activity-monitoring.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/docs/hbase-auth-activity-monitoring.html (original)
+++ eagle/site/docs/hbase-auth-activity-monitoring.html Thu Jan 12 07:44:47 2017
@@ -219,11 +219,11 @@
 
 <p>Please follow the steps below to enable HBase authorization auditing in the HDP sandbox and Cloudera</p>
 
-<h4 id="in-hbase-sitexml">1. in hbase-site.xml</h4>
+<h4 id="1-in-hbase-sitexml">1. in hbase-site.xml</h4>
 
 <p>Note: when testing in the HDP sandbox, Apache Ranger sometimes takes over access control for HBase, so you may need to change it back to the native HBase access controller, i.e. change com.xasecure.authorization.hbase.XaSecureAuthorizationCoprocessor to org.apache.hadoop.hbase.security.access.AccessController</p>
 
-<pre><code>&lt;property&gt;
+<div class="highlighter-rouge"><pre class="highlight"><code>&lt;property&gt;
      &lt;name&gt;hbase.security.authorization&lt;/name&gt;
      &lt;value&gt;true&lt;/value&gt;
 &lt;/property&gt;
@@ -236,10 +236,11 @@
      &lt;value&gt;org.apache.hadoop.hbase.security.token.TokenProvider,org.apache.hadoop.hbase.security.access.AccessController&lt;/value&gt;
 &lt;/property&gt;
 </code></pre>
+</div>
 
-<h4 id="log4jproperties">2. log4j.properties</h4>
+<h4 id="2-log4jproperties">2. log4j.properties</h4>
 
-<pre><code>#
+<div class="highlighter-rouge"><pre class="highlight"><code>#
 # Security audit appender
 #
 hbase.security.log.file=SecurityAuth.audit
@@ -255,6 +256,7 @@ log4j.category.SecurityLogger=${hbase.se
 log4j.additivity.SecurityLogger=false
 log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController=TRACE
 </code></pre>
+</div>
 
 <hr />
 

Modified: eagle/site/docs/hbase-data-activity-monitoring.html
URL: http://svn.apache.org/viewvc/eagle/site/docs/hbase-data-activity-monitoring.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/docs/hbase-data-activity-monitoring.html (original)
+++ eagle/site/docs/hbase-data-activity-monitoring.html Thu Jan 12 07:44:47 2017
@@ -233,15 +233,16 @@
 
 <ol>
   <li>
-    <p>edit Advanced hbase-log4j via Ambari<sup id="fnref:AMBARI"><a href="#fn:AMBARI" class="footnote">3</a></sup> UI, and append below sentence to <code>Security audit appender</code></p>
+    <p>edit Advanced hbase-log4j via Ambari<sup id="fnref:AMBARI"><a href="#fn:AMBARI" class="footnote">3</a></sup> UI, and append the line below to <code class="highlighter-rouge">Security audit appender</code></p>
 
-    <pre><code> log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController=TRACE,RFAS
+    <div class="highlighter-rouge"><pre class="highlight"><code> log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController=TRACE,RFAS
 </code></pre>
+    </div>
   </li>
   <li>
     <p>edit Advanced hbase-site.xml</p>
 
-    <pre><code> &lt;property&gt;
+    <div class="highlighter-rouge"><pre class="highlight"><code> &lt;property&gt;
    &lt;name&gt;hbase.security.authorization&lt;/name&gt;
    &lt;value&gt;true&lt;/value&gt;
  &lt;/property&gt;
@@ -256,6 +257,7 @@
    &lt;value&gt;org.apache.hadoop.hbase.security.access.AccessController&lt;/value&gt;
  &lt;/property&gt;
 </code></pre>
+    </div>
   </li>
   <li>
     <p>Save and restart HBase</p>
@@ -267,21 +269,22 @@
 <h3 id="how-to-add-a-kafka-log4j-appender">How to add a Kafka log4j appender</h3>
 
 <blockquote>
-  <p>Notice: if you are willing to use sample logs under <code>eagle-security-hbase-security/test/resources/securityAuditLog</code>, please skip this part.</p>
+  <p>Notice: if you want to use the sample logs under <code class="highlighter-rouge">eagle-security-hbase-security/test/resources/securityAuditLog</code>, please skip this part.</p>
 </blockquote>
 
 <ol>
   <li>
-    <p>create Kafka topic <code>sandbox_hbase_security_log</code></p>
+    <p>create Kafka topic <code class="highlighter-rouge">sandbox_hbase_security_log</code></p>
 
-    <pre><code> $ /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic sandbox_hbase_security_log
+    <div class="highlighter-rouge"><pre class="highlight"><code> $ /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic sandbox_hbase_security_log
 </code></pre>
+    </div>
   </li>
   <li>
-    <p>add below “KAFKA_HBASE_AUDIT” log4j appender to <code>Security audit appender</code>
+    <p>add the “KAFKA_HBASE_AUDIT” log4j appender below to <code class="highlighter-rouge">Security audit appender</code>
 Please refer to http://goeagle.io/docs/import-hdfs-auditLog.html.</p>
 
-    <pre><code> log4j.appender.KAFKA_HBASE_AUDIT=org.apache.eagle.log4j.kafka.KafkaLog4jAppender
+    <div class="highlighter-rouge"><pre class="highlight"><code> log4j.appender.KAFKA_HBASE_AUDIT=org.apache.eagle.log4j.kafka.KafkaLog4jAppender
  log4j.appender.KAFKA_HBASE_AUDIT.Topic=sandbox_hbase_security_log
  log4j.appender.KAFKA_HBASE_AUDIT.BrokerList=sandbox.hortonworks.com:6667
  log4j.appender.KAFKA_HBASE_AUDIT.Layout=org.apache.log4j.PatternLayout
@@ -290,18 +293,21 @@ Please refer to http://goeagle.io/docs/i
  log4j.appender.KAFKA_HDFS_AUDIT.KeyClass=org.apache.eagle.log4j.kafka.hadoop.GenericLogKeyer
  log4j.appender.KAFKA_HDFS_AUDIT.KeyPattern=user=(\\w+),\\s+
 </code></pre>
+    </div>
   </li>
   <li>
     <p>add the reference to KAFKA_HBASE_AUDIT to log4j appender</p>
 
-    <pre><code> log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController=TRACE,RFAS,KAFKA_HBASE_AUDIT
+    <div class="highlighter-rouge"><pre class="highlight"><code> log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController=TRACE,RFAS,KAFKA_HBASE_AUDIT
 </code></pre>
+    </div>
   </li>
   <li>
     <p>add Eagle log4j appender jars into HBASE_CLASSPATH by editing Advanced hbase-env via Ambari UI</p>
 
-    <pre><code> export HBASE_CLASSPATH=${HBASE_CLASSPATH}:/usr/hdp/current/eagle/lib/log4jkafka/lib/*
+    <div class="highlighter-rouge"><pre class="highlight"><code> export HBASE_CLASSPATH=${HBASE_CLASSPATH}:/usr/hdp/current/eagle/lib/log4jkafka/lib/*
 </code></pre>
+    </div>
   </li>
   <li>
     <p>Save and restart HBase</p>
@@ -312,32 +318,36 @@ Please refer to http://goeagle.io/docs/i
 
 <ol>
   <li>
-    <p>create tables (<code>skip if you do not use hbase</code>)</p>
+    <p>create tables (<code class="highlighter-rouge">skip if you do not use hbase</code>)</p>
 
-    <pre><code> bin/eagle-service-init.sh 
+    <div class="highlighter-rouge"><pre class="highlight"><code> bin/eagle-service-init.sh 
 </code></pre>
+    </div>
   </li>
   <li>
     <p>start Eagle service</p>
 
-    <pre><code> bin/eagle-service.sh start
+    <div class="highlighter-rouge"><pre class="highlight"><code> bin/eagle-service.sh start
 </code></pre>
+    </div>
   </li>
   <li>
     <p>import metadata</p>
 
-    <pre><code> bin/eagle-topology-init.sh
+    <div class="highlighter-rouge"><pre class="highlight"><code> bin/eagle-topology-init.sh
 </code></pre>
+    </div>
   </li>
   <li>
     <p>submit topology</p>
 
-    <pre><code> bin/eagle-topology.sh --main org.apache.eagle.security.hbase.HbaseAuditLogProcessorMain --config conf/sandbox-hbaseSecurityLog-application.conf start
+    <div class="highlighter-rouge"><pre class="highlight"><code> bin/eagle-topology.sh --main org.apache.eagle.security.hbase.HbaseAuditLogProcessorMain --config conf/sandbox-hbaseSecurityLog-application.conf start
 </code></pre>
+    </div>
   </li>
 </ol>
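The <code>KeyPattern</code> shown earlier is a regular expression; note that the doubled backslashes in the log4j properties file (<code>\\w</code>, <code>\\s</code>) collapse to single ones after properties-file unescaping. A small illustration against a hypothetical audit line (the real HBase audit format may differ):

```python
import re

# In log4j.properties the pattern is written as user=(\\w+),\\s+ ;
# after properties-file unescaping the effective regex is:
pattern = r"user=(\w+),\s+"

# Hypothetical audit line fragment -- not the exact HBase audit format.
line = "Access allowed for user=hbase, scope=default, action=READ"

m = re.search(pattern, line)
print(m.group(1))
```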
 
-<p>(sample sensitivity data at <code>examples/sample-sensitivity-resource-create.sh</code>)</p>
+<p>(sample sensitivity data at <code class="highlighter-rouge">examples/sample-sensitivity-resource-create.sh</code>)</p>
 
 <h3 id="q--a">Q &amp; A</h3>
 

Modified: eagle/site/docs/hdfs-auth-activity-monitoring.html
URL: http://svn.apache.org/viewvc/eagle/site/docs/hdfs-auth-activity-monitoring.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/docs/hdfs-auth-activity-monitoring.html (original)
+++ eagle/site/docs/hdfs-auth-activity-monitoring.html Thu Jan 12 07:44:47 2017
@@ -219,23 +219,25 @@
 
 <h4 id="sample-authorization-logs">Sample authorization logs</h4>
 
-<pre><code>2016-06-08 02:55:07,742 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for hdfs (auth:SIMPLE) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol
+<div class="highlighter-rouge"><pre class="highlight"><code>2016-06-08 02:55:07,742 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for hdfs (auth:SIMPLE) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol
 2016-06-08 02:55:35,304 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for hdfs (auth:SIMPLE) for protocol=interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol
 2016-06-08 02:55:36,862 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for hive (auth:SIMPLE) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol
 </code></pre>
+</div>
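The authorization log lines above follow a fixed pattern, which makes them easy to parse downstream. A minimal sketch, with an illustrative regex assuming exactly the format shown above:

```python
import re

line = ("2016-06-08 02:55:07,742 INFO SecurityLogger.org.apache.hadoop."
        "security.authorize.ServiceAuthorizationManager: Authorization "
        "successful for hdfs (auth:SIMPLE) for protocol=interface "
        "org.apache.hadoop.hdfs.protocol.ClientProtocol")

# Pull out the decision, user, auth mechanism, and protocol interface.
m = re.search(r"Authorization (\w+) for (\S+) \((auth:\w+)\) "
              r"for protocol=interface (\S+)", line)
status, user, auth, protocol = m.groups()
print(status, user, auth, protocol)
```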
 
 <p>Steps for enabling service-level authorization activity</p>
 
-<h4 id="enable-hdfs-authorization-security-in-core-sitexml">1. Enable HDFS Authorization Security in core-site.xml</h4>
+<h4 id="1-enable-hdfs-authorization-security-in-core-sitexml">1. Enable HDFS Authorization Security in core-site.xml</h4>
 
-<pre><code>  &lt;property&gt;
+<div class="highlighter-rouge"><pre class="highlight"><code>  &lt;property&gt;
       &lt;name&gt;hadoop.security.authorization&lt;/name&gt;
       &lt;value&gt;true&lt;/value&gt;
   &lt;/property&gt;
 </code></pre>
+</div>
 
-<h4 id="enable-hdfs-security-log-in-log4jproperties">2. Enable HDFS security log in log4j.properties</h4>
-<pre><code>#
+<h4 id="2-enable-hdfs-security-log-in-log4jproperties">2. Enable HDFS security log in log4j.properties</h4>
+<div class="highlighter-rouge"><pre class="highlight"><code>#
 #Security audit appender
 #
 hadoop.security.logger=INFO,DRFAS
@@ -249,6 +251,7 @@ log4j.appender.DRFAS.layout=org.apache.l
 log4j.appender.DRFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
 log4j.appender.DRFAS.DatePattern=.yyyy-MM-dd
 </code></pre>
+</div>
 
       </div><!--end of loadcontent-->  
     </div>

Modified: eagle/site/docs/hdfs-data-activity-monitoring.html
URL: http://svn.apache.org/viewvc/eagle/site/docs/hdfs-data-activity-monitoring.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/docs/hdfs-data-activity-monitoring.html (original)
+++ eagle/site/docs/hdfs-data-activity-monitoring.html Thu Jan 12 07:44:47 2017
@@ -240,7 +240,7 @@
   <li>
    <p><strong>Step 1</strong>: Configure Advanced hdfs-log4j via <a href="http://localhost:8080/#/main/services/HDFS/configs" target="_blank">Ambari UI</a><sup id="fnref:AMBARI"><a href="#fn:AMBARI" class="footnote">2</a></sup>, by adding the “KAFKA_HDFS_AUDIT” log4j appender below to hdfs audit logging.</p>
 
-    <pre><code> log4j.appender.KAFKA_HDFS_AUDIT=org.apache.eagle.log4j.kafka.KafkaLog4jAppender
+    <div class="highlighter-rouge"><pre class="highlight"><code> log4j.appender.KAFKA_HDFS_AUDIT=org.apache.eagle.log4j.kafka.KafkaLog4jAppender
  log4j.appender.KAFKA_HDFS_AUDIT.Topic=sandbox_hdfs_audit_log
  log4j.appender.KAFKA_HDFS_AUDIT.BrokerList=sandbox.hortonworks.com:6667
  log4j.appender.KAFKA_HDFS_AUDIT.KeyClass=org.apache.eagle.log4j.kafka.hadoop.AuditLogKeyer
@@ -248,22 +248,25 @@
  log4j.appender.KAFKA_HDFS_AUDIT.Layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
  log4j.appender.KAFKA_HDFS_AUDIT.ProducerType=async
 </code></pre>
+    </div>
 
     <p><img src="/images/docs/hdfs-log4j-conf.png" alt="HDFS LOG4J Configuration" title="hdfslog4jconf" /></p>
   </li>
   <li>
     <p><strong>Step 2</strong>: Edit Advanced hadoop-env via <a href="http://localhost:8080/#/main/services/HDFS/configs" target="_blank">Ambari UI</a>, and add the reference to KAFKA_HDFS_AUDIT to HADOOP_NAMENODE_OPTS.</p>
 
-    <pre><code>-Dhdfs.audit.logger=INFO,DRFAAUDIT,KAFKA_HDFS_AUDIT
+    <div class="highlighter-rouge"><pre class="highlight"><code>-Dhdfs.audit.logger=INFO,DRFAAUDIT,KAFKA_HDFS_AUDIT
 </code></pre>
+    </div>
 
     <p><img src="/images/docs/hdfs-env-conf.png" alt="HDFS Environment Configuration" title="hdfsenvconf" /></p>
   </li>
   <li>
     <p><strong>Step 3</strong>: Edit Advanced hadoop-env via <a href="http://localhost:8080/#/main/services/HDFS/configs" target="_blank">Ambari UI</a>, and append the following command to it.</p>
 
-    <pre><code>export HADOOP_CLASSPATH=${HADOOP_CLASSPATH}:/usr/hdp/current/eagle/lib/log4jkafka/lib/*
+    <div class="highlighter-rouge"><pre class="highlight"><code>export HADOOP_CLASSPATH=${HADOOP_CLASSPATH}:/usr/hdp/current/eagle/lib/log4jkafka/lib/*
 </code></pre>
+    </div>
 
     <p><img src="/images/docs/hdfs-env-conf2.png" alt="HDFS Environment Configuration" title="hdfsenvconf2" /></p>
   </li>
@@ -282,10 +285,11 @@
 
 <ul>
   <li>
-    <p><strong>Step 7</strong>: Check whether logs from “/var/log/hadoop/hdfs/hdfs-audit.log” are flowing into topic <code>sandbox_hdfs_audit_log</code></p>
+    <p><strong>Step 7</strong>: Check whether logs from “/var/log/hadoop/hdfs/hdfs-audit.log” are flowing into topic <code class="highlighter-rouge">sandbox_hdfs_audit_log</code></p>
 
-    <pre><code>  $ /usr/hdp/2.2.4.2-2/kafka/bin/kafka-console-consumer.sh --zookeeper sandbox.hortonworks.com:2181 --topic sandbox_hdfs_audit_log      
+    <div class="highlighter-rouge"><pre class="highlight"><code>  $ /usr/hdp/2.2.4.2-2/kafka/bin/kafka-console-consumer.sh --zookeeper sandbox.hortonworks.com:2181 --topic sandbox_hdfs_audit_log      
 </code></pre>
+    </div>
   </li>
 </ul>
 

Modified: eagle/site/docs/hive-query-activity-monitoring.html
URL: http://svn.apache.org/viewvc/eagle/site/docs/hive-query-activity-monitoring.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/docs/hive-query-activity-monitoring.html (original)
+++ eagle/site/docs/hive-query-activity-monitoring.html Thu Jan 12 07:44:47 2017
@@ -246,12 +246,13 @@
   </li>
 </ul>
 
-<pre><code>$ su hive
+<div class="highlighter-rouge"><pre class="highlight"><code>$ su hive
 $ hive
 $ set hive.execution.engine=mr;
 $ use xademo;
 $ select a.phone_number from customer_details a, call_detail_records b where a.phone_number=b.phone_number;
 </code></pre>
+</div>
 
 <p>From the UI, click on the alert tab and you should see an alert for your attempt to read the restricted column.</p>
 

Modified: eagle/site/docs/import-hdfs-auditLog.html
URL: http://svn.apache.org/viewvc/eagle/site/docs/import-hdfs-auditLog.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/docs/import-hdfs-auditLog.html (original)
+++ eagle/site/docs/import-hdfs-auditLog.html Thu Jan 12 07:44:47 2017
@@ -228,9 +228,10 @@ install a <strong>namenode log4j Kafka a
 
     <p>Here is a sample Kafka command to create the topic ‘sandbox_hdfs_audit_log’</p>
 
-    <pre><code>cd &lt;kafka-home&gt;
+    <div class="highlighter-rouge"><pre class="highlight"><code>cd &lt;kafka-home&gt;
 bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic sandbox_hdfs_audit_log
 </code></pre>
+    </div>
   </li>
   <li>
     <p><strong>Step 2</strong>: Install Logstash-kafka plugin</p>
@@ -248,7 +249,7 @@ bin/kafka-topics.sh --create --zookeeper
   <li>
     <p><strong>Step 3</strong>: Create a Logstash configuration file under ${LOGSTASH_HOME}/conf. Here is a sample.</p>
 
-    <pre><code>  input {
+    <div class="highlighter-rouge"><pre class="highlight"><code>  input {
       file {
           type =&gt; "hdp-nn-audit"
           path =&gt; "/path/to/audit.log"
@@ -289,15 +290,17 @@ bin/kafka-topics.sh --create --zookeeper
       }
   }
 </code></pre>
+    </div>
   </li>
   <li>
     <p><strong>Step 4</strong>: Start Logstash</p>
 
-    <pre><code>bin/logstash -f conf/sample.conf
+    <div class="highlighter-rouge"><pre class="highlight"><code>bin/logstash -f conf/sample.conf
 </code></pre>
+    </div>
   </li>
   <li>
-    <p><strong>Step 5</strong>: Check whether logs are flowing into the kafka topic specified by <code>topic_id</code></p>
+    <p><strong>Step 5</strong>: Check whether logs are flowing into the Kafka topic specified by <code class="highlighter-rouge">topic_id</code></p>
   </li>
 </ul>
 
@@ -311,14 +314,15 @@ bin/kafka-topics.sh --create --zookeeper
   <li>
     <p><strong>Step 1</strong>: Create a Kafka topic. Here is an example Kafka command for creating the topic “sandbox_hdfs_audit_log”</p>
 
-    <pre><code>cd &lt;kafka-home&gt;
+    <div class="highlighter-rouge"><pre class="highlight"><code>cd &lt;kafka-home&gt;
 bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic sandbox_hdfs_audit_log
 </code></pre>
+    </div>
   </li>
   <li>
     <p><strong>Step 2</strong>: Configure $HADOOP_CONF_DIR/log4j.properties, and add a log4j appender “KAFKA_HDFS_AUDIT” to hdfs audit logging</p>
 
-    <pre><code>log4j.appender.KAFKA_HDFS_AUDIT=org.apache.eagle.log4j.kafka.KafkaLog4jAppender
+    <div class="highlighter-rouge"><pre class="highlight"><code>log4j.appender.KAFKA_HDFS_AUDIT=org.apache.eagle.log4j.kafka.KafkaLog4jAppender
 log4j.appender.KAFKA_HDFS_AUDIT.Topic=sandbox_hdfs_audit_log
 log4j.appender.KAFKA_HDFS_AUDIT.BrokerList=sandbox.hortonworks.com:6667
 log4j.appender.KAFKA_HDFS_AUDIT.KeyClass=org.apache.eagle.log4j.kafka.hadoop.AuditLogKeyer
@@ -328,22 +332,25 @@ log4j.appender.KAFKA_HDFS_AUDIT.Producer
 #log4j.appender.KAFKA_HDFS_AUDIT.BatchSize=1
 #log4j.appender.KAFKA_HDFS_AUDIT.QueueSize=1
 </code></pre>
+    </div>
 
     <p><img src="/images/docs/hdfs-log4j-conf.png" alt="HDFS LOG4J Configuration" title="hdfslog4jconf" /></p>
   </li>
   <li>
     <p><strong>Step 3</strong>: Edit $HADOOP_CONF_DIR/hadoop-env.sh, and add the reference to KAFKA_HDFS_AUDIT to HADOOP_NAMENODE_OPTS.</p>
 
-    <pre><code>-Dhdfs.audit.logger=INFO,DRFAAUDIT,KAFKA_HDFS_AUDIT
+    <div class="highlighter-rouge"><pre class="highlight"><code>-Dhdfs.audit.logger=INFO,DRFAAUDIT,KAFKA_HDFS_AUDIT
 </code></pre>
+    </div>
 
     <p><img src="/images/docs/hdfs-env-conf.png" alt="HDFS Environment Configuration" title="hdfsenvconf" /></p>
   </li>
   <li>
     <p><strong>Step 4</strong>: Edit $HADOOP_CONF_DIR/hadoop-env.sh, and append the following command to it.</p>
 
-    <pre><code>export HADOOP_CLASSPATH=${HADOOP_CLASSPATH}:/path/to/eagle/lib/log4jkafka/lib/*
+    <div class="highlighter-rouge"><pre class="highlight"><code>export HADOOP_CLASSPATH=${HADOOP_CLASSPATH}:/path/to/eagle/lib/log4jkafka/lib/*
 </code></pre>
+    </div>
 
     <p><img src="/images/docs/hdfs-env-conf2.png" alt="HDFS Environment Configuration" title="hdfsenvconf2" /></p>
   </li>
@@ -353,8 +360,9 @@ log4j.appender.KAFKA_HDFS_AUDIT.Producer
   <li>
     <p><strong>Step 6</strong>: Check whether logs are flowing into the topic sandbox_hdfs_audit_log</p>
 
-    <pre><code>$ /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic sandbox_hdfs_audit_log
+    <div class="highlighter-rouge"><pre class="highlight"><code>$ /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic sandbox_hdfs_audit_log
 </code></pre>
+    </div>
   </li>
 </ul>
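Steps 2–4 above are manual edits to Hadoop config files. As a rough sketch of scripting Step 2 (run against a scratch copy under /tmp — the scratch path is an assumption for illustration, never point this at your live $HADOOP_CONF_DIR without reviewing it), the appender block can be appended and verified like this:

```shell
# Sketch: append the KAFKA_HDFS_AUDIT appender lines (Step 2) to a scratch
# copy of log4j.properties and verify they landed. The /tmp path is
# illustrative; values mirror the snippets above.
conf=/tmp/log4j.properties.scratch
: > "$conf"
cat >> "$conf" <<'EOF'
log4j.appender.KAFKA_HDFS_AUDIT=org.apache.eagle.log4j.kafka.KafkaLog4jAppender
log4j.appender.KAFKA_HDFS_AUDIT.Topic=sandbox_hdfs_audit_log
log4j.appender.KAFKA_HDFS_AUDIT.BrokerList=sandbox.hortonworks.com:6667
log4j.appender.KAFKA_HDFS_AUDIT.KeyClass=org.apache.eagle.log4j.kafka.hadoop.AuditLogKeyer
EOF
# Every appender line should now be present.
count=$(grep -c '^log4j\.appender\.KAFKA_HDFS_AUDIT' "$conf")
echo "appender lines: $count"
```

After confirming the scratch output, apply the same block to $HADOOP_CONF_DIR/log4j.properties and continue with Steps 3–4.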
 

Modified: eagle/site/docs/installation.html
URL: http://svn.apache.org/viewvc/eagle/site/docs/installation.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/docs/installation.html (original)
+++ eagle/site/docs/installation.html Thu Jan 12 07:44:47 2017
@@ -239,16 +239,19 @@
 <h4 id="install-eagle">Install Eagle</h4>
 
 <ul>
-  <li>
-    <p><strong>Step 1</strong>: Clone stable version from <a href="https://github.com/apache/eagle/releases/tag/v0.4.0-incubating">eagle github</a>
-&gt;       Build project mvn clean install -DskipTests=true</p>
+  <li><strong>Step 1</strong>: Clone stable version from <a href="https://github.com/apache/eagle/releases/tag/v0.4.0-incubating">eagle github</a>
+    <blockquote>
+      <div class="highlighter-rouge"><pre class="highlight"><code>  Build project mvn clean install -DskipTests=true
+</code></pre>
+      </div>
+    </blockquote>
   </li>
   <li>
     <p><strong>Step 2</strong>: Download the eagle-0.1.0-bin.tar.gz package from the successful build into your HDP sandbox.</p>
 
     <ul>
       <li>
-        <p>Option 1: <code>scp -P 2222  eagle/eagle-assembly/target/eagle-0.1.0-bin.tar.gz root@127.0.0.1:/usr/hdp/current/</code></p>
+        <p>Option 1: <code class="highlighter-rouge">scp -P 2222  eagle/eagle-assembly/target/eagle-0.1.0-bin.tar.gz root@127.0.0.1:/usr/hdp/current/</code></p>
       </li>
       <li>
         <p>Option 2: Create shared directory between host and Sandbox, and restart Sandbox. Then you can find the shared directory under /media in Sandbox.</p>
@@ -258,27 +261,30 @@
   <li>
     <p><strong>Step 3</strong>: Extract eagle tarball package</p>
 
-    <pre><code>$ cd /usr/hdp/current
+    <div class="highlighter-rouge"><pre class="highlight"><code>$ cd /usr/hdp/current
 $ tar -zxvf eagle-0.1.0-bin.tar.gz
 $ mv eagle-0.1.0 eagle
 </code></pre>
+    </div>
   </li>
   <li>
     <p><strong>Step 4</strong>: Add root as an HBase<sup id="fnref:HBASE"><a href="#fn:HBASE" class="footnote">1</a></sup> superuser via <a href="http://127.0.0.1:8080/#/main/services/HBASE/configs">Ambari</a> (optional; alternatively, a user can operate HBase via sudo su hbase).</p>
   </li>
-  <li>
-    <p><strong>Step 5</strong>: Install Eagle Ambari<sup id="fnref:AMBARI"><a href="#fn:AMBARI" class="footnote">2</a></sup> service 
-&gt;
-  /usr/hdp/current/eagle/bin/eagle-ambari.sh install.</p>
+  <li><strong>Step 5</strong>: Install Eagle Ambari<sup id="fnref:AMBARI"><a href="#fn:AMBARI" class="footnote">2</a></sup> service
+    <blockquote>
+
+      <p>/usr/hdp/current/eagle/bin/eagle-ambari.sh install</p>
+    </blockquote>
   </li>
   <li>
     <p><strong>Step 6</strong>: Restart <a href="http://127.0.0.1:8000/">Ambari</a>: click to disable and then re-enable it.</p>
   </li>
-  <li>
-    <p><strong>Step 7</strong>: Start HBase &amp; Storm<sup id="fnref:STORM"><a href="#fn:STORM" class="footnote">3</a></sup> &amp; Kafka<sup id="fnref:KAFKA"><a href="#fn:KAFKA" class="footnote">4</a></sup>
-From Ambari UI, restart any suggested components(“Restart button on top”) &amp; Start Storm (Start “Nimbus” ,”Supervisor” &amp; “Storm UI Server”), Kafka (Start “Kafka Broker”) , HBase (Start “RegionServer”  and “ HBase Master”) 
-&gt;
-<img src="/images/docs/Services.png" alt="Restart Services" title="Services" /></p>
+  <li><strong>Step 7</strong>: Start HBase &amp; Storm<sup id="fnref:STORM"><a href="#fn:STORM" class="footnote">3</a></sup> &amp; Kafka<sup id="fnref:KAFKA"><a href="#fn:KAFKA" class="footnote">4</a></sup>
+From the Ambari UI, restart any suggested components (“Restart” button on top) &amp; start Storm (start “Nimbus”, “Supervisor” &amp; “Storm UI Server”), Kafka (start “Kafka Broker”), HBase (start “RegionServer” and “HBase Master”)
+    <blockquote>
+
+      <p><img src="/images/docs/Services.png" alt="Restart Services" title="Services" /></p>
+    </blockquote>
   </li>
   <li>
     <p><strong>Step 8</strong>: Add Eagle Service To Ambari. (Click For Video)</p>
@@ -300,9 +306,10 @@ EagleServiceSuccess</p>
   <li>
     <p><strong>Step 9</strong>: Add the policies and metadata required by running the scripts below.</p>
 
-    <pre><code>$ /usr/hdp/current/eagle/examples/sample-sensitivity-resource-create.sh 
+    <div class="highlighter-rouge"><pre class="highlight"><code>$ /usr/hdp/current/eagle/examples/sample-sensitivity-resource-create.sh 
 $ /usr/hdp/current/eagle/examples/sample-policy-create.sh
 </code></pre>
+    </div>
   </li>
 </ul>
 

Modified: eagle/site/docs/jmx-metric-monitoring.html
URL: http://svn.apache.org/viewvc/eagle/site/docs/jmx-metric-monitoring.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/docs/jmx-metric-monitoring.html (original)
+++ eagle/site/docs/jmx-metric-monitoring.html Thu Jan 12 07:44:47 2017
@@ -236,9 +236,10 @@
 <h3 id="setup"><strong>Setup</strong></h3>
 <p>From the Hortonworks sandbox, just run the setup scripts below to install the Python JMX script, create the Kafka topic, update the Apache HBase tables and deploy the “hadoopjmx” Storm topology.</p>
 
-<pre><code>$ /usr/hdp/current/eagle/examples/hadoop-metric-sandbox-starter.sh
+<div class="highlighter-rouge"><pre class="highlight"><code>$ /usr/hdp/current/eagle/examples/hadoop-metric-sandbox-starter.sh
 $ /usr/hdp/current/eagle/examples/hadoop-metric-policy-create.sh  
 </code></pre>
+</div>
 
 <p><br /></p>
 
@@ -263,15 +264,17 @@ $ /usr/hdp/current/eagle/examples/hadoop
   <li>
     <p>First make sure that the Kafka topic “nn_jmx_metric_sandbox” is periodically populated with JMX metric data (i.e. that the Python script is running).</p>
 
-    <pre><code>  $ /usr/hdp/2.2.4.2-2/kafka/bin/kafka-console-consumer.sh --zookeeper sandbox.hortonworks.com:2181 --topic nn_jmx_metric_sandbox
+    <div class="highlighter-rouge"><pre class="highlight"><code>  $ /usr/hdp/2.2.4.2-2/kafka/bin/kafka-console-consumer.sh --zookeeper sandbox.hortonworks.com:2181 --topic nn_jmx_metric_sandbox
 </code></pre>
+    </div>
   </li>
   <li>
     <p>Generate an alert by producing an alert-triggering message into the Kafka topic.</p>
 
-    <pre><code>  $ /usr/hdp/2.2.4.2-2/kafka/bin/kafka-console-producer.sh --broker-list sandbox.hortonworks.com:6667 --topic nn_jmx_metric_sandbox
+    <div class="highlighter-rouge"><pre class="highlight"><code>  $ /usr/hdp/2.2.4.2-2/kafka/bin/kafka-console-producer.sh --broker-list sandbox.hortonworks.com:6667 --topic nn_jmx_metric_sandbox
   $ {"host": "localhost", "timestamp": 1457033916718, "metric": "hadoop.namenode.fsnamesystemstate.fsstate", "component": "namenode", "site": "sandbox", "value": 1.0}
 </code></pre>
+    </div>
   </li>
 </ul>
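The alert-triggering message above is a plain JSON event. A small sketch that composes it and checks the fields the sample policy keys on before handing it to the console producer (the field list is taken from the sample message above; the commented producer path and broker match the commands above):

```shell
# Compose the alert-triggering metric event from the step above and
# sanity-check it before sending; the timestamp value is illustrative.
event='{"host": "localhost", "timestamp": 1457033916718, "metric": "hadoop.namenode.fsnamesystemstate.fsstate", "component": "namenode", "site": "sandbox", "value": 1.0}'

# Make sure none of the expected fields are missing before producing.
for field in host timestamp metric component site value; do
  case "$event" in
    *"\"$field\""*) ;;
    *) echo "missing field: $field" >&2; exit 1 ;;
  esac
done
echo "$event"
# Once the event looks right, pipe it to the producer (on the sandbox):
# echo "$event" | /usr/hdp/2.2.4.2-2/kafka/bin/kafka-console-producer.sh \
#   --broker-list sandbox.hortonworks.com:6667 --topic nn_jmx_metric_sandbox
```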
 

Modified: eagle/site/docs/mapr-integration.html
URL: http://svn.apache.org/viewvc/eagle/site/docs/mapr-integration.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/docs/mapr-integration.html (original)
+++ eagle/site/docs/mapr-integration.html Thu Jan 12 07:44:47 2017
@@ -230,84 +230,97 @@
 <p>Here are the steps to follow:</p>
 
 <h4 id="step1-enable-audit-logs-for-filesystem-operations-and-table-operations-in-mapr">Step1: Enable audit logs for FileSystem Operations and Table Operations in MapR</h4>
-<p>First we need to enable data auditing at all three levels: cluster level, volume level and directory,file or table level. 
-##### Cluster level:</p>
+<p>First we need to enable data auditing at all three levels: cluster level, volume level, and directory/file/table level.</p>
+<h5 id="cluster-level">Cluster level:</h5>
 
-<pre><code>       $ maprcli audit data -cluster &lt;cluster name&gt; -enabled true 
+<div class="highlighter-rouge"><pre class="highlight"><code>       $ maprcli audit data -cluster &lt;cluster name&gt; -enabled true 
                            [ -maxsize &lt;GB, defaut value is 32. When size of audit logs exceed this number, an alarm will be sent to the dashboard in the MapR Control Service &gt; ]
                            [ -retention &lt;number of Days&gt; ]
 </code></pre>
+</div>
 <p>Example:</p>
 
-<pre><code>        $ maprcli audit data -cluster mapr.cluster.com -enabled true -maxsize 30 -retention 30
+<div class="highlighter-rouge"><pre class="highlight"><code>        $ maprcli audit data -cluster mapr.cluster.com -enabled true -maxsize 30 -retention 30
 </code></pre>
+</div>
 
 <h5 id="volume-level">Volume level:</h5>
 
-<pre><code>       $ maprcli volume audit -cluster &lt;cluster name&gt; -enabled true 
+<div class="highlighter-rouge"><pre class="highlight"><code>       $ maprcli volume audit -cluster &lt;cluster name&gt; -enabled true 
                             -name &lt;volume name&gt;
                             [ -coalesce &lt;interval in minutes, the interval of time during which READ, WRITE, or GETATTR operations on one file from one client IP address are logged only once, if auditing is enabled&gt; ]
 </code></pre>
+</div>
 
 <p>Example:</p>
 
-<pre><code>        $ maprcli volume audit -cluster mapr.cluster.com -name mapr.tmp -enabled true
+<div class="highlighter-rouge"><pre class="highlight"><code>        $ maprcli volume audit -cluster mapr.cluster.com -name mapr.tmp -enabled true
 </code></pre>
+</div>
 
 <p>To verify that auditing is enabled for a particular volume, use this command:</p>
 
-<pre><code>        $ maprcli volume info -name &lt;volume name&gt; -json | grep -i 'audited\|coalesce'
+<div class="highlighter-rouge"><pre class="highlight"><code>        $ maprcli volume info -name &lt;volume name&gt; -json | grep -i 'audited\|coalesce'
 </code></pre>
+</div>
 <p>and you should see something like this:</p>
 
-<pre><code>                        "audited":1,
+<div class="highlighter-rouge"><pre class="highlight"><code>                        "audited":1,
                         "coalesceInterval":60
 </code></pre>
+</div>
 <p>If “audited” is ‘1’ then auditing is enabled for this volume.</p>
 
 <h5 id="directory-file-or-mapr-db-table-level">Directory, file, or MapR-DB table level:</h5>
 
-<pre><code>        $ hadoop mfs -setaudit on &lt;directory|file|table&gt;
+<div class="highlighter-rouge"><pre class="highlight"><code>        $ hadoop mfs -setaudit on &lt;directory|file|table&gt;
 </code></pre>
+</div>
 
-<p>To check whether Auditing is Enabled for a Directory, File, or MapR-DB Table, use <code>$ hadoop mfs -ls</code>
+<p>To check whether auditing is enabled for a directory, file, or MapR-DB table, use <code class="highlighter-rouge">$ hadoop mfs -ls</code>.
 Example:
-Before enable the audit log on file <code>/tmp/dir</code>, try <code>$ hadoop mfs -ls /tmp/dir</code>, you should see something like this:</p>
+Before enabling the audit log on <code class="highlighter-rouge">/tmp/dir</code>, try <code class="highlighter-rouge">$ hadoop mfs -ls /tmp/dir</code>; you should see something like this:</p>
 
-<pre><code>drwxr-xr-x Z U U   - root root          0 2016-03-02 15:02  268435456 /tmp/dir
+<div class="highlighter-rouge"><pre class="highlight"><code>drwxr-xr-x Z U U   - root root          0 2016-03-02 15:02  268435456 /tmp/dir
                p 2050.32.131328  mapr2.da.dg:5660 mapr1.da.dg:5660
 </code></pre>
+</div>
 
-<p>The second <code>U</code> means auditing on this file is not enabled. 
+<p>The second <code class="highlighter-rouge">U</code> means auditing on this file is not enabled. 
 Enable auditing with this command:</p>
 
-<pre><code>$ hadoop mfs -setaudit on /tmp/dir
+<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop mfs -setaudit on /tmp/dir
 </code></pre>
+</div>
 
 <p>Then check the auditing bit with :</p>
 
-<pre><code>$ hadoop mfs -ls /tmp/dir
+<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop mfs -ls /tmp/dir
 </code></pre>
+</div>
 
 <p>you should see something like this:</p>
 
-<pre><code>drwxr-xr-x Z U A   - root root          0 2016-03-02 15:02  268435456 /tmp/dir
+<div class="highlighter-rouge"><pre class="highlight"><code>drwxr-xr-x Z U A   - root root          0 2016-03-02 15:02  268435456 /tmp/dir
                p 2050.32.131328  mapr2.da.dg:5660 mapr1.da.dg:5660
 </code></pre>
+</div>
 
-<p>We can see the previous <code>U</code> has been changed to <code>A</code> which indicates auditing on this file is enabled.</p>
+<p>We can see the previous <code class="highlighter-rouge">U</code> has been changed to <code class="highlighter-rouge">A</code> which indicates auditing on this file is enabled.</p>
 
-<p><code>Important</code>:
+<p><code class="highlighter-rouge">Important</code>:
 When auditing has been enabled on a directory, the directories/files already in it won’t inherit auditing, but any file/dir created in it after auditing is enabled will.</p>
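The audit flag read off the listings above can be checked with a small helper. This is an illustrative sketch only (not part of MapR), assuming the audit flag (A = audited, U = unaudited) is always the fourth whitespace-separated field of a `hadoop mfs -ls` line, as in the sample output above:

```shell
# Sketch: decide from one "hadoop mfs -ls" listing line whether auditing is
# enabled. Assumes the audit flag is the fourth whitespace-separated field.
audit_enabled() {
  [ "$(printf '%s\n' "$1" | awk '{print $4}')" = "A" ]
}

# Sample lines copied from the listings above.
before='drwxr-xr-x Z U U   - root root          0 2016-03-02 15:02  268435456 /tmp/dir'
after='drwxr-xr-x Z U A   - root root          0 2016-03-02 15:02  268435456 /tmp/dir'

audit_enabled "$before" && echo "before: audited" || echo "before: not audited"
audit_enabled "$after"  && echo "after: audited"  || echo "after: not audited"
```

In practice you would feed it the live listing, e.g. `audit_enabled "$(hadoop mfs -ls /tmp/dir | head -1)"`.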
 
 <h4 id="step2-stream-log-data-into-kafka-by-using-logstash">Step2: Stream log data into Kafka by using Logstash</h4>
-<p>As MapR do not have name node, instead it use CLDB service, we have to use logstash to stream log data into Kafka.
-- First find out the nodes that have CLDB service
-- Then find out the location of audit log files, eg: <code>/mapr/mapr.cluster.com/var/mapr/local/mapr1.da.dg/audit/</code>, file names should be in this format: <code>FSAudit.log-2016-05-04-001.json</code> 
-- Created a logstash conf file and run it, following this doc<a href="https://github.com/apache/eagle/blob/master/eagle-assembly/src/main/docs/logstash-kafka-conf.md">Logstash-kafka</a></p>
+<p>As MapR does not have a name node but uses the CLDB service instead, we have to use Logstash to stream log data into Kafka.</p>
+<ul>
+  <li>First, find out the nodes that run the CLDB service</li>
+  <li>Then find out the location of the audit log files, e.g. <code class="highlighter-rouge">/mapr/mapr.cluster.com/var/mapr/local/mapr1.da.dg/audit/</code>; file names should be in this format: <code class="highlighter-rouge">FSAudit.log-2016-05-04-001.json</code></li>
+  <li>Create a Logstash conf file and run it, following this doc: <a href="https://github.com/apache/eagle/blob/master/eagle-assembly/src/main/docs/logstash-kafka-conf.md">Logstash-kafka</a></li>
+</ul>
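Following the Logstash-kafka doc linked above, the conf file for this case would look roughly like the sketch below. The path pattern, broker list, and topic name are illustrative assumptions for a sandbox cluster (the option names `broker_list` and `topic_id` match the logstash-kafka plugin used elsewhere in these docs); adjust all three to your environment.

```conf
# Hypothetical Logstash conf for shipping MapR FSAudit JSON logs to Kafka.
input {
    file {
        type => "mapr-fs-audit"
        path => "/mapr/mapr.cluster.com/var/mapr/local/mapr1.da.dg/audit/FSAudit.log-*.json"
    }
}
output {
    kafka {
        broker_list => "sandbox.hortonworks.com:6667"
        topic_id => "sandbox_mapr_audit_log"
    }
}
```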
 
 <h4 id="step3-set-up-maprfsauditlog-applicaiton-in-eagle-service">Step3: Set up maprFSAuditLog application in Eagle Service</h4>
-<p>After Eagle Service gets started, create mapFSAuditLog application using:  <code>$ ./maprFSAuditLog-init.sh</code>. By default it will create maprFSAuditLog in site “sandbox”, you may need to change it to your own site.
+<p>After the Eagle Service gets started, create the maprFSAuditLog application using: <code class="highlighter-rouge">$ ./maprFSAuditLog-init.sh</code>. By default it will create maprFSAuditLog in the site “sandbox”; you may need to change it to your own site.
 After these steps you are good to go.</p>
 
 <p>Have fun!!! :)</p>

Modified: eagle/site/docs/quick-start-0.3.0.html
URL: http://svn.apache.org/viewvc/eagle/site/docs/quick-start-0.3.0.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/docs/quick-start-0.3.0.html (original)
+++ eagle/site/docs/quick-start-0.3.0.html Thu Jan 12 07:44:47 2017
@@ -237,21 +237,22 @@
   <li>
     <p>Build manually with <a href="https://maven.apache.org/">Apache Maven</a>:</p>
 
-    <pre><code>$ tar -zxvf apache-eagle-0.3.0-incubating-src.tar.gz
+    <div class="highlighter-rouge"><pre class="highlight"><code>$ tar -zxvf apache-eagle-0.3.0-incubating-src.tar.gz
 $ cd incubator-eagle-release-0.3.0-rc3  
 $ curl -O https://patch-diff.githubusercontent.com/raw/apache/eagle/pull/180.patch
 $ git apply 180.patch
 $ mvn clean package -DskipTests
 </code></pre>
+    </div>
 
-    <p>After building successfully, you will get tarball under <code>eagle-assembly/target/</code> named as <code>eagle-0.3.0-incubating-bin.tar.gz</code>
+    <p>After building successfully, you will get a tarball under <code class="highlighter-rouge">eagle-assembly/target/</code> named <code class="highlighter-rouge">eagle-0.3.0-incubating-bin.tar.gz</code>
 <br /></p>
   </li>
 </ul>
 
 <h3 id="install-eagle"><strong>Install Eagle</strong></h3>
 
-<pre><code> $ scp -P 2222  eagle-assembly/target/eagle-0.3.0-incubating-bin.tar.gz root@127.0.0.1:/root/
+<div class="highlighter-rouge"><pre class="highlight"><code> $ scp -P 2222  eagle-assembly/target/eagle-0.3.0-incubating-bin.tar.gz root@127.0.0.1:/root/
  $ ssh root@127.0.0.1 -p 2222 (password is hadoop)
  $ tar -zxvf eagle-0.3.0-incubating-bin.tar.gz
  $ mv eagle-0.3.0-incubating eagle
@@ -259,6 +260,7 @@ $ mvn clean package -DskipTests
  $ cd /usr/hdp/current/eagle
  $ examples/eagle-sandbox-starter.sh
 </code></pre>
+</div>
 
 <p><br /></p>
 

Modified: eagle/site/docs/quick-start.html
URL: http://svn.apache.org/viewvc/eagle/site/docs/quick-start.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/docs/quick-start.html (original)
+++ eagle/site/docs/quick-start.html Thu Jan 12 07:44:47 2017
@@ -238,21 +238,22 @@
   <li>
     <p>Build manually with <a href="https://maven.apache.org/">Apache Maven</a>:</p>
 
-    <pre><code>$ tar -zxvf apache-eagle-0.4.0-incubating-src.tar.gz
+    <div class="highlighter-rouge"><pre class="highlight"><code>$ tar -zxvf apache-eagle-0.4.0-incubating-src.tar.gz
 $ cd apache-eagle-0.4.0-incubating-src 
 $ curl -O https://patch-diff.githubusercontent.com/raw/apache/eagle/pull/268.patch
 $ git apply 268.patch
 $ mvn clean package -DskipTests
 </code></pre>
+    </div>
 
-    <p>After building successfully, you will get a tarball under <code>eagle-assembly/target/</code> named <code>apache-eagle-0.4.0-incubating-bin.tar.gz</code>
+    <p>After building successfully, you will get a tarball under <code class="highlighter-rouge">eagle-assembly/target/</code> named <code class="highlighter-rouge">apache-eagle-0.4.0-incubating-bin.tar.gz</code>
 <br /></p>
   </li>
 </ul>
 
 <h3 id="install-eagle"><strong>Install Eagle</strong></h3>
 
-<pre><code> $ scp -P 2222 eagle-assembly/target/apache-eagle-0.4.0-incubating-bin.tar.gz root@127.0.0.1:/root/
+<div class="highlighter-rouge"><pre class="highlight"><code> $ scp -P 2222 eagle-assembly/target/apache-eagle-0.4.0-incubating-bin.tar.gz root@127.0.0.1:/root/
  $ ssh root@127.0.0.1 -p 2222 (password is hadoop)
  $ tar -zxvf apache-eagle-0.4.0-incubating-bin.tar.gz
  $ mv apache-eagle-0.4.0-incubating eagle
@@ -260,20 +261,22 @@ $ mvn clean package -DskipTests
  $ cd /usr/hdp/current/eagle
  $ examples/eagle-sandbox-starter.sh
 </code></pre>
+</div>
 
 <p><br /></p>
 
 <h3 id="sample-application-hive-query-activity-monitoring-in-sandbox"><strong>Sample Application: Hive query activity monitoring in sandbox</strong></h3>
-<p>After executing <code>examples/eagle-sandbox-starter.sh</code>, you have a sample application (topology) running on the Apache Storm (check with <a href="http://sandbox.hortonworks.com:8744/index.html">storm ui</a>), and a sample policy of Hive activity monitoring defined.</p>
+<p>After executing <code class="highlighter-rouge">examples/eagle-sandbox-starter.sh</code>, you have a sample application (topology) running on Apache Storm (check with the <a href="http://sandbox.hortonworks.com:8744/index.html">Storm UI</a>), and a sample Hive activity monitoring policy defined.</p>
 
 <p>Next you can trigger an alert by running a Hive query.</p>
 
-<pre><code>$ su hive
+<div class="highlighter-rouge"><pre class="highlight"><code>$ su hive
 $ hive
 $ set hive.execution.engine=mr;
 $ use xademo;
 $ select a.phone_number from customer_details a, call_detail_records b where a.phone_number=b.phone_number;
 </code></pre>
+</div>
 <p><br /></p>
 
 <hr />

Modified: eagle/site/docs/serviceconfiguration.html
URL: http://svn.apache.org/viewvc/eagle/site/docs/serviceconfiguration.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/docs/serviceconfiguration.html (original)
+++ eagle/site/docs/serviceconfiguration.html Thu Jan 12 07:44:47 2017
@@ -230,7 +230,7 @@ description of Eagle Service configurati
   <li>for hbase</li>
 </ul>
 
-<pre><code>eagle {
+<div class="highlighter-rouge"><pre class="highlight"><code>eagle {
 	service{
 		storage-type="hbase"
 		hbase-zookeeper-quorum="sandbox.hortonworks.com"
@@ -241,12 +241,13 @@ description of Eagle Service configurati
 	}
       }
 </code></pre>
+</div>
 
 <ul>
   <li>for mysql</li>
 </ul>
 
-<pre><code>eagle {
+<div class="highlighter-rouge"><pre class="highlight"><code>eagle {
 	service {
 		storage-type="jdbc"
 		storage-adapter="mysql"
@@ -260,12 +261,13 @@ description of Eagle Service configurati
 	}
 }
 </code></pre>
+</div>
 
 <ul>
   <li>for derby</li>
 </ul>
 
-<pre><code>eagle {
+<div class="highlighter-rouge"><pre class="highlight"><code>eagle {
 	service {
 		storage-type="jdbc"
 		storage-adapter="derby"
@@ -279,6 +281,7 @@ description of Eagle Service configurati
 	}
 }
 </code></pre>
+</div>
 <p><br /></p>
 
       </div><!--end of loadcontent-->  


