hadoop-common-commits mailing list archives

From d...@apache.org
Subject svn commit: r652064 - in /hadoop/core/branches/branch-0.17: docs/hod_admin_guide.html docs/hod_admin_guide.pdf src/contrib/hod/CHANGES.txt src/contrib/hod/support/logcondense.py src/docs/src/documentation/content/xdocs/hod_admin_guide.xml
Date Mon, 28 Apr 2008 05:22:35 GMT
Author: ddas
Date: Sun Apr 27 22:22:34 2008
New Revision: 652064

URL: http://svn.apache.org/viewvc?rev=652064&view=rev
Log:
Merge -r 652056:652057 from trunk onto 0.17 branch. Fixes HADOOP-3304.

Modified:
    hadoop/core/branches/branch-0.17/docs/hod_admin_guide.html
    hadoop/core/branches/branch-0.17/docs/hod_admin_guide.pdf
    hadoop/core/branches/branch-0.17/src/contrib/hod/CHANGES.txt
    hadoop/core/branches/branch-0.17/src/contrib/hod/support/logcondense.py
    hadoop/core/branches/branch-0.17/src/docs/src/documentation/content/xdocs/hod_admin_guide.xml

Modified: hadoop/core/branches/branch-0.17/docs/hod_admin_guide.html
URL: http://svn.apache.org/viewvc/hadoop/core/branches/branch-0.17/docs/hod_admin_guide.html?rev=652064&r1=652063&r2=652064&view=diff
==============================================================================
--- hadoop/core/branches/branch-0.17/docs/hod_admin_guide.html (original)
+++ hadoop/core/branches/branch-0.17/docs/hod_admin_guide.html Sun Apr 27 22:22:34 2008
@@ -205,6 +205,22 @@
 <li>
 <a href="#Running+HOD">Running HOD</a>
 </li>
+<li>
+<a href="#Supporting+Tools+and+Utilities">Supporting Tools and Utilities</a>
+<ul class="minitoc">
+<li>
+<a href="#logcondense.py+-+Tool+for+removing+log+files+uploaded+to+DFS">logcondense.py - Tool for removing log files uploaded to DFS</a>
+<ul class="minitoc">
+<li>
+<a href="#Running+logcondense.py">Running logcondense.py</a>
+</li>
+<li>
+<a href="#Command+Line+Options+for+logcondense.py">Command Line Options for logcondense.py</a>
+</li>
+</ul>
+</li>
+</ul>
+</li>
 </ul>
 </div>
 
@@ -468,6 +484,105 @@
 <div class="section">
 <p>You can now proceed to <a href="hod_user_guide.html">HOD User Guide</a> for information about how to run HOD,
     what are the various features, options and for help in trouble-shooting.</p>
+</div>
+
+  
+<a name="N10134"></a><a name="Supporting+Tools+and+Utilities"></a>
+<h2 class="h3">Supporting Tools and Utilities</h2>
+<div class="section">
+<p>This section describes certain supporting tools and utilities that can be used in managing HOD deployments.</p>
+<a name="N1013D"></a><a name="logcondense.py+-+Tool+for+removing+log+files+uploaded+to+DFS"></a>
+<h3 class="h4">logcondense.py - Tool for removing log files uploaded to DFS</h3>
+<p>As mentioned in 
+         <a href="hod_user_guide.html#Collecting+and+Viewing+Hadoop+Logs">this section</a> of the
+         <a href="hod_user_guide.html">HOD User Guide</a>, HOD can be configured to upload
+         Hadoop logs to a statically configured HDFS. Over time, the number of logs uploaded
+         to DFS can grow large. logcondense.py is a tool that helps administrators clean up
+         log files older than a specified number of days. </p>
+<a name="N1014E"></a><a name="Running+logcondense.py"></a>
+<h4>Running logcondense.py</h4>
+<p>logcondense.py is available under the hod_install_location/support folder. You can either
+        run it using python, for example <em>python logcondense.py</em>, or give the file execute
+        permissions and run it directly as <em>logcondense.py</em>. If permissions are enabled,
+        logcondense.py must be run by a user with sufficient permissions to remove files from
+        the DFS locations where log files are uploaded. For example, as mentioned in the
+        <a href="hod_config_guide.html#3.7+hodring+options">configuration guide</a>, the logs could
+        be configured to go under each user's home directory in HDFS. In that case, the user
+        running logcondense.py should have superuser privileges to remove files from under
+        all user home directories.</p>
+<a name="N10162"></a><a name="Command+Line+Options+for+logcondense.py"></a>
+<h4>Command Line Options for logcondense.py</h4>
+<p>The following command line options are supported for logcondense.py.</p>
+<table class="ForrestTable" cellspacing="1" cellpadding="4">
+            
+<tr>
+              
+<td colspan="1" rowspan="1">Short Option</td>
+              <td colspan="1" rowspan="1">Long option</td>
+              <td colspan="1" rowspan="1">Meaning</td>
+              <td colspan="1" rowspan="1">Example</td>
+            
+</tr>
+            
+<tr>
+              
+<td colspan="1" rowspan="1">-p</td>
+              <td colspan="1" rowspan="1">--package</td>
+<td colspan="1" rowspan="1">Complete path to the hadoop script. The hadoop version must match the 
+                  version running HDFS.</td>
+              <td colspan="1" rowspan="1">/usr/bin/hadoop</td>
+            
+</tr>
+            
+<tr>
+              
+<td colspan="1" rowspan="1">-d</td>
+              <td colspan="1" rowspan="1">--days</td>
+              <td colspan="1" rowspan="1">Delete log files older than the specified number of days</td>
+              <td colspan="1" rowspan="1">7</td>
+            
+</tr>
+            
+<tr>
+              
+<td colspan="1" rowspan="1">-c</td>
+              <td colspan="1" rowspan="1">--config</td>
+              <td colspan="1" rowspan="1">Path to the Hadoop configuration directory, under which hadoop-site.xml resides.
+              The hadoop-site.xml must point to the HDFS NameNode from where logs are to be removed.</td>
+              <td colspan="1" rowspan="1">/home/foo/hadoop/conf</td>
+            
+</tr>
+            
+<tr>
+              
+<td colspan="1" rowspan="1">-l</td>
+              <td colspan="1" rowspan="1">--logs</td>
+<td colspan="1" rowspan="1">An HDFS path; this must be the same path as specified for the log-destination-uri,
+              as mentioned in the  <a href="hod_config_guide.html#3.7+hodring+options">configuration guide</a>,
+              without the hdfs:// URI prefix</td>
+              <td colspan="1" rowspan="1">/user</td>
+            
+</tr>
+            
+<tr>
+              
+<td colspan="1" rowspan="1">-n</td>
+              <td colspan="1" rowspan="1">--dynamicdfs</td>
+<td colspan="1" rowspan="1">If true, logcondense.py deletes HDFS logs in addition to
+              Map/Reduce logs; otherwise, and by default when this option is not specified, it deletes
+              only Map/Reduce logs. This option is useful if dynamic DFS installations 
+              are being provisioned by HOD, and the static DFS installation is being used only to collect 
+              logs - a scenario that may be common in test clusters.</td>
+              <td colspan="1" rowspan="1">false</td>
+            
+</tr>
+          
+</table>
+<p>For example, to delete all log files older than 7 days using a hadoop-site.xml stored in
+        ~/hadoop-conf, with the hadoop installation under ~/hadoop-0.17.0, you could run:</p>
+<p>
+<em>python logcondense.py -p ~/hadoop-0.17.0/bin/hadoop -d 7 -c ~/hadoop-conf -l /user</em>
+</p>
 </div>
 
 </div>
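The -d/--days cutoff documented above selects files strictly older than the given number of days. As a rough illustration of that selection rule (a minimal sketch only; `files_to_delete` is a hypothetical helper, not part of the actual logcondense.py script, which also handles DFS listing and deletion via the hadoop client):

```python
from datetime import datetime, timedelta

def files_to_delete(files, days, now=None):
    """Return paths from (path, modified-datetime) pairs older than `days` days.

    Mirrors the -d/--days cutoff: a file is a deletion candidate only if its
    modification time falls before (now - days).
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=days)
    return [path for path, modified in files if modified < cutoff]

# With a 7-day cutoff, only the 10-day-old log qualifies.
now = datetime(2008, 4, 28)
logs = [
    ("/user/foo/logs/a.log", now - timedelta(days=10)),
    ("/user/foo/logs/b.log", now - timedelta(days=3)),
]
print(files_to_delete(logs, days=7, now=now))  # ['/user/foo/logs/a.log']
```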

Modified: hadoop/core/branches/branch-0.17/docs/hod_admin_guide.pdf
URL: http://svn.apache.org/viewvc/hadoop/core/branches/branch-0.17/docs/hod_admin_guide.pdf?rev=652064&r1=652063&r2=652064&view=diff
==============================================================================
--- hadoop/core/branches/branch-0.17/docs/hod_admin_guide.pdf (original)
+++ hadoop/core/branches/branch-0.17/docs/hod_admin_guide.pdf Sun Apr 27 22:22:34 2008
@@ -5,10 +5,10 @@
 /Producer (FOP 0.20.5) >>
 endobj
 5 0 obj
-<< /Length 668 /Filter [ /ASCII85Decode /FlateDecode ]
+<< /Length 803 /Filter [ /ASCII85Decode /FlateDecode ]
  >>
 stream
-Gb!$E9lldX&;KZO$6AX[W]lN[a/!Drm7bN:_QD'K"[SNl_@b^"s*Z3/8C"SSEbHJ@YZAE<oW4?iYL;@H%0\#/#c'n>lsCMN:^.aZo>=sQkBMY&jU"Jei-T]!ntt*B^]fQ.F"c;s2nkS^U%/qC"R-`_3<ja!I,p=,$t@@LGH(@?9C%/^>@CI?ERq'`Wihok0n@ZV5E(D@a=rS4d3&)4jHY%9gfJFf;XZ.3Z3%q$n;N^m:*N6r"[]N=9IV5#"^,I2e6fVrd.4`Ng8^A(=ip&6-R3[86FmSb)+5,B?&&(&pTeX^SU$WH-^6$U4q#-]`JA@AH^Qs!F-oWHd3?''riFgKe9.T[7t/(\52W?o1oDQ[#KJNJ7Q6Td:G?T&h,Af><)JpoIM_?o#D=('n'fptPXZY;7ESTQ;rE<qG,2mR<4*IKK0D[iFB,*kFP"$B.6D10PgQ&Gs$<=%)E:$$#X_#B]#EUL3]Df0dpM6aTe_PrQT4o(1pDnep]/O(e(oJ2-NLlNm[j9h=6a(eq@&BEIguB+%A&@S@V>luMIC<2D:gLnKE3Yj&UXthCgTNk&0[F&3+?2uJf02VcG8-RqY[qnId*k[l<HFAK.uXCJl(g8oLV+(E[NX"IV=W,7;-W%VWh)Ej:1`&X(u>VCU]le>LQft*'3`L4+?XF~>
+Gb!$E_/c#!&;KX9KoXG%W[&/6A(&kjFsc6--E"9WC1O%V/.j%qakKkuN'sQQae<_D_-0rAI(r^_^!)'=Wt:?u,*;0nAH0(=!bHl_1-Z25p0J^^(re#Gp>Gi-3\R;1cbMP4;Y+_0(2eN=p:2Ts^e^&?0p*-"LWGZu1nbJ9E??gVIoX]^T_?c"mbE%;Kk<Y1@lt)PR`<7$fRO[R.-]c#hjaQ&h7`6'Y7F;I3.!aih2LiVQf1[a(to?Ka?4M\68DB!H"PD"X((7L4g_l\RPcH=[qhQ.bRms1o?`jk1gl".d%9t`CLjKS?DcQ>+to?F$,?Fr*nr=@4En/k@60K<ZeW:hB@5qUWjjGtq=XULFK9<.4RCa20ST<a\u@7N!(5qs4n"G0m!.3`GJS13J<8gbINI70dO7RpOYYJX3H,Tt3q9uDK(&<R5p)B524)29f$]ONVO_-n7VPp$#GK?]3(:$4+X1FPU@?hsf?<+J?PJ9#Cb<XkFEi^#Y`jja&X^kA]02&WJ9X964n%-+Lj,!H,<1_D=+DMV(9M*gh"0C0FY/lMY\-j7NAJ2DjCm=_??+6Iiom]m]QEg`\qauTQET?Q]AE:cA8)sZJr,5uqT7]f:K;Dc:+Bm)[[2c+KI3;Rb:D`>g")G4$\)sf[H))1H+C:_,22cSOd#!-(-H\eB\+qIPeb4RmOgVRKEB-.0)-2egA[@r@Nq2&,beWBMCF8&R"D/3CHNBt-cO8UmFp,5KOSR+KG%p/pWOKkFh[P9NSNd=>^:NoZ5#f0?_I-s<B6hD]CB^gQl_J,d%UIDH<OpC~>
 endstream
 endobj
 6 0 obj
@@ -30,6 +30,8 @@
 18 0 R
 20 0 R
 22 0 R
+24 0 R
+26 0 R
 ]
 endobj
 8 0 obj
@@ -113,27 +115,47 @@
 >>
 endobj
 24 0 obj
+<< /Type /Annot
+/Subtype /Link
+/Rect [ 102.0 414.066 255.68 402.066 ]
+/C [ 0 0 0 ]
+/Border [ 0 0 0 ]
+/A 25 0 R
+/H /I
+>>
+endobj
+26 0 obj
+<< /Type /Annot
+/Subtype /Link
+/Rect [ 108.0 395.866 423.476 383.866 ]
+/C [ 0 0 0 ]
+/Border [ 0 0 0 ]
+/A 27 0 R
+/H /I
+>>
+endobj
+28 0 obj
 << /Length 2280 /Filter [ /ASCII85Decode /FlateDecode ]
  >>
 stream
 GatU5>Ap9+'Roe[i%^A"KKH\^fl"-1V.eW%I;5bh_q8Q;(r@S]qs/U=QoPDtfI#,u/&9B!qffb1HsSCBmsWH&F@08L(5B8'R$n'U9PnLZ1XE=Ke+U.KVYtXKlIX`;bAEkYI=#Jc>Jf7#[D0-"Di7t'McX$6Y5Z#r=5&ZUGb4&>msf?e6P$PeC$W_[MpJM)=#6sb.[=\8]&WMd=$%`S5(kZR"k"M0f3_Vqk-;FG\U+&EjlKZ.-qaK07C@B%<8oqQQdrgM=XNPq]HiuZ,uqhR,i>?H=>6ZcDS)N&[nt8,5dps@P!!=[KE(LGD;eu,4L)DQQpTHCQhtE+;-qd]*\h1BcfIXa_NLU@U9js%OeI.IY&bYj$3_IjX7YDVc0j$1NPXbGo2NA/n%p"?m0?ZS5sMWYg+*4;``\HP<F(>:0.VaCRYp(]C+cYrRj@imU>7>#2hcRMBrI+tI\#Pc`"dj&#D4PD[,TN\5ZR3.aA%;87Qa4H1!Q8V]s3e<9^.k3[4t1d^@..U3Pu/TIF]^*<;0k2YEC;%)NKY#nV.kDhV\iP8?3Q+YBn5%S=AuXg1kS6>G#cH8jk`>Lb'g099pn@PYK7LhNYMg]5#B?o5aNrLgc'NCRXfBX:r8F;5L6!"BRu*js3G2o#`<*'i<rm3$4f,lGF3(:j1_>klpj5^_Tr[AE&hGl5^0VmM<Jsg=',?@088;I1Oi5>I;I.)qp<b>DEH_#0k'Y[<M#KmiKC#EiY1CM*6N+.pfqJ8tf3R/(?X0oQUIVZ9dE$BMZ1TW9UlhW%*s'Xt8BIA3h22aZA.+^7T,"D<gOeD=+:g\54S8k'a-X-4OEcn[?(I9m^[bUs6)1!"j?@%a/OPeBl>WN'b%5BS4G.K1K-64=bXd$&kHm8tPu<BR+rCb)(rWXitd3NF+6[*T'(.1+)'k^Yf+eG='s2,qr6dW_1jP2:[)tTRod9FlE<@m=7*jXF.Rm3GVW&l..Dd@"B[m;;p/s8]s,
 hJ[a-g=W5'W7fcp]R;QYVVDr+SWa.S\V3]?POTqghCDJ7B%=dU9$hZoZj$h]OTP*V0<8&S:-W/b:"]k(ahJ@C\-+)bfU:j*LetZtqrl.^bOJ6H,P;e5FR70"%cE*3HD(`klS$Z,B1tN-TgA7\4Wt\<jUP-EmEe<2Nh_@i7`&UaHc_HIS[k8+eQd<o;_d3'\no4UVV8S$ROLWa@#F:&?5)qV(_OF5u$mrSL$XuW24b=-MpQY-.aS5D_JO"qceD*#@K94iEj/-V,oCWA)b0\g*i;T7nftK_?T-;=FHKn*8].#hUJ>(hu67OEp6LTKOI5Klf(jQ1H7MJk!hK5D.TmDBI%7(->QXu;kbcFLl%;]5kb6,X7;Of#6bClRh#PUVt;Q/850tq,4>OtD%E8%S"Rj7F@N8\st1rq,b#qRCQG%,3j/T6:Zl;*ZQeMUg[D#3N#l*4&HhNPQDPu'9@9\d^^oMT'AZf>hbO@,AIak!I].%cuU7p&Cb(1#Ii?rKs2e"W3M2QJ-RF2DMm$CZH")Dm>mNQ9iMEQXh6C9n3kLA^U_oa8chK3o8CrK,Z/j\&cD33bab,#;SM6idQ3%T3Fd_I>sF4[J<qL2HF"=?l-\W6uWRKTae!<@pLSodMCId1oJs.[-cpDRIB$$0oWP[!-hu3eJVK8!#bX5'6oOhGj>JB<*%$1YK$'"G-Yk?2`4[3g3V-CW]i+UC3B)N!SoIKj0\(8),NL^>N\*8jb]B%.0Dr=mM91B8WjFl:g>GdiX,bN$;<\P+2(k<EH(,!";9U1LW/)Tk(k:BPQKg31C,H#363D1@,T2":bu.DkTQr0m-T]H6GBq="oI=;?e)0EI%[`28DTmrkD+ACsSL2PX)jrPI6\,DqW'I8+QhNgF"=r\ALQU`78a3O-(]=l&.oDP3N^0(iaSr+XQfKp%,\9NM<Djm5L8?h,(lrWuh\+a22LK-_Y'oH4Hc+a$_C@T/Wu^(*,,8;aEou>_7G2RA/
 _1OsbAbVTHU&NF68Ij9[,]-VdR8$&Vp@Q:ZfJRK%)Q"H-:c^doabSE:DI:5pH(`h68A`ufNIk](p;*b6d_G(:mmGP5O:/1o3h*%&^YLt]#oS$m3<bNU/l'Z\YrXM`*9nb2JN(S@7:qsVn/iTn3>I"q&sVI\ECn.FBE.aXb[gOZIY*H:His*(Dk,0iLZQq!]]L)sI2j.`l!drk=0UhAV9.HY'O0D8s4*&m[Z#9B+GMEB!8HuV++BRb))5hVk.B>1A&#B0?4mWk#:h.3IZASEW33nY7ZB3k*I9+?:C++$GKg&~>
 endstream
 endobj
-25 0 obj
+29 0 obj
 << /Type /Page
 /Parent 1 0 R
 /MediaBox [ 0 0 612 792 ]
 /Resources 3 0 R
-/Contents 24 0 R
-/Annots 26 0 R
+/Contents 28 0 R
+/Annots 30 0 R
 >>
 endobj
-26 0 obj
+30 0 obj
 [
-27 0 R
+31 0 R
 ]
 endobj
-27 0 obj
+31 0 obj
 << /Type /Annot
 /Subtype /Link
 /Rect [ 308.976 542.466 431.592 530.466 ]
@@ -144,36 +166,36 @@
 /H /I
 >>
 endobj
-28 0 obj
+32 0 obj
 << /Length 2450 /Filter [ /ASCII85Decode /FlateDecode ]
  >>
 stream
 Gatm=99Yi)&AJ$CE(j/d(;r;D]H<DL-8WqUCU48)AXa)N#D$1Pn$`=M=s/efWNF[G%#04[rH5UIHF\:0>l7nfee\6K/7P+$<Y,E'^ZW(u?T<3o4KDrBC@Q`BHF`h:]l,nF^0,=hG5%:3ruB4eU\7\P+G&o_bJO,=8]A-+W*/fCH9jW$:QASkP?7F;abuXigjVuOa.LPEg["pLj`OUAGZ?#3`_1S5=n/@I86mUEi2%PlXadJ7B/&0[N?8St?1o\'UQaLu.J:^tr7MH*:Hb%hK;YpDggECib>#E^Wenui/8%:UQAhs!)L]k-cordlcM':(U/&iF^RnIrjd)/?oO3$GHR/%E/Ws*AQ>`@e]Ua<2g4TqW$=4hiHrVOsWE<Eq!e%h%h%`O+=fJm.&JuZoW>LCY*m9J9H9TkS5lA[!g'NU!"Cd4oX2Hd<1KG\BhD)GTf]JoAat*^c0r-HP$1n5<mZo7'js%ll3XHMX9HM`\hf_RJGl=&6Q;s+)@8df?rSuZ,\&P^PipTCGU^/98'bg6B2NGY"g([/Tf.>%_73R@mmN8Ah**pDGAoC5J69k&$IWmFf_)]M'UZScQ8e[C2P[m\_2+*<;>)oM3Z0qlBFG'Q-8u.4-MS5t@mj8]Nh/Yn2Akqn\NJJAR"3o<1-.G3OWa<^nG1`k/g:'JF_"m[Fr%26DkQ]X+;cSpu*)QahX3_Ulo?G="?T^KECkkD+MnZU(P:r?jki,Mi$n=IW=d^"!Tgti8^n^053(^s8Y2[]LVc<iaGuLOFS4-=&>?5I/o\-<*3I/e,os(9>c#\>-]MMO3)kQ6jW8n:&C!TGnJZ4jkE`f%4>mJMn;lZG&qXWa9T`?H+>;9f6kj(\d(.-GUT&Xn<KCo&%TQ:Wi4WQl-X5=@R0Y&oLjfB=r:VkhFY42;.XIBaOia`*">K@RVg*"S!L)[[7K.ip@Zu1$8<[X`t>#U2i4I5($ID'E;d4&/Z1XA:"Oq1o+*X>^PV1VV
 ('1ZCR]a9!!g\t<TItGs&.J8M`#7eQ1rIZLuk,L#6<UO&,%F!@lVG)?A=cMM9UX%XT2l:)A?34@jKT'/n(cV(jd'ea]#s7X2)h3U0<qXNHa/ioZ]lmC]X$;&0s'.fO&J2Ja->F5;!B[qBD)J)8'Ik1ok[*\_92GbdW!P"XZ*1[KN2V@4[/FO#WCXj*!Jj!8qNE\i@,hU.DK<t/69CaKJL%KXX'&Vqjqo[i5?A7l,(OLOC@iLh"dXh]Y76<+S.9V7hNTnP,Y_T@SNX`Z\_\RPeb)CPogp.PiPlt*MC84?fYk/Xg1r`FXpY#gVWR80S#XuGL6AXD<E%p`A$Q<u^a==+S?uD2;5pU15FWirIcO"BkSA+Xi`sU]3;]^?\_mg`oQ)dHW#BiN2&W6>3qj4-Z]-#c\<rFHP!N71H;[6,a+R/'&h4c+'29e=o4_8_q`V-%H^gnb-B=.g`VW]AiWl@JYr%a[i4#cK4!P7.47p2[5@TsA,us0D"[PI8)OWO/'628Qd/+^b,7BC)pcm2CY)Z9`AB,,%<)I!JG>l]2`*>QOa\?"=/if<;oe($s'"T:sBbG#u=Z!/\=41Z<."r8aZY?Vf7Ich-dU/8:HG5S@f-?;Kjj@RK4F0t$TXNX0homTF_>TKtGF'<coZI7f%E!(6=R!?=;Bud.;sg^PFFe5#%D+O!G"DPI9>L0NYmNepii>X_78'f%m%-P6(b^5`F>jY[0dtb,H.oO:0H`*0^pej(i*0XcN^N_()"5\[[\ctr.4@N$L8,Qei\$s#liQ$IXc=>6!8_\430^"@o/b>]2$;!TDCcX#U?<+h:e*Vg%cZH0D#bac5-%)=(2a;`f<<Rf*"8sa?oBjG=?"Z'%l*@,,AMi`;dgV[N,HG`F?8dk'KKG9H_b+Mb/%\l358NaSUl#<60b=V%+i(fjX[19k/T7?amA4p%o8Ru"3V!\&pH&"@*P?.*Ud59c<^\bac;@92]+0BFSr5Ka#eaRJ,\
 KGE&]oEl?Qjo&F)i3NVeiGQjK4%Xn40@Q*=4F@!QndrPq!*F3o0pTqb=VcbL2Y%&JLkFT]/2=0`@L0j\4M+=RDl25FoINh/WurL5a#-uPG.4PXL\1\LeugMMb$!HQm'U5f3qj0k1mhJF]Kn+[CoUH@lDpXi]Te)LqLP":-5-7=fl5e*uXoc?]/"]@c6XPS1^cSO[l"GG5k`f:8Q-<7Y#%OU=g)A1fO>):%"hX2an4b$:p"CO1t25g01GoiYV1u>d!5t3:O$mlH%jeHJM$qIFCUmfS)T`].m`p/`aJqbG*_gkodY"IgH/Lrk[a"D:+*>LVSkar&=)q_54h_rCIk%M.iGDsD&nnY1H7@/p+GO0ePK8@q%o9HV7rB'MN#j+YBSGGJ-hcJKSf>/SGf<ltjWUInYC>5Gp3ZU`a(U*O(MQc,uqf(^]#Q!aYQCuiU-LNNJh#^m"nl$R]R`N;C#:nP3Vu~>
 endstream
 endobj
-29 0 obj
+33 0 obj
 << /Type /Page
 /Parent 1 0 R
 /MediaBox [ 0 0 612 792 ]
 /Resources 3 0 R
-/Contents 28 0 R
-/Annots 30 0 R
+/Contents 32 0 R
+/Annots 34 0 R
 >>
 endobj
-30 0 obj
+34 0 obj
 [
-31 0 R
-32 0 R
-33 0 R
-34 0 R
 35 0 R
 36 0 R
 37 0 R
 38 0 R
 39 0 R
+40 0 R
+41 0 R
+42 0 R
+43 0 R
 ]
 endobj
-31 0 obj
+35 0 obj
 << /Type /Annot
 /Subtype /Link
 /Rect [ 108.0 437.266 142.008 425.266 ]
@@ -184,7 +206,7 @@
 /H /I
 >>
 endobj
-32 0 obj
+36 0 obj
 << /Type /Annot
 /Subtype /Link
 /Rect [ 108.0 383.666 183.672 371.666 ]
@@ -195,7 +217,7 @@
 /H /I
 >>
 endobj
-33 0 obj
+37 0 obj
 << /Type /Annot
 /Subtype /Link
 /Rect [ 108.0 357.266 145.992 345.266 ]
@@ -206,7 +228,7 @@
 /H /I
 >>
 endobj
-34 0 obj
+38 0 obj
 << /Type /Annot
 /Subtype /Link
 /Rect [ 476.928 215.732 511.596 203.732 ]
@@ -217,7 +239,7 @@
 /H /I
 >>
 endobj
-35 0 obj
+39 0 obj
 << /Type /Annot
 /Subtype /Link
 /Rect [ 90.0 202.532 139.32 190.532 ]
@@ -228,7 +250,7 @@
 /H /I
 >>
 endobj
-36 0 obj
+40 0 obj
 << /Type /Annot
 /Subtype /Link
 /Rect [ 462.6 189.332 483.252 177.332 ]
@@ -239,7 +261,7 @@
 /H /I
 >>
 endobj
-37 0 obj
+41 0 obj
 << /Type /Annot
 /Subtype /Link
 /Rect [ 135.648 154.932 156.3 142.932 ]
@@ -250,7 +272,7 @@
 /H /I
 >>
 endobj
-38 0 obj
+42 0 obj
 << /Type /Annot
 /Subtype /Link
 /Rect [ 348.276 154.932 368.928 142.932 ]
@@ -261,7 +283,7 @@
 /H /I
 >>
 endobj
-39 0 obj
+43 0 obj
 << /Type /Annot
 /Subtype /Link
 /Rect [ 423.636 141.732 444.288 129.732 ]
@@ -272,29 +294,29 @@
 /H /I
 >>
 endobj
-40 0 obj
+44 0 obj
 << /Length 2358 /Filter [ /ASCII85Decode /FlateDecode ]
  >>
 stream
 Gatm==`<%a&:W67+T.r:YSl:fckOQl'Q'#QH>ACI7BNA&8?TTNXW$Dan*glK6ZH\:/W;YmR_*Wjo?HRN@eOt"]:7Kn#6)<.`Kk3U5nLhMc`]Rj348qh.;g\L^Wp.;B$]?07e:;uSls;j$[uoeh<@"5@!Ta91Oa]A:9(4r^H(FJ<JZ#Y/3eN21G&6og^^B=<-K(,rm^`!=Sr<]ls:YHja(MYFeZT1No+dEO#GWkeEl[!n"AIV]I6pONl;Lo6tSNKo=QWG%hEm-ifA\lH]BY@3W=t4aQ8;/Ha;aL21"ugR\#ngD#1Y2_8`L;-&uIr#>8f#X)]Jp5U'<j'_q7a(>5&^0Q:-\;PqD><NpD%7;=(`eM7g"kZnj9`78%kCt_]92Q_V*X@tW)cq/'K\7C:aY_)[SCb!CrJW2''Z?=?pj]el`L#SdS3=:m6VH,d<frE29P4HKN2!fucR]tP-I5<q:rB6Ko,]UR6#qD:#-!QLAn!D[!Q@?l(6%8i:b3tnj"p/!(`n0t)b_]i.XU*GGRW?[^f<NQ8lQ;pWAO_,^m#Ba8'tVY)A+Z&qLEKg0>k(Q;"qT=*gZ-\^CCD;_ieqUsOMb,n/5Q%Ug;=Lh.WQN/S:YgAL[6u#j5:W`JX"Atm*H/GVgi`IL\aB".`ammqMpZg&#'HjcO]"9ra%,#Dul_3K%.i_:VtK0E_5iu:V"t_"t?Pqj&*H)pl1eTBI29!K1>^r@O1CWc#H[UpO`K0R;P+N:`;IL0a0m9Y#*Cl?H>_br$Go6gOQ\k^,h`J8-Bf?33ETdf%\p..=41$!6L@I0G]UuR$dac@[[>/im+u21>rDVojT[d`cqe-@6W8VI4$r/bOQHmo\a]#j,$l&9WBmEdY\r@B@cuX1Zi*6NiV^O]h5T&oY=>I'=tjFPT^?`F#IH?k7VW:&MgV=R#\j#%+m;o=@A/E?+*VO0PKF,TIV%+o"S;890&_1p&8BBK>@8^i1p!0?;2kd\H<amNYf?
 QdCJS?%b]B^7#Ia%GQ]Ooo&FO[R'=tj9^dOE$7iQ*AIeNU\mW0)0`?n:`XBZb(Z+:CV(aS7qEluBS!"fiF+d\LChuV(0E.$6+40LVG(XQ$ko^,s!#l_e4\rN-^oJ[@*"bBd\f)?6UKR\8:p#[Rj+Z7./i7B_.bLn@g2>7em3-VaZ/p4r)H[=ajZJnEU*)E(F1*0[Nb4ZR$39Vi_H-X`\Nc!1+$(M+VG.H:KF51Ij@Pj\FUFccfjbXlG9i]-KPmGRa^rQsjkk`^$fjg@1l_/\&2U5j*k_A`D7X.d#6"e&l%UnDF94WMpj/#:;Ae$OL6$!a_4n?S,:(5Z[ZV8K\n#O^Q=",KJA':sR7Q!d77U>Q$3u\o*l)0_`,^ljS7j(n%=mD]L1B"V[Ce\!VNMQ@^CgkV8qipI%_t,T4DI&Rd2,6R^>6:IC<K5F.hX`YWO?3;2+J7?KR"3nE_.JOY;/E(7\iaa;,]c&#B8>bS1??eS<\MkZ@ooY?AepqYGW%?@5$UE%X=I.IWA?a)Js[re;g(Tfbk?P/`f-F4M0P9AXHl;4XRfR79rlXeurkO@eP(H/Hek1$NrgC@C=--r!^%T;qt0Se8gO\l\(2^qgno,UbWgI>5NIs50"P:!Y`*X3;1<l<[h3*Fb`jeW4`P-j?0=r_hiugE2VP(oZ^M,g%nlI:i<KXe_bgdI>q$k&u2qACfcgB,tP!;C!T1E5bO598WQ+'\#Cnah&]c7^T/K$EV,ogGGR4g3($9pf$GtgZ[`uUXY_-]n*?"gT?X\^f.h^*l$8QuidCJ?VWHJ#5kSHdah0t"d8?3,?(<$4\AN6Qk(7PDC8J>,PPTeA(qkRT)!Ia9FToJI/;aD_$/=hQkc!Ybr,8>9quZ4:A*Iu(7eU=r2E::Hjnd08j+uBN%>Z&I+pP/4,2praY8u=DN"d(-DnRBQ\fIDc(eVh\g>5<O7l\AW;?>hf65Y4aU!2FFm6+9O;.Q.gYTi/n=ZaR(AX/
 >Qm6mr.5-X1%@u(a^M".r1d)1<[<p/X6>LO(Me<ZQ%_G$VlmB^fL;&hrTmo`h&e2`8_k144g@a@!HK>g>f&Ce/iO7`u%#/fZ1oE/WE%Pj*Jc,R1+Ppe*Wa!h1[LSs83?7;4SjN`onR*j-@Be&kqid1k=AP""l`Rc:u8_,qL*@!=&Cp"RVB&NMT#Z,"mEAtT(r@\T(3"-e%/&?]1L1^r!`Rk$W%O/"$%/gYZXL`$H5cb3@(b+Bq?l(pGHba,Pq-RP/-/q$tJ8M[pK-oZ4Sr^&LD;R9^@#pETVV'^i^"bb!Z.V%\%dVJ.!r2JnE\&^\<o&I4]6Lkr$!uN3Fh-sc;Z#N\?*_B".G_G@"PZ?i]0L/g>j(iA$blKCjFJEo~>
 endstream
 endobj
-41 0 obj
+45 0 obj
 << /Type /Page
 /Parent 1 0 R
 /MediaBox [ 0 0 612 792 ]
 /Resources 3 0 R
-/Contents 40 0 R
-/Annots 42 0 R
+/Contents 44 0 R
+/Annots 46 0 R
 >>
 endobj
-42 0 obj
+46 0 obj
 [
-43 0 R
-44 0 R
+47 0 R
+48 0 R
 ]
 endobj
-43 0 obj
+47 0 obj
 << /Type /Annot
 /Subtype /Link
 /Rect [ 299.988 604.0 320.64 592.0 ]
@@ -305,7 +327,7 @@
 /H /I
 >>
 endobj
-44 0 obj
+48 0 obj
 << /Type /Annot
 /Subtype /Link
 /Rect [ 196.98 590.8 217.632 578.8 ]
@@ -316,29 +338,31 @@
 /H /I
 >>
 endobj
-45 0 obj
-<< /Length 1840 /Filter [ /ASCII85Decode /FlateDecode ]
+49 0 obj
+<< /Length 2276 /Filter [ /ASCII85Decode /FlateDecode ]
  >>
 stream
-Gatm<>Ar7S'Roe[d)1BG`PP63W69]SUeTQ"CQg)b"r,f'0K*chP=/+]]>h)T50B&46V2Bi@0dL!P^"h<q9F\?4.TVM-\"i1CFO`t9HkG/QLF_AY=ImX:L2s'O1869XJa2-jR'8Yn8:n4roQAK*%\RrMi#,/A]l19EpU"o8po^SSi'6D?Vb8X?1;EjduoX]HRW\WnRTIGL=GctPYA4%'eRt9&.*S.(@XIfJ4FCDF8?TW6>b*i4J/q`:Y?^M"^2P$lP9J171()I0qEkAQgalj9s=C1GV8PJ%&BErcc===+5uY3D\s.5aDmYIhu.tGp<2.LjiutHIstA65)EDQ=GY9<>uRCujD&[+k<`^X^B%07Z:s=*_",2@C9Gj3<XT-EoW@QpY5.*>0![R*BF;"o&o.RZ,`5E3`8TOML\p>Th;e0r%`L:g5SLPm`DobjJAc'=B.FNhi,Oma)..>R)4hc-^.Q?aZ6U5(bKK:R;-6?p15#^g#tsT'RPa0.&605)QdCsLTfZGl)=S(82[B+aSt4coI!HmaqX!0T;:7GlT.kgT'1l6@$D:7mW1FW.NW\P#.V7lN'F'o;5k/gOo%iKLgP:[lc/795<GVpkN/X'!NKH>/Wi\M(;fMq]%KppuA(O`&ED3UAXf+/?q9&hfXQBI4&]S4_"3'5Z+Pe3##]JQ]qd.gpfX3Lg$r@LMaKu2ni';pk(P[fa;m!!V3#3GbqCGqD>blL3+)FfX'=rT$ChJi=V,71V!IY+R6K]n0l!.-s!T(s9NDdY%Le8"^0'EYkB00oRbNZNrK6Fk01"^!4?%>P4M%Z.6kHK8RC#BMV6:'Y+LY%pF^7u>),o'`F`&uTNHO>D(23"YHRan$H477_;Whf'gn?G\O,XZi,,?97RI]h[nBCXqWcJZpAZ#,:0FH&UhOp!<DQ;Ir#4;d[!:s'`L>bWtl+sK>JW<ZNpH#BSQSeE#t-lKkA041WD2qP[A#E%A.TMOTZ>JG^RD7S*
 ;QP]lM($V\k!%Msn^->t.7nT=bK/=S_aBbnM#r1d<k'*ZA<*R5uV^'948p,q^-9*Q-n<(h(2Zc"n8">!;Z`;ML1F?rZ#8t@":@n98&?85'<^!DlHA"pr'KNZXKHL\$l>423kmTWO:9Oscahs%%lOahV_>Z&6Y@<6UQ#;G:&KsnZa]D#1/T#R;k%k.`+OMl;LZ#;9?6:u)oElS>^586K`E#'eN/,*F+=ir\"T.(![2T7=#e:rWGAI<Y)CO&Fr'l9\pVE:X?A*]XV5+Y_;Gpt4a%)8^!P4iSZ;F-oCLc0ZnRt_]91r-(0*U!<N'1"p=&uA_%]^\/M#&<k26MSV,!*'\;.0*QMU+Aki'e\#ge%`'C\'/SjYH)j87(S]d_HiDqp_pu)rWuHF-du[N>4Xu_7M0k[CKqJ`/o4'[#sO]024RqbH$7/4I:;l!*L[fJ$Zs2BAeH&>cE@K;!O/$FFAfe]\b`(OIZefB+G6GYc"-p)')a<'G:",aI+OK6F2K]WV`I+fS<pcf,:OP$*I9RLK+dPG^U0dLN)/n@I=%BYh#0s;:%o=0>BJX<ffeTWbDR@$npi!)sq_"mZZ8,H=GRL_To7>0Ud&[J`=FP[$3LTD5ueY#d*Z5?ERD3I0]i@jJ%fCqm9H3;;[U<hIA(#ak4qP5dk2.7kLM1?Pn@p7=gN%PFGNUT!Li@(0_P*S()ZCD)!/Mk1^0*3q4"li]Eph,tYtH<W5G4eXA)l9R#j#C.Jur^a&>i.$:Xg-W]G&5PC'\gZEmog-mQk-WW7(&kB(.D$Oh+hD1S>=[m%CI4lAtVZ"Hdm(=."*-Ku0\j+ciYK^OS])ZVl_9:GHNYHI4!e<ZtiW~>
+Gatm=D/\/e&H;*)Taq6o'alNimfW8,D3AL6S$\#.TNHt7Z"[F3,\hXFC(1".4'@koPXO7DL*B&7l/(m#]49\m-Pt7R?P@0*iTL.b0SsMRXpb9a:At'hjk.Lp/0j>te)^,]m6Lf^I3_%,F6OO+_5$7(@W&/=pdYd.qDG84'%aLMBk%!`1)&mr>#ooN7O%dP=D^'i/>ALFH+X_8Q`"j=QL\"2"MGHVr#IH;X*'e"iD[r.q'aJ7q`/"O.A*BE84k(H<<K%u8C=Ih%VOuB_0<X5e*F^YCl?rJ%m\Hlj^U-k-*3s=b*W9!PVuLurI'+fle1=KmDk-7I[&&?Lfr'/fK\!H3^eQg/e;%B>_n@2+!@ag=XN\Pq=;6d@]n%,OU"JunuZgCX7OB.(,9$W_A)nH8ZXduE.WW8>!K[sn8LV*\9kkK;dFAVmR\(;R)N'&Eo`)?<A[pRGTr'(N6e4eKMs3i21/c-@ZIc?(E(QBc)X^N3'O@(&4oQ?>)-$(1$=7hEh%SMR"e7MN`'d6s*?(P#QDCt428K-oYO;fB2=a-AJiI9$D$d#,0;<n!(s7*ePd*Gj,S4U=Yb+&$B<09-hD:bo%h@,cOUg&MS5W4?La^nEFb;cC6X\*V0qojGUe`pS39.YZe2QoE?#X.B$+'lZi"Bc5$%:T2%dj`@!XS`gB;5IPXKb;8\8<Zo%:B,g\6Ne/"YE+3!4cPQaZA+CmuoI0&ftq/(*PE3m'lp=7k>bof<C<YEA5mVcYD/mnZJ\_%G-VcZhKPKOY7*+@`-*@=+"c$D]<>$IaVJ2?>cXf5]/TT3!$,=O7GP;Pd$CX4q]Sh@V12K*Ea@\ENa(b\EKe!dY0iX,?,])Pg0&@C6U*lBl2V-f9m;@/I7.gF@U+'d&!'SjK4['ZZ87>Xhl/5Emrdo-i9MoFa&6`,8-ZF=_YPJr;<k*#/Q(Deu^!=jbkHi.R_K%a2^;f-%b$II.RL&UaNKAU8KGZUI"[H9WX"8m^r
 T$]3`).g<Z*W$>3&SjEI2i-/`/@;$[43nm&63WR5QetI[4p*BUG+>HlVGkbtdj?Jnjp.?YU%"\2=c=AFj2;7das2J\OWN)gIg,]]T_/pa\o3ibPO78*S;B>B8bHXT'=egd?(b2Mkq)V2^f][N/->Y[Q(G![P876f#/dAADB!YAmYo%I#g1^$\Vl3M%3F>;S)5RF[6T6$[YHn"PbcoLt3/kS.Qq*'o6C3Xd]r5mN2Z3sV^HlQm*hWP==qBEY"h&2/WOC)j3#Hl.N<uupi]7K-q1JgI`*h(@!R(Q2PLqRGms)jYESn]E8"D:BRA(]<UlRG-8:4cH/O%f?I`N_sZN+hnb4I5g[3[8"qgB4(>ArIhSiLW-'n*1-]28pUUmFTUe5`=lF7eiuWBtqWpDM1\Y&?1$\>TW#W\d`<SuiPuiA'482RQjpGg(llJU3(Y0,:GN@/HE&ju?(1R)B%D%e2ghK[XrJ$X!/aR"T:$+?b(#^]JiaC6?>]O[BmMT*<G<JaMc+FK0M[8D/cmgN<^pHAlW:p3ok+(3Z<=?k_\57qgY>,&O_6'6lUAki3JGbR16[7Q"aBL@CqOrN'7jD6LU,kfKK]q5mi6@5O$5Ktr2S_\+k5AZFrQ#R1#=.p"i6/cq+p\Yu0:T,&q#l:q(nh(srhg,;!=W:',Db:0:[VaA<_"Wtl8L?6&hG'2^C$M[rSJqHi"iiT!6_5,"eZ"^43:^*JjZ%hLYQ:F`nbg#`7`(hdO9Iok)<_:acm,EE>:A%V\>C3f:`)_M!*mo9=DXCFX*S/S>D!D203&ju:a!WS%[?7:6[]rq$GF8!/i$:\M`2OYi:%m[2&So@2^eYtXOJaST31T:=fCR14D8+L=L!AC2?RXG'1"elF!/I-:c48b/NDD8MoOa!^07Y\`n;R2tOEfT6ULomRLX`4>s2RR.&jU!Cp50hD:qm5DL,n.6I4^(Hd>h$Qg==^bK3>G=._d$3Z%cNcH78BIK"amZ(\q
 %9OM^ba@>01_0A%Xj8002?(553ah<0ka4$'A$T(^q!>)TZ[MA\1RHOO_[=KpI!dSgcN7<P/&ns&%+hPC^R3]%hgi5K*N5XgHDbj]%gm$G0d?q8nQ@ECTNc`:nk`lW(t*7&mSml,Hn>OM""oSD\si5HtcWR5;)U'i!,*],Hb/,W>UjGGJnWD:\BR_\[AZQ8rK49q0Nf.hXlm-%K>Cm.B4e[DB#oc;5t=6<>)@]&D3e*^PL3Lj[>%%m"-U3thr;6G1b.4SHFGABH6f%dblLTTnESN.t^Ra6h4gqPEaZ(_~>
 endstream
 endobj
-46 0 obj
+50 0 obj
 << /Type /Page
 /Parent 1 0 R
 /MediaBox [ 0 0 612 792 ]
 /Resources 3 0 R
-/Contents 45 0 R
-/Annots 47 0 R
+/Contents 49 0 R
+/Annots 51 0 R
 >>
 endobj
-47 0 obj
+51 0 obj
 [
-48 0 R
-49 0 R
+52 0 R
+53 0 R
+54 0 R
+55 0 R
 ]
 endobj
-48 0 obj
+52 0 obj
 << /Type /Annot
 /Subtype /Link
 /Rect [ 149.652 327.547 249.312 315.547 ]
@@ -349,7 +373,7 @@
 /H /I
 >>
 endobj
-49 0 obj
+53 0 obj
 << /Type /Annot
 /Subtype /Link
 /Rect [ 210.3 275.213 294.276 263.213 ]
@@ -360,98 +384,225 @@
 /H /I
 >>
 endobj
-51 0 obj
+54 0 obj
+<< /Type /Annot
+/Subtype /Link
+/Rect [ 172.332 145.226 226.668 133.226 ]
+/C [ 0 0 0 ]
+/Border [ 0 0 0 ]
+/A << /URI (hod_user_guide.html#Collecting+and+Viewing+Hadoop+Logs)
+/S /URI >>
+/H /I
+>>
+endobj
+55 0 obj
+<< /Type /Annot
+/Subtype /Link
+/Rect [ 260.328 145.226 344.304 133.226 ]
+/C [ 0 0 0 ]
+/Border [ 0 0 0 ]
+/A << /URI (hod_user_guide.html)
+/S /URI >>
+/H /I
+>>
+endobj
+56 0 obj
+<< /Length 2621 /Filter [ /ASCII85Decode /FlateDecode ]
+ >>
+stream
+GatU6D3LJJ')oV[JoBhQnh;13?9[*+Cm(i(Hl)7<<0BkYTG^^;&-pF^pYSI`5V5I<H0k*J@+DCIVn8R=`-crYq!RD*rsu8T(B;*uL`G0SI,>`)0kiio#T!G/fW(nCr:%IVO5D(!HmG#9`:'9m%o+M8Lqg)=AVk@&%^qr]eo1#+Lf;UBIp**,l$A]fF02S`Y!0nkA)5'49B&'`;tgo2SCmX@d8\s!iR?g'>?u],TbYr!0(_<:bc&*$Y*We2QCm:j]TtR=;n-OWo,lQ4^F)cdcc1Ud_CTBq2$`-B81GP0eCJdWhPIRjG>$jLX,Z="SEJ8&\5?<FbtBIM?<G&/VMZ=!c/>2S1bK9iEX""%SC(]iZ-hLuM<,FbkBX'2,)Zi(,71K@QkeAmP/_FJh-4oI%*%qgX`@\$#t&E,@(YF%ddYGf-s1J\a1]rM>72\hTgdp:%Bf$Mr_+nW`fO$poT1nA;iG'mQH.<35lY#$:hDigP80fSo)-h(-_a4XENH?TaX`GUV_.tbo3_;.TFR__R0Y_'TXo_^ZGmT(,,cY)gq)&TU?,UCn_u\g"Xp;B=[g]P!R6U[mJFJDWGj.oV?hF,:kj+(^b0<ORCrU=kuX<Y)fVRJeV2P]5^K;XZuTkR!ULI9W=_KAEbI-WAJIoXaE9F=(tI/0p][hf+fm3O$lHho0IXSbUC'WB)-ictN(;2Z*mha2L7.h*!EE82p8G0m'2'/obipRF'cth:/TqlBn<Z`A^sZ0#0+\4gEAb+d<`?;od1sQQ[qu<>TU&HLR=Ks"-_o-=YgK2og?u=]=XF=2ICu[\Qug1.MZ@&H?*mh@W&2ltja6=0_gri9l%T%q'/(+:$0\/S7W(508-8,-qBBX](Tu&LWp*@en910,]#`+^52!!mS1JQ,X+3kX?t?#DcYFA6*[QBZk:911T]#I<P_-kq,E+g`9SSL=C<R;,X+k/RVmN^Z?fq6IAk?49rrBK<R\XIQ?G2b*Ahq7%W+?X
 4C_AHYc!d;%7lBpa!s4LXg;+c*eX6nZ(.3ffh%jn*Dd?E179'5RDPWbkehP?2P5pC>?u!T][l=d1Q^abekk\,08SAdh=$m](!W*\J>e27@h"Z^@@%Yd,KJ9fb@,h7<%UL=KLB1*@3'/<g-8=ci2XV$(d\,`Jc<n!,-(-fWCQ@n`,9!9L,8dMY"\7qt:Pjd&A@o]Wf6,'__2W8!VU&rX$V1+,@KmE!rjM,g`q<ciY'32oEr9f$EdRcIouLi5h'!mM)V,i93^/EU`L0+$[mI697mS5s03LNb#,00_'5#>U=p&u"As-:bD)VfnMtc5t_&Wq2NC0-JZn!;%5B3+1&h?/(Pp[Kth?5XV/7Q$AFc:pci0'?F)lEiuRO6U8`07@h$lV7f!H303R:I)lg$LB]G*Zr_dKXs7N''h2n%Qf.18ja1Uo?NPGQU#Z=CP_e%u\I=,$bqEDX6Yr6_H5K>e^85a%0OtgB$T%3KspDZO<&k>//Z`=bF>:o\^4[oV3XP.-'Blph\.m;mEJ+ZsM\bqCAUc(SYGMk"*eF!"oBsW%!"b+?@#lRnR,0nZ`%`:$;4V@7Y2s<,"7^W(NaEWE@j+lGh;CQ9ZQWR.WeHR`UbY^2ogok@"a^&g&R>\6NgrVk<4lPL78B!j9O!N/;$'.aU"l?@[r+*TG0[r<'thU[LghFo&a0QR&?pb1:NAF\5-TP4*XtX5budN`nhhR9`mD^`jJ.be@VMOkI^Kf?hHhBX#eKgBFB=_tmAFpKU]mLNK-@pS"lfU\D@Q1Yj)p4h/BjZYRW%g*??;>2P+\3euSXCgW'Hi$98jVIq<5c6oWUH@PK50?,C8T"sFa&/4:O7oAjHT#AR?,u*$KdHP([YGL?]_(bNljd+N]!c$_^[=GFOd_528WH(c=KH&CRVb.fQ@%J>qKa/F%2cn(?_$1Q5\j9lehFSM,H^,Rp#]p91lds2iKcq<$Op,m]KuuFih<Us!Md8SoE/.G6e<)c4o@!
 khgkF'8(Z"D[/AGJ@P)Y;C=\-Hn^WSG$35!KHROAG@ED@#>7m7m;QJ\03[%l/Zd<L0k7:.j:A7d9B'q<@Vktn^@g0ZMD=eMUf@@N/*/F<pIf>_eU3!dju1J#\$gEbXI":Vo.S/'9a]Xr)/R'\l9Er'E'YQ)gF+?,fZ&(l"X]aSZ2#irejL8#de.r%!3k$:EZ.Ko:.kN;$GSA28_eiUjaeK^R8Y5/PcdZJ?P>.G[.e,A<`o7#%N[KSJAd.C$B`08q,oWojMN;X0Id;M?0nGMGAd[XbA=uUO$5S_BaTC6C(+Be.QG<4E@q[O0#W.24?l*).<U!>O$?-oZ3NB:G5$ZQE9A7NLKZid#t0P+"(K_$7*/2JA_\AGao4"W09_cTq:XZ`^4/?\.b7KlR%,K]Ol@aZ7LmRi=Y@_4G74k8skTl=ML8>ap8iWrl!*ZpLZg).QJ_S0OW*jGduZX"8>^B5os)NbNNHM=ts/jIaAqPI*hjb'meq<<co'b<Iuh5@"i't(6o-3b$dYnq^<rImNo$tEqMD<D!+L=^DH*u&)a^5gPdY>;52$Wl-B1p_S1fJs%jr&Z+*\jDDj9P0pUi^4S2nG3MW26Y_9nArS5f$/GIIeo.YEN%V29GM$lU6Rn[r!eN1DZT~>
+endstream
+endobj
+57 0 obj
+<< /Type /Page
+/Parent 1 0 R
+/MediaBox [ 0 0 612 792 ]
+/Resources 3 0 R
+/Contents 56 0 R
+/Annots 58 0 R
+>>
+endobj
+58 0 obj
+[
+59 0 R
+]
+endobj
+59 0 obj
+<< /Type /Annot
+/Subtype /Link
+/Rect [ 347.964 535.428 442.284 523.428 ]
+/C [ 0 0 0 ]
+/Border [ 0 0 0 ]
+/A << /URI (hod_config_guide.html#3.7+hodring+options)
+/S /URI >>
+/H /I
+>>
+endobj
+60 0 obj
+<< /Length 1598 /Filter [ /ASCII85Decode /FlateDecode ]
+ >>
+stream
+GauHL>B?Pt'Roe[csq5G?jVL;IG8V-c$#tjD;maeI39YKc![2Lf'A\$m1foR!K.f'1JK2;jIGO[NkQb.EPM<p^\MM0cp@SXgA'==BXc`Q:[0d_L+O/D'b%cnIc#PN^Vq3:mYFY*M"ifMe?kaF3&WoNIY?.7!YIVLFKCc$r'hUb`[#7#l\t/K/Sm:2MreM(SbQGHIB"'k>Dp[G^".EafsT5^KW[_`#52u3CD1>lBs0DN9Qt@Ke8eX-j1J[Rm3=omC%6Va_/7b*r"<P([d+pfZn8XLTg/"/lsN5_XEYX_4A1[Fi:l]%@C_uV^'i)-k\mj.ASQaQEk9]7M5?1UMj+uQeT5kDSI(]OCBZkN6CZ8bjdViq:<7U&VE:jmKM@e0F@Io=UJA'RTP5r8DA:f,_CU]BTe>%uk1l!p1k(,nHe.Yn3YXe%.QYoMB@Vc>4#IT9U#o45%BNEF&W#<0X3c6;<B0U8R+9^ipO7+@iq!%kX1nAhq6Sp'p4gj@X+I5SLgn,KW9(.WMee;2pWqRli7RA:ZjULi?!L=-@F6Bri.h3P/?g-ba>@oQ>queRbI>l@igO9O(@W9kE]e9mg.&*C^4pEAij"H7Ilp(i\8)Y_Q2NWaZ#J0J,>='[Op39TMCkYl7mkN[dCDj3s"R`gS^sIu/8?$D)%`@b0jpZKB(qCKGc=_VSg88_6p(l-VWB'n&9^j&eM3G3S5,j<)0_Hd@Oabs/X`#6P\q?S4KnDB%TOc&=>^,S/Rf$e!4+jP4ML$=:jYGr`),`s@h;(6cDi%c9Q/3'!slXGq=^4FFU6u?>aTh3$P-Wm&Vq07?Ga9m!,.ZF(VLFI&!0<f5XI.GN@sn;&JDnRq8gGu#R)%SOtsXTC$M\SLfbYH$-7kAF5<2EA'hdcQpY007qi/YQT5G!bZSJVn+Od^.m)9V3lh2??*^t8`o67-3NB>F.T/<Kh+4M[+.ecuO<j32#m+F`G'[_7oM7$kWh&])qo[+(iU7L
 tZDT>dqP+EEWh4l"RM:'u\Iq1\aUY?]KA#L/T$P[T<B#bL7+7+rpLsRHAkBj+4d"CRLke5(#/D7"&kdk/O8;&-1$u!<3X$=e*K9B-)PI.V7\';IQ?T@D_PAe7<Xd3H3sDdUFNY/#5<cX=6#b(ZoG3u"W,58S!O3cGSs-UD6I1h(Xl1tagqITE:El/uNL-_X^@ogh?(;XE_7g)N4CR1*ZGYb0)EDgYaN5kIU9/rCSk*1Rb:JJen030-<@\kF#VJBC0&,bJGZ0[D+Nh"F.!`eUD^Q)a=_3<B/TT+b8S:UK"P/#4`qO=_Xgtanc[,'D!;_Ig``p-_OH-Uf*+>%e?378^=fiHJ5N]S:K$4%b\@\0gMR0#2kEB8kr@8o=ATI.!h])B-B\QU[DPoK5Og\diW%lad#t.%0[Xub]+Wh);-G<$l-+6Ar?`*EOK]0M>n*V`KpE^h3+5!P=DKs2caB*2R!QOB\Z/P6PCL0]u]SjCWkB>.#$P%"6b6=RMD`%Yr]@6lUAg2E'^N5K,734jcqC0T@"[=!Cm/G1KLg!pu]5WK,pAC!W+^DcmEHN<sai@(3>MFVb*:NtMCg5eIpBr;Tn[1M=oYCD5$rEd^~>
+endstream
+endobj
+61 0 obj
+<< /Type /Page
+/Parent 1 0 R
+/MediaBox [ 0 0 612 792 ]
+/Resources 3 0 R
+/Contents 60 0 R
+/Annots 62 0 R
+>>
+endobj
+62 0 obj
+[
+63 0 R
+]
+endobj
+63 0 obj
+<< /Type /Annot
+/Subtype /Link
+/Rect [ 319.5 637.5 403.99 627.5 ]
+/C [ 0 0 0 ]
+/Border [ 0 0 0 ]
+/A << /URI (hod_config_guide.html#3.7+hodring+options)
+/S /URI >>
+/H /I
+>>
+endobj
+65 0 obj
 <<
  /Title (\376\377\0\61\0\40\0\117\0\166\0\145\0\162\0\166\0\151\0\145\0\167)
- /Parent 50 0 R
- /Next 52 0 R
+ /Parent 64 0 R
+ /Next 66 0 R
  /A 9 0 R
 >> endobj
-52 0 obj
+66 0 obj
 <<
  /Title (\376\377\0\62\0\40\0\120\0\162\0\145\0\55\0\162\0\145\0\161\0\165\0\151\0\163\0\151\0\164\0\145\0\163)
- /Parent 50 0 R
- /Prev 51 0 R
- /Next 53 0 R
+ /Parent 64 0 R
+ /Prev 65 0 R
+ /Next 67 0 R
  /A 11 0 R
 >> endobj
-53 0 obj
+67 0 obj
 <<
  /Title (\376\377\0\63\0\40\0\122\0\145\0\163\0\157\0\165\0\162\0\143\0\145\0\40\0\115\0\141\0\156\0\141\0\147\0\145\0\162)
- /Parent 50 0 R
- /Prev 52 0 R
- /Next 54 0 R
+ /Parent 64 0 R
+ /Prev 66 0 R
+ /Next 68 0 R
  /A 13 0 R
 >> endobj
-54 0 obj
+68 0 obj
 <<
  /Title (\376\377\0\64\0\40\0\111\0\156\0\163\0\164\0\141\0\154\0\154\0\151\0\156\0\147\0\40\0\110\0\117\0\104)
- /Parent 50 0 R
- /Prev 53 0 R
- /Next 55 0 R
+ /Parent 64 0 R
+ /Prev 67 0 R
+ /Next 69 0 R
  /A 15 0 R
 >> endobj
-55 0 obj
+69 0 obj
 <<
  /Title (\376\377\0\65\0\40\0\103\0\157\0\156\0\146\0\151\0\147\0\165\0\162\0\151\0\156\0\147\0\40\0\110\0\117\0\104)
- /Parent 50 0 R
- /First 56 0 R
- /Last 57 0 R
- /Prev 54 0 R
- /Next 58 0 R
+ /Parent 64 0 R
+ /First 70 0 R
+ /Last 71 0 R
+ /Prev 68 0 R
+ /Next 72 0 R
  /Count -2
  /A 17 0 R
 >> endobj
-56 0 obj
+70 0 obj
 <<
  /Title (\376\377\0\65\0\56\0\61\0\40\0\115\0\151\0\156\0\151\0\155\0\141\0\154\0\40\0\103\0\157\0\156\0\146\0\151\0\147\0\165\0\162\0\141\0\164\0\151\0\157\0\156\0\40\0\164\0\157\0\40\0\147\0\145\0\164\0\40\0\163\0\164\0\141\0\162\0\164\0\145\0\144)
- /Parent 55 0 R
- /Next 57 0 R
+ /Parent 69 0 R
+ /Next 71 0 R
  /A 19 0 R
 >> endobj
-57 0 obj
+71 0 obj
 <<
  /Title (\376\377\0\65\0\56\0\62\0\40\0\101\0\144\0\166\0\141\0\156\0\143\0\145\0\144\0\40\0\103\0\157\0\156\0\146\0\151\0\147\0\165\0\162\0\141\0\164\0\151\0\157\0\156)
- /Parent 55 0 R
- /Prev 56 0 R
+ /Parent 69 0 R
+ /Prev 70 0 R
  /A 21 0 R
 >> endobj
-58 0 obj
+72 0 obj
 <<
  /Title (\376\377\0\66\0\40\0\122\0\165\0\156\0\156\0\151\0\156\0\147\0\40\0\110\0\117\0\104)
- /Parent 50 0 R
- /Prev 55 0 R
+ /Parent 64 0 R
+ /Prev 69 0 R
+ /Next 73 0 R
  /A 23 0 R
 >> endobj
-59 0 obj
+73 0 obj
+<<
+ /Title (\376\377\0\67\0\40\0\123\0\165\0\160\0\160\0\157\0\162\0\164\0\151\0\156\0\147\0\40\0\124\0\157\0\157\0\154\0\163\0\40\0\141\0\156\0\144\0\40\0\125\0\164\0\151\0\154\0\151\0\164\0\151\0\145\0\163)
+ /Parent 64 0 R
+ /First 74 0 R
+ /Last 74 0 R
+ /Prev 72 0 R
+ /Count -3
+ /A 25 0 R
+>> endobj
+74 0 obj
+<<
+ /Title (\376\377\0\67\0\56\0\61\0\40\0\154\0\157\0\147\0\143\0\157\0\156\0\144\0\145\0\156\0\163\0\145\0\56\0\160\0\171\0\40\0\55\0\40\0\124\0\157\0\157\0\154\0\40\0\146\0\157\0\162\0\40\0\162\0\145\0\155\0\157\0\166\0\151\0\156\0\147\0\40\0\154\0\157\0\147\0\40\0\146\0\151\0\154\0\145\0\163\0\40\0\165\0\160\0\154\0\157\0\141\0\144\0\145\0\144\0\40\0\164\0\157\0\40\0\104\0\106\0\123)
+ /Parent 73 0 R
+ /First 76 0 R
+ /Last 78 0 R
+ /Count -2
+ /A 27 0 R
+>> endobj
+76 0 obj
+<<
+ /Title (\376\377\0\67\0\56\0\61\0\56\0\61\0\40\0\122\0\165\0\156\0\156\0\151\0\156\0\147\0\40\0\154\0\157\0\147\0\143\0\157\0\156\0\144\0\145\0\156\0\163\0\145\0\56\0\160\0\171)
+ /Parent 74 0 R
+ /Next 78 0 R
+ /A 75 0 R
+>> endobj
+78 0 obj
+<<
+ /Title (\376\377\0\67\0\56\0\61\0\56\0\62\0\40\0\103\0\157\0\155\0\155\0\141\0\156\0\144\0\40\0\114\0\151\0\156\0\145\0\40\0\117\0\160\0\164\0\151\0\157\0\156\0\163\0\40\0\146\0\157\0\162\0\40\0\154\0\157\0\147\0\143\0\157\0\156\0\144\0\145\0\156\0\163\0\145\0\56\0\160\0\171)
+ /Parent 74 0 R
+ /Prev 76 0 R
+ /A 77 0 R
+>> endobj
+79 0 obj
 << /Type /Font
 /Subtype /Type1
 /Name /F3
 /BaseFont /Helvetica-Bold
 /Encoding /WinAnsiEncoding >>
 endobj
-60 0 obj
+80 0 obj
 << /Type /Font
 /Subtype /Type1
 /Name /F5
 /BaseFont /Times-Roman
 /Encoding /WinAnsiEncoding >>
 endobj
-61 0 obj
+81 0 obj
+<< /Type /Font
+/Subtype /Type1
+/Name /F6
+/BaseFont /Times-Italic
+/Encoding /WinAnsiEncoding >>
+endobj
+82 0 obj
 << /Type /Font
 /Subtype /Type1
 /Name /F1
 /BaseFont /Helvetica
 /Encoding /WinAnsiEncoding >>
 endobj
-62 0 obj
+83 0 obj
 << /Type /Font
 /Subtype /Type1
 /Name /F2
 /BaseFont /Helvetica-Oblique
 /Encoding /WinAnsiEncoding >>
 endobj
-63 0 obj
+84 0 obj
 << /Type /Font
 /Subtype /Type1
 /Name /F7
@@ -460,146 +611,191 @@
 endobj
 1 0 obj
 << /Type /Pages
-/Count 5
-/Kids [6 0 R 25 0 R 29 0 R 41 0 R 46 0 R ] >>
+/Count 7
+/Kids [6 0 R 29 0 R 33 0 R 45 0 R 50 0 R 57 0 R 61 0 R ] >>
 endobj
 2 0 obj
 << /Type /Catalog
 /Pages 1 0 R
- /Outlines 50 0 R
+ /Outlines 64 0 R
  /PageMode /UseOutlines
  >>
 endobj
 3 0 obj
 << 
-/Font << /F3 59 0 R /F5 60 0 R /F1 61 0 R /F2 62 0 R /F7 63 0 R >> 
+/Font << /F3 79 0 R /F5 80 0 R /F1 82 0 R /F6 81 0 R /F2 83 0 R /F7 84 0 R >> 
 /ProcSet [ /PDF /ImageC /Text ] >> 
 endobj
 9 0 obj
 <<
 /S /GoTo
-/D [25 0 R /XYZ 85.0 659.0 null]
+/D [29 0 R /XYZ 85.0 659.0 null]
 >>
 endobj
 11 0 obj
 <<
 /S /GoTo
-/D [29 0 R /XYZ 85.0 552.6 null]
+/D [33 0 R /XYZ 85.0 552.6 null]
 >>
 endobj
 13 0 obj
 <<
 /S /GoTo
-/D [29 0 R /XYZ 85.0 258.266 null]
+/D [33 0 R /XYZ 85.0 258.266 null]
 >>
 endobj
 15 0 obj
 <<
 /S /GoTo
-/D [41 0 R /XYZ 85.0 447.0 null]
+/D [45 0 R /XYZ 85.0 447.0 null]
 >>
 endobj
 17 0 obj
 <<
 /S /GoTo
-/D [41 0 R /XYZ 85.0 256.666 null]
+/D [45 0 R /XYZ 85.0 256.666 null]
 >>
 endobj
 19 0 obj
 <<
 /S /GoTo
-/D [41 0 R /XYZ 85.0 204.332 null]
+/D [45 0 R /XYZ 85.0 204.332 null]
 >>
 endobj
 21 0 obj
 <<
 /S /GoTo
-/D [46 0 R /XYZ 85.0 369.0 null]
+/D [50 0 R /XYZ 85.0 369.0 null]
 >>
 endobj
 23 0 obj
 <<
 /S /GoTo
-/D [46 0 R /XYZ 85.0 304.547 null]
+/D [50 0 R /XYZ 85.0 304.547 null]
 >>
 endobj
-50 0 obj
+25 0 obj
 <<
- /First 51 0 R
- /Last 58 0 R
+/S /GoTo
+/D [50 0 R /XYZ 85.0 239.013 null]
+>>
+endobj
+27 0 obj
+<<
+/S /GoTo
+/D [50 0 R /XYZ 85.0 173.479 null]
+>>
+endobj
+64 0 obj
+<<
+ /First 65 0 R
+ /Last 73 0 R
 >> endobj
+75 0 obj
+<<
+/S /GoTo
+/D [57 0 R /XYZ 85.0 615.4 null]
+>>
+endobj
+77 0 obj
+<<
+/S /GoTo
+/D [57 0 R /XYZ 85.0 472.828 null]
+>>
+endobj
 xref
-0 64
+0 85
 0000000000 65535 f 
-0000017091 00000 n 
-0000017177 00000 n 
-0000017269 00000 n 
+0000024926 00000 n 
+0000025026 00000 n 
+0000025118 00000 n 
 0000000015 00000 n 
 0000000071 00000 n 
-0000000830 00000 n 
-0000000950 00000 n 
-0000001024 00000 n 
-0000017392 00000 n 
-0000001159 00000 n 
-0000017455 00000 n 
-0000001296 00000 n 
-0000017519 00000 n 
-0000001431 00000 n 
-0000017585 00000 n 
-0000001568 00000 n 
-0000017649 00000 n 
-0000001704 00000 n 
-0000017715 00000 n 
-0000001841 00000 n 
-0000017781 00000 n 
-0000001977 00000 n 
-0000017845 00000 n 
-0000002114 00000 n 
-0000004487 00000 n 
-0000004610 00000 n 
-0000004637 00000 n 
-0000004865 00000 n 
-0000007408 00000 n 
-0000007531 00000 n 
-0000007614 00000 n 
-0000007787 00000 n 
-0000007969 00000 n 
-0000008151 00000 n 
-0000008336 00000 n 
-0000008517 00000 n 
-0000008718 00000 n 
-0000008935 00000 n 
-0000009156 00000 n 
-0000009375 00000 n 
-0000011826 00000 n 
-0000011949 00000 n 
-0000011983 00000 n 
-0000012211 00000 n 
-0000012442 00000 n 
-0000014375 00000 n 
-0000014498 00000 n 
-0000014532 00000 n 
-0000014707 00000 n 
-0000017911 00000 n 
-0000014878 00000 n 
-0000015017 00000 n 
-0000015206 00000 n 
-0000015407 00000 n 
-0000015596 00000 n 
-0000015831 00000 n 
-0000016145 00000 n 
-0000016378 00000 n 
-0000016535 00000 n 
-0000016648 00000 n 
-0000016758 00000 n 
-0000016866 00000 n 
-0000016982 00000 n 
+0000000965 00000 n 
+0000001085 00000 n 
+0000001173 00000 n 
+0000025252 00000 n 
+0000001308 00000 n 
+0000025315 00000 n 
+0000001445 00000 n 
+0000025379 00000 n 
+0000001580 00000 n 
+0000025445 00000 n 
+0000001717 00000 n 
+0000025509 00000 n 
+0000001853 00000 n 
+0000025575 00000 n 
+0000001990 00000 n 
+0000025641 00000 n 
+0000002126 00000 n 
+0000025705 00000 n 
+0000002263 00000 n 
+0000025771 00000 n 
+0000002399 00000 n 
+0000025837 00000 n 
+0000002536 00000 n 
+0000004909 00000 n 
+0000005032 00000 n 
+0000005059 00000 n 
+0000005287 00000 n 
+0000007830 00000 n 
+0000007953 00000 n 
+0000008036 00000 n 
+0000008209 00000 n 
+0000008391 00000 n 
+0000008573 00000 n 
+0000008758 00000 n 
+0000008939 00000 n 
+0000009140 00000 n 
+0000009357 00000 n 
+0000009578 00000 n 
+0000009797 00000 n 
+0000012248 00000 n 
+0000012371 00000 n 
+0000012405 00000 n 
+0000012633 00000 n 
+0000012864 00000 n 
+0000015233 00000 n 
+0000015356 00000 n 
+0000015404 00000 n 
+0000015579 00000 n 
+0000015750 00000 n 
+0000015958 00000 n 
+0000016131 00000 n 
+0000018845 00000 n 
+0000018968 00000 n 
+0000018995 00000 n 
+0000019190 00000 n 
+0000020881 00000 n 
+0000021004 00000 n 
+0000021031 00000 n 
+0000025903 00000 n 
+0000021219 00000 n 
+0000021358 00000 n 
+0000021547 00000 n 
+0000021748 00000 n 
+0000021937 00000 n 
+0000022172 00000 n 
+0000022486 00000 n 
+0000022719 00000 n 
+0000022890 00000 n 
+0000023199 00000 n 
+0000025954 00000 n 
+0000023676 00000 n 
+0000026018 00000 n 
+0000023918 00000 n 
+0000024259 00000 n 
+0000024372 00000 n 
+0000024482 00000 n 
+0000024593 00000 n 
+0000024701 00000 n 
+0000024817 00000 n 
 trailer
 <<
-/Size 64
+/Size 85
 /Root 2 0 R
 /Info 4 0 R
 >>
 startxref
-17962
+26084
 %%EOF

Modified: hadoop/core/branches/branch-0.17/src/contrib/hod/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/core/branches/branch-0.17/src/contrib/hod/CHANGES.txt?rev=652064&r1=652063&r2=652064&view=diff
==============================================================================
--- hadoop/core/branches/branch-0.17/src/contrib/hod/CHANGES.txt (original)
+++ hadoop/core/branches/branch-0.17/src/contrib/hod/CHANGES.txt Sun Apr 27 22:22:34 2008
@@ -54,6 +54,9 @@
     permissions to update the clusters state file.
     (Vinod Kumar Vavilapalli via yhemanth)
 
+    HADOOP-3304. Fixes the way the logcondense.py utility searches for log
+    files that need to be deleted. (yhemanth)
+
 Release 0.16.2 - 2008-04-02
 
   BUG FIXES

Modified: hadoop/core/branches/branch-0.17/src/contrib/hod/support/logcondense.py
URL: http://svn.apache.org/viewvc/hadoop/core/branches/branch-0.17/src/contrib/hod/support/logcondense.py?rev=652064&r1=652063&r2=652064&view=diff
==============================================================================
--- hadoop/core/branches/branch-0.17/src/contrib/hod/support/logcondense.py (original)
+++ hadoop/core/branches/branch-0.17/src/contrib/hod/support/logcondense.py Sun Apr 27 22:22:34 2008
@@ -1,3 +1,5 @@
+#!/bin/sh
+
 #Licensed to the Apache Software Foundation (ASF) under one
 #or more contributor license agreements.  See the NOTICE file
 #distributed with this work for additional information
@@ -13,7 +15,6 @@
 #WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 #See the License for the specific language governing permissions and
 #limitations under the License.
-#!/bin/sh
 """:"
 work_dir=$(dirname $0)
 base_name=$(basename $0)
@@ -84,8 +85,8 @@
 	     'action'  : "store",
 	     'dest'    : "log",
 	     'metavar' : " ",
-	     'default' : "/user/hod/logs",
-	     'help'    : "directory where the logs are stored"},
+	     'default' : "/user",
+	     'help'    : "directory prefix under which logs are stored per user"},
 
 	    {'short'   : "-n",
 	     'long'    : "--dynamicdfs",
@@ -118,57 +119,64 @@
     deletedNamePrefixes.append('1-tasktracker-*')
     deletedNamePrefixes.append('0-datanode-*')
 
-  cmd = getDfsCommand(options, "-lsr " + options.log)
+  filepath = '%s/\*/hod-logs/' % (options.log)
+  cmd = getDfsCommand(options, "-lsr " + filepath)
   (stdin, stdout, stderr) = popen3(cmd)
   lastjobid = 'none'
   toPurge = { }
   for line in stdout:
-    m = re.match("^(.*?)\s.*$", line)
-    filename = m.group(1)
-    # file name format: <prefix>/<user>/hod-logs/<jobid>/[0-1]-[jobtracker|tasktracker|datanode|namenode|]-hostname-YYYYMMDDtime-random.tar.gz
-    # first strip prefix:
-    if filename.startswith(options.log):
-      filename = filename.lstrip(options.log)
-      if not filename.startswith('/'):
-        filename = '/' + filename
-    else:
-      continue
-    
-    # Now get other details from filename.
-    k = re.match("/(.*)/.*/(.*)/.*-.*-([0-9][0-9][0-9][0-9])([0-9][0-9])([0-9][0-9]).*$", filename)
-    if k:
-      username = k.group(1)
-      jobid =  k.group(2)
-      datetimefile = datetime(int(k.group(3)), int(k.group(4)), int(k.group(5)))
-      datetimenow = datetime.utcnow()
-      diff = datetimenow - datetimefile
-      filedate = k.group(3) + k.group(4) + k.group(5)
-      newdate = datetimenow.strftime("%Y%m%d")
-      print "%s %s %s %d" % (filename,  filedate, newdate, diff.days)
-      
-      # if the cluster is used to bring up dynamic dfs, we must also leave NameNode logs.
-      foundFilteredName = False
-      for name in filteredNames:
-        if filename.find(name) >= 0:
-          foundFilteredName = True
-          break
-
-      if foundFilteredName:
+    try:
+      m = re.match("^(.*?)\s.*$", line)
+      filename = m.group(1)
+      # file name format: <prefix>/<user>/hod-logs/<jobid>/[0-1]-[jobtracker|tasktracker|datanode|namenode|]-hostname-YYYYMMDDtime-random.tar.gz
+      # first strip prefix:
+      if filename.startswith(options.log):
+        filename = filename.lstrip(options.log)
+        if not filename.startswith('/'):
+          filename = '/' + filename
+      else:
         continue
-
-      if (diff.days > options.days):
-        desttodel = filename
-        if not toPurge.has_key(jobid):
-          toPurge[jobid] = options.log.rstrip("/") + "/" + username + "/hod-logs/" + jobid
+    
+      # Now get other details from filename.
+      k = re.match("/(.*)/hod-logs/(.*)/.*-.*-([0-9][0-9][0-9][0-9])([0-9][0-9])([0-9][0-9]).*$", filename)
+      if k:
+        username = k.group(1)
+        jobid =  k.group(2)
+        datetimefile = datetime(int(k.group(3)), int(k.group(4)), int(k.group(5)))
+        datetimenow = datetime.utcnow()
+        diff = datetimenow - datetimefile
+        filedate = k.group(3) + k.group(4) + k.group(5)
+        newdate = datetimenow.strftime("%Y%m%d")
+        print "%s %s %s %d" % (filename,  filedate, newdate, diff.days)
+
+        # if the cluster is used to bring up dynamic dfs, we must also leave NameNode logs.
+        foundFilteredName = False
+        for name in filteredNames:
+          if filename.find(name) >= 0:
+            foundFilteredName = True
+            break
+
+        if foundFilteredName:
+          continue
+
+        if (diff.days > options.days):
+          desttodel = filename
+          if not toPurge.has_key(jobid):
+            toPurge[jobid] = options.log.rstrip("/") + "/" + username + "/hod-logs/" + jobid
+    except Exception, e:
+      print >> sys.stderr, e
 
   for job in toPurge.keys():
-    for prefix in deletedNamePrefixes:
-      cmd = getDfsCommand(options, "-rm " + toPurge[job] + '/' + prefix)
-      print cmd
-      ret = 0
-      ret = os.system(cmd)
-      if (ret != 0):
-        print >> sys.stderr, "Command failed to delete file " + cmd 
+    try:
+      for prefix in deletedNamePrefixes:
+        cmd = getDfsCommand(options, "-rm " + toPurge[job] + '/' + prefix)
+        print cmd
+        ret = 0
+        ret = os.system(cmd)
+        if (ret != 0):
+          print >> sys.stderr, "Command failed to delete file " + cmd 
+    except Exception, e:
+      print >> sys.stderr, e
 	  
 	
 def process_args():
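The heart of the HADOOP-3304 fix above is the new `re.match` pattern that anchors on the `hod-logs` path component instead of guessing at directory depth. The pattern can be exercised in isolation; the following Python 3 sketch transcribes the regex from the diff, while the sample path (user name, job id, host) is purely hypothetical, constructed to match the file-name format described in the script's comment:

```python
import re

# Pattern transcribed from the updated logcondense.py. It expects, after the
# "-l" prefix has been stripped, paths of the form:
#   /<user>/hod-logs/<jobid>/[0-1]-<daemon>-<hostname>-YYYYMMDD<time>-<random>.tar.gz
LOG_RE = re.compile(
    r"/(.*)/hod-logs/(.*)/.*-.*-([0-9][0-9][0-9][0-9])([0-9][0-9])([0-9][0-9]).*$")

# Hypothetical log file path for illustration only.
sample = "/alice/hod-logs/1234.torque.example.com/0-tasktracker-node1-200804271234-ab12.tar.gz"

k = LOG_RE.match(sample)
if k:
    username = k.group(1)                              # "alice"
    jobid = k.group(2)                                 # "1234.torque.example.com"
    filedate = k.group(3) + k.group(4) + k.group(5)    # "20080427"
    print(username, jobid, filedate)
```

In the script itself, the year/month/day groups feed a `datetime` whose difference from `datetime.utcnow()` is compared against the `-d`/`--days` threshold to decide whether the job's logs are purged.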

Modified: hadoop/core/branches/branch-0.17/src/docs/src/documentation/content/xdocs/hod_admin_guide.xml
URL: http://svn.apache.org/viewvc/hadoop/core/branches/branch-0.17/src/docs/src/documentation/content/xdocs/hod_admin_guide.xml?rev=652064&r1=652063&r2=652064&view=diff
==============================================================================
--- hadoop/core/branches/branch-0.17/src/docs/src/documentation/content/xdocs/hod_admin_guide.xml (original)
+++ hadoop/core/branches/branch-0.17/src/docs/src/documentation/content/xdocs/hod_admin_guide.xml Sun Apr 27 22:22:34 2008
@@ -1,238 +1,318 @@
-<?xml version="1.0"?>
-
-<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN"
-          "http://forrest.apache.org/dtd/document-v20.dtd">
-
-
-<document>
-
-  <header>
-    <title> 
-      Hadoop On Demand
-    </title>
-  </header>
-
-  <body>
-<section>
-<title>Overview</title>
-
-<p>The Hadoop On Demand (HOD) project is a system for provisioning and
-managing independent Hadoop MapReduce and HDFS instances on a shared cluster 
-of nodes. HOD is a tool that makes it easy for administrators and users to 
-quickly setup and use Hadoop. It is also a very useful tool for Hadoop developers 
-and testers who need to share a physical cluster for testing their own Hadoop 
-versions.
-</p>
-
-<p>HOD relies on a resource manager (RM) for allocation of nodes that it can use for
-running Hadoop instances. At present it runs with the <a href="ext:hod/torque">Torque
-resource manager</a>.
-</p>
-
-<p>
-The basic system architecture of HOD includes components from:</p>
-<ul>
-  <li>A Resource manager (possibly together with a scheduler),</li>
-  <li>HOD components, and </li>
-  <li>Hadoop Map/Reduce and HDFS daemons.</li>
-</ul>
-
-<p>
-HOD provisions and maintains Hadoop Map/Reduce and, optionally, HDFS instances 
-through interaction with the above components on a given cluster of nodes. A cluster of
-nodes can be thought of as comprising of two sets of nodes:</p>
-<ul>
-  <li>Submit nodes: Users use the HOD client on these nodes to allocate clusters, and then
-use the Hadoop client to submit Hadoop jobs. </li>
-  <li>Compute nodes: Using the resource manager, HOD components are run on these nodes to 
-provision the Hadoop daemons. After that Hadoop jobs run on them.</li>
-</ul>
-
-<p>
-Here is a brief description of the sequence of operations in allocating a cluster and
-running jobs on them.
-</p>
-
-<ul>
-  <li>The user uses the HOD client on the Submit node to allocate a required number of
-cluster nodes, and provision Hadoop on them.</li>
-  <li>The HOD client uses a Resource Manager interface, (qsub, in Torque), to submit a HOD
-process, called the RingMaster, as a Resource Manager job, requesting the user desired number 
-of nodes. This job is submitted to the central server of the Resource Manager (pbs_server, in Torque).</li>
-  <li>On the compute nodes, the resource manager slave daemons, (pbs_moms in Torque), accept
-and run jobs that they are given by the central server (pbs_server in Torque). The RingMaster 
-process is started on one of the compute nodes (mother superior, in Torque).</li>
-  <li>The Ringmaster then uses another Resource Manager interface, (pbsdsh, in Torque), to run
-the second HOD component, HodRing, as distributed tasks on each of the compute
-nodes allocated.</li>
-  <li>The Hodrings, after initializing, communicate with the Ringmaster to get Hadoop commands, 
-and run them accordingly. Once the Hadoop commands are started, they register with the RingMaster,
-giving information about the daemons.</li>
-  <li>All the configuration files needed for Hadoop instances are generated by HOD itself, 
-some obtained from options given by user in its own configuration file.</li>
-  <li>The HOD client keeps communicating with the RingMaster to find out the location of the 
-JobTracker and HDFS daemons.</li>
-</ul>
-
-<p>The rest of the document deals with the steps needed to setup HOD on a physical cluster of nodes.</p>
-
-</section>
-
-<section>
-<title>Pre-requisites</title>
-
-<p>Operating System: HOD is currently tested on RHEL4.<br/>
-Nodes : HOD requires a minimum of 3 nodes configured through a resource manager.<br/></p>
-
-<p> Software </p>
-<p>The following components are to be installed on *ALL* the nodes before using HOD:</p>
-<ul>
- <li>Torque: Resource manager</li>
- <li><a href="ext:hod/python">Python</a> : HOD requires version 2.5.1 of Python.</li>
-</ul>
-
-<p>The following components can be optionally installed for getting better
-functionality from HOD:</p>
-<ul>
- <li><a href="ext:hod/twisted-python">Twisted Python</a>: This can be
-  used for improving the scalability of HOD. If this module is detected to be
-  installed, HOD uses it, else it falls back to default modules.</li>
- <li><a href="ext:site">Hadoop</a>: HOD can automatically
- distribute Hadoop to all nodes in the cluster. However, it can also use a
- pre-installed version of Hadoop, if it is available on all nodes in the cluster.
-  HOD currently supports Hadoop 0.15 and above.</li>
-</ul>
-
-<p>NOTE: HOD configuration requires the location of installs of these
-components to be the same on all nodes in the cluster. It will also
-make the configuration simpler to have the same location on the submit
-nodes.
-</p>
-</section>
-
-<section>
-<title>Resource Manager</title>
-<p>  Currently HOD works with the Torque resource manager, which it uses for its node
-  allocation and job submission. Torque is an open source resource manager from
-  <a href="ext:hod/cluster-resources">Cluster Resources</a>, a community effort
-  based on the PBS project. It provides control over batch jobs and distributed compute nodes. Torque is
-  freely available for download from <a href="ext:hod/torque-download">here</a>.
-  </p>
-
-<p>  All documentation related to torque can be seen under
-  the section TORQUE Resource Manager <a
-  href="ext:hod/torque-docs">here</a>. You can
-  get wiki documentation from <a
-  href="ext:hod/torque-wiki">here</a>.
-  Users may wish to subscribe to TORQUE’s mailing list or view the archive for questions,
-  comments <a
-  href="ext:hod/torque-mailing-list">here</a>.
-</p>
-
-<p>For using HOD with Torque:</p>
-<ul>
- <li>Install Torque components: pbs_server on one node(head node), pbs_mom on all
-  compute nodes, and PBS client tools on all compute nodes and submit
-  nodes. Perform atleast a basic configuration so that the Torque system is up and
-  running i.e pbs_server knows which machines to talk to. Look <a
-  href="ext:hod/torque-basic-config">here</a>
-  for basic configuration.
-
-  For advanced configuration, see <a
-  href="ext:hod/torque-advanced-config">here</a></li>
- <li>Create a queue for submitting jobs on the pbs_server. The name of the queue is the
-  same as the HOD configuration parameter, resource-manager.queue. The Hod client uses this queue to
-  submit the Ringmaster process as a Torque job.</li>
- <li>Specify a 'cluster name' as a 'property' for all nodes in the cluster.
-  This can be done by using the 'qmgr' command. For example:
-  qmgr -c "set node node properties=cluster-name". The name of the cluster is the same as
-  the HOD configuration parameter, hod.cluster. </li>
- <li>Ensure that jobs can be submitted to the nodes. This can be done by
-  using the 'qsub' command. For example:
-  echo "sleep 30" | qsub -l nodes=3</li>
-</ul>
-
-</section>
-
-<section>
-<title>Installing HOD</title>
-
-<p>Now that the resource manager set up is done, we proceed on to obtaining and
-installing HOD.</p>
-<ul>
- <li>If you are getting HOD from the Hadoop tarball,it is available under the 
-  'contrib' section of Hadoop, under the root  directory 'hod'.</li>
- <li>If you are building from source, you can run ant tar from the Hadoop root
-  directory, to generate the Hadoop tarball, and then pick HOD from there,
-  as described in the point above.</li>
- <li>Distribute the files under this directory to all the nodes in the
-  cluster. Note that the location where the files are copied should be
-  the same on all the nodes.</li>
-  <li>Note that compiling hadoop would build HOD with appropriate permissions 
-  set on all the required script files in HOD.</li>
-</ul>
-</section>
-
-<section>
-<title>Configuring HOD</title>
-
-<p>After HOD installation is done, it has to be configured before we start using
-it.</p>
-<section>
-  <title>Minimal Configuration to get started</title>
-<ul>
- <li>On the node from where you want to run hod, edit the file hodrc
-  which can be found in the &lt;install dir&gt;/conf directory. This file
-  contains the minimal set of values required for running hod.</li>
- <li>
-<p>Specify values suitable to your environment for the following
-  variables defined in the configuration file. Note that some of these
-  variables are defined at more than one place in the file.</p>
-
-  <ul>
-   <li>${JAVA_HOME}: Location of Java for Hadoop. Hadoop supports Sun JDK
-    1.5.x and above.</li>
-   <li>${CLUSTER_NAME}: Name of the cluster which is specified in the
-    'node property' as mentioned in resource manager configuration.</li>
-   <li>${HADOOP_HOME}: Location of Hadoop installation on the compute and
-    submit nodes.</li>
-   <li>${RM_QUEUE}: Queue configured for submiting jobs in the resource
-    manager configuration.</li>
-   <li>${RM_HOME}: Location of the resource manager installation on the
-    compute and submit nodes.</li>
-    </ul>
-</li>
-
-<li>
-<p>The following environment variables *may* need to be set depending on
-  your environment. These variables must be defined where you run the
-  HOD client, and also be specified in the HOD configuration file as the
-  value of the key resource_manager.env-vars. Multiple variables can be
-  specified as a comma separated list of key=value pairs.</p>
-
-  <ul>
-   <li>HOD_PYTHON_HOME: If you install python to a non-default location
-    of the compute nodes, or submit nodes, then, this variable must be
-    defined to point to the python executable in the non-standard
-    location.</li>
-    </ul>
-</li>
-</ul>
-</section>
-
-  <section>
-    <title>Advanced Configuration</title>
-    <p> You can review other configuration options in the file and modify them to suit
- your needs. Refer to the <a href="hod_config_guide.html">Configuration Guide</a> for information about the HOD
- configuration.
-    </p>
-  </section>
-</section>
-
-  <section>
-    <title>Running HOD</title>
-    <p>You can now proceed to <a href="hod_user_guide.html">HOD User Guide</a> for information about how to run HOD,
-    what are the various features, options and for help in trouble-shooting.</p>
-  </section>
-</body>
-</document>
+<?xml version="1.0"?>
+
+<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN"
+          "http://forrest.apache.org/dtd/document-v20.dtd">
+
+
+<document>
+
+  <header>
+    <title> 
+      Hadoop On Demand
+    </title>
+  </header>
+
+  <body>
+<section>
+<title>Overview</title>
+
+<p>The Hadoop On Demand (HOD) project is a system for provisioning and
+managing independent Hadoop MapReduce and HDFS instances on a shared cluster 
+of nodes. HOD is a tool that makes it easy for administrators and users to 
+quickly set up and use Hadoop. It is also a very useful tool for Hadoop developers 
+and testers who need to share a physical cluster for testing their own Hadoop 
+versions.
+</p>
+
+<p>HOD relies on a resource manager (RM) for allocation of nodes that it can use for
+running Hadoop instances. At present it runs with the <a href="ext:hod/torque">Torque
+resource manager</a>.
+</p>
+
+<p>
+The basic system architecture of HOD includes components from:</p>
+<ul>
+  <li>A Resource manager (possibly together with a scheduler),</li>
+  <li>HOD components, and </li>
+  <li>Hadoop Map/Reduce and HDFS daemons.</li>
+</ul>
+
+<p>
+HOD provisions and maintains Hadoop Map/Reduce and, optionally, HDFS instances 
+through interaction with the above components on a given cluster of nodes. A cluster of
+nodes can be thought of as comprising two sets of nodes:</p>
+<ul>
+  <li>Submit nodes: Users use the HOD client on these nodes to allocate clusters, and then
+use the Hadoop client to submit Hadoop jobs. </li>
+  <li>Compute nodes: Using the resource manager, HOD components are run on these nodes to 
+provision the Hadoop daemons. After that Hadoop jobs run on them.</li>
+</ul>
+
+<p>
+Here is a brief description of the sequence of operations in allocating a cluster and
+running jobs on it.
+</p>
+
+<ul>
+  <li>The user uses the HOD client on the Submit node to allocate a required number of
+cluster nodes, and provision Hadoop on them.</li>
+  <li>The HOD client uses a Resource Manager interface, (qsub, in Torque), to submit a HOD
+process, called the RingMaster, as a Resource Manager job, requesting the user's desired number 
+of nodes. This job is submitted to the central server of the Resource Manager (pbs_server, in Torque).</li>
+  <li>On the compute nodes, the resource manager slave daemons, (pbs_moms in Torque), accept
+and run jobs that they are given by the central server (pbs_server in Torque). The RingMaster 
+process is started on one of the compute nodes (mother superior, in Torque).</li>
+  <li>The Ringmaster then uses another Resource Manager interface, (pbsdsh, in Torque), to run
+the second HOD component, HodRing, as distributed tasks on each of the compute
+nodes allocated.</li>
+  <li>The Hodrings, after initializing, communicate with the Ringmaster to get Hadoop commands, 
+and run them accordingly. Once the Hadoop commands are started, they register with the RingMaster,
+giving information about the daemons.</li>
+  <li>All the configuration files needed for Hadoop instances are generated by HOD itself, 
+some obtained from options given by user in its own configuration file.</li>
+  <li>The HOD client keeps communicating with the RingMaster to find out the location of the 
+JobTracker and HDFS daemons.</li>
+</ul>
+
+<p>The rest of the document deals with the steps needed to set up HOD on a physical cluster of nodes.</p>
+
+</section>
+
+<section>
+<title>Pre-requisites</title>
+
+<p>Operating System: HOD is currently tested on RHEL4.<br/>
+Nodes: HOD requires a minimum of 3 nodes configured through a resource manager.<br/></p>
+
+<p> Software </p>
+<p>The following components are to be installed on *ALL* the nodes before using HOD:</p>
+<ul>
+ <li>Torque: Resource manager</li>
+ <li><a href="ext:hod/python">Python</a> : HOD requires version 2.5.1 of Python.</li>
+</ul>
+
+<p>The following components can be optionally installed for getting better
+functionality from HOD:</p>
+<ul>
+ <li><a href="ext:hod/twisted-python">Twisted Python</a>: This can be
+  used to improve the scalability of HOD. If this module is detected to be
+  installed, HOD uses it; otherwise it falls back to the default modules.</li>
+ <li><a href="ext:site">Hadoop</a>: HOD can automatically
+ distribute Hadoop to all nodes in the cluster. However, it can also use a
+ pre-installed version of Hadoop, if it is available on all nodes in the cluster.
+  HOD currently supports Hadoop 0.15 and above.</li>
+</ul>
+
+<p>NOTE: HOD requires these components to be installed at the same location
+on all nodes in the cluster. Keeping the same location on the submit nodes
+also makes the configuration simpler.
+</p>
+</section>
+
+<section>
+<title>Resource Manager</title>
+<p>  Currently HOD works with the Torque resource manager, which it uses for its node
+  allocation and job submission. Torque is an open source resource manager from
+  <a href="ext:hod/cluster-resources">Cluster Resources</a>, a community effort
+  based on the PBS project. It provides control over batch jobs and distributed compute nodes. Torque is
+  freely available for download from <a href="ext:hod/torque-download">here</a>.
+  </p>
+
+<p>  All documentation related to Torque can be seen under
+  the section TORQUE Resource Manager <a
+  href="ext:hod/torque-docs">here</a>. You can
+  get wiki documentation from <a
+  href="ext:hod/torque-wiki">here</a>.
+  Users may wish to subscribe to TORQUE’s mailing list, or view its archive, for questions
+  and comments <a
+  href="ext:hod/torque-mailing-list">here</a>.
+</p>
+
+<p>For using HOD with Torque:</p>
+<ul>
+ <li>Install Torque components: pbs_server on one node (the head node), pbs_mom on all
+  compute nodes, and PBS client tools on all compute nodes and submit
+  nodes. Perform at least a basic configuration so that the Torque system is up and
+  running, i.e., pbs_server knows which machines to talk to. Look <a
+  href="ext:hod/torque-basic-config">here</a>
+  for basic configuration.
+
+  For advanced configuration, see <a
+  href="ext:hod/torque-advanced-config">here</a></li>
+ <li>Create a queue for submitting jobs on the pbs_server. The name of the queue is the
+  same as the HOD configuration parameter, resource-manager.queue. The HOD client uses this queue to
+  submit the Ringmaster process as a Torque job.</li>
+ <li>Specify a 'cluster name' as a 'property' for all nodes in the cluster.
+  This can be done by using the 'qmgr' command. For example:
+  qmgr -c "set node node properties=cluster-name". The name of the cluster is the same as
+  the HOD configuration parameter, hod.cluster. </li>
+ <li>Ensure that jobs can be submitted to the nodes. This can be done by
+  using the 'qsub' command. For example:
+  echo "sleep 30" | qsub -l nodes=3</li>
+</ul>
+
+</section>
+
+<section>
+<title>Installing HOD</title>
+
+<p>Now that the resource manager setup is done, we proceed to obtaining and
+installing HOD.</p>
+<ul>
+ <li>If you are getting HOD from the Hadoop tarball, it is available under the 
+  'contrib' section of Hadoop, under the root directory 'hod'.</li>
+ <li>If you are building from source, you can run 'ant tar' from the Hadoop root
+  directory, to generate the Hadoop tarball, and then pick HOD from there,
+  as described in the point above.</li>
+ <li>Distribute the files under this directory to all the nodes in the
+  cluster. Note that the location where the files are copied should be
+  the same on all the nodes.</li>
+  <li>Note that compiling Hadoop builds HOD with the appropriate permissions 
+  set on all the required script files in HOD.</li>
+</ul>
+</section>
+
+<section>
+<title>Configuring HOD</title>
+
+<p>After HOD installation is done, it has to be configured before we start using
+it.</p>
+<section>
+  <title>Minimal Configuration to get started</title>
+<ul>
+ <li>On the node from where you want to run hod, edit the file hodrc
+  which can be found in the &lt;install dir&gt;/conf directory. This file
+  contains the minimal set of values required for running hod.</li>
+ <li>
+<p>Specify values suitable to your environment for the following
+  variables defined in the configuration file. Note that some of these
+  variables are defined at more than one place in the file.</p>
+
+  <ul>
+   <li>${JAVA_HOME}: Location of Java for Hadoop. Hadoop supports Sun JDK
+    1.5.x and above.</li>
+   <li>${CLUSTER_NAME}: Name of the cluster which is specified in the
+    'node property' as mentioned in resource manager configuration.</li>
+   <li>${HADOOP_HOME}: Location of Hadoop installation on the compute and
+    submit nodes.</li>
+   <li>${RM_QUEUE}: Queue configured for submitting jobs in the resource
+    manager configuration.</li>
+   <li>${RM_HOME}: Location of the resource manager installation on the
+    compute and submit nodes.</li>
+    </ul>
+</li>
+
+<li>
+<p>The following environment variables <em>may</em> need to be set depending on
+  your environment. These variables must be defined where you run the
+  HOD client, and also be specified in the HOD configuration file as the
+  value of the key resource_manager.env-vars. Multiple variables can be
+  specified as a comma separated list of key=value pairs.</p>
+
+  <ul>
+   <li>HOD_PYTHON_HOME: If you install Python to a non-default location
+    on the compute nodes or submit nodes, then this variable must be
+    defined to point to the Python executable in the non-standard
+    location.</li>
+    </ul>
+</li>
+</ul>
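Substituted into hodrc, the variables above might look like the following illustrative fragment. All section names, key names, paths, and values here are assumptions for illustration only; verify them against the sample hodrc shipped in the conf directory of your installation.

```ini
# Illustrative hodrc fragment (not a complete file). Section and key
# names should be verified against the sample hodrc in your installation.
[hod]
java-home        = /usr/lib/jvm/java-1.5.0    ; value for ${JAVA_HOME}
cluster          = my-cluster                 ; value for ${CLUSTER_NAME}

[resource_manager]
queue            = hod-queue                  ; value for ${RM_QUEUE}
batch-home       = /usr/torque                ; value for ${RM_HOME}
env-vars         = HOD_PYTHON_HOME=/usr/local/bin/python

[gridservice-mapred]
pkgs             = /opt/hadoop                ; value for ${HADOOP_HOME}

[gridservice-hdfs]
pkgs             = /opt/hadoop                ; value for ${HADOOP_HOME}
```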
+</section>
+
+  <section>
+    <title>Advanced Configuration</title>
+    <p> You can review other configuration options in the file and modify them to suit
+ your needs. Refer to the <a href="hod_config_guide.html">Configuration Guide</a> for information about the HOD
+ configuration.
+    </p>
+  </section>
+</section>
+
+  <section>
+    <title>Running HOD</title>
+    <p>You can now proceed to the <a href="hod_user_guide.html">HOD User Guide</a> for information about how to run HOD,
+    its various features and options, and for help in troubleshooting.</p>
+  </section>
+
+  <section>
+    <title>Supporting Tools and Utilities</title>
+    <p>This section describes certain supporting tools and utilities that can be used in managing HOD deployments.</p>
+    
+    <section>
+      <title>logcondense.py - Tool for removing log files uploaded to DFS</title>
+      <p>As mentioned in 
+         <a href="hod_user_guide.html#Collecting+and+Viewing+Hadoop+Logs">this section</a> of the
+         <a href="hod_user_guide.html">HOD User Guide</a>, HOD can be configured to upload
+         Hadoop logs to a statically configured HDFS. Over time, the number of logs uploaded
+         to DFS can grow. logcondense.py is a tool that helps administrators clean up
+         log files older than a certain number of days. </p>
+      <section>
+        <title>Running logcondense.py</title>
+        <p>logcondense.py is available under the hod_install_location/support folder. You can either
+        run it using Python, e.g. <em>python logcondense.py</em>, or give execute permissions
+        to the file and run it directly as <em>logcondense.py</em>. If permissions are enabled,
+        logcondense.py must be run by a user who has sufficient permissions to remove files from the
+        locations where log files are uploaded in the DFS. For example, as mentioned in the
+        <a href="hod_config_guide.html#3.7+hodring+options">configuration guide</a>, the logs could
+        be configured to come under the user's home directory in HDFS. In that case, the user
+        running logcondense.py should have super user privileges to remove the files from under
+        all user home directories.</p>
+      </section>
+      <section>
+        <title>Command Line Options for logcondense.py</title>
+        <p>The following command line options are supported for logcondense.py.</p>
+          <table>
+            <tr>
+              <td>Short Option</td>
+              <td>Long option</td>
+              <td>Meaning</td>
+              <td>Example</td>
+            </tr>
+            <tr>
+              <td>-p</td>
+              <td>--package</td>
+              <td>Complete path to the hadoop script. The version of hadoop must be the same as the 
+                  one running HDFS.</td>
+              <td>/usr/bin/hadoop</td>
+            </tr>
+            <tr>
+              <td>-d</td>
+              <td>--days</td>
+              <td>Delete log files older than the specified number of days</td>
+              <td>7</td>
+            </tr>
+            <tr>
+              <td>-c</td>
+              <td>--config</td>
+              <td>Path to the Hadoop configuration directory, under which hadoop-site.xml resides.
+              The hadoop-site.xml must point to the HDFS NameNode from where logs are to be removed.</td>
+              <td>/home/foo/hadoop/conf</td>
+            </tr>
+            <tr>
+              <td>-l</td>
+              <td>--logs</td>
+              <td>An HDFS path; this must be the same HDFS path as specified for the log-destination-uri,
+              as mentioned in the <a href="hod_config_guide.html#3.7+hodring+options">configuration guide</a>,
+              without the hdfs:// URI string</td>
+              <td>/user</td>
+            </tr>
+            <tr>
+              <td>-n</td>
+              <td>--dynamicdfs</td>
+              <td>If true, this indicates that the logcondense.py script should delete HDFS logs
+              in addition to Map/Reduce logs. Otherwise, it deletes only Map/Reduce logs, which is also the
+              default if this option is not specified. This option is useful if dynamic DFS installations
+              are being provisioned by HOD, and the static DFS installation is being used only to collect
+              logs - a scenario that may be common in test clusters.</td>
+              <td>false</td>
+            </tr>
+          </table>
+        <p>For example, to delete all log files older than 7 days using a hadoop-site.xml stored in
+        ~/hadoop-conf, and the Hadoop installation under ~/hadoop-0.17.0, you could run:</p>
+        <p><em>python logcondense.py -p ~/hadoop-0.17.0/bin/hadoop -d 7 -c ~/hadoop-conf -l /user</em></p>
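To run the cleanup periodically, one option is a cron entry along the lines of the following sketch; the schedule and all paths are placeholders for your environment.

```
# Hypothetical crontab entry: every day at 02:00, delete uploaded logs
# older than 7 days (all paths are placeholders).
0 2 * * * python /opt/hod/support/logcondense.py -p /opt/hadoop/bin/hadoop -d 7 -c /opt/hadoop/conf -l /user
```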
+      </section>
+    </section>
+  </section>
+</body>
+</document>


