hadoop-common-commits mailing list archives

From: acmur...@apache.org
Subject: svn commit: r705436 [4/4] - in /hadoop/core/branches/branch-0.18: ./ docs/ src/contrib/hod/ src/docs/src/documentation/content/xdocs/
Date: Fri, 17 Oct 2008 00:57:52 GMT
Modified: hadoop/core/branches/branch-0.18/docs/linkmap.html
URL: http://svn.apache.org/viewvc/hadoop/core/branches/branch-0.18/docs/linkmap.html?rev=705436&r1=705435&r2=705436&view=diff
==============================================================================
--- hadoop/core/branches/branch-0.18/docs/linkmap.html (original)
+++ hadoop/core/branches/branch-0.18/docs/linkmap.html Thu Oct 16 17:57:51 2008
@@ -153,6 +153,9 @@
 <a href="api/index.html">API Docs</a>
 </div>
 <div class="menuitem">
+<a href="jdiff/changes.html">API Changes</a>
+</div>
+<div class="menuitem">
 <a href="http://wiki.apache.org/hadoop/">Wiki</a>
 </div>
 <div class="menuitem">
@@ -308,6 +311,12 @@
     
 <ul>
 <li>
+<a href="jdiff/changes.html">API Changes</a>&nbsp;&nbsp;___________________&nbsp;&nbsp;<em>jdiff</em>
+</li>
+</ul>
+    
+<ul>
+<li>
 <a href="http://wiki.apache.org/hadoop/">Wiki</a>&nbsp;&nbsp;___________________&nbsp;&nbsp;<em>wiki</em>
 </li>
 </ul>

Modified: hadoop/core/branches/branch-0.18/docs/linkmap.pdf
URL: http://svn.apache.org/viewvc/hadoop/core/branches/branch-0.18/docs/linkmap.pdf?rev=705436&r1=705435&r2=705436&view=diff
==============================================================================
--- hadoop/core/branches/branch-0.18/docs/linkmap.pdf (original)
+++ hadoop/core/branches/branch-0.18/docs/linkmap.pdf Thu Oct 16 17:57:51 2008
@@ -5,10 +5,10 @@
 /Producer (FOP 0.20.5) >>
 endobj
 5 0 obj
-<< /Length 1176 /Filter [ /ASCII85Decode /FlateDecode ]
+<< /Length 1177 /Filter [ /ASCII85Decode /FlateDecode ]
  >>
 stream
-Gatn'?#uJp'Sc)T/%A8+U=D2AkrP<B4&+JZmd.o9B1E+cBS+&$iCs9""\Oc=Y)3H3R7SYYSG1FY3BBL12iBo'8VAOCd:U"@rP*.LUUp\eT_V#Rg5g4\!T?VWfgYpGXHf*TU.ZIRk/kSh<Sd,^IN7!rDl*'<hN$u'?"mI+p/Ua'[/%lnZNt!?m)cLl"rpMk[jb;2Rpc2Y$b%_-jqE=5=@q8uKcE0W^"\lO4!UkJH_SY'-1+72]'2!+2%gn^;1P46+AJH[^LGE91Kaj-[kJ9mrAiWA,-U<]/dr6,s8De81rt/$LgpS9'`06E*5`>TpCJ#>$pErMK^@d'lE&a.FV@s/eQaptM+)K>niHYR0Hf3_0Pp^mcorL]F&;FI2AIHe+RX$p8#'!n&;)5nG2'"_^=At*o@]N(R&ff9R%9nF#oU)5);V=Z=tQ3u^t2J.ZbqfjYj9!c.CRge'I9SdUcsEYHZd"C$("@eQq/:(E&MBu!_sD9[E,i+5tEF4k(h:YV02b5VGm:.\.io%IIYd*qkQC^pQKE:WbkIB[kC-DM0TCS5/V*aO9bC!VnVP5/KE\[^/L:P]<orX96m/L*"\Y8<%#qC,]Pg8lJ9b8)2J/[<1PFM;js$Y_7jN)'U^0,8ML*JJ$05_Q7j!'E=)uJ^@j!^V.mG-RC-Dh?ORJbQZ?Fe2[\71IR](Ms*r*/Z+p2fo.G*E*1P4VX?("iq]1[m]&kj!aF'Ek_-jj&0ZF%LE$!=O:]:%Q>9b%4gY(MP!Fiu1L_IUEMLtZ3er:AU1UoNEm.LbIM6s&..5hi%_1MC)M*jirbkE88-Vf8tY:5K$Vb#(c-WnfgOaaCOp6FFdKC[6:AX;"L0Bs2dS^:bV/O-pipkD/tgr7=OREH6WHSnNQBJ>eP<qg>e:nQ@RouFo(":n+-0L`.2\*)^p`jar9R`:T?:f?'S<SDOXDCqu^^OS$YMMnB7>)Q=@/.Ug[B3@jS3r@lk>+^qd0cdth4N0d
 UPr1VA8Wn$:\5diTM!FC:8o@[t;dV3TBleOdcctHX\1JO]I1_E(bT]*H6\&=haCr2q?+fTD)6eArSo,DY*)l!)18+VFLU/BTJAkQ&<U[[]lE8Loota_Rd+H5I`m1\RiS6FEpEF\Ul[#TR7H9>3f3OXPhm:+DbrX0o#4:"+9677#a;g.;rrIn0%IF~>
+Gatn'?#uJp'Sc)T/%A8+U=D2AkrMPN4&+JZhV!g1B1E+cBS4,%iCs:M$C!DG=qqQC16Y7<3:A*V*1_^gD\g)nP6b(eUWmr_qg[9#7X#4S61^#/Zle<B!lC.7Z\uP&XHf*TU.ZIRk/kSh<Sd,^IN7!rDl*'<hN$u'?"mI+p/Ua/[/%TfZNt!?m)cLl"rpMk[jb;2Rpc2Y$b%_-jqE=5=@q8uKcE0W^"\lO4!UkJH_SY'-1+72]'2!+2%gn^;1P46+AJH[^LGE90j+YVARQP[qYtBhM7g"g\M,uKs7>9sd3/J*+H@%);s^!\EstEDhEt*B0_DS&&[IV8Wj?SUp.]4#=*)ap,o4`?b3SCqc@C.(^p*VR5l5efbsBIlfi3n1K"/!]+,s$X5n&nVfkmXmqc8VEcY$a2`E\Z5a6WpL9un/?cfeD@f4rut%_e7?S*bC;\6ZjqSmlIe:-in]HKm<-G?<D^O`8uaL%44I$>5!g.H!&Q<Ju5dm=^_0@!qa>J%"dV^?nmK"tg(LEF8W#gchZ4_YkaLSSD@Bg$K/E;ds'bM9g,Z2<;Sunu6Q>=pkR[RI=+ObUG,crDK[l@h&c>I&`Cm6'=2IH1c&fnr1J8,H=M'ZRO-lKOA.C4,Q^ong#'$mWu&E=dC`&eStp%V7uXPX%Li[MDBOng:nII2G;2@_;$GLe)qSdSH%c62e%:4=ls#PPT61<335/Rqth%dqRWOBC1KNQMI8-n5+#l1E&`o$a]&5QqUUHla`]R%Aq1_<&)ourS>'D=X`YkB3=P?$8\9C9Gp;WhFc+F':GAJ*>Jq\]*TjOK0bl'%f[gg-UJmtcp'IV'B;?WMZ$dpB[SWR[Yfnl7l:$-<V@sl,XQ?A;*PS5GS@eTAW4>#FYufCrL^%A2UK\J-\iXEF\VYOubPdp2m<4X(V(<H?J#RcPS#APsI-ZiG\LAS`?A#4iH$^luCO@WRVJ0-P%#,^p8pd3N&&2/khn^->l-E9ZXi2_
 [Op"/1ZRhqdSJ1lfXj:smQk&"BSmiqfar2;WHa?j;$s*VMG;"P&Lfm+NE<n7V)B!B.[u8l:'a$A?_:',dJn)rn)[=fAkAV35"L-N.Xm5lZ7bnf*]K>nG)&di=_UuD>.%C[#j^)J8IpI!cUM-0[laj.fiF<W_YM&clFp@?bkt=t)+(Pl$WW*4dq[BD~>
 endstream
 endobj
 6 0 obj
@@ -20,10 +20,10 @@
 >>
 endobj
 7 0 obj
-<< /Length 316 /Filter [ /ASCII85Decode /FlateDecode ]
+<< /Length 388 /Filter [ /ASCII85Decode /FlateDecode ]
  >>
 stream
-Gaqcq4\rsL&;GE/MAt"F@peTV*be%J_uQer?uW*/7\k#p%#BCo'3Z]EVrr0rIX#AEm1)E\!Oa*Z9#?.Ua$ChGNB&DY;iV(K).'Us(&\JW:ap//)BjI/RZFdj(Gk@eol;Ps$f$I`qBkndGQ[h.mOe5SXUV&IR45-]#WVm[.&$6/OCtj:U@N*LCJ:/UjZ3VkW)f7L:Tdt!j2G`2ap6arCMKc#S2KG,;R;[^74QO]V7*)rmAf^k7m3D<ZWEB=,%OAAA5@Z+]@O[AGoqq=j1<oS(VZ?5McFHSm&[?;(qoeaqqcPSr^`s%qZFKMB>X~>
+Gar?-5uWCi&;BTNMESBQA$TVY349iSE'3>@2sYQp)^o2(Q(j<=h>jY\L"#d(p[-_+S*5#e+HURs=i;JE"E=AJ&?6H[.AjTXF+i_u)Ngn9,6W#dBehai4CJbY]e+;Lp&abZf_g5rn,hb/4/uZ.X';Z].4*#`gM>,.)+)2N=="P^Jh^pN1P'7I[43*HcD<Bk[6g6C']n)sSjo0RYD;XT<Aq]_<Aq!rY3'pTHLKFK[c%A@9G>K"3bLu[434.oB\r/Ur#M"!G]=?`.TrK!;a/o_m#OG+_(YQPl*R8U.RYgf"Guc%7GhiWS\FD6h`"iUUg<s<8IU9Bl[d::%59lsD9/b-CVFU*.*LnNgtAR'8f!ZaVsg>N4pUH"Y5WN)2XKi:]EsU>~>
 endstream
 endobj
 8 0 obj
@@ -87,19 +87,19 @@
 xref
 0 14
 0000000000 65535 f 
-0000002515 00000 n 
-0000002579 00000 n 
-0000002629 00000 n 
+0000002588 00000 n 
+0000002652 00000 n 
+0000002702 00000 n 
 0000000015 00000 n 
 0000000071 00000 n 
-0000001339 00000 n 
-0000001445 00000 n 
-0000001852 00000 n 
-0000001958 00000 n 
-0000002070 00000 n 
-0000002180 00000 n 
-0000002291 00000 n 
-0000002399 00000 n 
+0000001340 00000 n 
+0000001446 00000 n 
+0000001925 00000 n 
+0000002031 00000 n 
+0000002143 00000 n 
+0000002253 00000 n 
+0000002364 00000 n 
+0000002472 00000 n 
 trailer
 <<
 /Size 14
@@ -107,5 +107,5 @@
 /Info 4 0 R
 >>
 startxref
-2751
+2824
 %%EOF

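The linkmap.pdf hunks above are mechanical: the two content streams grew (object 5's /Length went 1176 to 1177, object 7's 316 to 388), so every byte offset recorded after each growing stream in the xref table, and the startxref pointer, shifts by the accumulated growth. A quick arithmetic check, using only numbers taken from the diff:

```python
# Stream growth read off the diff above.
growth_obj5 = 1177 - 1176   # object 5's stream grew by 1 byte
growth_obj7 = 388 - 316     # object 7's stream grew by 72 bytes

# xref entries between the two streams shift by object 5's growth only:
assert 1339 + growth_obj5 == 1340
assert 1445 + growth_obj5 == 1446

# entries after both streams shift by the combined 1 + 72 = 73 bytes:
total = growth_obj5 + growth_obj7
assert 1852 + total == 1925
assert 2515 + total == 2588

# the xref table itself moves by the same amount, hence startxref:
assert 2751 + total == 2824
print("offsets consistent")
```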
Modified: hadoop/core/branches/branch-0.18/docs/mapred_tutorial.html
URL: http://svn.apache.org/viewvc/hadoop/core/branches/branch-0.18/docs/mapred_tutorial.html?rev=705436&r1=705435&r2=705436&view=diff
==============================================================================
--- hadoop/core/branches/branch-0.18/docs/mapred_tutorial.html (original)
+++ hadoop/core/branches/branch-0.18/docs/mapred_tutorial.html Thu Oct 16 17:57:51 2008
@@ -153,6 +153,9 @@
 <a href="api/index.html">API Docs</a>
 </div>
 <div class="menuitem">
+<a href="jdiff/changes.html">API Changes</a>
+</div>
+<div class="menuitem">
 <a href="http://wiki.apache.org/hadoop/">Wiki</a>
 </div>
 <div class="menuitem">

Modified: hadoop/core/branches/branch-0.18/docs/native_libraries.html
URL: http://svn.apache.org/viewvc/hadoop/core/branches/branch-0.18/docs/native_libraries.html?rev=705436&r1=705435&r2=705436&view=diff
==============================================================================
--- hadoop/core/branches/branch-0.18/docs/native_libraries.html (original)
+++ hadoop/core/branches/branch-0.18/docs/native_libraries.html Thu Oct 16 17:57:51 2008
@@ -153,6 +153,9 @@
 <a href="api/index.html">API Docs</a>
 </div>
 <div class="menuitem">
+<a href="jdiff/changes.html">API Changes</a>
+</div>
+<div class="menuitem">
 <a href="http://wiki.apache.org/hadoop/">Wiki</a>
 </div>
 <div class="menuitem">

Modified: hadoop/core/branches/branch-0.18/docs/quickstart.html
URL: http://svn.apache.org/viewvc/hadoop/core/branches/branch-0.18/docs/quickstart.html?rev=705436&r1=705435&r2=705436&view=diff
==============================================================================
--- hadoop/core/branches/branch-0.18/docs/quickstart.html (original)
+++ hadoop/core/branches/branch-0.18/docs/quickstart.html Thu Oct 16 17:57:51 2008
@@ -153,6 +153,9 @@
 <a href="api/index.html">API Docs</a>
 </div>
 <div class="menuitem">
+<a href="jdiff/changes.html">API Changes</a>
+</div>
+<div class="menuitem">
 <a href="http://wiki.apache.org/hadoop/">Wiki</a>
 </div>
 <div class="menuitem">

Modified: hadoop/core/branches/branch-0.18/docs/streaming.html
URL: http://svn.apache.org/viewvc/hadoop/core/branches/branch-0.18/docs/streaming.html?rev=705436&r1=705435&r2=705436&view=diff
==============================================================================
--- hadoop/core/branches/branch-0.18/docs/streaming.html (original)
+++ hadoop/core/branches/branch-0.18/docs/streaming.html Thu Oct 16 17:57:51 2008
@@ -156,6 +156,9 @@
 <a href="api/index.html">API Docs</a>
 </div>
 <div class="menuitem">
+<a href="jdiff/changes.html">API Changes</a>
+</div>
+<div class="menuitem">
 <a href="http://wiki.apache.org/hadoop/">Wiki</a>
 </div>
 <div class="menuitem">

Modified: hadoop/core/branches/branch-0.18/src/contrib/hod/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/core/branches/branch-0.18/src/contrib/hod/CHANGES.txt?rev=705436&r1=705435&r2=705436&view=diff
==============================================================================
--- hadoop/core/branches/branch-0.18/src/contrib/hod/CHANGES.txt (original)
+++ hadoop/core/branches/branch-0.18/src/contrib/hod/CHANGES.txt Thu Oct 16 17:57:51 2008
@@ -1,5 +1,12 @@
 HOD Change Log
 
+Release 0.18.2 - Unreleased 
+
+  BUG FIXES
+
+    HADOOP-3786. Use HDFS instead of DFS in all docs and hyperlink to Torque.
+    (Vinod Kumar Vavilapalli via acmurthy)
+
 Release 0.18.1 - 2008-09-17
 
   INCOMPATIBLE CHANGES

Modified: hadoop/core/branches/branch-0.18/src/docs/src/documentation/content/xdocs/hod_admin_guide.xml
URL: http://svn.apache.org/viewvc/hadoop/core/branches/branch-0.18/src/docs/src/documentation/content/xdocs/hod_admin_guide.xml?rev=705436&r1=705435&r2=705436&view=diff
==============================================================================
--- hadoop/core/branches/branch-0.18/src/docs/src/documentation/content/xdocs/hod_admin_guide.xml (original)
+++ hadoop/core/branches/branch-0.18/src/docs/src/documentation/content/xdocs/hod_admin_guide.xml Thu Oct 16 17:57:51 2008
@@ -89,7 +89,7 @@
 <p> Software </p>
 <p>The following components must be installed on ALL nodes before using HOD:</p>
 <ul>
- <li>Torque: Resource manager</li>
+ <li><a href="ext:hod/torque">Torque: Resource manager</a></li>
 <li><a href="ext:hod/python">Python</a> : HOD requires version 2.5.1 of Python.</li>
 </ul>
 

Modified: hadoop/core/branches/branch-0.18/src/docs/src/documentation/content/xdocs/hod_config_guide.xml
URL: http://svn.apache.org/viewvc/hadoop/core/branches/branch-0.18/src/docs/src/documentation/content/xdocs/hod_config_guide.xml?rev=705436&r1=705435&r2=705436&view=diff
==============================================================================
--- hadoop/core/branches/branch-0.18/src/docs/src/documentation/content/xdocs/hod_config_guide.xml (original)
+++ hadoop/core/branches/branch-0.18/src/docs/src/documentation/content/xdocs/hod_config_guide.xml Thu Oct 16 17:57:51 2008
@@ -68,7 +68,15 @@
         <ul>
           <li>temp-dir: Temporary directory for usage by the HOD processes. Make 
                       sure that the users who will run hod have rights to create 
-                      directories under the directory specified here.</li>
+                      directories under the directory specified here. If you
+                      wish to make this directory vary across allocations,
+                      you can make use of the environmental variables which will
+                      be made available by the resource manager to the HOD
+                      processes. For example, in a Torque setup, having
+                      --ringmaster.temp-dir=/tmp/hod-temp-dir.$PBS_JOBID would
+                      let ringmaster use different temp-dir for each
+                      allocation; Torque expands this variable before starting
+                      the ringmaster.</li>
           
           <li>debug: Numeric value from 1-4. 4 produces the most log information,
                    and 1 the least.</li>
@@ -147,6 +155,18 @@
                       variable 'HOD_PYTHON_HOME' to the path to the python 
                       executable. The HOD processes launched on the compute nodes
                       can then use this variable.</li>
+          <li>options: Comma-separated list of key-value pairs,
+                      expressed as
+                      &lt;option&gt;:&lt;sub-option&gt;=&lt;value&gt;. When
+                      passing to the job submission program, these are expanded
+                      as -&lt;option&gt; &lt;sub-option&gt;=&lt;value&gt;. These
+                      are generally used for specifying additional resource
+                      constraints for scheduling. For instance, with a Torque
+                      setup, one can specify
+                      --resource_manager.options='l:arch=x86_64' for
+                      constraining the nodes being allocated to a particular
+                      architecture; this option will be passed to Torque's qsub
+                      command as "-l arch=x86_64".</li>
         </ul>
       </section>
       

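The expansion rule that the new resource_manager.options text describes can be sketched in a few lines of Python. This is an illustration of the documented rule, not HOD's actual implementation; `expand_rm_options` is a hypothetical helper name:

```python
def expand_rm_options(options: str) -> list:
    """Expand a comma-separated list of <option>:<sub-option>=<value>
    pairs into qsub-style arguments, per the rule in the docs above:
    'l:arch=x86_64' becomes ['-l', 'arch=x86_64']."""
    args = []
    for pair in options.split(","):
        option, _, suboption = pair.partition(":")
        args.extend(["-" + option, suboption])
    return args

print(expand_rm_options("l:arch=x86_64"))            # ['-l', 'arch=x86_64']
print(expand_rm_options("l:arch=x86_64,l:nodes=3"))  # two -l constraints
```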
Modified: hadoop/core/branches/branch-0.18/src/docs/src/documentation/content/xdocs/hod_user_guide.xml
URL: http://svn.apache.org/viewvc/hadoop/core/branches/branch-0.18/src/docs/src/documentation/content/xdocs/hod_user_guide.xml?rev=705436&r1=705435&r2=705436&view=diff
==============================================================================
--- hadoop/core/branches/branch-0.18/src/docs/src/documentation/content/xdocs/hod_user_guide.xml (original)
+++ hadoop/core/branches/branch-0.18/src/docs/src/documentation/content/xdocs/hod_user_guide.xml Thu Oct 16 17:57:51 2008
@@ -258,7 +258,7 @@
       </tr>
       <tr>
         <td> 7 </td>
-        <td> DFS failure </td>
+        <td> HDFS failure </td>
       </tr>
       <tr>
         <td> 8 </td>
@@ -376,7 +376,7 @@
  <section><title><code>hod</code> Hangs During Deallocation </title><anchor id="_hod_Hangs_During_Deallocation"></anchor><anchor id="hod_Hangs_During_Deallocation"></anchor>
  <p><em>Possible Cause:</em> A Torque related problem, usually load on the Torque server, or the allocation is very large. Generally, waiting for the command to complete is the only option.</p>
  </section>
-  <section><title><code>hod</code> Fails With an error code and error message </title><anchor id="hod_Fails_With_an_error_code_and"></anchor><anchor id="_hod_Fails_With_an_error_code_an"></anchor>
+  <section><title><code>hod</code> Fails With an Error Code and Error Message </title><anchor id="hod_Fails_With_an_error_code_and"></anchor><anchor id="_hod_Fails_With_an_error_code_an"></anchor>
  <p>If the exit code of the <code>hod</code> command is not <code>0</code>, then refer to the following table of error exit codes to determine why the code may have occurred and how to debug the situation.</p>
   <p><strong> Error Codes </strong></p><anchor id="Error_Codes"></anchor>
   <table>
@@ -429,14 +429,14 @@
       </tr>
       <tr>
         <td> 7 </td>
-        <td> DFS failure </td>
-        <td> When HOD fails to allocate due to DFS failures (or Job tracker failures, error code 8, see below), it prints a failure message "Hodring at &lt;hostname&gt; failed with following errors:" and then gives the actual error message, which may indicate one of the following:<br/>
+        <td> HDFS failure </td>
+        <td> When HOD fails to allocate due to HDFS failures (or Job tracker failures, error code 8, see below), it prints a failure message "Hodring at &lt;hostname&gt; failed with following errors:" and then gives the actual error message, which may indicate one of the following:<br/>
          1. Problem in starting Hadoop clusters. Usually the actual cause in the error message will indicate the problem on the hostname mentioned. Also, review the Hadoop related configuration in the HOD configuration files. Look at the Hadoop logs using information specified in <em>Collecting and Viewing Hadoop Logs</em> section above. <br />
          2. Invalid configuration on the node running the hodring, specified by the hostname in the error message <br/>
          3. Invalid configuration in the <code>hodring</code> section of hodrc. <code>ssh</code> to the hostname specified in the error message and grep for <code>ERROR</code> or <code>CRITICAL</code> in hodring logs. Refer to the section <em>Locating Hodring Logs</em> below for more information. <br />
          4. Invalid tarball specified which is not packaged correctly. <br />
          5. Cannot communicate with an externally configured HDFS.<br/>
-          When such DFS or Job tracker failure occurs, one can login into the host with hostname mentioned in HOD failure message and debug the problem. While fixing the problem, one should also review other log messages in the ringmaster log to see which other machines also might have had problems bringing up the jobtracker/namenode, apart from the hostname that is reported in the failure message. This possibility of other machines also having problems occurs because HOD continues to try and launch hadoop daemons on multiple machines one after another depending upon the value of the configuration variable <a href="hod_config_guide.html#3.4+ringmaster+options">ringmaster.max-master-failures</a>. Refer to the section <em>Locating Ringmaster Logs</em> below to find more about ringmaster logs.
+          When such HDFS or Job tracker failure occurs, one can login into the host with hostname mentioned in HOD failure message and debug the problem. While fixing the problem, one should also review other log messages in the ringmaster log to see which other machines also might have had problems bringing up the jobtracker/namenode, apart from the hostname that is reported in the failure message. This possibility of other machines also having problems occurs because HOD continues to try and launch hadoop daemons on multiple machines one after another depending upon the value of the configuration variable <a href="hod_config_guide.html#3.4+ringmaster+options">ringmaster.max-master-failures</a>. Refer to the section <em>Locating Ringmaster Logs</em> below to find more about ringmaster logs.
          </td>
           </td>
       </tr>
       <tr>
@@ -482,6 +482,21 @@
       </tr>
   </table>
     </section>
+  <section><title>Hadoop DFSClient Warns with a
+  NotReplicatedYetException</title>
+  <p>Sometimes, when you try to upload a file to the HDFS immediately after
+  allocating a HOD cluster, DFSClient warns with a NotReplicatedYetException. It
+  usually shows a message something like - </p><table><tr><td><code>WARN
+  hdfs.DFSClient: NotReplicatedYetException sleeping &lt;filename&gt; retries
+  left 3</code></td></tr><tr><td><code>08/01/25 16:31:40 INFO hdfs.DFSClient:
+  org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
+  &lt;filename&gt; could only be replicated to 0 nodes, instead of
+  1</code></td></tr></table><p> This scenario arises when you try to upload a file
+  to the HDFS while the DataNodes are still in the process of contacting the
+  NameNode. This can be resolved by waiting for some time before uploading a new
+  file to the HDFS, so that enough DataNodes start and contact the
+  NameNode.</p>
+  </section>
  <section><title> Hadoop Jobs Not Running on a Successfully Allocated Cluster </title><anchor id="Hadoop_Jobs_Not_Running_on_a_Suc"></anchor>
  <p>This scenario generally occurs when a cluster is allocated, and is left inactive for some time, and then hadoop jobs are attempted to be run on them. Then Hadoop jobs fail with the following exception:</p>
  <table><tr><td><code>08/01/25 16:31:40 INFO ipc.Client: Retrying connect to server: foo.bar.com/1.1.1.1:53567. Already tried 1 time(s).</code></td></tr></table>
@@ -502,7 +517,7 @@
  <p><em>Possible Cause:</em> Version 0.16 of hadoop is required for this functionality to work. The version of Hadoop used does not match. Use the required version of Hadoop.</p>
  <p><em>Possible Cause:</em> The deallocation was done without using the <code>hod</code> command; for e.g. directly using <code>qdel</code>. When the cluster is deallocated in this manner, the HOD processes are terminated using signals. This results in the exit code to be based on the signal number, rather than the exit code of the program.</p>
     </section>
-  <section><title> The Hadoop Logs are Not Uploaded to DFS </title><anchor id="The_Hadoop_Logs_are_Not_Uploaded"></anchor>
+  <section><title> The Hadoop Logs are Not Uploaded to HDFS </title><anchor id="The_Hadoop_Logs_are_Not_Uploaded"></anchor>
  <p><em>Possible Cause:</em> There is a version mismatch between the version of the hadoop being used for uploading the logs and the external HDFS. Ensure that the correct version is specified in the <code>hodring.pkgs</code> option.</p>
    </section>
  <section><title> Locating Ringmaster Logs </title><anchor id="Locating_Ringmaster_Logs"></anchor>

