Delivered-To: apmail-hadoop-core-commits-archive@www.apache.org
Received: (qmail 48066 invoked from network); 21 Oct 2008 21:43:24 -0000
Received: from hermes.apache.org (HELO mail.apache.org) (140.211.11.2) by minotaur.apache.org with SMTP; 21 Oct 2008 21:43:24 -0000
Received: (qmail 57326 invoked by uid 500); 21 Oct 2008 21:43:25 -0000
Delivered-To: apmail-hadoop-core-commits-archive@hadoop.apache.org
Received: (qmail 57298 invoked by uid 500); 21 Oct 2008 21:43:25 -0000
Mailing-List: contact core-commits-help@hadoop.apache.org; run by ezmlm
Precedence: bulk
Reply-To: core-dev@hadoop.apache.org
Delivered-To: mailing list core-commits@hadoop.apache.org
Received: (qmail 57270 invoked by uid 99); 21 Oct 2008 21:43:25 -0000
Received: from athena.apache.org (HELO athena.apache.org) (140.211.11.136) by apache.org (qpsmtpd/0.29) with ESMTP; Tue, 21 Oct 2008 14:43:25 -0700
X-ASF-Spam-Status: No, hits=-1999.0 required=10.0 tests=ALL_TRUSTED,HTTP_EXCESSIVE_ESCAPES,OBSCURED_EMAIL
X-Spam-Check-By: apache.org
Received: from [140.211.11.4] (HELO eris.apache.org) (140.211.11.4) by apache.org (qpsmtpd/0.29) with ESMTP; Tue, 21 Oct 2008 21:42:12 +0000
Received: by eris.apache.org (Postfix, from userid 65534) id 0BD0F23889B2; Tue, 21 Oct 2008 14:42:51 -0700 (PDT)
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: svn commit: r706781 - in /hadoop/core/trunk: CHANGES.txt docs/libhdfs.html docs/libhdfs.pdf src/docs/src/documentation/content/xdocs/libhdfs.xml src/docs/src/documentation/content/xdocs/site.xml
Date: Tue, 21 Oct 2008 21:42:50 -0000
To: core-commits@hadoop.apache.org
From: cutting@apache.org
X-Mailer: svnmailer-1.0.8
Message-Id: <20081021214251.0BD0F23889B2@eris.apache.org>
X-Virus-Checked: Checked by ClamAV on apache.org

Author: cutting
Date: Tue Oct 21 14:42:50 2008
New Revision: 706781

URL: http://svn.apache.org/viewvc?rev=706781&view=rev
Log:
HADOOP-4105.  Add Forrest documentation for libhdfs.

Added:
    hadoop/core/trunk/docs/libhdfs.html
    hadoop/core/trunk/docs/libhdfs.pdf
    hadoop/core/trunk/src/docs/src/documentation/content/xdocs/libhdfs.xml
Modified:
    hadoop/core/trunk/CHANGES.txt
    hadoop/core/trunk/src/docs/src/documentation/content/xdocs/site.xml

Modified: hadoop/core/trunk/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/core/trunk/CHANGES.txt?rev=706781&r1=706780&r2=706781&view=diff
==============================================================================
--- hadoop/core/trunk/CHANGES.txt (original)
+++ hadoop/core/trunk/CHANGES.txt Tue Oct 21 14:42:50 2008
@@ -523,6 +523,9 @@
     HADOOP-4438. Update forrest documentation to include missing FsShell
    commands. (Suresh Srinivas via cdouglas)
 
+    HADOOP-4105. Add forrest documentation for libhdfs.
+    (Pete Wyckoff via cutting)
+
   OPTIMIZATIONS
 
     HADOOP-3556. Removed lock contention in MD5Hash by changing the

Added: hadoop/core/trunk/docs/libhdfs.html
URL: http://svn.apache.org/viewvc/hadoop/core/trunk/docs/libhdfs.html?rev=706781&view=auto
==============================================================================
--- hadoop/core/trunk/docs/libhdfs.html (added)
+++ hadoop/core/trunk/docs/libhdfs.html Tue Oct 21 14:42:50 2008
@@ -0,0 +1,329 @@
[Generated HTML page chrome omitted; the rendered document text follows.]

C API to HDFS: libhdfs
libhdfs is a JNI-based C API for Hadoop's DFS. It provides C APIs for a subset of the HDFS APIs to manipulate DFS files and the filesystem. libhdfs is part of the Hadoop distribution and comes pre-compiled as ${HADOOP_HOME}/libhdfs/libhdfs.so.
The APIs
The libhdfs APIs are a subset of the Hadoop FileSystem APIs (org.apache.hadoop.fs.FileSystem).
The header file for libhdfs describes each API in detail and is available at ${HADOOP_HOME}/src/c++/libhdfs/hdfs.h.
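
For orientation, a few of the core declarations from hdfs.h are sketched below. These signatures are quoted from memory of this era's header and should be treated as illustrative; the header file itself is authoritative.

hdfsFS   hdfsConnect(const char* host, tPort port);
int      hdfsDisconnect(hdfsFS fs);
hdfsFile hdfsOpenFile(hdfsFS fs, const char* path, int flags,
                      int bufferSize, short replication, tSize blocksize);
int      hdfsCloseFile(hdfsFS fs, hdfsFile file);
tSize    hdfsRead(hdfsFS fs, hdfsFile file, void* buffer, tSize length);
tSize    hdfsWrite(hdfsFS fs, hdfsFile file, void* buffer, tSize length);
int      hdfsFlush(hdfsFS fs, hdfsFile file);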
A sample program
+#include "hdfs.h" 
+
+int main(int argc, char **argv) {
+
+    hdfsFS fs = hdfsConnect("default", 0);
+    const char* writePath = "/tmp/testfile.txt";
+    hdfsFile writeFile = hdfsOpenFile(fs, writePath, O_WRONLY|O_CREAT, 0, 0, 0);
+    if(!writeFile) {
+          fprintf(stderr, "Failed to open %s for writing!\n", writePath);
+          exit(-1);
+    }
+    char* buffer = "Hello, World!";
+    tSize num_written_bytes = hdfsWrite(fs, writeFile, (void*)buffer, strlen(buffer)+1);
+    if (hdfsFlush(fs, writeFile)) {
+           fprintf(stderr, "Failed to 'flush' %s\n", writePath); 
+          exit(-1);
+    }
+   hdfsCloseFile(fs, writeFile);
+}
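
A natural companion, not part of the original page, is a sketch that reads the file back. It assumes the file was written by the sample above, whose trailing NUL byte makes the printf below safe:

#include "hdfs.h"

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {

    hdfsFS fs = hdfsConnect("default", 0);
    const char* readPath = "/tmp/testfile.txt";

    /* Zeros select the default buffer size, replication and block size. */
    hdfsFile readFile = hdfsOpenFile(fs, readPath, O_RDONLY, 0, 0, 0);
    if (!readFile) {
        fprintf(stderr, "Failed to open %s for reading!\n", readPath);
        exit(-1);
    }

    char buffer[32];
    tSize num_read_bytes = hdfsRead(fs, readFile, (void*)buffer, sizeof(buffer));
    if (num_read_bytes < 0) {
        fprintf(stderr, "Failed to read %s!\n", readPath);
        exit(-1);
    }
    printf("Read %d bytes: %s\n", (int)num_read_bytes, buffer);

    hdfsCloseFile(fs, readFile);
    hdfsDisconnect(fs);
    return 0;
}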
How to link with the library
See the Makefile for hdfs_test.c in the libhdfs source directory (${HADOOP_HOME}/src/c++/libhdfs/Makefile), or use something like:

gcc above_sample.c -I${HADOOP_HOME}/src/c++/libhdfs -L${HADOOP_HOME}/libhdfs -lhdfs -o above_sample
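
Beyond the compile line above, the resulting binary must also be able to locate the JVM at run time, since libhdfs starts one internally via JNI. A fuller session might look like the following; the libjvm.so path is an illustrative assumption that varies by JVM vendor and architecture, and CLASSPATH must also be set as described in the next section:

# Both libhdfs.so and libjvm.so must be resolvable by the dynamic linker
# when the program starts.
export LD_LIBRARY_PATH=${HADOOP_HOME}/libhdfs:${JAVA_HOME}/jre/lib/i386/server

./above_sample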
Common problems
The most common problem is that the CLASSPATH is not set properly when calling a program that uses libhdfs. Make sure you set it to include all the Hadoop jars needed to run Hadoop itself. Currently, there is no way to programmatically generate the classpath, but a good bet is to include all the jar files in ${HADOOP_HOME} and ${HADOOP_HOME}/lib, as well as the correct configuration directory containing hadoop-site.xml.
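
For example, a blunt but effective way to assemble such a classpath in a wrapper script is sketched below; the conf directory holding hadoop-site.xml is an assumption that depends on your installation:

# Start with the configuration directory that contains hadoop-site.xml.
CLASSPATH=${HADOOP_HOME}/conf

# Append every jar shipped in ${HADOOP_HOME} and ${HADOOP_HOME}/lib.
for jar in ${HADOOP_HOME}/*.jar ${HADOOP_HOME}/lib/*.jar; do
    CLASSPATH=${CLASSPATH}:${jar}
done
export CLASSPATH

./above_sample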
libhdfs is thread safe
Concurrency and Hadoop FS "handles": the Hadoop FS implementation includes a handle cache that is keyed on the URI of the namenode together with the connecting user. So all calls to hdfsConnect return the same handle, while calls to hdfsConnectAsUser with different users return different handles. But since HDFS client handles are completely thread safe, this has no bearing on concurrency.
Concurrency and libhdfs/JNI: the libhdfs calls to JNI should always create thread-local storage, so (in theory) libhdfs should be as thread safe as the underlying calls to the Hadoop FS.
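
To make the threading model concrete, the sketch below has two threads share the handle returned by hdfsConnect. It illustrates the behavior described above rather than code from the Hadoop tree, and needs -lpthread when linking:

#include "hdfs.h"

#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>

/* Each thread reads the same file through the shared, cached FS handle. */
static void* reader(void* arg) {
    hdfsFS fs = (hdfsFS)arg;
    hdfsFile f = hdfsOpenFile(fs, "/tmp/testfile.txt", O_RDONLY, 0, 0, 0);
    if (f) {
        char buffer[32];
        tSize n = hdfsRead(fs, f, (void*)buffer, sizeof(buffer));
        printf("thread read %d bytes\n", (int)n);
        hdfsCloseFile(fs, f);
    }
    return NULL;
}

int main(void) {
    /* Both threads see the same cached handle for the default namenode. */
    hdfsFS fs = hdfsConnect("default", 0);
    pthread_t t1, t2;
    pthread_create(&t1, NULL, reader, (void*)fs);
    pthread_create(&t2, NULL, reader, (void*)fs);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    hdfsDisconnect(fs);
    return 0;
}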
Added: hadoop/core/trunk/docs/libhdfs.pdf
URL: http://svn.apache.org/viewvc/hadoop/core/trunk/docs/libhdfs.pdf?rev=706781&view=auto
==============================================================================
--- hadoop/core/trunk/docs/libhdfs.pdf (added)
+++ hadoop/core/trunk/docs/libhdfs.pdf Tue Oct 21 14:42:50 2008
@@ -0,0 +1,335 @@
[Binary PDF content omitted; it is the generated PDF rendering of the libhdfs document above.]

Added: hadoop/core/trunk/src/docs/src/documentation/content/xdocs/libhdfs.xml
URL: http://svn.apache.org/viewvc/hadoop/core/trunk/src/docs/src/documentation/content/xdocs/libhdfs.xml?rev=706781&view=auto
==============================================================================
--- hadoop/core/trunk/src/docs/src/documentation/content/xdocs/libhdfs.xml (added)
+++ hadoop/core/trunk/src/docs/src/documentation/content/xdocs/libhdfs.xml Tue Oct 21 14:42:50 2008
@@ -0,0 +1,96 @@
[Forrest XML source; its markup was stripped in the archive and its text content duplicates the libhdfs.html document above.]
Modified: hadoop/core/trunk/src/docs/src/documentation/content/xdocs/site.xml
URL: http://svn.apache.org/viewvc/hadoop/core/trunk/src/docs/src/documentation/content/xdocs/site.xml?rev=706781&r1=706780&r2=706781&view=diff
==============================================================================
--- hadoop/core/trunk/src/docs/src/documentation/content/xdocs/site.xml (original)
+++ hadoop/core/trunk/src/docs/src/documentation/content/xdocs/site.xml Tue Oct 21 14:42:50 2008
@@ -47,6 +47,7 @@
[One-line XML addition; the markup was stripped in the archive.]