Subject: Re: Mount WebDav in Linux for HDFS-0.20.1
From: "Zhang Bingjun (Eddy)" <eddymier@gmail.com>
To: Huy Phan <dachuy@gmail.com>
Cc: common-user@hadoop.apache.org, core-user@hadoop.apache.org, hdfs-dev@hadoop.apache.org, hdfs-user@hadoop.apache.org
Date: Tue, 27 Oct 2009 19:35:25 +0800

Dear Huy Phan,

I downloaded davfs2-1.4.3, and in this version the patch you sent me seems to have been applied already. I compiled and installed this version. However, the error message still appears, as shown below:

hadoop@hdfs2:/mnt$ sudo mount.davfs http://192.168.0.131:9800 hdfs-webdav/
Please enter the username to authenticate with server
http://192.168.0.131:9800 or hit enter for none.
  Username: hadoop
Please enter the password to authenticate user hadoop with server
http://192.168.0.131:9800 or hit enter for none.
  Password:
mount.davfs: mounting failed; the server does not support WebDAV

Which username and password should I input? A user from the accounts.properties file, or a user on the OS running the WebDAV server?

Regarding the memory leak in fuse-dfs and libhdfs, I posted a patch on the Apache JIRA. However, when used in a production environment, the memory leak still exists and makes the mount point unusable after a number of read/write operations. The memory leak there is really annoying...

I hope I can set up the davfs2/WebDAV combination to try out its performance. Any ideas on how to get around the error "mounting failed; the server does not support WebDAV"?
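On the credentials question: a minimal sketch of how davfs2 is usually given a username and password non-interactively, via its secrets file. The "hadoop / hadoop-password" pair is a placeholder, and whether the gateway checks it against accounts.properties is an assumption here, not something this thread confirms:

```shell
# Store credentials for the WebDAV URL so mount.davfs stops prompting.
# Format of /etc/davfs2/secrets: <url> <username> <password>
echo 'http://192.168.0.131:9800 hadoop hadoop-password' \
    | sudo tee -a /etc/davfs2/secrets
sudo chmod 600 /etc/davfs2/secrets   # davfs2 insists on restrictive permissions

# Then mount as before; davfs2 reads the matching secrets line instead of asking.
sudo mount.davfs http://192.168.0.131:9800 /mnt/hdfs-webdav
```

Note, though, that judging from the patched check in src/webdav.c, "the server does not support WebDAV" is raised while probing the server's capabilities, before authentication matters, so no credential choice will fix it on its own.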
Thank you so much for your help!

Best regards,
Zhang Bingjun (Eddy)

E-mail: eddymier@gmail.com, bingjun@nus.edu.sg, bingjun@comp.nus.edu.sg
Tel No: +65-96188110 (M)


On Tue, Oct 27, 2009 at 7:19 PM, Huy Phan <dachuy@gmail.com> wrote:
> Hi Zhang,
> I didn't play much with fuse-dfs. In my opinion, the memory leak is
> something solvable, and I can see Apache has made some fixes for this
> issue in libhdfs. If you encounter these problems with an older version
> of Hadoop, I think you should give the latest stable version a try.
> Since I haven't had much fun with fuse-dfs so far, I cannot say whether
> it's the best option or not, but it's definitely better than mixing
> davfs2 and WebDAV together.
>
> Best,
> Huy Phan
>
> Zhang Bingjun (Eddy) wrote:
>
>> Dear Huy Phan,
>>
>> Thanks for your quick reply.
>> I was using fuse-dfs before, but I found a serious memory leak in it:
>> about 10MB leaked per 10k file reads/writes. When the occupied memory
>> reached about 150MB, the read/write performance dropped dramatically.
>> Did you encounter these problems?
>>
>> What I am trying to do is mount HDFS as a local directory in Ubuntu.
>> Do you think fuse-dfs is the best option so far?
>>
>> Thank you so much for your input!
>>
>> Best regards,
>> Zhang Bingjun (Eddy)
>>
>> E-mail: eddymier@gmail.com, bingjun@nus.edu.sg, bingjun@comp.nus.edu.sg
>> Tel No: +65-96188110 (M)
>>
>>
>> On Tue, Oct 27, 2009 at 6:55 PM, Huy Phan <dachuy@gmail.com> wrote:
>>
>>     Hi Zhang,
>>
>>     Here is the patch for davfs2 to solve the "server does not support
>>     WebDAV" issue:
>>
>>     diff --git a/src/webdav.c b/src/webdav.c
>>     index 8ec7a2d..4bdaece 100644
>>     --- a/src/webdav.c
>>     +++ b/src/webdav.c
>>     @@ -472,7 +472,7 @@ dav_init_connection(const char *path)
>>
>>          if (!ret) {
>>              initialized = 1;
>>     -        if (!caps.dav_class1 && !caps.dav_class2 && !ignore_dav_header) {
>>     +        if (!caps.dav_class1 && !ignore_dav_header) {
>>                  if (have_terminal) {
>>                      error(EXIT_FAILURE, 0,
>>                            _("mounting failed; the server does not support WebDAV"));
>>
>>     davfs2 and WebDAV are not a good mix, actually. I had tried mixing
>>     them together and the performance was really bad. With a load test
>>     of 10 requests/s, the load average on my namenode was always above
>>     15, and it took me about 5 minutes to `ls` the root directory of
>>     HDFS during the test.
>>
>>     Since you're using Hadoop 0.20.1, it's better to use the fuse-dfs
>>     library provided in the Hadoop package. You have to do some tricks
>>     to compile fuse-dfs against Hadoop, otherwise it would take you a
>>     lot of time compiling redundant things.
>>
>>     Best,
>>     Huy Phan
>>
>>     Zhang Bingjun (Eddy) wrote:
>>
>>         Dear Huy Phan and others,
>>
>>         Thanks a lot for your efforts in customizing the WebDav server
>>         <http://github.com/huyphan/HDFS-over-Webdav> and making it work
>>         for Hadoop-0.20.1.
>>         After setting up the WebDav server, I could access it using the
>>         Cadaver client in Ubuntu without any username or password.
>>         Operations like deleting files were working. The command is:
>>         cadaver http://server:9800
>>
>>         However, when I try to mount the WebDav server using davfs2 in
>>         Ubuntu, I always get the following error: "mount.davfs:
>>         mounting failed; the server does not support WebDAV".
>>
>>         I was prompted to input a username and password as below:
>>         hadoop@hdfs2:/mnt$ sudo mount.davfs
>>         http://192.168.0.131:9800/test hdfs-webdav/
>>         Please enter the username to authenticate with server
>>         http://192.168.0.131:9800/test or hit enter for none.
>>         Username: hadoop
>>         Please enter the password to authenticate user hadoop with server
>>         http://192.168.0.131:9800/test or hit enter for none.
>>         Password:
>>         mount.davfs: mounting failed; the server does not support WebDAV
>>
>>         Even though I have tried all possible usernames and passwords,
>>         either from the WebDAV accounts.properties file or from the
>>         Ubuntu system of the WebDAV server, I still got this error
>>         message.
>>         Could you or anyone give me some hints on this problem? How
>>         could I solve it? I very much appreciate your help!
>>
>>         Best regards,
>>         Zhang Bingjun (Eddy)
>>
>>         E-mail: eddymier@gmail.com, bingjun@nus.edu.sg, bingjun@comp.nus.edu.sg
>>         Tel No: +65-96188110 (M)
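For context on the patch quoted above: dav_init_connection parses the DAV: header of the server's OPTIONS reply into caps.dav_class1 / caps.dav_class2 and aborts the mount when the required class is missing. A rough shell sketch of that class check (the header strings here are illustrative, not captured from the server in this thread):

```shell
# check_dav HEADER CLASS -> exit 0 if CLASS appears in the DAV: token list,
# mimicking the caps.dav_class1 / caps.dav_class2 test in src/webdav.c.
check_dav() {
    printf '%s' "$1" | tr -d ' ' | cut -d: -f2 | tr ',' '\n' | grep -qx "$2"
}

header='DAV: 1'                        # a class-1-only server
check_dav "$header" 1 && echo 'class 1: yes'
check_dav "$header" 2 || echo 'class 2: no'
```

A gateway whose OPTIONS reply carries no DAV: header at all fails both checks, which matches the "mounting failed; the server does not support WebDAV" abort seen in this thread.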