hadoop-hdfs-user mailing list archives

From Seraph Imalia <ser...@eisp.co.za>
Subject Re: Fuse-DFS
Date Tue, 11 May 2010 08:29:42 GMT
Hi Eli,

It seems that I will not be able to use fuse_dfs for what I need after  
all :( - I was trying to use it with Lucene (temporarily, to buy me  
some time to change our code and use the hdfs contrib for Lucene).   
It looks like normal read and write operations work fine in Lucene,  
but not merging.  Lucene gives an error saying that the merged index  
shard may be corrupt because it has an unexpected number of documents  
in it, and it will not complete the merge to prevent corrupting the  
index.

Seems like I need to accelerate development of the Lucene hdfs contrib  
after all :(

Thank you very much for your help.  We have a number of places where  
we still plan to use fuse_dfs so it will be nice to have an answer to  
my last question - but it is not as urgent as it was before.

Regards,
Seraph




On 10 May 2010, at 9:46 PM, Seraph Imalia wrote:

> Hi Eli,
>
> I was about to try what you suggest but I think I am on to something.
>
> With the new CDH2 I have noticed something new - I am able to mount  
> the dfs both in and out of debug mode provided I use the  
> fuse_dfs_wrapper.sh script to do it.
>
> But, it does not work when putting the mount into /etc/fstab.  I  
> then tried setting the environment variables in the wrapper script  
> to invalid values and was able to recreate the same behavior as the  
> /etc/fstab way.
>
> So basically, it looks like things have improved with CDH2, but when  
> mounting via /etc/fstab, I suspect it is unable to access the  
> following environment variables...
> export HADOOP_HOME=/opt/hadoop-0.20.1+169.68
> export JAVA_HOME=/usr/lib64/jre
>
> I have added them to the end of /etc/bash.bashrc, which did not  
> work.  What else can I do to make sure they exist when "mount  
> /Volumes/hdfs" is run?  Here is the line in /etc/fstab...
>
> fuse_dfs#dfs://dynobuntu10:8020 /Volumes/hdfs fuse  
> allow_other,rw,big_writes 0 0
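A common workaround for this class of problem (not from the thread itself; the destination path below is an assumption) is to reference the wrapper script in /etc/fstab rather than the bare fuse_dfs binary, since mounts started from fstab never source /etc/bash.bashrc - only the environment exported inside the helper is visible to fuse_dfs:

```shell
# Hypothetical layout: copy the wrapper somewhere on root's PATH so
# that mount can find it, and let it export the environment before
# exec'ing fuse_dfs. Paths mirror the ones quoted in the thread.
cp /opt/hadoop-0.20.1+169.68/contrib/fuse-dfs/fuse_dfs_wrapper.sh /usr/local/bin/

# /etc/fstab then names the wrapper instead of the bare binary:
#   fuse_dfs_wrapper.sh#dfs://dynobuntu10:8020 /Volumes/hdfs fuse allow_other,rw,big_writes 0 0

# The wrapper already contains the exports that bash.bashrc cannot
# supply to a non-interactive mount:
#   export HADOOP_HOME=/opt/hadoop-0.20.1+169.68
#   export JAVA_HOME=/usr/lib64/jre
```

The key point is that fstab mounts run with a minimal environment, so any variables fuse_dfs needs must be set by the program fstab invokes, not by a shell profile.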
>
> Regards,
> Seraph
>
>
>
> On 10 May 2010, at 2:36 AM, Eli Collins wrote:
>
>> Hey Seraph,
>>
>> fuse_impls_getattr.c connects via hdfsConnectAsUser, so you should
>> see a log (unless it's returning from a case that doesn't print an
>> error). The next step is to determine that you're actually reaching
>> the code you modified by adding a syslog to the top of the function
>> (you need to make sure you're actually loading the libhdfs you've
>> built vs. an older one or another one installed on your system), and
>> then determine which error case in that function you're seeing. It's
>> strange that -d would cause that path to change.
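The syslog-at-the-top-of-the-function check Eli describes could look roughly like this. This is an illustrative sketch, not the actual libhdfs source: the function name, signature, and messages are assumptions, and the real connect call is stubbed out.

```c
#include <errno.h>
#include <stddef.h>
#include <string.h>
#include <syslog.h>

/* Hypothetical sketch of the fprintf -> syslog conversion inside a
 * connect helper like the one in src/c++/libhdfs/hdfs.c. */
static void *connect_as_user(const char *host, int port, const char *user)
{
    openlog("fuse_dfs", LOG_PID, LOG_DAEMON);

    /* Entry trace: if this line never appears in /var/log/syslog, the
     * rebuilt library is not the one actually being loaded. */
    syslog(LOG_INFO, "connect_as_user(%s:%d) as %s", host, port, user);

    void *fs = NULL; /* would be hdfsConnectAsUser(...) in the real code */
    if (fs == NULL) {
        /* Was: fprintf(stderr, ...). A daemonized process has no usable
         * stderr, so those messages vanished; syslog survives
         * daemonization. */
        syslog(LOG_ERR, "could not connect to %s:%d (%s)",
               host, port, strerror(errno));
    }
    return fs;
}
```

This is also why the behavior seemed to differ with and without -d: in the foreground (-d) fprintf output is visible, while after daemonizing it is silently lost unless routed through syslog.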
>>
>> I don't use -d on our internal cluster so I know that case can work.
>> Here's how we mount in fstab:
>>
>> fuse_dfs#dfs://<host>:8020   /exports/hdfs fuse  
>> allow_other,usetrash,rw 2 0
>>
>> Thanks,
>> Eli
>>
>> On Sun, May 9, 2010 at 7:37 AM, Seraph Imalia <seraph@eisp.co.za>  
>> wrote:
>>> Hi Eli,
>>>
>>> I have changed the fprintfs to syslogs and recompiled, and it does
>>> not seem to show me what is happening - nothing new in the syslog.
>>> I also saw a function called hdfsConnectAsUserNewInstance - should
>>> the fprintfs be changed to syslogs in there too?  I am not very
>>> experienced in C++, so please forgive my ignorant questions here.
>>>
>>> I also downloaded a CDH2 release and experienced the same problem
>>> with that version.  I downloaded it from here...
>>> http://archive.cloudera.com/docs/_choosing_a_version.html
>>>
>>> Regards,
>>> Seraph
>>>
>>>
>>> On 07 May 2010, at 5:58 PM, Eli Collins wrote:
>>>
>>>> Converting the fprintfs to syslogs in hdfsConnectAsUser in
>>>> src/c++/libhdfs/hdfs.c (and doing a clean build) should let you
>>>> see the particular reason it can't connect. It's weird that it can
>>>> connect w/ the debug option but not w/o.
>>>>
>>>> Thanks,
>>>> Eli
>>>>
>>>>
>>>> On Thu, May 6, 2010 at 11:56 AM, Seraph Imalia  
>>>> <seraph@eisp.co.za> wrote:
>>>>>
>>>>> I don't think that is happening... I have checked that before,
>>>>> but here is a dump of me checking again...
>>>>> root@dynobuntu17:/opt/hadoop-0.20.2/contrib/fuse-dfs# ps aux |  
>>>>> grep fuse
>>>>> root     27947  0.0  0.0   7524   892 pts/0    R+   20:49   0:00  
>>>>> grep
>>>>> fuse
>>>>> root@dynobuntu17:/opt/hadoop-0.20.2/contrib/fuse-dfs#
>>>>> ./fuse_dfs_wrapper.sh
>>>>> dfs://dynobuntu10:9000 /Volumes/hdfs -obig_writes
>>>>> port=9000,server=dynobuntu10
>>>>> fuse-dfs didn't recognize /Volumes/hdfs,-2
>>>>> root@dynobuntu17:/opt/hadoop-0.20.2/contrib/fuse-dfs# ps aux |  
>>>>> grep fuse
>>>>> root     27952  0.3  0.1 2273632 10912 ?       Ssl  20:49   0:00  
>>>>> fuse_dfs
>>>>> dfs://dynobuntu10:9000 /Volumes/hdfs -obig_writes
>>>>> root     27969  0.0  0.0   7524   904 pts/0    S+   20:49   0:00  
>>>>> grep
>>>>> fuse
>>>>> root@dynobuntu17:/opt/hadoop-0.20.2/contrib/fuse-dfs# ls /Volumes/hdfs
>>>>> ls: cannot access /Volumes/hdfs: Input/output error
>>>>> root@dynobuntu17:/opt/hadoop-0.20.2/contrib/fuse-dfs# tail --lines=2 /var/log/syslog
>>>>> May  6 20:49:38 dynobuntu17 fuse_dfs: mounting dynobuntu10:9000
>>>>> May  6 20:49:45 dynobuntu17 fuse_dfs: ERROR: could not connect to
>>>>> dynobuntu10:9000 fuse_impls_getattr.c:37
>>>>> May  6 20:49:46 dynobuntu17 fuse_dfs: ERROR: could not connect to
>>>>> dynobuntu10:9000 fuse_impls_getattr.c:37
>>>>> root@dynobuntu17:/opt/hadoop-0.20.2/contrib/fuse-dfs# umount
>>>>> /Volumes/hdfs/
>>>>> root@dynobuntu17:/opt/hadoop-0.20.2/contrib/fuse-dfs# ps aux |  
>>>>> grep fuse
>>>>> root     28063  0.0  0.0   7524   908 pts/0    S+   20:50   0:00  
>>>>> grep
>>>>> fuse
>>>>> root@dynobuntu17:/opt/hadoop-0.20.2/contrib/fuse-dfs#
>>>>> The only thing I can find on the web is on issues.apache.org,
>>>>> where two people experienced the problem and one of them thought
>>>>> it had something to do with fuse itself.  If I knew what I could
>>>>> change on fuse I would do it - I am quite desperate to get this
>>>>> working without the debug option - I am so close.
>>>>> Here is the link I found:
>>>>> https://issues.apache.org/jira/browse/HADOOP-4?focusedCommentId=12563182&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12563182
>>>>> Regards,
>>>>> Seraph
>>>>>
>>>>> On 06 May 2010, at 5:44 PM, Eli Collins wrote:
>>>>>
>>>>> Hey Seraph,
>>>>>
>>>>> The -d is just a debug option to print output to the terminal; it
>>>>> shouldn't change the execution. I suspect something else is going
>>>>> on - perhaps you have an old fuse process running?
>>>>>
>>>>> Thanks,
>>>>> Eli
>>>>>
>>>>> On Thu, May 6, 2010 at 4:55 AM, Seraph Imalia  
>>>>> <seraph@eisp.co.za> wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>> I am experiencing an annoying problem...
>>>>>
>>>>> When I run "./fuse_dfs_wrapper.sh dfs://dynobuntu10:8020
>>>>> /Volumes/hdfs -obig_writes -d", everything works fine.
>>>>>
>>>>> When I run "./fuse_dfs_wrapper.sh dfs://dynobuntu10:8020
>>>>> /Volumes/hdfs -obig_writes" (i.e. remove the daemonize option), I
>>>>> get issues connecting to hadoop...
>>>>>
>>>>> Here is what is in /var/log/syslog after running "ls /Volumes/hdfs":
>>>>>
>>>>> May  6 12:57:16 dynobuntu17 fuse_dfs: ERROR: could not connect to
>>>>>
>>>>> dynobuntu10:8020 fuse_impls_getattr.c:37
>>>>>
>>>>> May  6 12:57:23 dynobuntu17 fuse_dfs: ERROR: could not connect to
>>>>>
>>>>> dynobuntu10:8020 fuse_impls_getattr.c:37
>>>>>
>>>>> No errors appear in the syslog when the -d option is specified.
>>>>>
>>>>> I also tried port 9000 with exactly the same results.  There is no
>>>>> firewall
>>>>>
>>>>> software installed on any of the servers.
>>>>>
>>>>> It almost appears as if something is stopping the connection to
>>>>> the namenode when fuse_dfs runs as a daemon.  Please help.
>>>>>
>>>>> We are running OS:  Ubuntu 9.04 x64 (2.6.28-11-server #42-Ubuntu  
>>>>> SMP Fri
>>>>> Apr
>>>>>
>>>>> 17 02:45:36 UTC 2009 GNU/Linux)
>>>>>
>>>>> Seraph
>>>>>
>>>>>
>>>>>
>>>>> On 06 May 2010, at 11:31 AM, Seraph Imalia wrote:
>>>>>
>>>>> Hi Eli,
>>>>>
>>>>> Thank you very much - applying patch HDFS-961-2.patch and
>>>>> re-building resolved the problem.
>>>>>
>>>>> Regards,
>>>>>
>>>>> Seraph
>>>>>
>>>>> On 05 May 2010, at 11:20 PM, Eli Collins wrote:
>>>>>
>>>>> Try using a port besides 8020 or applying the patch for HDFS-961.
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Eli
>>>>>
>>>>> On Wed, May 5, 2010 at 9:02 AM, Seraph Imalia  
>>>>> <seraph@eisp.co.za> wrote:
>>>>>
>>>>> I have successfully managed to mount a dfs - but here is what I am
>>>>>
>>>>> experiencing...
>>>>>
>>>>> I mounted to /Volumes/hdfs.  When I run "ls /Volumes/hdfs" it
>>>>> works fine and returns no results.
>>>>>
>>>>> So I ran "mkdir /Volumes/hdfs/test"; it was successful, and I
>>>>> could see the new directory exists using the web interface to
>>>>> browse hadoop.
>>>>>
>>>>> But now when I run "ls /Volumes/hdfs" it does this...
>>>>>
>>>>> ls: cannot access /Volumes/hdfs/p-site.x0: No such file or directory
>>>>>
>>>>> p-site.x0
>>>>>
>>>>> and on the terminal window where I mounted it, it dumps this...
>>>>>
>>>>> LOOKUP /p-site.x0
>>>>>
>>>>> unique: 13, error: -2 (No such file or directory), outsize: 16
>>>>>
>>>>> unique: 14, opcode: READDIR (28), nodeid: 1, insize: 80
>>>>>
>>>>> unique: 14, error: 0 (Success), outsize: 16
>>>>>
>>>>> unique: 15, opcode: RELEASEDIR (29), nodeid: 1, insize: 64
>>>>>
>>>>> unique: 15, error: 0 (Success), outsize: 16
>>>>>
>>>>> unique: 16, opcode: GETATTR (3), nodeid: 1, insize: 56
>>>>>
>>>>> unique: 16, error: 0 (Success), outsize: 112
>>>>>
>>>>> unique: 17, opcode: OPENDIR (27), nodeid: 1, insize: 48
>>>>>
>>>>> unique: 17, error: 0 (Success), outsize: 32
>>>>>
>>>>> unique: 18, opcode: READDIR (28), nodeid: 1, insize: 80
>>>>>
>>>>> unique: 18, error: 0 (Success), outsize: 112
>>>>>
>>>>> unique: 19, opcode: GETATTR (3), nodeid: 1, insize: 56
>>>>>
>>>>> unique: 19, error: 0 (Success), outsize: 112
>>>>>
>>>>> unique: 20, opcode: LOOKUP (1), nodeid: 1, insize: 42
>>>>>
>>>>> I am not sure what this debug info means?
>>>>>
>>>>> Here is everything that happened on terminal 1 and 2...
>>>>>
>>>>> terminal 1
>>>>>
>>>>> root@dynobuntu17:/opt/hadoop-0.20.2#
>>>>>
>>>>> contrib/fuse-dfs/fuse_dfs_wrapper.sh
>>>>>
>>>>> dfs://dynobuntu10:8020 /Volumes/hdfs -d
>>>>>
>>>>> port=8020,server=dynobuntu10
>>>>>
>>>>> fuse-dfs didn't recognize /Volumes/hdfs,-2
>>>>>
>>>>> fuse-dfs ignoring option -d
>>>>>
>>>>> unique: 1, opcode: INIT (26), nodeid: 0, insize: 56
>>>>>
>>>>> INIT: 7.10
>>>>>
>>>>> flags=0x0000003b
>>>>>
>>>>> max_readahead=0x00020000
>>>>>
>>>>> INIT: 7.8
>>>>>
>>>>> flags=0x00000001
>>>>>
>>>>> max_readahead=0x00020000
>>>>>
>>>>> max_write=0x00020000
>>>>>
>>>>> unique: 1, error: 0 (Success), outsize: 40
>>>>>
>>>>> unique: 2, opcode: GETATTR (3), nodeid: 1, insize: 56
>>>>>
>>>>> unique: 2, error: 0 (Success), outsize: 112
>>>>>
>>>>> unique: 3, opcode: OPENDIR (27), nodeid: 1, insize: 48
>>>>>
>>>>> unique: 3, error: 0 (Success), outsize: 32
>>>>>
>>>>> unique: 4, opcode: READDIR (28), nodeid: 1, insize: 80
>>>>>
>>>>> unique: 4, error: -2 (No such file or directory), outsize: 16
>>>>>
>>>>> unique: 5, opcode: RELEASEDIR (29), nodeid: 1, insize: 64
>>>>>
>>>>> unique: 5, error: 0 (Success), outsize: 16
>>>>>
>>>>> unique: 6, opcode: GETATTR (3), nodeid: 1, insize: 56
>>>>>
>>>>> unique: 6, error: 0 (Success), outsize: 112
>>>>>
>>>>> unique: 7, opcode: LOOKUP (1), nodeid: 1, insize: 45
>>>>>
>>>>> LOOKUP /test
>>>>>
>>>>> unique: 7, error: -2 (No such file or directory), outsize: 16
>>>>>
>>>>> unique: 8, opcode: MKDIR (9), nodeid: 1, insize: 53
>>>>>
>>>>> MKDIR /test
>>>>>
>>>>> NODEID: 2
>>>>>
>>>>> unique: 8, error: 0 (Success), outsize: 136
>>>>>
>>>>> unique: 9, opcode: GETATTR (3), nodeid: 1, insize: 56
>>>>>
>>>>> unique: 9, error: 0 (Success), outsize: 112
>>>>>
>>>>> unique: 10, opcode: OPENDIR (27), nodeid: 1, insize: 48
>>>>>
>>>>> unique: 10, error: 0 (Success), outsize: 32
>>>>>
>>>>> unique: 11, opcode: READDIR (28), nodeid: 1, insize: 80
>>>>>
>>>>> unique: 11, error: 0 (Success), outsize: 120
>>>>>
>>>>> unique: 12, opcode: GETATTR (3), nodeid: 1, insize: 56
>>>>>
>>>>> unique: 12, error: 0 (Success), outsize: 112
>>>>>
>>>>> unique: 13, opcode: LOOKUP (1), nodeid: 1, insize: 50
>>>>>
>>>>> LOOKUP /p-site.x0
>>>>>
>>>>> unique: 13, error: -2 (No such file or directory), outsize: 16
>>>>>
>>>>> unique: 14, opcode: READDIR (28), nodeid: 1, insize: 80
>>>>>
>>>>> unique: 14, error: 0 (Success), outsize: 16
>>>>>
>>>>> unique: 15, opcode: RELEASEDIR (29), nodeid: 1, insize: 64
>>>>>
>>>>> unique: 15, error: 0 (Success), outsize: 16
>>>>>
>>>>> unique: 16, opcode: GETATTR (3), nodeid: 1, insize: 56
>>>>>
>>>>> unique: 16, error: 0 (Success), outsize: 112
>>>>>
>>>>> unique: 17, opcode: OPENDIR (27), nodeid: 1, insize: 48
>>>>>
>>>>> unique: 17, error: 0 (Success), outsize: 32
>>>>>
>>>>> unique: 18, opcode: READDIR (28), nodeid: 1, insize: 80
>>>>>
>>>>> unique: 18, error: 0 (Success), outsize: 112
>>>>>
>>>>> unique: 19, opcode: GETATTR (3), nodeid: 1, insize: 56
>>>>>
>>>>> unique: 19, error: 0 (Success), outsize: 112
>>>>>
>>>>> unique: 20, opcode: LOOKUP (1), nodeid: 1, insize: 42
>>>>>
>>>>> LOOKUP /%
>>>>>
>>>>> unique: 20, error: -2 (No such file or directory), outsize: 16
>>>>>
>>>>> unique: 21, opcode: READDIR (28), nodeid: 1, insize: 80
>>>>>
>>>>> unique: 21, error: 0 (Success), outsize: 16
>>>>>
>>>>> unique: 22, opcode: RELEASEDIR (29), nodeid: 1, insize: 64
>>>>>
>>>>> unique: 22, error: 0 (Success), outsize: 16
>>>>>
>>>>> unique: 23, opcode: GETATTR (3), nodeid: 1, insize: 56
>>>>>
>>>>> unique: 23, error: 0 (Success), outsize: 112
>>>>>
>>>>> unique: 24, opcode: OPENDIR (27), nodeid: 1, insize: 48
>>>>>
>>>>> unique: 24, error: 0 (Success), outsize: 32
>>>>>
>>>>> unique: 25, opcode: READDIR (28), nodeid: 1, insize: 80
>>>>>
>>>>> unique: 25, error: 0 (Success), outsize: 112
>>>>>
>>>>> unique: 26, opcode: GETATTR (3), nodeid: 1, insize: 56
>>>>>
>>>>> unique: 26, error: 0 (Success), outsize: 112
>>>>>
>>>>> unique: 27, opcode: LOOKUP (1), nodeid: 1, insize: 42
>>>>>
>>>>> LOOKUP /1
>>>>>
>>>>> unique: 27, error: -2 (No such file or directory), outsize: 16
>>>>>
>>>>> unique: 28, opcode: READDIR (28), nodeid: 1, insize: 80
>>>>>
>>>>> unique: 28, error: 0 (Success), outsize: 16
>>>>>
>>>>> unique: 29, opcode: RELEASEDIR (29), nodeid: 1, insize: 64
>>>>>
>>>>> unique: 29, error: 0 (Success), outsize: 16
>>>>>
>>>>> unique: 30, opcode: GETATTR (3), nodeid: 1, insize: 56
>>>>>
>>>>> unique: 30, error: 0 (Success), outsize: 112
>>>>>
>>>>> unique: 31, opcode: OPENDIR (27), nodeid: 2, insize: 48
>>>>>
>>>>> unique: 31, error: 0 (Success), outsize: 32
>>>>>
>>>>> unique: 32, opcode: READDIR (28), nodeid: 2, insize: 80
>>>>>
>>>>> unique: 32, error: -2 (No such file or directory), outsize: 16
>>>>>
>>>>> unique: 33, opcode: RELEASEDIR (29), nodeid: 2, insize: 64
>>>>>
>>>>> unique: 33, error: 0 (Success), outsize: 16
>>>>>
>>>>> unique: 34, opcode: OPENDIR (27), nodeid: 1, insize: 48
>>>>>
>>>>> unique: 34, error: 0 (Success), outsize: 32
>>>>>
>>>>> unique: 35, opcode: READDIR (28), nodeid: 1, insize: 80
>>>>>
>>>>> unique: 35, error: 0 (Success), outsize: 120
>>>>>
>>>>> unique: 36, opcode: GETATTR (3), nodeid: 1, insize: 56
>>>>>
>>>>> unique: 36, error: 0 (Success), outsize: 112
>>>>>
>>>>> unique: 37, opcode: LOOKUP (1), nodeid: 1, insize: 50
>>>>>
>>>>> LOOKUP /d-site.x0
>>>>>
>>>>> unique: 37, error: -2 (No such file or directory), outsize: 16
>>>>>
>>>>> unique: 38, opcode: READDIR (28), nodeid: 1, insize: 80
>>>>>
>>>>> unique: 38, error: 0 (Success), outsize: 16
>>>>>
>>>>> unique: 39, opcode: RELEASEDIR (29), nodeid: 1, insize: 64
>>>>>
>>>>> unique: 39, error: 0 (Success), outsize: 16
>>>>>
>>>>> unique: 40, opcode: GETATTR (3), nodeid: 1, insize: 56
>>>>>
>>>>> unique: 40, error: 0 (Success), outsize: 112
>>>>>
>>>>> unique: 41, opcode: LOOKUP (1), nodeid: 1, insize: 45
>>>>>
>>>>> LOOKUP /test
>>>>>
>>>>> NODEID: 2
>>>>>
>>>>> unique: 41, error: 0 (Success), outsize: 136
>>>>>
>>>>> unique: 42, opcode: LOOKUP (1), nodeid: 2, insize: 47
>>>>>
>>>>> LOOKUP /test/inside
>>>>>
>>>>> unique: 42, error: -2 (No such file or directory), outsize: 16
>>>>>
>>>>> unique: 43, opcode: MKDIR (9), nodeid: 2, insize: 55
>>>>>
>>>>> MKDIR /test/inside
>>>>>
>>>>> NODEID: 3
>>>>>
>>>>> unique: 43, error: 0 (Success), outsize: 136
>>>>>
>>>>> unique: 44, opcode: GETATTR (3), nodeid: 2, insize: 56
>>>>>
>>>>> unique: 44, error: 0 (Success), outsize: 112
>>>>>
>>>>> unique: 45, opcode: OPENDIR (27), nodeid: 2, insize: 48
>>>>>
>>>>> unique: 45, error: 0 (Success), outsize: 32
>>>>>
>>>>> unique: 46, opcode: READDIR (28), nodeid: 2, insize: 80
>>>>>
>>>>> unique: 46, error: 0 (Success), outsize: 112
>>>>>
>>>>> unique: 47, opcode: GETATTR (3), nodeid: 2, insize: 56
>>>>>
>>>>> unique: 47, error: 0 (Success), outsize: 112
>>>>>
>>>>> unique: 48, opcode: LOOKUP (1), nodeid: 2, insize: 42
>>>>>
>>>>> LOOKUP /test/e
>>>>>
>>>>> unique: 48, error: -2 (No such file or directory), outsize: 16
>>>>>
>>>>> unique: 49, opcode: READDIR (28), nodeid: 2, insize: 80
>>>>>
>>>>> unique: 49, error: 0 (Success), outsize: 16
>>>>>
>>>>> unique: 50, opcode: RELEASEDIR (29), nodeid: 2, insize: 64
>>>>>
>>>>> unique: 50, error: 0 (Success), outsize: 16
>>>>>
>>>>> unique: 51, opcode: GETATTR (3), nodeid: 2, insize: 56
>>>>>
>>>>> unique: 51, error: 0 (Success), outsize: 112
>>>>>
>>>>> unique: 52, opcode: OPENDIR (27), nodeid: 2, insize: 48
>>>>>
>>>>> unique: 52, error: 0 (Success), outsize: 32
>>>>>
>>>>> unique: 53, opcode: READDIR (28), nodeid: 2, insize: 80
>>>>>
>>>>> unique: 53, error: 0 (Success), outsize: 112
>>>>>
>>>>> unique: 54, opcode: GETATTR (3), nodeid: 2, insize: 56
>>>>>
>>>>> unique: 54, error: 0 (Success), outsize: 112
>>>>>
>>>>> unique: 55, opcode: LOOKUP (1), nodeid: 2, insize: 42
>>>>>
>>>>> LOOKUP /test/e
>>>>>
>>>>> unique: 55, error: -2 (No such file or directory), outsize: 16
>>>>>
>>>>> unique: 56, opcode: READDIR (28), nodeid: 2, insize: 80
>>>>>
>>>>> unique: 56, error: 0 (Success), outsize: 16
>>>>>
>>>>> unique: 57, opcode: RELEASEDIR (29), nodeid: 2, insize: 64
>>>>>
>>>>> unique: 57, error: 0 (Success), outsize: 16
>>>>>
>>>>> unique: 58, opcode: GETATTR (3), nodeid: 2, insize: 56
>>>>>
>>>>> unique: 58, error: 0 (Success), outsize: 112
>>>>>
>>>>> unique: 59, opcode: OPENDIR (27), nodeid: 2, insize: 48
>>>>>
>>>>> unique: 59, error: 0 (Success), outsize: 32
>>>>>
>>>>> unique: 60, opcode: READDIR (28), nodeid: 2, insize: 80
>>>>>
>>>>> unique: 60, error: 0 (Success), outsize: 112
>>>>>
>>>>> unique: 61, opcode: GETATTR (3), nodeid: 2, insize: 56
>>>>>
>>>>> unique: 61, error: 0 (Success), outsize: 112
>>>>>
>>>>> unique: 62, opcode: LOOKUP (1), nodeid: 2, insize: 42
>>>>>
>>>>> LOOKUP /test/e
>>>>>
>>>>> unique: 62, error: -2 (No such file or directory), outsize: 16
>>>>>
>>>>> unique: 63, opcode: READDIR (28), nodeid: 2, insize: 80
>>>>>
>>>>> unique: 63, error: 0 (Success), outsize: 16
>>>>>
>>>>> unique: 64, opcode: RELEASEDIR (29), nodeid: 2, insize: 64
>>>>>
>>>>> unique: 64, error: 0 (Success), outsize: 16
>>>>>
>>>>> unique: 65, opcode: GETATTR (3), nodeid: 2, insize: 56
>>>>>
>>>>> unique: 65, error: 0 (Success), outsize: 112
>>>>>
>>>>> unique: 66, opcode: OPENDIR (27), nodeid: 2, insize: 48
>>>>>
>>>>> unique: 66, error: 0 (Success), outsize: 32
>>>>>
>>>>> unique: 67, opcode: READDIR (28), nodeid: 2, insize: 80
>>>>>
>>>>> unique: 67, error: 0 (Success), outsize: 112
>>>>>
>>>>> unique: 68, opcode: GETATTR (3), nodeid: 2, insize: 56
>>>>>
>>>>> unique: 68, error: 0 (Success), outsize: 112
>>>>>
>>>>> unique: 69, opcode: LOOKUP (1), nodeid: 2, insize: 42
>>>>>
>>>>> LOOKUP /test/e
>>>>>
>>>>> unique: 69, error: -2 (No such file or directory), outsize: 16
>>>>>
>>>>> unique: 70, opcode: READDIR (28), nodeid: 2, insize: 80
>>>>>
>>>>> unique: 70, error: 0 (Success), outsize: 16
>>>>>
>>>>> unique: 71, opcode: RELEASEDIR (29), nodeid: 2, insize: 64
>>>>>
>>>>> unique: 71, error: 0 (Success), outsize: 16
>>>>>
>>>>> unique: 72, opcode: GETATTR (3), nodeid: 2, insize: 56
>>>>>
>>>>> unique: 72, error: 0 (Success), outsize: 112
>>>>>
>>>>> unique: 73, opcode: OPENDIR (27), nodeid: 2, insize: 48
>>>>>
>>>>> unique: 73, error: 0 (Success), outsize: 32
>>>>>
>>>>> unique: 74, opcode: READDIR (28), nodeid: 2, insize: 80
>>>>>
>>>>> unique: 74, error: 0 (Success), outsize: 112
>>>>>
>>>>> unique: 75, opcode: GETATTR (3), nodeid: 2, insize: 56
>>>>>
>>>>> unique: 75, error: 0 (Success), outsize: 112
>>>>>
>>>>> unique: 76, opcode: LOOKUP (1), nodeid: 2, insize: 42
>>>>>
>>>>> LOOKUP /test/e
>>>>>
>>>>> unique: 76, error: -2 (No such file or directory), outsize: 16
>>>>>
>>>>> unique: 77, opcode: READDIR (28), nodeid: 2, insize: 80
>>>>>
>>>>> unique: 77, error: 0 (Success), outsize: 16
>>>>>
>>>>> unique: 78, opcode: RELEASEDIR (29), nodeid: 2, insize: 64
>>>>>
>>>>> unique: 78, error: 0 (Success), outsize: 16
>>>>>
>>>>> terminal 2
>>>>>
>>>>> root@dynobuntu17:~# ls /Volumes/hdfs/
>>>>>
>>>>> root@dynobuntu17:~# mkdir /Volumes/hdfs/test
>>>>>
>>>>> root@dynobuntu17:~# ls /Volumes/hdfs/
>>>>>
>>>>> ls: cannot access /Volumes/hdfs/p-site.x0: No such file or directory
>>>>>
>>>>> p-site.x0
>>>>>
>>>>> root@dynobuntu17:~# ls /Volumes/hdfs/
>>>>>
>>>>> ls: cannot access /Volumes/hdfs/%: No such file or directory
>>>>>
>>>>> %
>>>>>
>>>>> root@dynobuntu17:~# ls /Volumes/hdfs
>>>>>
>>>>> ls: cannot access /Volumes/hdfs/1: No such file or directory
>>>>>
>>>>> 1
>>>>>
>>>>> root@dynobuntu17:~# ls /Volumes/hdfs/test
>>>>>
>>>>> root@dynobuntu17:~# ls /Volumes/hdfs/
>>>>>
>>>>> ls: cannot access /Volumes/hdfs/d-site.x0: No such file or directory
>>>>>
>>>>> d-site.x0
>>>>>
>>>>> root@dynobuntu17:~# mkdir /Volumes/hdfs/test/inside
>>>>>
>>>>> root@dynobuntu17:~# ls /Volumes/hdfs/test
>>>>>
>>>>> ls: cannot access /Volumes/hdfs/test/e: No such file or directory
>>>>>
>>>>> e
>>>>>
>>>>> root@dynobuntu17:~# ls /Volumes/hdfs/test
>>>>>
>>>>> ls: cannot access /Volumes/hdfs/test/e: No such file or directory
>>>>>
>>>>> e
>>>>>
>>>>> root@dynobuntu17:~# ls /Volumes/hdfs/\test
>>>>>
>>>>> ls: cannot access /Volumes/hdfs/test/e: No such file or directory
>>>>>
>>>>> e
>>>>>
>>>>> root@dynobuntu17:~# ls /Volumes/hdfs/\te\s\t
>>>>>
>>>>> ls: cannot access /Volumes/hdfs/test/e: No such file or directory
>>>>>
>>>>> e
>>>>>
>>>>> root@dynobuntu17:~# ls /Volumes/hdfs/\t\e\s\t
>>>>>
>>>>> ls: cannot access /Volumes/hdfs/test/e: No such file or directory
>>>>>
>>>>> e
>>>>>
>>>>> On 05 May 2010, at 4:57 PM, Seraph Imalia wrote:
>>>>>
>>>>> Awesome! - thank you.  After running "apt-get install
>>>>> libfuse-dev" I got a successful build.
>>>>>
>>>>> Thanks for your help :)
>>>>>
>>>>>
>>>>> On 05 May 2010, at 4:19 PM, Jason Venner wrote:
>>>>>
>>>>> It does not look like you have the fuse-devel package installed  
>>>>> on your
>>>>>
>>>>> system.
>>>>>
>>>>> On Wed, May 5, 2010 at 7:14 AM, Seraph Imalia  
>>>>> <seraph@eisp.co.za> wrote:
>>>>>
>>>>> Hi Jason,
>>>>>
>>>>> Thank you.  I had autoconf installed but not automake - after
>>>>> installing automake, the build went further, but this time it
>>>>> failed with...
>>>>>
>>>>> BUILD FAILED
>>>>>
>>>>> /opt/hadoop-0.20.2/build.xml:497: The following error occurred  
>>>>> while
>>>>>
>>>>> executing this line:
>>>>>
>>>>> /opt/hadoop-0.20.2/src/contrib/build.xml:30: The following error
>>>>>
>>>>> occurred
>>>>>
>>>>> while executing this line:
>>>>>
>>>>> /opt/hadoop-0.20.2/src/contrib/fuse-dfs/build.xml:57: exec  
>>>>> returned: 2
>>>>>
>>>>> The output from the build is attached.
>>>>>
>>>>>
>>>>>
>>>>> We are running OS:  Ubuntu 9.04 x64 (2.6.28-11-server #42-Ubuntu  
>>>>> SMP Fri
>>>>>
>>>>> Apr
>>>>>
>>>>> 17 02:45:36 UTC 2009 GNU/Linux)
>>>>>
>>>>> Regards,
>>>>>
>>>>> Seraph
>>>>>
>>>>>
>>>>> On 05 May 2010, at 3:47 PM, Jason Venner wrote:
>>>>>
>>>>> You will need to install the GNU development tool chain for your
>>>>> platform; a quick check on an older Red Hat system I have
>>>>> suggests the automake and autoconf RPMs will provide aclocal,
>>>>> automake and autoconf.
>>>>>
>>>>> The "./configure: not found" error is an artifact of the earlier
>>>>> failures.
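On Debian/Ubuntu, the packages supplying these tools (plus the FUSE headers whose absence causes the later "exec returned: 2" failure in this thread) can be installed in one step. The package names are an assumption for this distro family:

```shell
# Assumed Debian/Ubuntu package names; on Red Hat-style systems the
# equivalents are the automake, autoconf, and fuse-devel RPMs.
apt-get install -y autoconf automake libfuse-dev

# Then re-run the failing build command from the Hadoop source root:
ant compile-contrib -Dlibhdfs=1 -Dfusedfs=1
```

The "aclocal: not found" / "automake: not found" / "autoconf: not found" lines map directly to these packages; "./configure: not found" clears up on its own once bootstrap.sh can generate the configure script.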
>>>>>
>>>>>
>>>>> On Wed, May 5, 2010 at 5:37 AM, Seraph Imalia  
>>>>> <seraph@eisp.co.za> wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>> I have been following this URL to mount an HDFS using fuse-dfs:
>>>>> http://wiki.apache.org/hadoop/MountableHDFS
>>>>>
>>>>> I have had many problems trying to build it, but I have managed
>>>>> to get through the first two build commands without build errors.
>>>>> Running the last build command, "ant compile-contrib -Dlibhdfs=1
>>>>> -Dfusedfs=1", results in the following error:
>>>>>
>>>>> compile:
>>>>>
>>>>> [echo] contrib: fuse-dfs
>>>>>
>>>>> [exec] /opt/hadoop-0.20.1/src/contrib/fuse-dfs/bootstrap.sh: 18:
>>>>>
>>>>> aclocal: not found
>>>>>
>>>>> [exec] /opt/hadoop-0.20.1/src/contrib/fuse-dfs/bootstrap.sh: 19:
>>>>>
>>>>> automake: not found
>>>>>
>>>>> [exec] /opt/hadoop-0.20.1/src/contrib/fuse-dfs/bootstrap.sh: 20:
>>>>>
>>>>> autoconf: not found
>>>>>
>>>>> [exec] /opt/hadoop-0.20.1/src/contrib/fuse-dfs/bootstrap.sh: 21:
>>>>>
>>>>> ./configure: not found
>>>>>
>>>>> BUILD FAILED
>>>>>
>>>>> /opt/hadoop-0.20.1/build.xml:497: The following error occurred  
>>>>> while
>>>>>
>>>>> executing this line:
>>>>>
>>>>> /opt/hadoop-0.20.1/src/contrib/build.xml:30: The following error
>>>>>
>>>>> occurred
>>>>>
>>>>> while executing this line:
>>>>>
>>>>> /opt/hadoop-0.20.1/src/contrib/fuse-dfs/build.xml:54: exec  
>>>>> returned: 127
>>>>>
>>>>> I found this link on Google:
>>>>> http://issues.apache.org/jira/browse/HADOOP-4
>>>>> which does not seem to help me.
>>>>>
>>>>> Please can you assist?
>>>>>
>>>>> Regards,
>>>>>
>>>>> Seraph
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>>
>>>>> Pro Hadoop, a book to guide you from beginner to hadoop mastery,
>>>>>
>>>>> http://www.amazon.com/dp/1430219424?tag=jewlerymall
>>>>>
>>>>> www.prohadoopbook.com a community for Hadoop Professionals
>>>>>


