hadoop-common-user mailing list archives

From Weiming Lu <weimin...@gmail.com>
Subject Re: Help with fuse-dfs
Date Tue, 22 Dec 2009 15:07:33 GMT
Yes, you are right.
Thanks, Brian.

I changed my code back to the original, and used
"./fuse_dfs_wrapper.sh dfs://lwm1:54310 /mnt/dfs -d" to mount.
It works.

Best regards

Weiming


On Tue, Dec 22, 2009 at 10:46 PM, Brian Bockelman <bbockelm@cse.unl.edu> wrote:
> No.
>
> The strings *must match* in both hadoop-site.xml and your fuse_dfs arguments.
>
> For example, in my /etc/hosts, I have:
>
> [brian@red ~]$ cat /etc/hosts | grep hadoop-name
> 172.16.100.8      hadoop-name
>
> In hadoop-site.xml, I have:
>
>  <property>
>    <name>fs.default.name</name>
>    <value>hdfs://hadoop-name:9000</value>
>  </property>
>
> If I then did this:
>
> /usr/bin/fuse_dfs /mnt/hadoop -o rw,server=172.16.100.8,port=9000,rdbuffer=32768,allow_other
>
> this would successfully mount, but fail with all sorts of cryptic errors when I try to use it.  I must instead do this:
>
> /usr/bin/fuse_dfs /mnt/hadoop -o rw,server=hadoop-name,port=9000,rdbuffer=32768,allow_other
>
> Your case might be similar.
>
> Brian
>
> On Dec 22, 2009, at 8:39 AM, Weiming Lu wrote:
>
>> Hi, Brian,
>> I have added "10.15.62.4  lwm1" in /etc/hosts.
>> So that may not be the reason?
>>
>>
>> Best regards,
>> Weiming Lu
>>
>> On Tue, Dec 22, 2009 at 9:05 PM, Brian Bockelman <bbockelm@cse.unl.edu> wrote:
>>> Hey Weiming,
>>>
>>> One "funky" thing about FUSE-DFS:
>>>
>>> Your host name in hadoop-site.xml and the hostname you use to mount fuse-dfs must be *exactly* the same string.
>>>
>>> I see you refer to this as lwm1 in hadoop-site.xml and 10.15.62.4 in your FUSE output.  This will cause all your issues.
>>>
>>> I'd advise changing your code back to the original and making sure everything matches.
>>>
>>> Brian
>>>
>>> On Dec 22, 2009, at 5:01 AM, Weiming Lu wrote:
>>>
>>>> Hi, Eli, Thanks very much.
>>>> I added the following code in function dfs_readdir in
>>>> src/contrib/fuse-dfs/src/fuse_dfs.c:
>>>> fprintf(stderr,"numEntries: %d\n",numEntries);
>>>> I can print it correctly.
>>>>
>>>> Finally, I found that the following code in function dfs_readdir in
>>>> fuse_dfs.c may be erroneous:
>>>> const char *const str = info[i].mName + dfs->dfs_uri_len + path_len +
>>>> ((path_len == 1 && *path == '/') ? 0 : 1);
>>>>
>>>> so I modified it as: const char* const str = info[i].mName;
>>>>
>>>> and then, I recompiled and remount ...
>>>> when I "ls /mnt/dfs", the output is :
>>>> lwm@lwm10:~$ ls /mnt/dfs
>>>> ls: cannot access /mnt/dfs/hdfs://lwm1:54310/hbase: No such file or directory
>>>> ls: cannot access /mnt/dfs/hdfs://lwm1:54310/home: No such file or directory
>>>> ls: cannot access /mnt/dfs/hdfs://lwm1:54310/movies: No such file or directory
>>>> ls: cannot access /mnt/dfs/hdfs://lwm1:54310/system: No such file or directory
>>>> ls: cannot access /mnt/dfs/hdfs://lwm1:54310/user: No such file or directory
>>>> hdfs://lwm1:54310/hbase  hdfs://lwm1:54310/movies  hdfs://lwm1:54310/user
>>>> hdfs://lwm1:54310/home   hdfs://lwm1:54310/system
>>>>
>>>>
>>>> Fortunately, we can still access subdirectories with "cd /mnt/dfs/user". And the
>>>> "cannot access /mnt/dfs/hdfs://lwm1:......" errors may be caused by my
>>>> modification.
>>>>
>>>> The debug message is:
>>>> port=54310,server=10.15.62.4
>>>> fuse-dfs didn't recognize /mnt/dfs,-2
>>>> fuse-dfs ignoring option -d
>>>> FUSE library version: 2.8.1
>>>> nullpath_ok: 0
>>>> unique: 1, opcode: INIT (26), nodeid: 0, insize: 56
>>>> INIT: 7.9
>>>> flags=0x0000000b
>>>> max_readahead=0x00020000
>>>>   INIT: 7.12
>>>>   flags=0x00000011
>>>>   max_readahead=0x00020000
>>>>   max_write=0x00020000
>>>>   unique: 1, success, outsize: 40
>>>> unique: 2, opcode: GETATTR (3), nodeid: 1, insize: 56
>>>> getattr /
>>>> fuse_dfs TRACE - getattr /
>>>>   unique: 2, success, outsize: 120
>>>> unique: 3, opcode: OPENDIR (27), nodeid: 1, insize: 48
>>>>   unique: 3, success, outsize: 32
>>>> unique: 4, opcode: READDIR (28), nodeid: 1, insize: 80
>>>> readdir[0] from 0
>>>> fuse_dfs TRACE - readdir /
>>>> numEntries: 5
>>>> info[0].mName = hdfs://lwm1:54310/hbase
>>>> dfs->dfs_uri_len = 23
>>>> path_len = 1
>>>> stat string: hdfs://cadal:54310/hbase
>>>> info[1].mName = hdfs://lwm1:54310/home
>>>> dfs->dfs_uri_len = 23
>>>> path_len = 1
>>>> stat string: hdfs://cadal:54310/home
>>>> info[2].mName = hdfs://lwm1:54310/movies
>>>> dfs->dfs_uri_len = 23
>>>> path_len = 1
>>>> stat string: hdfs://cadal:54310/movies
>>>> info[3].mName = hdfs://lwm1:54310/system
>>>> dfs->dfs_uri_len = 23
>>>> path_len = 1
>>>> stat string: hdfs://cadal:54310/system
>>>> info[4].mName = hdfs://lwm1:54310/user
>>>> dfs->dfs_uri_len = 23
>>>> path_len = 1
>>>> stat string: hdfs://lwm1:54310/user
>>>>   unique: 4, success, outsize: 336
>>>> unique: 5, opcode: GETATTR (3), nodeid: 1, insize: 56
>>>> getattr /
>>>> fuse_dfs TRACE - getattr /
>>>>   unique: 5, success, outsize: 120
>>>> unique: 6, opcode: LOOKUP (1), nodeid: 1, insize: 46
>>>> LOOKUP /hdfs:
>>>> getattr /hdfs:
>>>> fuse_dfs TRACE - getattr /hdfs:
>>>> Exception in thread "main" java.lang.IllegalArgumentException:
>>>> Pathname /hdfs: from /hdfs: is not a valid DFS filename.
>>>>        at org.apache.hadoop.dfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:125)
>>>>        at org.apache.hadoop.dfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:390)
>>>>        at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:667)
>>>> Call to org.apache.hadoop.fs.FileSystem::exists failed!
>>>>   unique: 6, error: -2 (No such file or directory), outsize: 16
>>>> unique: 7, opcode: LOOKUP (1), nodeid: 1, insize: 46
>>>> LOOKUP /hdfs:
>>>> getattr /hdfs:
>>>> fuse_dfs TRACE - getattr /hdfs:
>>>> Exception in thread "Thread-7" java.lang.IllegalArgumentException:
>>>> Pathname /hdfs: from /hdfs: is not a valid DFS filename.
>>>>        at org.apache.hadoop.dfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:125)
>>>>        at org.apache.hadoop.dfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:390)
>>>>        at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:667)
>>>> Call to org.apache.hadoop.fs.FileSystem::exists failed!
>>>>   unique: 7, error: -2 (No such file or directory), outsize: 16
>>>> unique: 8, opcode: LOOKUP (1), nodeid: 1, insize: 46
>>>> LOOKUP /hdfs:
>>>> getattr /hdfs:
>>>> fuse_dfs TRACE - getattr /hdfs:
>>>> Exception in thread "Thread-8" java.lang.IllegalArgumentException:
>>>> Pathname /hdfs: from /hdfs: is not a valid DFS filename.
>>>>        at org.apache.hadoop.dfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:125)
>>>>        at org.apache.hadoop.dfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:390)
>>>>        at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:667)
>>>> Call to org.apache.hadoop.fs.FileSystem::exists failed!
>>>>   unique: 8, error: -2 (No such file or directory), outsize: 16
>>>> unique: 9, opcode: LOOKUP (1), nodeid: 1, insize: 46
>>>> LOOKUP /hdfs:
>>>> getattr /hdfs:
>>>> fuse_dfs TRACE - getattr /hdfs:
>>>> Exception in thread "main" java.lang.IllegalArgumentException:
>>>> Pathname /hdfs: from /hdfs: is not a valid DFS filename.
>>>>        at org.apache.hadoop.dfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:125)
>>>>        at org.apache.hadoop.dfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:390)
>>>>        at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:667)
>>>> Call to org.apache.hadoop.fs.FileSystem::exists failed!
>>>>   unique: 9, error: -2 (No such file or directory), outsize: 16
>>>> unique: 10, opcode: LOOKUP (1), nodeid: 1, insize: 46
>>>> LOOKUP /hdfs:
>>>> getattr /hdfs:
>>>> fuse_dfs TRACE - getattr /hdfs:
>>>> Exception in thread "Thread-8" java.lang.IllegalArgumentException:
>>>> Pathname /hdfs: from /hdfs: is not a valid DFS filename.
>>>>        at org.apache.hadoop.dfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:125)
>>>>        at org.apache.hadoop.dfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:390)
>>>>        at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:667)
>>>> Call to org.apache.hadoop.fs.FileSystem::exists failed!
>>>>   unique: 10, error: -2 (No such file or directory), outsize: 16
>>>> unique: 11, opcode: READDIR (28), nodeid: 1, insize: 80
>>>>   unique: 11, success, outsize: 16
>>>> unique: 12, opcode: RELEASEDIR (29), nodeid: 1, insize: 64
>>>>   unique: 12, success, outsize: 16
>>>>
>>>>
>>>>
>>>>
>>>> On Mon, Dec 21, 2009 at 12:53 PM, Eli Collins <eli@cloudera.com> wrote:
>>>>> By the way, feel free to send your response to just me
>>>>> (eli@cloudera.com) if you don't want to spam the list.
>>>>>
>>>>> Even if you use the list if you also include my email I'll see your
>>>>> responses faster.
>>>>>
>>>>> Thanks,
>>>>> Eli
>>>>>
>>>>>
>>>>> On Sun, Dec 20, 2009 at 8:51 PM, Eli Collins <eli@cloudera.com> wrote:
>>>>>>> fuse_dfs TRACE - readdir /
>>>>>>>   unique: 4, success, outsize: 200
>>>>>>> unique: 5, opcode: RELEASEDIR (29), nodeid: 1, insize: 64
>>>>>>>   unique: 5, success, outsize: 16
>>>>>>>
>>>>>>> Does it seem OK?
>>>>>>
>>>>>> Hm, seems like it's not finding any directory entries. Mind putting a
>>>>>> printf in dfs_readdir after hdfsListDirectory in
>>>>>> fuse_impls_readdir.c to see what numEntries is?
>>>>>>
>>>>>> Thanks,
>>>>>> Eli
>>>>>>
>>>>>
>>>
>>>
>
>
