hadoop-hdfs-user mailing list archives

From Stuti Awasthi <stutiawas...@hcl.com>
Subject RE: Unable to mount HDFS using entry in /etc/fstab
Date Mon, 16 Jan 2012 11:17:06 GMT
Hi,

The issue is fixed now. It was caused by a Hadoop version mismatch between the two machines, I suppose.
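(Editor's note for anyone hitting the same EOFException: it usually indicates an RPC protocol mismatch between the Hadoop jars linked into fuse_dfs and the NameNode. A minimal sketch of the check; the version strings below are hardcoded stand-ins for what `hadoop version | head -n1` would print on each host, not real output from this thread:)

```shell
# Sketch: compare the Hadoop version seen by the fuse_dfs client with the
# one the NameNode runs. The strings below are hardcoded for illustration;
# in practice each would come from running `hadoop version | head -n1`
# on the respective machine (e.g. via ssh).
client_ver="Hadoop 0.20.2"      # assumed example value
server_ver="Hadoop 0.20.203.0"  # assumed example value
if [ "$client_ver" = "$server_ver" ]; then
  echo "versions match: $client_ver"
else
  echo "version mismatch: client=$client_ver server=$server_ver"
fi
```

If the two strings differ, rebuilding fuse_dfs against the same Hadoop release as the NameNode is the usual fix.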


-----Original Message-----
From: Stuti Awasthi 
Sent: Monday, January 16, 2012 4:28 PM
To: hdfs-user@hadoop.apache.org
Subject: RE: Unable to mount HDFS using entry in /etc/fstab

Hi,

Now that I have a working fuse_dfs, I wanted to mount an HDFS instance running on a different machine. So
now I have two machines: one running HDFS, and the other with a working fuse_dfs for mounting.

When I try to mount with the following command, I get this error:

[root@slave fuse-dfs]# fuse_dfs /hdfs -oserver=slave1 -oport=54310 -oallow_other -ousetrash rw -d
fuse-dfs didn't recognize /hdfs,-2
fuse-dfs ignoring option allow_other
fuse-dfs ignoring option -d
FUSE library version: 2.8.3
nullpath_ok: 0
unique: 1, opcode: INIT (26), nodeid: 0, insize: 56
INIT: 7.13
flags=0x0000007b
max_readahead=0x00020000
   INIT: 7.12
   flags=0x00000011
   max_readahead=0x00020000
   max_write=0x00020000
   unique: 1, success, outsize: 40
unique: 2, opcode: GETATTR (3), nodeid: 1, insize: 56
getattr /
Exception in thread "main" java.io.IOException: Call to slave1/10.33.100.112:54310 failed on local exception: java.io.EOFException
        at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
        at org.apache.hadoop.ipc.Client.call(Client.java:743)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
        at $Proxy0.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
        at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
Caused by: java.io.EOFException
        at java.io.DataInputStream.readInt(DataInputStream.java:392)
        at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
Call to org.apache.hadoop.fs.Filesystem::get(URI, Configuration) failed!
   
Here the hostname of the HDFS machine is slave1.
How can I fix this? Any ideas?




-----Original Message-----
From: alo.alt [mailto:wget.null@googlemail.com]
Sent: Thursday, January 12, 2012 10:19 PM
To: hdfs-user@hadoop.apache.org
Subject: Re: Unable to mount HDFS using entry in /etc/fstab

try this:
fuse_dfs#dfs://NAMENODE:PORT /mnt fuse usetrash,rw 0 0
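(Editor's note: that line follows the standard six-column /etc/fstab layout. The options column takes plain comma-separated options such as `usetrash,rw`, with no `-o` prefix; the `-o` prefixes in the earlier fstab attempt are what made mount report `unknown option`. A sketch using the slave1:54310 endpoint mentioned later in the thread; substitute your own NameNode host and port:)

```
# /etc/fstab
# device                     mountpoint  type  options      dump  pass
fuse_dfs#dfs://slave1:54310  /mnt        fuse  usetrash,rw  0     0
```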

--
Alexander Lorenz
http://mapredit.blogspot.com

On Jan 12, 2012, at 5:05 AM, Stuti Awasthi wrote:

> Hi,
> I modified the /etc/fstab to following :
> fuse_dfs#dfs://slave:54310 /mnt fuse allow_other,rw,usetrash 0 0
> 
> Now I am just getting warnings when I try to mount.
> 
> [root@slave fuse-dfs]# mount /mnt
> port=54310,server=slave
> fuse-dfs didn't recognize /mnt,-2
> fuse-dfs ignoring option allow_other
> fuse-dfs ignoring option dev
> fuse-dfs ignoring option suid
> 
> But I am getting "Transport endpoint is not connected".
> The output of df -h is:
> [root@slave fuse-dfs]# df -h
> Filesystem            Size  Used Avail Use% Mounted on
> /dev/mapper/vg_slave-lv_root                       50G  4.4G   43G  10% /
> tmpfs                 999M  272K  999M   1% /dev/shm
> /dev/sda1             485M   30M  430M   7% /boot
> /dev/mapper/vg_slave-lv_home                       94G  188M   89G   1% /home
> df: `/mnt': Transport endpoint is not connected
> 
> Thanks
> -----Original Message-----
> From: Stuti Awasthi
> Sent: Thursday, January 12, 2012 5:32 PM
> To: hdfs-user@hadoop.apache.org
> Subject: Unable to mount HDFS using entry in /etc/fstab
> 
> Hi All,
> I am able to mount HDFS using fuse-dfs. I am using http://wiki.apache.org/hadoop/MountableHDFS as a reference.
> Currently I am able to mount HDFS using fuse_dfs_wrapper.sh and also by directly executing the fuse_dfs executable.
> 
> Eg:
> [root@slave fuse-dfs]# fuse_dfs -oserver=slave -oport=54310 -oallow_other -ousetrash rw /mnt -d
> fuse-dfs ignoring option allow_other
> fuse-dfs didn't recognize /mnt,-2
> fuse-dfs ignoring option -d
> FUSE library version: 2.8.3
> nullpath_ok: 0
> unique: 1, opcode: INIT (26), nodeid: 0, insize: 56
> INIT: 7.13
> flags=0x0000007b
> max_readahead=0x00020000
>   INIT: 7.12
>   flags=0x00000011
>   max_readahead=0x00020000
>   max_write=0x00020000
>   unique: 1, success, outsize: 40
> unique: 2, opcode: STATFS (17), nodeid: 1, insize: 40 statfs /
>   unique: 2, success, outsize: 96
> unique: 3, opcode: GETATTR (3), nodeid: 1, insize: 56 getattr /
>   unique: 3, success, outsize: 120
> 
> Now when I add the entry in /etc/fstab and try to mount HDFS, I get the following error:
> 
> Entry in /etc/fstab:
> fuse_dfs#dfs://slave:54310 /mnt fuse -oallow_other,rw,-ousetrash 0 0
> 
> [root@slave fuse-dfs]# mount /mnt
> port=54310,server=slave
> fuse-dfs didn't recognize /mnt,-2
> fuse-dfs ignoring option -oallow_other
> fuse-dfs ignoring option -ousetrash
> fuse-dfs ignoring option dev
> fuse-dfs ignoring option suid
> fuse: unknown option `-oallow_other'
> 
> Exported env variable:
> 
> declare -x CLASSPATH="/root/MountHDFS/hadoop-0.20.2/lib/commons-cli-1.2.jar:/root/MountHDFS/hadoop-0.20.2/lib/commons-codec-1.3.jar:/root/MountHDFS/hadoop-0.20.2/lib/commons-el-1.0.jar:/root/MountHDFS/hadoop-0.20.2/lib/commons-httpclient-3.0.1.jar:/root/MountHDFS/hadoop-0.20.2/lib/commons-logging-1.0.4.jar:/root/MountHDFS/hadoop-0.20.2/lib/commons-logging-api-1.0.4.jar:/root/MountHDFS/hadoop-0.20.2/lib/commons-net-1.4.1.jar:/root/MountHDFS/hadoop-0.20.2/lib/core-3.1.1.jar:/root/MountHDFS/hadoop-0.20.2/lib/hsqldb-1.8.0.10.jar:/root/MountHDFS/hadoop-0.20.2/lib/jasper-compiler-5.5.12.jar:/root/MountHDFS/hadoop-0.20.2/lib/jasper-runtime-5.5.12.jar:/root/MountHDFS/hadoop-0.20.2/lib/jets3t-0.6.1.jar:/root/MountHDFS/hadoop-0.20.2/lib/jetty-6.1.14.jar:/root/MountHDFS/hadoop-0.20.2/lib/jetty-util-6.1.14.jar:/root/MountHDFS/hadoop-0.20.2/lib/junit-3.8.1.jar:/root/MountHDFS/hadoop-0.20.2/lib/kfs-0.2.2.jar:/root/MountHDFS/hadoop-0.20.2/lib/log4j-1.2.15.jar:/root/MountHDFS/hadoop-0.20.2/lib/mockito-all-1.8.0.jar:/root/MountHDFS/hadoop-0.20.2/lib/oro-2.0.8.jar:/root/MountHDFS/hadoop-0.20.2/lib/servlet-api-2.5-6.1.14.jar:/root/MountHDFS/hadoop-0.20.2/lib/slf4j-api-1.4.3.jar:/root/MountHDFS/hadoop-0.20.2/lib/slf4j-log4j12-1.4.3.jar:/root/MountHDFS/hadoop-0.20.2/lib/xmlenc-0.52.jar:/root/MountHDFS/hadoop-0.20.2/hadoop-0.20.2-ant.jar:/root/MountHDFS/hadoop-0.20.2/hadoop-0.20.2-core.jar:/root/MountHDFS/hadoop-0.20.2/hadoop-0.20.2-examples.jar:/root/MountHDFS/hadoop-0.20.2/hadoop-0.20.2-test.jar:/root/MountHDFS/hadoop-0.20.2/hadoop-0.20.2-tools.jar"
> declare -x FUSE_HOME="/usr/include/fuse"
> declare -x HADOOP_HOME="/root/MountHDFS/hadoop-0.20.2"
> declare -x JAVA_HOME="/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64"
> declare -x LD_LIBRARY_PATH="/usr/lib:/usr/lib64:/usr/local/lib:/usr/local/lib64:/root/MountHDFS/hadoop-0.20.2/build/libhdfs:/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/lib/amd64/server/:/lib64/libfuse.so.2:/lib64/libfuse.so"
> declare -x PATH="/usr/include/fuse:/usr/include/fuse.h:/lib64/libfuse.so.2:/lib64:/usr/lib64:/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/bin/:/root/MountHDFS/Ant/apache-ant-1.8.2/bin"
> 
> Has anyone else faced the same issue? Please suggest how I can fix this.
> 
> Regards,
> Stuti Awasthi
> HCL Comnet Systems and Services Ltd
> F-8/9 Basement, Sec-3,Noida.
> 
> 

