hadoop-hdfs-user mailing list archives

From Eli Collins <...@cloudera.com>
Subject Re: fuse_dfs dfs problem
Date Wed, 20 Jan 2010 18:18:29 GMT
Hey Sergey,

Here's a link to the jira: http://issues.apache.org/jira/browse/HDFS-856

You can find the patch under the attachments section; here's a direct link:

http://issues.apache.org/jira/secure/attachment/12429027/HADOOP-856.patch
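
If you're building from a source checkout, applying it is roughly the usual routine (a sketch only; check the patch header for the right -p level and adjust the paths for your tree):

  cd /path/to/hadoop-0.20.1    # example path to your Hadoop source root
  wget http://issues.apache.org/jira/secure/attachment/12429027/HADOOP-856.patch
  patch -p0 --dry-run < HADOOP-856.patch    # first check that it applies cleanly
  patch -p0 < HADOOP-856.patch
  # then rebuild the fuse-dfs contrib module and remount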

Thanks,
Eli

On Wed, Jan 20, 2010 at 8:03 AM,  <fenix.serega@gmail.com> wrote:
> Hello Eli, could you please point me to where I can get this patch (from the
> jira) to fix this issue?
>
> Regards,
> Sergey S. Ropchan
>
> 2010/1/13 Eli Collins <eli@cloudera.com>:
>> Hey Klaus,
>>
>> That's HDFS-856; you can apply the patch from the jira. The fix will
>> also be in the next CDH2 release.
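>>
>> The gist of the problem: the fuse_dfs create path asks the namenode for
>> replication 3 rather than picking up your dfs.replication setting, which is
>> why it runs into dfs.replication.max = 1. Purely as an illustration (this is
>> not the literal patch), the libhdfs API lets you avoid pinning a value by
>> passing 0 for the replication argument, which means "use the configured
>> default":
>>
>>   #include <fcntl.h>
>>   #include "hdfs.h"   /* libhdfs header shipped with Hadoop */
>>
>>   /* Hypothetical helper: create a file without forcing a replication
>>    * factor. Passing 0 for bufferSize, replication and blocksize tells
>>    * libhdfs to fall back to the configured defaults. */
>>   static int create_with_default_replication(hdfsFS fs, const char *path)
>>   {
>>       hdfsFile f = hdfsOpenFile(fs, path, O_WRONLY | O_CREAT, 0, 0, 0);
>>       if (f == NULL)
>>           return -1;                  /* create failed */
>>       return hdfsCloseFile(fs, f);    /* 0 on success, -1 on error */
>>   }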
>>
>> Thanks,
>> Eli
>>
>> On Tue, Jan 12, 2010 at 8:23 PM, Klaus Nagel <das@gibtsdochgar.net> wrote:
>>> Hello, I am using Hadoop 0.20.1 and have a little problem with it... hope
>>> someone can help...
>>>
>>> I have a 3-node setup and set dfs.replication and dfs.replication.max to
>>> 1 (in hdfs-site.xml).
>>> That works fine when putting a file into the Hadoop filesystem
>>> (e.g. ./hadoop fs -put ~/debian-503-i386-businesscard.iso abc.iso).
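>>>
>>> For reference, the relevant snippet from my hdfs-site.xml (just the two
>>> properties mentioned above) looks roughly like this:
>>>
>>>   <property>
>>>     <name>dfs.replication</name>
>>>     <value>1</value>
>>>   </property>
>>>   <property>
>>>     <name>dfs.replication.max</name>
>>>     <value>1</value>
>>>   </property>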
>>>
>>> When I try the same thing with fuse_dfs, I get the following error message
>>> from the fuse_dfs_wrapper.sh script:
>>>
>>> LOOKUP /temp/test.test
>>>   unique: 21, error: -2 (No such file or directory), outsize: 16
>>> unique: 22, opcode: CREATE (35), nodeid: 7, insize: 58
>>> WARN: hdfs does not truly support O_CREATE && O_EXCL
>>> Exception in thread "Thread-6" org.apache.hadoop.ipc.RemoteException: java.io.IOException: failed to create file /temp/test.test on client 10.8.0.1.
>>> Requested replication 3 exceeds maximum 1
>>>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1074)
>>>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:977)
>>>        at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:377)
>>>  ...
>>> ...
>>> ...
>>>
>>>
>>> ...the same messages show up in the namenode log:
>>> 2010-01-13 04:36:57,183 WARN org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.startFile: failed to create file /temp/test.test on client 10.8.0.1.
>>> Requested replication 3 exceeds maximum 1
>>> 2010-01-13 04:36:57,183 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9000, call create(/temp/test.test, rwxr-xr-x, DFSClient_814881830$
>>> Requested replication 3 exceeds maximum 1
>>> java.io.IOException: failed to create file /temp/test.test on client 10.8.0.1.
>>> Requested replication 3 exceeds maximum 1
>>>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1074)
>>>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:977)
>>>        at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:377)
>>> ...
>>> ...
>>>
>>> ...hope someone can help me solve this problem.
>>> Best regards, Klaus
>>>
>>>
>>
>
