hadoop-common-user mailing list archives

From Robert Engel <enge...@ligo.caltech.edu>
Subject Re: cannot open an hdfs file in O_RDWR mode
Date Mon, 04 May 2009 17:54:49 GMT

Hey Philip,

	How can I enable "append to an existing file" in Hadoop?
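
In the 0.19.x line, append support is gated by the dfs.support.append property, which is off by default and was considered unstable in these releases. A minimal sketch of the relevant configuration fragment (property name assumed from the 0.19-era HDFS configuration, site file name may vary):

```xml
<!-- hadoop-site.xml: enables the experimental append operation.
     Append in 0.19.x was known to be buggy; enabling it was
     discouraged for production clusters. -->
<property>
  <name>dfs.support.append</name>
  <value>true</value>
</property>
```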


Philip Zeyliger wrote:
> HDFS does not allow you to overwrite bytes of a file that have already been
> written.  The only operations it supports are read (an existing file), write
> (a new file), and (in newer versions, not always enabled) append (to an
> existing file).
> -- Philip
> On Fri, May 1, 2009 at 5:56 PM, Robert Engel <engel_r@ligo.caltech.edu> wrote:
> Hello,
>    I am using Hadoop on a small storage cluster (x86_64, CentOS 5.3,
> Hadoop-0.19.1). The hdfs is mounted using fuse and everything seemed
> to work just fine so far. However, I noticed that I cannot:
> 1) use svn to check out files on the mounted hdfs partition
> 2) request that stdout and stderr of Globus jobs be written to the
> hdfs partition
> In both cases I see the following error message in /var/log/messages:
> fuse_dfs: ERROR: could not connect open file fuse_dfs.c:1364
> When I run fuse_dfs in debugging mode I get:
> ERROR: cannot open an hdfs file in O_RDWR mode
> unique: 169, error: -5 (Input/output error), outsize: 16
> My question is whether this is a general limitation of Hadoop or
> whether this operation is just not currently supported. I searched
> Google and JIRA but could not find an answer.
> Thanks,
> Robert
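
Philip's read-existing / write-new / append model maps onto the HDFS client API roughly as follows. This is an illustrative sketch only (it needs a live cluster with append enabled; the path is a placeholder), but it shows why an O_RDWR open through fuse_dfs has nothing to translate to and fails with EIO:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsAppendSketch {
    public static void main(String[] args) throws Exception {
        // Assumes fs.default.name in the loaded configuration points
        // at the target HDFS instance.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path p = new Path("/tmp/example.log"); // placeholder path

        // write: create a brand-new file
        FSDataOutputStream out = fs.create(p);
        out.writeBytes("first line\n");
        out.close();

        // append: only succeeds when dfs.support.append is enabled
        // on the cluster (unstable in 0.19.x)
        FSDataOutputStream app = fs.append(p);
        app.writeBytes("second line\n");
        app.close();

        // There is no API for rewriting bytes already in the file,
        // which is why fuse_dfs cannot honor O_RDWR and returns EIO.
        fs.close();
    }
}
```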


