hadoop-common-user mailing list archives

From Philip Zeyliger <phi...@cloudera.com>
Subject Re: cannot open an hdfs file in O_RDWR mode
Date Sat, 02 May 2009 01:34:12 GMT
HDFS does not allow you to overwrite bytes of a file that have already been
written.  The only operations it supports are read (an existing file), write
(a new file), and (in newer versions, not always enabled) append (to an
existing file).
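
A minimal sketch of those three operations against the Hadoop `FileSystem` client API (path and contents are illustrative; this needs a running cluster, and `append` only works when the cluster enables it):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsModes {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path p = new Path("/tmp/example.txt");  // illustrative path

        // Write: only valid for a new file (create(p, true) replaces the
        // whole file; it does not edit bytes in place).
        FSDataOutputStream out = fs.create(p);
        out.writeBytes("hello\n");
        out.close();

        // Read: any existing file.
        FSDataInputStream in = fs.open(p);
        byte[] buf = new byte[6];
        in.readFully(buf);
        in.close();

        // Append: only if the cluster enables append support;
        // otherwise this call throws an IOException.
        FSDataOutputStream app = fs.append(p);
        app.writeBytes("more\n");
        app.close();

        // Note there is no open-for-update call at all: nothing in the
        // API corresponds to O_RDWR (seek into a file and overwrite).
    }
}
```

That missing open-for-update call is why fuse_dfs has to reject O_RDWR and return EIO to the caller.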

-- Philip

On Fri, May 1, 2009 at 5:56 PM, Robert Engel <engel_r@ligo.caltech.edu> wrote:

> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> Hello,
>
>    I am using Hadoop on a small storage cluster (x86_64, CentOS 5.3,
> Hadoop-0.19.1). The hdfs is mounted using fuse and everything seemed
> to work just fine so far. However, I noticed that I cannot:
>
> 1) use svn to check out files on the mounted hdfs partition
> 2) request that stdout and stderr of Globus jobs are written to the
> hdfs partition
>
> In both cases I see the following error message in /var/log/messages:
>
> fuse_dfs: ERROR: could not connect open file fuse_dfs.c:1364
>
> When I run fuse_dfs in debugging mode I get:
>
> ERROR: cannot open an hdfs file in O_RDWR mode
> unique: 169, error: -5 (Input/output error), outsize: 16
>
> My question is whether this is a general limitation of Hadoop or
> whether this operation is simply not supported yet. I searched Google
> and JIRA but could not find an answer.
>
> Thanks,
> Robert
>
> -----BEGIN PGP SIGNATURE-----
> Version: GnuPG v1.4.9 (GNU/Linux)
> Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org
>
> iEYEARECAAYFAkn7mksACgkQrxCAtr5BXdMx5wCeICTHQbOwjZoGpVTO6ayd7l7t
> LXoAn0WBwfo6ZYdJX1sh2eO2owAR0HLm
> =PUCc
> -----END PGP SIGNATURE-----
>
>
