hadoop-hdfs-issues mailing list archives

From "Eli Collins (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-245) Create symbolic links in HDFS
Date Sat, 07 Nov 2009 04:17:41 GMT

    [ https://issues.apache.org/jira/browse/HDFS-245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12774548#action_12774548
] 

Eli Collins commented on HDFS-245:
----------------------------------

Hey Doug, thanks for taking a look.

bq. Looking at the patch: - the javadoc refers to
UnresolvedPathException when it should read UnresolvedLinkException

I assume you're referring to FileContext. These shouldn't be there; I
removed them and updated the corresponding comments in
AbstractFileSystem to indicate that getFileStatus throws an
UnresolvedPathException if it encounters a symlink in the given path,
and that getFileLinkStatus throws an UnresolvedPathException if it
encounters a symlink in the path leading up to the file or symlink
that the given path refers to.

bq.  - under what conditions would FileContext#getFileStatus or
getLinkFileStatus throw UnresolvedLinkException?  it looks to me like
this exception is always handled internally.  if a filesystem doesn't
support symlinks, or the symlink is broken, then other exceptions are
thrown.  this exception appears to be used purely for internal control
purposes and shouldn't be seen by end users so far as i can tell.  its
visibility should be "limited private", i think.

You're right, it's always handled internally; I inadvertently added it
there instead of AbstractFileSystem. I moved the comments (above) and
made UnresolvedLinkException LimitedPrivate for HDFS.
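To illustrate the "handled internally" point, here is a minimal, self-contained sketch of that control-flow pattern. The types and names below are invented stand-ins, not the actual Hadoop classes: a low-level call refuses to cross a symlink and throws, and the resolver loop catches the exception and retries with the link target, so end users never see it.

```java
import java.io.IOException;
import java.util.Map;

public class ResolveLoopSketch {
    // Hypothetical stand-in for UnresolvedLinkException: thrown internally
    // when resolution hits a symlink, never surfaced to end users.
    static class UnresolvedLinkException extends IOException {
        final String target;
        UnresolvedLinkException(String target) { this.target = target; }
    }

    // Toy link table: /link is a symlink to /real.
    static final Map<String, String> LINKS = Map.of("/link", "/real");

    // Stand-in for a filesystem call that refuses to cross symlinks itself.
    static String getFileStatus(String path) throws UnresolvedLinkException {
        String target = LINKS.get(path);
        if (target != null) {
            throw new UnresolvedLinkException(target);
        }
        return "status:" + path;
    }

    // Stand-in for the caller-side resolver: it catches the exception and
    // retries with the link target, so the exception stays internal.
    static String resolve(String path) throws IOException {
        for (int hops = 0; hops < 32; hops++) {  // guard against link cycles
            try {
                return getFileStatus(path);
            } catch (UnresolvedLinkException e) {
                path = e.target;  // follow the link and try again
            }
        }
        throw new IOException("too many symlink hops");
    }

    public static void main(String[] args) throws IOException {
        System.out.println(resolve("/link"));  // resolved via /real internally
    }
}
```

Since callers of resolve() can never observe the exception, a LimitedPrivate audience annotation (rather than public) matches how the class is actually used.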

bq.  - shouldn't FilterFS#getFileLinkStatus() call getFileLinkStatus
instead of getFileStatus?

Yup, fixed. I also added a comment to getFileLinkStatus ("FileSystem
does not support symlinks") explaining why it does not call
fsImpl.getFileLinkStatus.
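The fix follows the usual filter/decorator rule: each method forwards to its same-named counterpart on the wrapped instance. A minimal sketch with hypothetical stand-in types (not the actual FilterFs or AbstractFileSystem):

```java
public class FilterDelegationSketch {
    // Hypothetical stand-in for the two status methods.
    interface Fs {
        String getFileStatus(String path);       // follows a trailing symlink
        String getFileLinkStatus(String path);   // does not follow it
    }

    // Stand-in filter: each method must forward to the SAME method on the
    // wrapped fs. Forwarding getFileLinkStatus to getFileStatus (the bug
    // spotted above) would silently follow symlinks.
    static class FilterFs implements Fs {
        private final Fs fsImpl;
        FilterFs(Fs fsImpl) { this.fsImpl = fsImpl; }
        public String getFileStatus(String p) { return fsImpl.getFileStatus(p); }
        public String getFileLinkStatus(String p) { return fsImpl.getFileLinkStatus(p); }
    }

    public static void main(String[] args) {
        Fs base = new Fs() {
            public String getFileStatus(String p) { return "followed:" + p; }
            public String getFileLinkStatus(String p) { return "link:" + p; }
        };
        Fs filtered = new FilterFs(base);
        System.out.println(filtered.getFileLinkStatus("/a"));  // link:/a, not followed:/a
    }
}
```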

bq.  - in FSLinkResolver, what's the point of the 'catch (IOException
e) { throw e; }'?

I suspect it was meant to pass any non-UnresolvedLinkExceptions up to
the caller, but that happens naturally, so it's not needed. I removed it.
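A catch clause that only rethrows the same exception is a no-op: the exception propagates identically without it. A small demonstration (method names are invented for illustration):

```java
import java.io.IOException;

public class RedundantCatchSketch {
    static String readConfig() throws IOException {
        throw new IOException("boom");
    }

    // Before: the catch adds nothing -- rethrowing e is exactly what
    // happens when the exception is left to propagate on its own.
    static String withCatch() throws IOException {
        try {
            return readConfig();
        } catch (IOException e) {
            throw e;
        }
    }

    // After: identical behavior, less code.
    static String withoutCatch() throws IOException {
        return readConfig();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 2; i++) {
            try {
                if (i == 0) withCatch(); else withoutCatch();
            } catch (IOException e) {
                // Both variants deliver the same exception to the caller.
                System.out.println("caught: " + e.getMessage());
            }
        }
    }
}
```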

I also forgot to bump the versionID in ClientProtocol, fixed that.

Also noticed that the new fs image version wasn't being tested in
TestOfflineImageViewer, will fix that and upload a new patch.

Thanks,
Eli


> Create symbolic links in HDFS
> -----------------------------
>
>                 Key: HDFS-245
>                 URL: https://issues.apache.org/jira/browse/HDFS-245
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>         Attachments: 4044_20081030spi.java, designdocv1.txt, designdocv2.txt, HADOOP-4044-strawman.patch,
symlink-0.20.0.patch, symLink1.patch, symLink1.patch, symLink11.patch, symLink12.patch, symLink13.patch,
symLink14.patch, symLink15.txt, symLink15.txt, symlink16-common.patch, symlink16-hdfs.patch,
symlink16-mr.patch, symlink17-common.txt, symlink17-hdfs.txt, symlink18-common.txt, symlink19-common-delta.patch,
symlink19-common.txt, symlink19-common.txt, symlink19-hdfs-delta.patch, symlink19-hdfs.txt,
symlink20-common.patch, symlink20-hdfs.patch, symlink21-common.patch, symlink21-hdfs.patch,
symLink4.patch, symLink5.patch, symLink6.patch, symLink8.patch, symLink9.patch
>
>
> HDFS should support symbolic links. A symbolic link is a special type of file that contains
a reference to another file or directory in the form of an absolute or relative path and that
affects pathname resolution. Programs which read or write to files named by a symbolic link
will behave as if operating directly on the target file. However, archiving utilities can
handle symbolic links specially and manipulate them directly.
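To make "affects pathname resolution" concrete, here is a toy resolver (not HDFS code; the link table and names are invented) that rewrites a path when one of its leading components is a symlink, which is why a program opening the link transparently operates on the target:

```java
import java.util.Map;

public class SymlinkResolutionSketch {
    // Toy link table: /data is a symlink whose target is /storage/v2.
    static final Map<String, String> LINKS = Map.of("/data", "/storage/v2");

    // If a leading component of the path is a symlink, splice the link
    // target in place of that component; otherwise the path names itself.
    static String resolve(String path) {
        for (Map.Entry<String, String> e : LINKS.entrySet()) {
            String link = e.getKey();
            if (path.equals(link) || path.startsWith(link + "/")) {
                return e.getValue() + path.substring(link.length());
            }
        }
        return path;  // no symlink component: resolution is the identity
    }

    public static void main(String[] args) {
        // A program reading /data/file.txt actually touches the target:
        System.out.println(resolve("/data/file.txt"));   // /storage/v2/file.txt
        System.out.println(resolve("/other/file.txt"));  // unchanged
    }
}
```

An archiver, by contrast, would skip resolve() and record the link itself (its target string), which is the "manipulate them directly" case in the description above.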

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

