hadoop-hdfs-issues mailing list archives

From "Luca Telloli (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HDFS-456) Problems with dfs.name.edits.dirs as URI
Date Wed, 26 Aug 2009 14:12:59 GMT

     [ https://issues.apache.org/jira/browse/HDFS-456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Luca Telloli updated HDFS-456:

    Attachment: HDFS-456.patch

Suresh, thanks for the detailed comments, my replies are below. 

In general, after some tests, I came up with an update in the URI creation logic: 

- if the URI is correctly specified, the URI constructor will take care of it
- if the user specified a directory/file rather than a URI, then the File constructor will be
invoked, and the URI constructed from there
- if the scheme is file, then the authority part must not be a Windows drive letter such as C:

The last rule comes from reading this page: http://blogs.msdn.com/ie/archive/2006/12/06/file-uris-in-windows.aspx

Let me know if you think that the logic is solid and the unit test is good enough, thanks!
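The three rules above could be sketched roughly like this (a minimal, hypothetical stringAsURI helper for illustration; names and details are not taken from the attached patch):

```java
import java.io.File;
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;

public class UriConversion {
    // Hypothetical sketch of the three rules described above.
    public static URI stringAsURI(String name) throws IOException {
        URI u = null;
        try {
            u = new URI(name);        // rule 1: a well-formed URI is used as-is
        } catch (URISyntaxException e) {
            u = null;                 // not parseable as a URI; fall through to rule 2
        }
        // rule 2: a plain directory/file (no scheme) goes through the File constructor
        if (u == null || u.getScheme() == null) {
            u = new File(name).toURI();
        }
        // rule 3: for the file scheme, the authority must not be a drive letter like C:
        String auth = u.getAuthority();
        if ("file".equals(u.getScheme()) && auth != null && auth.matches("[A-Za-z]:")) {
            throw new IOException("Windows drive letter parsed as URI authority: " + name);
        }
        return u;
    }
}
```

This is only a sketch of the decision order, not the patch itself; the real check for a drive-letter authority would likely be platform-aware.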

FSImage.getDirectories() - in this method, the path for storage directories is interpreted only
as File. Could this also have URI?

I'm not 100% sure about this. In FSImage.getDirectories() the assumption is that the list of
directories has already been set correctly by the initialization methods, no? What I mean
is that that method does not read any configuration property.

FSImage.java - when creating a new URI please catch the specific exception URISyntaxException
instead of a blanket Exception


FSImage.java - has System.out.println - please remove it 

Done, sorry about that.

FSImage.java - when trying to process name as file, on exception, should the error say "Error
while processing element as URI or File: " + name    


Interpreting a path as URI, and on failure as file is repeated in multiple places. Move this
into a util, instead of duplicating the code. 


Please add unit tests to test this method on various platforms to ensure different configurations
(that is, files, Windows files, URIs) are handled. I would like to review the unit tests to
see if we are catching all the conditions. These tests need to pass on both Windows and Linux.

I added a unit test as requested. 

In some of the tests, replace(' ', '+'); has been removed. Not sure if this is intentional?

That was intentional two patches ago, but I removed it anyway. I think some tests on Windows behave
weirdly because of directories with spaces (that is, running them in c:\Documents and Settings
gives a different outcome than running them in /cygdrive/c/tmp/), but that should be the subject
of a different patch, if needed.
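For the record, the spaces issue is visible with plain java.net.URI (an illustrative snippet, not code from the patch — the /tmp/name node path is just an example):

```java
import java.io.File;
import java.net.URI;
import java.net.URISyntaxException;

public class SpacesInPaths {
    public static void main(String[] args) {
        // A raw space makes a URI string syntactically invalid...
        boolean invalid = false;
        try {
            new URI("file:///C:/Documents and Settings");
        } catch (URISyntaxException e) {
            invalid = true;
        }
        // ...while going through File.toURI() percent-encodes it.
        URI viaFile = new File("/tmp/name node").toURI();
        System.out.println(invalid);              // prints: true
        System.out.println(viaFile.getRawPath()); // prints: /tmp/name%20node
    }
}
```

So a configured path with spaces behaves differently depending on whether it is parsed directly as a URI or converted via File first, which would explain the divergent test outcomes.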

> Problems with dfs.name.edits.dirs as URI
> ----------------------------------------
>                 Key: HDFS-456
>                 URL: https://issues.apache.org/jira/browse/HDFS-456
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: name-node
>    Affects Versions: 0.21.0
>            Reporter: Konstantin Shvachko
>            Assignee: Luca Telloli
>             Fix For: 0.21.0
>         Attachments: failing-tests.zip, HDFS-456.patch, HDFS-456.patch, HDFS-456.patch,
HDFS-456.patch, HDFS-456.patch, HDFS-456.patch
> There are several problems with recent commit of HDFS-396.
> # It does not work with default configuration "file:///". Throws {{IllegalArgumentException}}.
> # *ALL* hdfs tests fail on Windows because "C:\mypath" is treated as an illegal URI.
Backward compatibility is not provided.
> # {{IllegalArgumentException}} should not be thrown within hdfs code because it is a
{{RuntimeException}}. We should throw {{IOException}} instead. This was recently discussed
in another jira.
> # Why do we commit patches without running unit tests and test-patch? This is the minimum
requirement for a patch to qualify as committable, right?

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
