hadoop-common-dev mailing list archives

From "Chris Douglas (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-3173) inconsistent globbing support for dfs commands
Date Tue, 03 Jun 2008 20:30:45 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-3173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12602060#action_12602060 ]

Chris Douglas commented on HADOOP-3173:
---------------------------------------

The possible solutions are constrained by the fact that Path objects go through two passes
of normalization before they're globbed. Unless we preserve the path as a String prior to those
transformations, we can only glob the result, which yields the behavior described in this JIRA.
Come to think of it, how does the escape get to the globbing code at all? Given:

{code:title=Path.java::normalizePath}
    path = path.replace("\\", "/");
{code}

How does

{code:title=FileSystem.GlobFilter::setRegex}
if (pCh == PAT_ESCAPE) {
  fileRegex.append(pCh);
  i++;
  if (i >= len)
    error("An escaped character does not present", filePattern, i);
  pCh = filePattern.charAt(i);
}
{code}

ever occur, when the pattern comes from {{pathPattern.toUri().getPath()}}? Anyway, I agree
with Hairong: the proposed solution is clearly a hack, but I don't see how we can avoid something
like it.
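
To make the point concrete, here is a minimal, hypothetical sketch (not part of any patch) of what happens to a backslash-escaped pattern, assuming the normalization quoted from {{Path.java}} above; the class name and exact output are illustrative only:

{code:title=GlobEscapeDemo.java (hypothetical sketch)}
import org.apache.hadoop.fs.Path;

public class GlobEscapeDemo {
  public static void main(String[] args) {
    // The Java string below contains a literal backslash before the '*',
    // i.e. the escape a user would write to protect the glob character.
    Path p = new Path("/user/rajive/a/\\*");

    // normalizePath() rewrites "\\" to "/", so by the time globStatus()
    // takes pathPattern.toUri().getPath(), the backslash is already gone
    // and the PAT_ESCAPE branch quoted above cannot be reached this way.
    System.out.println(p.toUri().getPath());
  }
}
{code}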

I propose we mark this as "Won't fix" and live with programmatically manipulating the odd
file until we have a model that can handle this elegantly.
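
For reference, "programmatically manipulating the odd file" could look like the hypothetical sketch below; {{FileSystem.delete()}} takes a literal {{Path}} and does not glob, so it only touches the directory actually named {{*}} (the class name and configuration are assumptions, not from this issue):

{code:title=DeleteLiteralStar.java (hypothetical sketch)}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DeleteLiteralStar {
  public static void main(String[] args) throws Exception {
    // Assumes the default configuration points at the cluster in question.
    FileSystem fs = FileSystem.get(new Configuration());

    // The Path is taken literally; only globStatus()/FsShell expand '*',
    // so this removes the directory literally named "*" and nothing else.
    fs.delete(new Path("/user/rajive/a/*"), true);  // recursive delete
  }
}
{code}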

> inconsistent globbing support for dfs commands
> ----------------------------------------------
>
>                 Key: HADOOP-3173
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3173
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>         Environment: Hadoop 0.16.1
>            Reporter: Rajiv Chittajallu
>             Fix For: 0.18.0
>
>         Attachments: 3173-0.patch
>
>
> hadoop dfs -mkdir /user/*/bar creates a directory "/user/*/bar" and you can't delete /user/* as -rmr expands the glob
> $ hadoop dfs -mkdir /user/rajive/a/*/foo
> $ hadoop dfs -ls /user/rajive/a
> Found 4 items
> /user/rajive/a/*	<dir>		2008-04-04 16:09	rwx------	rajive	users
> /user/rajive/a/b	<dir>		2008-04-04 16:08	rwx------	rajive	users
> /user/rajive/a/c	<dir>		2008-04-04 16:08	rwx------	rajive	users
> /user/rajive/a/d	<dir>		2008-04-04 16:08	rwx------	rajive	users
> $ hadoop dfs -ls /user/rajive/a/*
> /user/rajive/a/*/foo	<dir>		2008-04-04 16:09	rwx------	rajive	users
> $ hadoop dfs -rmr /user/rajive/a/*
> Moved to trash: hdfs://namenode-1:8020/user/rajive/a/*
> Moved to trash: hdfs://namenode-1:8020/user/rajive/a/b
> Moved to trash: hdfs://namenode-1:8020/user/rajive/a/c
> Moved to trash: hdfs://namenode-1:8020/user/rajive/a/d
> I am not able to escape the '*' to keep it from being expanded.
> $ hadoop dfs -rmr '/user/rajive/a/*'
> Moved to trash: hdfs://namenode-1:8020/user/rajive/a/*
> Moved to trash: hdfs://namenode-1:8020/user/rajive/a/b
> Moved to trash: hdfs://namenode-1:8020/user/rajive/a/c
> Moved to trash: hdfs://namenode-1:8020/user/rajive/a/d
> $ hadoop dfs -rmr  '/user/rajive/a/\*'
> Moved to trash: hdfs://namenode-1:8020/user/rajive/a/*
> Moved to trash: hdfs://namenode-1:8020/user/rajive/a/b
> Moved to trash: hdfs://namenode-1:8020/user/rajive/a/c
> Moved to trash: hdfs://namenode-1:8020/user/rajive/a/d
> $ hadoop dfs -rmr  /user/rajive/a/\* 
> Moved to trash: hdfs://namenode-1:8020/user/rajive/a/*
> Moved to trash: hdfs://namenode-1:8020/user/rajive/a/b
> Moved to trash: hdfs://namenode-1:8020/user/rajive/a/c
> Moved to trash: hdfs://namenode-1:8020/user/rajive/a/d

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

