From "Karthik Palanisamy (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-14320) Support skipTrash for WebHDFS
Date Fri, 22 Mar 2019 00:32:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-14320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16798588#comment-16798588 ]

Karthik Palanisamy commented on HDFS-14320:
-------------------------------------------

Thank you for your review [~daryn].
{quote}Trash is a FSShell concept.  I really question whether it belongs in the REST api.
{quote}
I would still like to introduce this feature for REST calls.
{quote} # If the default is "true", which it must be for any kind of compatibility, an explicit
change is required by the user to set the param to false. Few will do it, so is it really
that useful?{quote}
To clarify: skiptrash is set to "true" by default. With "true", files are not moved to trash; they are permanently deleted.

Users do not need to change or set anything explicitly unless they want the trash behavior.

If a user wants files moved to trash, they must set "skiptrash=false".
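
For illustration, a minimal sketch of how the delete path could honor that flag (the helper name and shape here are my own, not the exact patch code):
{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.Trash;

public class SkipTrashDeleteSketch {
  // Illustrative only: skipTrash=true keeps today's behavior (permanent delete),
  // skipTrash=false moves the path into the caller's trash directory instead.
  static boolean delete(FileSystem fs, Path path, Configuration conf,
                        boolean recursive, boolean skipTrash) throws IOException {
    if (!skipTrash) {
      // Resolves the appropriate trash root for the path before moving it there.
      return Trash.moveToAppropriateTrash(fs, path, conf);
    }
    return fs.delete(path, recursive);
  }
}
{code}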
{quote} # You cannot or should not create a default fs and parse a path in the NN. It's very
dangerous.  Give me some time (that I don't have) and I'd likely come up with a nasty exploit.{quote}
I constructed the FileSystem object from the existing WebHDFS config object, the same one currently used by all WebHDFS calls in the NameNode.

I am unsure whether this WebHDFS server implementation (NamenodeWebHdfsMethods) will ever be used by any other filesystem. My assumption was that it is only used by the NameNode.

 

I guess we can't construct the FileSystem object from the path itself because curl only supplies an "http://" URI. I can add an additional check that allows the operation only if the scheme (taken from the config object) is "hdfs://".
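
Roughly what I have in mind (a sketch, assuming the FileSystem is derived from the NameNode's own Configuration rather than the request URI; the helper name is illustrative):
{code:java}
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class HdfsSchemeCheckSketch {
  // Illustrative only: build the FileSystem from the server-side configuration
  // (curl only carries an "http://" URI) and reject non-hdfs default schemes.
  static FileSystem getHdfsFileSystem(Configuration conf) throws IOException {
    URI defaultUri = FileSystem.getDefaultUri(conf);
    if (!"hdfs".equalsIgnoreCase(defaultUri.getScheme())) {
      throw new IOException(
          "Trash is only supported when fs.defaultFS uses hdfs://, found: " + defaultUri);
    }
    return FileSystem.get(conf);
  }
}
{code}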

 
{quote}Also, this patch won't work with security enabled since the NN's handler does not have
any credentials. 
{quote}
My bad! Let me think about it. Could you share any approach for handling this on a secure cluster?

 

 

> Support skipTrash for WebHDFS 
> ------------------------------
>
>                 Key: HDFS-14320
>                 URL: https://issues.apache.org/jira/browse/HDFS-14320
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: namenode, webhdfs
>    Affects Versions: 3.2.0
>            Reporter: Karthik Palanisamy
>            Assignee: Karthik Palanisamy
>            Priority: Major
>         Attachments: HDFS-14320-001.patch, HDFS-14320-002.patch, HDFS-14320-003.patch,
HDFS-14320-004.patch, HDFS-14320-005.patch, HDFS-14320-006.patch, HDFS-14320-007.patch, HDFS-14320-008.patch
>
>
> Files/directories deleted via the WebHDFS REST call do not go through trash; they are deleted permanently. This feature is very important to us because one of our users accidentally deleted a large directory.
> By default, the skiptrash option is set to true (skiptrash=true), so any files deleted using curl will be permanently deleted.
> Example:
> curl -iv -X DELETE "http://xxxx:50070/webhdfs/v1/tmp/sampledata?op=DELETE&user.name=hdfs&recursive=true"
>  
> Use skiptrash=false to move files to trash instead.
> Example:
> curl -iv -X DELETE "http://xxxx:50070/webhdfs/v1/tmp/sampledata?op=DELETE&user.name=hdfs&recursive=true&skiptrash=false"
>  


