hadoop-hdfs-issues mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-7878) API - expose an unique file identifier
Date Wed, 25 Oct 2017 12:35:01 GMT

    [ https://issues.apache.org/jira/browse/HDFS-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16218511#comment-16218511 ]

Steve Loughran commented on HDFS-7878:
--------------------------------------

filesystem.md LGTM; +1 for that bit.
Same for the API.

AbstractContractOpenTest: L241, L260: if you use LambdaTestUtils.intercept(), it automatically
creates the exception message, including the toString() value of the lambda expression.

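For anyone unfamiliar with the helper, here is a minimal standalone sketch of the behaviour it is describing. This is a hypothetical simplification, not Hadoop's code; the real method is org.apache.hadoop.test.LambdaTestUtils.intercept(), which has more overloads (expected message text, timeouts, etc.):

```java
import java.util.concurrent.Callable;

/**
 * Minimal standalone sketch of what LambdaTestUtils.intercept() does:
 * evaluate the lambda, hand back the expected exception if it is raised,
 * and fail with a message built from the lambda's toString() otherwise.
 */
public class InterceptSketch {

  static <E extends Throwable> E intercept(Class<E> clazz, Callable<?> eval)
      throws Exception {
    Object result;
    try {
      result = eval.call();
    } catch (Throwable t) {
      if (clazz.isInstance(t)) {
        return clazz.cast(t);   // the expected exception: hand it back
      }
      throw t;                  // wrong exception type: propagate
    }
    // No exception at all: fail, naming the lambda and its return value.
    throw new AssertionError("Expected " + clazz.getName()
        + " but " + eval + " returned " + result);
  }

  public static void main(String[] args) throws Exception {
    // Expected-exception case: intercept returns the caught exception.
    UnsupportedOperationException ex = intercept(
        UnsupportedOperationException.class,
        () -> { throw new UnsupportedOperationException("no handles"); });
    System.out.println("caught: " + ex.getMessage());

    // No-exception case: intercept itself fails the check.
    try {
      intercept(UnsupportedOperationException.class, () -> "some value");
    } catch (AssertionError e) {
      System.out.println("intercept failed as expected");
    }
  }
}
```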
I could imagine adding something similar for all the skip-if-unsupported ops, e.g.

{code}
static <T> T evalOrSkip(String op, Callable<T> eval) throws Exception {
  try {
    return eval.call();
  } catch (UnsupportedOperationException ex) {
    skip("Unsupported feature: " + op);
    return null; // unreachable: skip() raises an assumption failure
  }
}
{code}

Then you could go

{code}
PathHandle fd = evalOrSkip("exact",
    () -> getFileSystem().getPathHandle(stat, HandleOpt.exact()));
{code}

...etc
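To make the semantics concrete, here is a self-contained demo of that skip-if-unsupported pattern. skip() and SkippedException are illustrative stand-ins for the JUnit assumption machinery the contract tests actually use; none of these names are Hadoop's API:

```java
import java.util.concurrent.Callable;

/**
 * Standalone sketch of the skip-if-unsupported pattern: an unsupported
 * operation skips the test instead of failing it, with a message that
 * names the operation.
 */
public class EvalOrSkipSketch {

  /** Thrown by skip(); a real runner would mark the test as skipped. */
  static class SkippedException extends RuntimeException {
    SkippedException(String message) { super(message); }
  }

  static void skip(String message) {
    throw new SkippedException(message);
  }

  static <T> T evalOrSkip(String op, Callable<T> eval) throws Exception {
    try {
      return eval.call();
    } catch (UnsupportedOperationException ex) {
      skip("Unsupported feature: " + op);
      return null; // unreachable: skip() always throws
    }
  }

  public static void main(String[] args) throws Exception {
    // Supported operation: the value comes straight back.
    String handle = evalOrSkip("exact", () -> "path-handle-42");
    System.out.println("got " + handle);

    // Unsupported operation: the test is skipped, not failed.
    try {
      evalOrSkip("exact",
          () -> { throw new UnsupportedOperationException(); });
    } catch (SkippedException e) {
      System.out.println("skipped: " + e.getMessage());
    }
  }
}
```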

w.r.t HDFS changes, I'm not qualified to comment on the implementation. Sorry.

> API - expose an unique file identifier
> --------------------------------------
>
>                 Key: HDFS-7878
>                 URL: https://issues.apache.org/jira/browse/HDFS-7878
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Sergey Shelukhin
>            Assignee: Sergey Shelukhin
>              Labels: BB2015-05-TBR
>         Attachments: HDFS-7878.01.patch, HDFS-7878.02.patch, HDFS-7878.03.patch, HDFS-7878.04.patch, HDFS-7878.05.patch, HDFS-7878.06.patch, HDFS-7878.07.patch, HDFS-7878.08.patch, HDFS-7878.09.patch, HDFS-7878.10.patch, HDFS-7878.11.patch, HDFS-7878.12.patch, HDFS-7878.13.patch, HDFS-7878.14.patch, HDFS-7878.15.patch, HDFS-7878.16.patch, HDFS-7878.patch
>
>
> See HDFS-487.
> Even though that is resolved as duplicate, the ID is actually not exposed by the JIRA it supposedly duplicates.
> INode ID for the file should be easy to expose; alternatively ID could be derived from block IDs, to account for appends...
> This is useful e.g. for cache key by file, to make sure cache stays correct when file is overwritten.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

