hadoop-hdfs-issues mailing list archives

From "stack (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-3702) Add an option for NOT writing the blocks locally if there is a datanode on the same box as the client
Date Wed, 13 Apr 2016 14:02:25 GMT

    [ https://issues.apache.org/jira/browse/HDFS-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15239291#comment-15239291 ]

stack commented on HDFS-3702:
-----------------------------

bq. Suppose we find that the CreateFlag.NO_LOCAL_WRITE is bad. How do we remove it, i.e. what
is the procedure to remove it? I believe we cannot simply remove it since it probably will
break HBASE compilation.

Just remove it. HBase has loads of practice dealing with stuff being moved/removed and changed
under it by HDFS.

You could also just leave the flag in place, since there is no obligation that any filesystem
respect the flag. It is a suggestion only (see http://linux.die.net/man/2/open, the open/creat
man page, for the long, interesting set of flags it has).
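
For illustration, here is a minimal sketch of what passing the hint at create time could look
like once the patch is in -- the path and tuning values are made up, and it assumes the patch's
CreateFlag.NO_LOCAL_WRITE constant plus the existing FileSystem#create overload that takes an
EnumSet of CreateFlag:

{code:java}
import java.io.IOException;
import java.util.EnumSet;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class NoLocalWriteWal {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path wal = new Path("/hbase/WALs/example-wal");   // made-up path

    // Ask HDFS to avoid placing replicas on the local datanode.
    // It is a hint only: a filesystem that does not understand the flag
    // is free to ignore it, which is why leaving it in place is harmless.
    EnumSet<CreateFlag> flags = EnumSet.of(
        CreateFlag.CREATE, CreateFlag.OVERWRITE, CreateFlag.NO_LOCAL_WRITE);

    try (FSDataOutputStream out = fs.create(wal,
        FsPermission.getFileDefault(),
        flags,
        4096,                             // buffer size
        (short) 3,                        // replication
        fs.getDefaultBlockSize(wal),
        null)) {                          // no Progressable
      out.writeBytes("wal entry");
    }
  }
}
{code}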

bq. Another possible case: suppose that we find the disfavorNodes feature is very useful
later on. How do we add it?

Same way you'd add any feature, and HBase would look for it the way it does now: peeking for
the presence of the extra facility with if/else on the hdfs version, reflection, try/catches of
NoSuchMethod, etc. We have lots of practice doing this also. We'd keep using the NO_LOCAL_WRITE
flag though, unless it is purged, since it does what we want. As I understand it, disfavoredNodes
would require a lot more work of HBase to get the same functionality that NO_LOCAL_WRITE provides.
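
Concretely, the sort of probing I mean looks something like this -- a sketch only, with a
made-up helper class name, resolving the constant by name and swallowing the failure on an
older Hadoop:

{code:java}
import java.util.EnumSet;

import org.apache.hadoop.fs.CreateFlag;

/** Made-up helper name; the probing pattern is what matters. */
public final class NoLocalWriteProbe {
  private NoLocalWriteProbe() {}

  /** Create flags for a WAL, adding NO_LOCAL_WRITE only if this Hadoop has it. */
  public static EnumSet<CreateFlag> walCreateFlags() {
    EnumSet<CreateFlag> flags = EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE);
    try {
      // Resolve by name so this compiles and runs against an older Hadoop
      // where the constant does not exist yet.
      flags.add(Enum.valueOf(CreateFlag.class, "NO_LOCAL_WRITE"));
    } catch (IllegalArgumentException e) {
      // Older HDFS: the hint is simply not sent.
    }
    return flags;
  }
}
{code}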

bq. It seems that the "whatever proofing" is to let the community try the features for a period
of time. Then, we may add it to the FileSystem API.

Sorry, 'whatever proofing' was overly expansive. We are just adding a flag. I just meant: if
the tests added here are not sufficient or you want some other pre-commit proof that it works,
just say. No problem.

Also, the community has been running with this 'feature' for years (see HBASE-6435), so there
is no need for us to take the suggested disruptive 'indirection' just to add a filesystem 'hint'
with the attendant mess in HDFS -- extra params on create -- that cannot subsequently be removed.

Thanks [~szetszwo]

What do you think of our adding the LimitedPrivate and Evolving annotations to the flag? Would
that be indicator enough for you?
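
Roughly, the annotated flag would look like the following (a trimmed sketch only; the real
CreateFlag enum has more constants and carries bit values):

{code:java}
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

// Trimmed sketch of CreateFlag with the proposed annotations applied.
public enum CreateFlag {
  CREATE,
  OVERWRITE,
  APPEND,

  /**
   * Advise that block replicas NOT be written to the local DataNode.
   * A hint only; a filesystem is free to ignore it.
   */
  @InterfaceAudience.LimitedPrivate({"HBase"})
  @InterfaceStability.Evolving
  NO_LOCAL_WRITE
}
{code}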

> Add an option for NOT writing the blocks locally if there is a datanode on the same box as the client
> -----------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-3702
>                 URL: https://issues.apache.org/jira/browse/HDFS-3702
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs-client
>    Affects Versions: 2.5.1
>            Reporter: Nicolas Liochon
>            Assignee: Lei (Eddy) Xu
>            Priority: Minor
>              Labels: BB2015-05-TBR
>         Attachments: HDFS-3702.000.patch, HDFS-3702.001.patch, HDFS-3702.002.patch, HDFS-3702.003.patch,
> HDFS-3702.004.patch, HDFS-3702.005.patch, HDFS-3702.006.patch, HDFS-3702.007.patch, HDFS-3702.008.patch,
> HDFS-3702.009.patch, HDFS-3702.010.patch, HDFS-3702.011.patch, HDFS-3702_Design.pdf
>
>
> This is useful for Write-Ahead-Logs: these files are written for recovery only, and are
> not read when there are no failures.
> Taking HBase as an example, these files will be read only if the process that wrote them
> (the 'HBase regionserver') dies. That will likely come from a hardware failure, so the
> corresponding datanode will be dead as well. So we're writing 3 replicas, but in reality only
> 2 of them are really useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
