hadoop-hdfs-user mailing list archives

From Zesheng Wu <wuzeshen...@gmail.com>
Subject Support multiple block placement policies
Date Mon, 15 Sep 2014 12:43:24 GMT
Hi there,

According to the code, the current implementation of HDFS supports only one
block placement policy at a time, which is BlockPlacementPolicyDefault by
default. The default policy is sufficient for most circumstances, but it does
not work well in some special cases.

For example, on a shared cluster we want to erasure-encode all the files
under certain specified directories, so the files under those directories need
to use a new placement policy, while all other files continue to use the
default placement policy. This requires HDFS to support multiple placement
policies.

One straightforward approach: the default placement policy remains configured
as the default, and HDFS additionally lets users specify a customized
placement policy through extended attributes (xattrs). When HDFS chooses
replica targets, it first checks for a customized policy; if none is
specified, it falls back to the default one.
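To make the lookup concrete, here is a minimal sketch of the proposed
resolution logic. Everything in it is hypothetical: the xattr key name, the
registry, and the erasure-coding policy name are illustrative assumptions,
not actual HDFS APIs.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch of the proposed policy lookup. The xattr key and
// the policy names below are assumptions for illustration only.
public class PlacementPolicyResolver {
    // Hypothetical xattr key that would carry the custom policy name.
    static final String POLICY_XATTR = "user.block.placement.policy";

    // Registry mapping policy names to policy implementations.
    private final Map<String, String> policies = new HashMap<>();
    private final String defaultPolicy;

    PlacementPolicyResolver(String defaultPolicy) {
        this.defaultPolicy = defaultPolicy;
    }

    void register(String name, String implementation) {
        policies.put(name, implementation);
    }

    // First check the directory's xattrs for a customized policy;
    // fall back to the default when none is specified or registered.
    String resolve(Map<String, String> dirXattrs) {
        return Optional.ofNullable(dirXattrs.get(POLICY_XATTR))
                .map(policies::get)
                .orElse(defaultPolicy);
    }

    public static void main(String[] args) {
        PlacementPolicyResolver resolver =
                new PlacementPolicyResolver("BlockPlacementPolicyDefault");
        // Hypothetical erasure-coding policy name.
        resolver.register("erasure", "BlockPlacementPolicyErasureCoded");

        Map<String, String> ecDir = new HashMap<>();
        ecDir.put(POLICY_XATTR, "erasure");
        Map<String, String> plainDir = new HashMap<>();

        // Directory with the xattr resolves to the custom policy.
        System.out.println(resolver.resolve(ecDir));
        // Directory without the xattr falls back to the default.
        System.out.println(resolver.resolve(plainDir));
    }
}
```

The key design point is that the fallback makes the feature opt-in: existing
directories without the xattr behave exactly as they do today.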

Any thoughts?

Best Wishes!

Yours, Zesheng
