hadoop-hdfs-issues mailing list archives

From "Hudson (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDDS-748) Use strongly typed metadata Table implementation
Date Sun, 02 Dec 2018 01:15:00 GMT

    [ https://issues.apache.org/jira/browse/HDDS-748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16706055#comment-16706055
] 

Hudson commented on HDDS-748:
-----------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15542 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/15542/])
HDDS-748. Use strongly typed metadata Table implementation. Contributed (bharat: rev d15dc436598d646de67b553207ab6624741f56a5)
* (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/DBStore.java
* (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/RDBStoreIterator.java
* (edit) hadoop-hdds/common/src/test/java/org/apache/hadoop/utils/db/TestRDBTableStore.java
* (edit) hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
* (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/RDBTable.java
* (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/TableIterator.java
* (edit) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMMetadataManager.java
* (edit) hadoop-hdds/common/src/test/java/org/apache/hadoop/utils/db/TestRDBStore.java
* (add) hadoop-hdds/common/src/test/java/org/apache/hadoop/utils/db/TestTypedRDBTableStore.java
* (edit) hadoop-hdds/common/src/test/java/org/apache/hadoop/utils/db/TestDBStoreBuilder.java
* (add) hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/ByteArrayKeyValue.java
* (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/RDBStore.java
* (add) hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/StringCodec.java
* (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManager.java
* (add) hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/Codec.java
* (add) hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/TypedTable.java
* (add) hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/CodecRegistry.java
* (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/Table.java


> Use strongly typed metadata Table implementation
> ------------------------------------------------
>
>                 Key: HDDS-748
>                 URL: https://issues.apache.org/jira/browse/HDDS-748
>             Project: Hadoop Distributed Data Store
>          Issue Type: Improvement
>            Reporter: Elek, Marton
>            Assignee: Elek, Marton
>            Priority: Major
>             Fix For: 0.4.0
>
>         Attachments: HDDS-748.001.patch, HDDS-748.002.patch, HDDS-748.003.patch, HDDS-748.004.patch,
HDDS-748.005.patch
>
>
> NOTE: This issue is a proposal. I assigned it to myself to make it clear that it's not ready to implement; I just want to start a discussion about the proposed change.
> org.apache.hadoop.utils.db.DBStore (from HDDS-356) is a new-generation MetadataStore that stores all persistent state of the hdds/ozone scm/om/datanodes.
> It supports column families via the Table interface, which provides methods like:
> {code:java}
> byte[] get(byte[] key) throws IOException;
> void put(byte[] key, byte[] value)
> {code}
> In our current code we usually use static helpers to do the _byte[] -> object_ and _object -> byte[]_ conversion with protobuf.
> For example, in KeyManagerImpl, OmKeyInfo.getFromProtobuf is used multiple times to deserialize the OmKeyInfo object.
>  
> *I propose to create a type-safe table* using:
> {code:java}
> public interface Table<KEY_TYPE, VALUE_TYPE> extends AutoCloseable
> {code}
> The put and get could be modified to:
> {code:java}
> VALUE_TYPE get(KEY_TYPE key) throws IOException;
> void put(KEY_TYPE key, VALUE_TYPE value)
> {code}
> For example for the key table it could be:
> {code:java}
> OmKeyInfo get(String key) throws IOException;
> void put(String key, OmKeyInfo value)
> {code}
>  
> It requires registering internal codec (marshaller/unmarshaller) implementations during the creation of the (..)Table.
> The registration of the codecs would be optional. Without it, the Table could work as it does now (using byte[], byte[]).
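A minimal sketch of how such a typed table could delegate to the raw byte[] table through codecs. All names here (Codec, StringCodec, TypedTable) are illustrative assumptions rather than the actual HDDS API, and an in-memory map stands in for the RocksDB-backed table:

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Hypothetical codec abstraction: converts between a typed value and the
// raw byte[] form stored in the underlying table.
interface Codec<T> {
  byte[] toPersistedFormat(T object);
  T fromPersistedFormat(byte[] raw);
}

// Example codec for String keys, assuming UTF-8 encoding.
class StringCodec implements Codec<String> {
  @Override
  public byte[] toPersistedFormat(String object) {
    return object.getBytes(StandardCharsets.UTF_8);
  }

  @Override
  public String fromPersistedFormat(byte[] raw) {
    return new String(raw, StandardCharsets.UTF_8);
  }
}

// Typed view over a raw byte[] store; serialization happens in exactly
// one place instead of at every call site.
class TypedTable<K, V> {
  // byte[] lacks value equality, so the raw key is wrapped in a String
  // (ISO-8859-1 preserves all byte values) for this in-memory stand-in.
  private final Map<String, byte[]> rawTable = new HashMap<>();
  private final Codec<K> keyCodec;
  private final Codec<V> valueCodec;

  TypedTable(Codec<K> keyCodec, Codec<V> valueCodec) {
    this.keyCodec = keyCodec;
    this.valueCodec = valueCodec;
  }

  void put(K key, V value) {
    rawTable.put(
        new String(keyCodec.toPersistedFormat(key), StandardCharsets.ISO_8859_1),
        valueCodec.toPersistedFormat(value));
  }

  V get(K key) {
    byte[] raw = rawTable.get(
        new String(keyCodec.toPersistedFormat(key), StandardCharsets.ISO_8859_1));
    return raw == null ? null : valueCodec.fromPersistedFormat(raw);
  }
}
```

Callers would then read and write domain objects directly, with the codecs applied once inside the table.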
> *Advantages*:
>  * Simpler code (no need to repeat the serialization everywhere), which is less error-prone.
>  * Clear separation of the layers (as of now I can't see the serialization overhead with OpenTracing) and better measurability. Easy to test different serializations in the future.
>  * Easier to create additional developer tools to investigate the current state of the rocksdb metadata stores. We had SQLCLI to export all the data to sql, but with the format registered in the rocksdb table we can easily create a calcite-based SQL console.
> *Additional info*:
> I would modify the interface of the DBStoreBuilder and DBStore:
> {code:java}
>    this.store = DBStoreBuilder.newBuilder(conf)
>         .setName(OM_DB_NAME)
>         .setPath(Paths.get(metaDir.getPath()))
>         .addTable(KEY_TABLE, DBUtil.STRING_KEY_CODEC, new OmKeyInfoCoder())
> //...
>         .build();
> {code}
> And using it from the DBStore:
> {code:java}
> //default, without codec
> Table<byte[],byte[]> getTable(String name) throws IOException;
> //advanced with codec from the codec registry
> Table<String,OmKeyInfo> getTable(String name, Class keyType, Class valueType);
> //for example
> table.getTable(KEY_TABLE,String.class,OmKeyInfo.class);
> //or
> table.getTable(KEY_TABLE,String.class,UserInfo.class)
> //exception is thrown: No codec is registered for KEY_TABLE with type UserInfo.
> {code}
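The "No codec is registered" behaviour could be backed by a small registry, roughly as sketched below. The class and method names are assumptions for illustration, not the actual HDDS implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal codec interface, repeated here so the sketch is self-contained.
interface Codec<T> {
  byte[] toPersistedFormat(T object);
  T fromPersistedFormat(byte[] raw);
}

// Hypothetical registry mapping a Java type to its codec; fails fast when
// a table is opened with a type that has no registered codec.
class CodecRegistry {
  private final Map<Class<?>, Codec<?>> codecs = new HashMap<>();

  <T> void addCodec(Class<T> type, Codec<T> codec) {
    codecs.put(type, codec);
  }

  @SuppressWarnings("unchecked")
  <T> Codec<T> getCodec(Class<T> type) {
    Codec<?> codec = codecs.get(type);
    if (codec == null) {
      throw new IllegalStateException(
          "No codec is registered for type " + type.getSimpleName());
    }
    return (Codec<T>) codec;
  }
}
```

getTable(name, keyType, valueType) would then look up both codecs at table-creation time, so a missing registration surfaces immediately rather than on the first read or write.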
> *Priority*:
> I think it's a very useful and valuable step forward, but the real priority is lower. It is ideal for new contributors, especially as it's an independent, standalone part of the ozone code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

