hadoop-common-issues mailing list archives

From "YangY (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (HADOOP-15616) Incorporate Tencent Cloud COS File System Implementation
Date Wed, 12 Dec 2018 08:02:00 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16718572#comment-16718572 ]

YangY edited comment on HADOOP-15616 at 12/12/18 8:01 AM:
----------------------------------------------------------

Thanks [~xyao] for commenting on this code.

Here are the answers to your comments:

1. Changes under hadoop-tools/hadoop-aliyun unrelated to this patch.
 This was probably an accidental modification introduced while formatting my code; it has been corrected in the new patch.

2. Should we put hadoop-cos under the hadoop-tools project like s3a, adls, etc. instead of hadoop-cloud-storage-project?
 At first, I also thought it should go under the hadoop-tools project. However, as Steve's comment above suggests, using "hadoop-cloud-storage-project" seems more appropriate, doesn't it?

3. More description for the keys.
 Thank you for the reminder; I will add detailed descriptions of these keys to our documentation.
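To make this concrete, here is the kind of documented key I have in mind, as a core-site.xml sketch. The key names below follow the current patch but are illustrative, not final; the eventual documentation is authoritative.

```xml
<!-- Illustrative sketch only: key names follow the current patch and may change. -->
<property>
  <name>fs.cosn.userinfo.secretId</name>
  <value>YOUR_SECRET_ID</value>
  <description>Tencent Cloud API SecretId used to authenticate to COS.</description>
</property>
<property>
  <name>fs.cosn.userinfo.secretKey</name>
  <value>YOUR_SECRET_KEY</value>
  <description>Tencent Cloud API SecretKey paired with the SecretId.</description>
</property>
<property>
  <name>fs.cosn.bucket.region</name>
  <value>ap-guangzhou</value>
  <description>Region of the COS bucket, e.g. ap-guangzhou.</description>
</property>
```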

4. BufferPool.java: since it sets the disk buffer file to delete on exit, does it support recovery
if the client restarts?
 BufferPool is a shared buffer pool. It initially provides two buffer types: memory and disk.
The latter uses a memory-mapped file to construct a byte buffer that other classes can use uniformly.
 Therefore, it cannot support recovery if the client restarts. After all, the disk buffer is mapped
to a temporary file, which is cleaned up automatically when the Java Virtual Machine exits.

In the latest patch, I further optimized it by combining the two buffer types, which yields two
improvements: lower memory usage and better buffer performance. For this reason, the buffer
types will not be visible to the user.
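As a rough illustration of the idea described above (class and method names here are hypothetical, not the actual patch code): the pool prefers a direct memory buffer and falls back to a byte buffer memory-mapped onto a delete-on-exit temporary file, so the caller sees a single ByteBuffer interface either way, and no disk buffer survives a JVM restart.

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

/** Hypothetical sketch of a combined buffer pool: prefer direct memory,
 *  fall back transparently to a disk-backed memory-mapped buffer. */
public class CombinedBufferSketch {
    public static ByteBuffer allocate(int size) throws IOException {
        try {
            // Fast path: off-heap memory buffer.
            return ByteBuffer.allocateDirect(size);
        } catch (OutOfMemoryError e) {
            // Slow path: memory-map a temporary file. deleteOnExit() means the
            // backing file is removed when the JVM exits, so the buffer cannot
            // be recovered after a client restart.
            File tmp = File.createTempFile("cos-buffer", ".tmp");
            tmp.deleteOnExit();
            try (RandomAccessFile raf = new RandomAccessFile(tmp, "rw")) {
                return raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, size);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        ByteBuffer buf = allocate(1024);
        buf.putInt(42);
        buf.flip();
        System.out.println(buf.getInt()); // prints 42
    }
}
```

Either way the caller receives a plain ByteBuffer, which is what makes it possible to hide the buffer type from the user.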

Finally, I look forward to more comments from you.



> Incorporate Tencent Cloud COS File System Implementation
> --------------------------------------------------------
>
>                 Key: HADOOP-15616
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15616
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: fs/cos
>            Reporter: Junping Du
>            Assignee: YangY
>            Priority: Major
>         Attachments: HADOOP-15616.001.patch, HADOOP-15616.002.patch, HADOOP-15616.003.patch,
HADOOP-15616.004.patch, HADOOP-15616.005.patch, Tencent-COS-Integrated.pdf
>
>
> Tencent Cloud is one of the top two cloud vendors in the China market, and its object store COS ([https://intl.cloud.tencent.com/product/cos]) is widely used among China's cloud users. However, it is currently hard for Hadoop users to access data stored on COS, as Hadoop has no native support for it.
> This work aims to integrate Tencent Cloud COS with Hadoop/Spark/Hive, just as was done before for S3, ADL, OSS, etc. With simple configuration, Hadoop applications can read/write data on COS without any code change.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


