hadoop-hdfs-issues mailing list archives

From "Mukul Kumar Singh (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-12425) Ozone: OzoneFileSystem: OzoneFileystem read/write/create/open/getFileInfo APIs
Date Thu, 21 Sep 2017 07:47:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-12425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16174400#comment-16174400 ]

Mukul Kumar Singh commented on HDFS-12425:
------------------------------------------

Thanks for the review, [~xyao]. I have incorporated almost all of the review comments. The current patch works with the Ozone REST client. I am working on a follow-up patch that will use the RPC client; that patch will also take care of using the new streaming read/write interfaces.

1. OzoneFileSystem.java
Line 151/168/269/321, NIT: change the log level from debug to trace; also suggest using parameterized syntax to reduce logging overhead.
bq. done
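
For reference, a minimal sketch of the parameterized, trace-level logging style being suggested (the class and method here are illustrative, not the actual OzoneFileSystem code):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class LoggingSketch {
  private static final Logger LOG = LoggerFactory.getLogger(LoggingSketch.class);

  void rename(String src, String dst) {
    // Before: LOG.debug("Renaming " + src + " to " + dst);
    // After: trace level, and the message is only formatted when trace is enabled.
    LOG.trace("Renaming {} to {}", src, dst);
  }
}
{code}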

Line 183: the try/catch is not being used to catch any exception that we plan to handle; should we remove it?
bq. This exception needs to be ignored, as it means that the file does not currently exist and a new file can therefore be created.
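
A rough sketch of the pattern being described, assuming the create path probes for the file with getFileStatus() (illustrative only, not the exact patch code):

{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;
import org.apache.hadoop.fs.FileAlreadyExistsException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class CreateSketch {
  static void checkBeforeCreate(FileSystem fs, Path path) throws IOException {
    try {
      FileStatus status = fs.getFileStatus(path);
      if (status.isDirectory()) {
        throw new FileAlreadyExistsException(path + " is a directory");
      }
      // File exists: honour the overwrite flag, etc.
    } catch (FileNotFoundException ignored) {
      // Expected case: the file does not exist yet, so create() can proceed.
    }
  }
}
{code}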
 
Line 254: suggest using parameterized syntax to reduce logging overhead.
bq. done

Line 268-273: can you add documentation somewhere mentioning how ozfs differentiates a directory from a file by the trailing "/"?
Maybe define an OZONE_URI_SEPARATOR constant for "/", since I've seen "/" used in many places, if one does not already exist. The other choice is to use URI#resolve to handle this without worrying about the "/".
bq. done
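
To make the convention concrete, something along these lines could be documented; the class and helper names below are hypothetical, and only the trailing-"/" rule comes from the discussion:

{code:java}
// Hypothetical helper class; only the trailing-"/" convention is from the review.
public final class OzonePathUtils {
  public static final String OZONE_URI_SEPARATOR = "/";

  /** Keys ending with the separator are treated as directories. */
  static boolean isDirectoryKey(String key) {
    return key.endsWith(OZONE_URI_SEPARATOR);
  }

  /** Append the separator so the key is stored/looked up as a directory. */
  static String toDirectoryKey(String key) {
    return key.endsWith(OZONE_URI_SEPARATOR) ? key : key + OZONE_URI_SEPARATOR;
  }

  private OzonePathUtils() { }
}
{code}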

OzoneInputStream.java
1. Line 70/81: suggest using URI to handle this.
bq. Can you please elaborate on this comment? :)

2. Line 53/92: can we use a stream (Bucket#readKey) instead of a local file (Bucket#getKey) here for better perf?
This will affect other API implementations based on RandomAccessFile.
bq. Yes, I wanted to use this jira to provide the basic functionality of read/write/open and getFileInfo. I will replace these functions once the new put key APIs are in.
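
A conceptual sketch of the difference being discussed; the Bucket method signatures below are hypothetical stand-ins, not the actual Ozone client API:

{code:java}
import java.io.IOException;
import java.io.InputStream;

// Hypothetical bucket interface for illustration only.
interface Bucket {
  // Stream-backed read of a key (the suggested approach).
  InputStream readKey(String keyName) throws IOException;
  // Downloads the key to a local file first (the current approach,
  // which the input stream then serves via RandomAccessFile).
  void getKey(String keyName, java.nio.file.Path localFile) throws IOException;
}

class StreamBackedRead {
  // Reading directly from the returned stream avoids staging the whole key
  // in a local temporary file before it can be read.
  static long countBytes(Bucket bucket, String keyName) throws IOException {
    long total = 0;
    try (InputStream in = bucket.readKey(keyName)) {
      byte[] buf = new byte[4096];
      int n;
      while ((n = in.read(buf)) != -1) {
        total += n;
      }
    }
    return total;
  }
}
{code}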

OzoneOutputStream.java
Similar to the input stream, can we avoid the stream backed by a local file that writes to the bucket on close()?
bq. Same as for the input stream.

TestOzoneFileInterfaces.java
Line 113: inputStream needs to be closed to avoid leaking. Consider using try-with-resources to achieve that easily.
bq. done
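
A minimal sketch of the try-with-resources pattern being suggested (method and variable names are illustrative, not the exact test code):

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class ReadSketch {
  static int readFirstChunk(FileSystem fs, Path path) throws IOException {
    // The stream is closed automatically when the block exits,
    // even if the read or a subsequent assertion fails.
    try (FSDataInputStream inputStream = fs.open(path)) {
      byte[] buffer = new byte[1024];
      return inputStream.read(buffer);
    }
  }
}
{code}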



> Ozone: OzoneFileSystem: OzoneFileystem read/write/create/open/getFileInfo APIs
> ------------------------------------------------------------------------------
>
>                 Key: HDFS-12425
>                 URL: https://issues.apache.org/jira/browse/HDFS-12425
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ozone
>    Affects Versions: HDFS-7240
>            Reporter: Mukul Kumar Singh
>            Assignee: Mukul Kumar Singh
>              Labels: ozoneMerge
>             Fix For: HDFS-7240
>
>         Attachments: HDFS-12425-HDFS-7240.001.patch, HDFS-12425-HDFS-7240.002.patch, HDFS-12425-HDFS-7240.003.patch
>
>
> This jira will add create/open and read/write APIs for OzoneFileSystem.


