hadoop-hdfs-issues mailing list archives

From "Dinesh Chitlangia (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (HDDS-120) Adding HDDS datanode Audit Log
Date Tue, 06 Nov 2018 05:19:04 GMT

    [ https://issues.apache.org/jira/browse/HDDS-120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16676144#comment-16676144 ]

Dinesh Chitlangia edited comment on HDDS-120 at 11/6/18 5:18 AM:
-----------------------------------------------------------------

[~ajayydv]: Thank you for your review comments.

Attached patch 002 that addresses your review comments.

Further, I discussed with [~anu] and here is the summary:
 * BlockData
 ** BlockID (containerID + localID + blockCommitSequenceId), size, and chunks are sufficient for audit purposes.
 ** getChunksForAudit is fine to have, so that all info can be inferred from a single point - the logs. Also, assuming 256 MB blocks with 16 MB chunks, we should not see too many chunks in a block (about 16).
 * HddsDispatcher
 ** A missing createContainer cmd must constitute an error, since this info will be useful for investigation in case a user is firing invalid requests and thus spamming the system.
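To make the agreed audit fields concrete, here is a minimal sketch of how a per-block audit entry might be assembled. All names here (BlockAuditSketch, auditParams) are hypothetical illustrations, not the actual classes in the patch; the real change lives in the HDDS audit classes.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hedged sketch, not the HDDS-120 patch: shows the audit fields agreed
// above -- BlockID (containerID + localID + blockCommitSequenceId),
// block size, and a chunk summary -- assembled into a key/value map
// such as an audit logger might emit.
public class BlockAuditSketch {

    // Build the key/value parameters for one block's audit entry.
    static Map<String, String> auditParams(long containerId, long localId,
            long bcsId, long blockSize, int chunkCount) {
        Map<String, String> p = new LinkedHashMap<>();
        p.put("blockID", containerId + "_" + localId + "_" + bcsId);
        p.put("size", Long.toString(blockSize));
        p.put("chunkCount", Integer.toString(chunkCount));
        return p;
    }

    public static void main(String[] args) {
        long blockSize = 256L * 1024 * 1024;  // 256 MB block
        long chunkSize = 16L * 1024 * 1024;   // 16 MB chunks
        int chunks = (int) (blockSize / chunkSize); // = 16 chunks per block
        System.out.println(auditParams(1L, 100L, 4L, blockSize, chunks));
    }
}
```

The 256 MB / 16 MB assumption from the discussion bounds the chunk list at roughly 16 entries, which is why logging all chunks per block stays cheap.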

Lastly, for the improvement to DNAction that you have suggested, I will file a new Jira as
it needs changes in core Audit classes.

cc: [~xyao]


was (Author: dineshchitlangia):
[~ajayydv]: Thank you for your review comments.

Attached patch 002 that addresses your review comments.

Further, I discussed with [~anu] and here is the summary:
 * BlockData
 ** BlockID (containerID + localID + blockCommitSequenceId), size, blockCommitSequenceId is sufficient for audit purposes.
 ** getChunksForAudit is fine to have, so that all info can be inferred from a single point - the logs. Also, assuming 256 MB blocks with 16 MB chunks, we should not see too many chunks in a block.
 * HddsDispatcher
 ** A missing createContainer cmd must constitute an error, since this info will be useful for investigation in case a user is firing invalid requests and thus spamming the system.

Lastly, for the improvement to DNAction that you have suggested, I will file a new Jira as
it needs changes in core Audit classes.

cc: [~xyao]

> Adding HDDS datanode Audit Log
> ------------------------------
>
>                 Key: HDDS-120
>                 URL: https://issues.apache.org/jira/browse/HDDS-120
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>            Reporter: Xiaoyu Yao
>            Assignee: Dinesh Chitlangia
>            Priority: Major
>              Labels: alpha2
>         Attachments: HDDS-120.001.patch, HDDS-120.002.patch
>
>
> This can be useful to find users who overload the DNs. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

