hadoop-hdfs-issues mailing list archives

From "Marcelo Vanzin (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-3680) Allows customized audit logging in HDFS FSNamesystem
Date Fri, 05 Oct 2012 18:02:03 GMT

    [ https://issues.apache.org/jira/browse/HDFS-3680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13470508#comment-13470508 ]

Marcelo Vanzin commented on HDFS-3680:
--------------------------------------

bq. Why is the following code part of this patch?

See my previous response:

"I'm creating a FileStatus object based on an HdfsFileStatus, which is a private audience
class and thus cannot be used in the public audience AuditLogger."
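The translation being described is the usual audience-boundary pattern: the internal, private type is converted into the public type before it crosses into the plugin API. A minimal sketch of the idea (the class names mirror HDFS, but the stub fields and the `toPublic` helper are hypothetical, not the patch's actual code):

```java
// Stand-in for the @InterfaceAudience.Private HdfsFileStatus (fields are
// illustrative; the real class carries many more attributes).
class HdfsFileStatusStub {
    final long length;
    final String owner;
    HdfsFileStatusStub(long length, String owner) {
        this.length = length;
        this.owner = owner;
    }
}

// Stand-in for the @InterfaceAudience.Public FileStatus.
class FileStatusStub {
    final long length;
    final String owner;
    FileStatusStub(long length, String owner) {
        this.length = length;
        this.owner = owner;
    }
}

class AudienceBoundary {
    // FSNamesystem would perform a translation like this before invoking the
    // logger, so AuditLogger implementations only ever see public-audience types.
    static FileStatusStub toPublic(HdfsFileStatusStub internal) {
        return new FileStatusStub(internal.length, internal.owner);
    }
}
```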

bq. Given loggers are going to some kind of io, to database, or some server etc. IOException
should be expected and seems like a logical thing to throw and not RunTimeException.

You're making assumptions about the implementation of the logger. Why would it throw IOException
and not SQLException? What if my logger doesn't do any I/O in the logging thread at all?
Declaring "throws IOException" would just make implementors wrap whatever real exception is
thrown in an IOException instead of a RuntimeException, to no benefit I can see. If FSNamesystem
should handle errors from custom loggers, it should handle all errors, not just specific ones.
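The forced-wrapping point can be shown with a small sketch (the `DatabaseAuditLogger` class and its `logAuditEvent` signature are hypothetical, not from the patch): if the interface declared a checked IOException, an implementor whose real failure mode is, say, SQLException would end up doing exactly this:

```java
import java.io.IOException;
import java.sql.SQLException;

// Hypothetical logger whose real failure is SQLException, not I/O. A checked
// "throws IOException" on the interface would only force the implementor to
// wrap the actual cause, gaining nothing over an unchecked exception.
class DatabaseAuditLogger {
    void logAuditEvent(String cmd, String src) throws IOException {
        try {
            insertRow(cmd, src);
        } catch (SQLException realCause) {
            // Forced wrapping: the checked signature hides the real failure.
            throw new IOException("audit insert failed", realCause);
        }
    }

    // Simulates the database write failing for a non-I/O reason.
    private void insertRow(String cmd, String src) throws SQLException {
        throw new SQLException("connection pool exhausted");
    }
}
```

A caller that wants the real cause still has to unwrap it with `getCause()`, which is no better than catching a RuntimeException (or Throwable) in the first place.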

bq. I do not think it is outside the scope of this patch. Current logger could fail, on system
failures. However here it may fail because poorly written code

We can't prevent badly written code from doing bad things (also see previous comments from
ATM that this is not an interface people will be implementing willy-nilly; the people who
touch it are expected to know what they're doing). The reason I say it's out of the scope
of this patch is that it's a change to current behavior that's unrelated to whether you
have a custom audit logger or not: if audit logging failures should cause the name node to
shut down, that's a change that needs to be made today, right now, independent of this patch
going in or not.

bq. hdfs-default.xml document does not cover my previous comment

It covers details related to configuration. Details about what the implementation is expected
to do should be (and are) documented in the interface itself, which is the interface that
the person writing the implementation will be looking at.
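For the configuration side, the hdfs-default.xml entry would look something like the fragment below (the property name follows the patch under review and may change before commit; `org.example.MyAuditLogger` is a made-up implementation class):

```xml
<!-- Illustrative hdfs-site.xml snippet, not the final documentation. -->
<property>
  <name>dfs.namenode.audit.loggers</name>
  <value>org.example.MyAuditLogger</value>
  <description>
    Comma-separated list of audit logger implementations, or "default"
    for the built-in log4j-based audit log. Behavioral requirements for
    implementations are documented on the AuditLogger interface itself.
  </description>
</property>
```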

If you're talking about the "what about when things don't work correctly part", I'll wait
for closure on the other comments.
                
> Allows customized audit logging in HDFS FSNamesystem
> ----------------------------------------------------
>
>                 Key: HDFS-3680
>                 URL: https://issues.apache.org/jira/browse/HDFS-3680
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>    Affects Versions: 2.0.0-alpha
>            Reporter: Marcelo Vanzin
>            Assignee: Marcelo Vanzin
>            Priority: Minor
>         Attachments: accesslogger-v1.patch, accesslogger-v2.patch, hdfs-3680-v3.patch,
hdfs-3680-v4.patch, hdfs-3680-v5.patch, hdfs-3680-v6.patch, hdfs-3680-v7.patch, hdfs-3680-v8.patch
>
>
> Currently, FSNamesystem writes audit logs to a logger; that makes it easy to get audit
logs into some log file. But it makes it tricky to store audit logs in any other way (say,
a database), because it would require the code to implement a log appender (and thus know
what logging system is actually being used underneath the façade) and to parse the textual
log message generated by FSNamesystem.
> I'm attaching a patch that introduces a cleaner interface for this use case.
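The shape of the interface being proposed is roughly the following (method names and parameters here are illustrative; see the attached patch for the actual API). The point is that FSNamesystem hands the logger structured fields, so an implementation can send events anywhere without parsing formatted log lines:

```java
import java.net.InetAddress;
import java.util.ArrayList;
import java.util.List;

// Sketch of a pluggable audit logger interface (simplified; the real patch
// also passes configuration and file status information).
interface AuditLogger {
    void initialize();                      // called once before any events
    void logAuditEvent(boolean succeeded, String userName, InetAddress addr,
                       String cmd, String src, String dst);
}

// Example implementation: events could go to a database or a queue; here
// they are kept in memory to show the structured data the logger receives.
class InMemoryAuditLogger implements AuditLogger {
    final List<String> events = new ArrayList<>();

    @Override
    public void initialize() { /* open connections, allocate buffers, etc. */ }

    @Override
    public void logAuditEvent(boolean succeeded, String userName,
            InetAddress addr, String cmd, String src, String dst) {
        events.add(String.format("allowed=%b ugi=%s cmd=%s src=%s dst=%s",
                succeeded, userName, cmd, src, dst));
    }
}
```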

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
