hadoop-common-dev mailing list archives

From "Enis Soztutar (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-53) MapReduce log files should be storable in dfs.
Date Fri, 26 Oct 2007 09:25:51 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-53?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12537891 ]

Enis Soztutar commented on HADOOP-53:
-------------------------------------

The appender ignores the messages until it is properly initialized, but the thing is that
the logger itself generates logging statements during initialization (for example, ipc debug
logs). FsLogAppender will work on the filesystem that is active in the configuration given
to its init() method. 
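
For reference, here is a rough sketch of the behaviour described above. Only the class name and the init() method are taken from the patch; the body is illustrative, not the actual implementation:

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.spi.ErrorCode;
import org.apache.log4j.spi.LoggingEvent;

public class FsLogAppender extends AppenderSkeleton {

  private FSDataOutputStream out; // stream into dfs, null until init() is called

  /** Extra initialization step: open the log file on the fs active in conf. */
  public synchronized void init(Configuration conf, Path logFile) throws IOException {
    FileSystem fs = FileSystem.get(conf);
    out = fs.create(logFile);
  }

  protected synchronized void append(LoggingEvent event) {
    if (out == null) {
      return; // not initialized yet, so the message is ignored
    }
    try {
      out.writeBytes(layout.format(event));
    } catch (IOException e) {
      errorHandler.error("Cannot write log event", e, ErrorCode.WRITE_FAILURE);
    }
  }

  public synchronized void close() {
    try {
      if (out != null) {
        out.close();
      }
    } catch (IOException e) {
      errorHandler.error("Cannot close log file", e, ErrorCode.CLOSE_FAILURE);
    }
  }

  public boolean requiresLayout() {
    return true;
  }
}
{code}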

bq. In particular, we should avoid having special methods that need to be invoked for initialization
and closing. 
The current design does need extra initialization and finalization because we are using log4j's
configurable way of using appenders. It is good that we can configure logging to use either
the fs or local files, but then we need to let log4j construct the appender for us, so we should
somehow pass the conf object to the appender, right? We could definitely use something like

{code}
JobConf#enableDFSLogging();
JobConf#setLogLevel();
JobConf#setLogDir();
{code}
then construct the FsLogAppender and add it to the rootLogger, as in the sketch below. However, what if we extend the logging
system so that it can also be used to store the logs of the {job|task}trackers and {name|data}nodes?
Then we would need custom code to set the appender rather than using conf/log4j.properties.
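
For illustration, a minimal sketch of that custom-code path, reusing the FsLogAppender sketched above (the property names, the method name, and where it would be called from are only assumptions, not part of the patch):

{code}
import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobConf;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

public class LogSetup {

  /**
   * Attach the dfs appender to the root logger instead of configuring it
   * through conf/log4j.properties. Would be called from e.g. the trackers'
   * startup code.
   */
  public static void maybeAttachFsAppender(JobConf conf) throws IOException {
    if (!conf.getBoolean("mapred.dfs.logging", false)) { // e.g. set by enableDFSLogging()
      return;
    }
    FsLogAppender appender = new FsLogAppender();
    appender.setLayout(new PatternLayout("%d %p %c - %m%n"));
    appender.init(conf,
        new Path(conf.get("mapred.log.dir"), "jobtracker.log")); // e.g. set by setLogDir()
    Logger.getRootLogger().addAppender(appender);
  }
}
{code}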


Long story short, I think the current architecture is slightly ugly, but I'm OK with it. 





> MapReduce log files should be storable in dfs.
> ----------------------------------------------
>
>                 Key: HADOOP-53
>                 URL: https://issues.apache.org/jira/browse/HADOOP-53
>             Project: Hadoop
>          Issue Type: New Feature
>          Components: mapred
>    Affects Versions: 0.16.0
>            Reporter: Doug Cutting
>            Assignee: Enis Soztutar
>             Fix For: 0.16.0
>
>         Attachments: mapredDFSLog_v1.patch, mapredDFSLog_v2.patch
>
>
> It should be possible to cause a job's log output to be stored in dfs.  The jobtracker's
> log output and (optionally) all tasktracker log output related to a job should be storable
> in a job-specified dfs directory.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

