hadoop-common-dev mailing list archives

From "Yoram Arnon" <yar...@yahoo-inc.com>
Subject RE: [jira] Commented: (HADOOP-92) Error Reporting/logging in MapReduce
Date Wed, 22 Mar 2006 17:10:45 GMT
DFS files can only be written once, and by a single writer.
Until that changes, our hands are tied as long as we require the output to
reside in the output directory.

Unless... we create a protocol whereby the task masters report up to the job
master, and it's only the job master that does the logging.
That might introduce unwanted overhead and some load on the job master.
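The proposed funnel can be sketched roughly as follows. This is a hypothetical illustration, not Hadoop code: the class and method names (LogFunnel, report, drainTo) are invented, and real task masters would call report via RPC rather than in-process. The point is that only one thread on the job master ever appends to the output, satisfying DFS's single-writer constraint.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch: task masters report log lines up to the job
// master, and a single writer on the job master does all the DFS
// appending. Names are illustrative, not part of any Hadoop API.
public class LogFunnel {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    // Conceptually invoked via RPC by each task master.
    public void report(String taskId, String line) throws InterruptedException {
        queue.put(taskId + "\t" + line);
    }

    // The single writer drains queued lines into one output stream,
    // so the DFS file has exactly one writer.
    public void drainTo(Appendable out) throws Exception {
        String line;
        while ((line = queue.poll()) != null) {
            out.append(line).append('\n');
        }
    }

    public static void main(String[] args) throws Exception {
        LogFunnel funnel = new LogFunnel();
        funnel.report("task_0001", "map 50% complete");
        funnel.report("task_0002", "reduce started");
        StringBuilder log = new StringBuilder();
        funnel.drainTo(log);
        System.out.print(log);
    }
}
```

The overhead concern above is the queue traffic and the serialization point at the job master; batching reports would reduce it at the cost of log latency.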

> -----Original Message-----
> From: Eric Baldeschwieler [mailto:eric14@yahoo-inc.com]
> Sent: Tuesday, March 21, 2006 8:54 PM
> To: hadoop-dev@lucene.apache.org
> Subject: Re: [jira] Commented: (HADOOP-92) Error Reporting/logging in
> MapReduce
> Will it really make sense to have 300,000 subdirectories with several
> log files?  Seems like a real losing proposition.  I'd just go for a
> single log file with reasonable per line prefixes (time, job, ...).
> Then you can grep out what you want.
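The single-file scheme above can be illustrated with a small sketch. The line format (time, job, task, message) and the names (LogFilter, format, grep) are assumptions for illustration; the grep method mimics `grep job_0042 logfile` on prefixed lines.

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of one shared log file where every line carries a prefix
// (time, job id, task id), so a simple substring filter recovers
// any per-job or per-task slice. Format is illustrative only.
public class LogFilter {
    static String format(String time, String job, String task, String msg) {
        return time + " " + job + " " + task + " " + msg;
    }

    // Analogous to: grep <pattern> joblog
    static List<String> grep(List<String> lines, String pattern) {
        return lines.stream()
                .filter(l -> l.contains(pattern))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> log = List.of(
            format("2006-03-22T17:10:45", "job_0042", "task_0001", "map done"),
            format("2006-03-22T17:10:46", "job_0043", "task_0007", "reduce done"));
        // Keeps only job_0042's lines.
        System.out.println(grep(log, "job_0042"));
    }
}
```

One file with prefixed lines also sidesteps the many-subdirectories problem entirely, since no per-task output paths are needed.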
