hadoop-hdfs-user mailing list archives

From Ajay Srivastava <Ajay.Srivast...@guavus.com>
Subject Re: Only log.index
Date Wed, 24 Jul 2013 06:52:28 GMT
Yes. That explains it and confirms my guess too :-)

stderr:156 0
syslog:995 166247

What are these numbers? The byte offsets in the corresponding files from which this task's logs start?
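For illustration, here is a minimal Java sketch of how those "name:offset length" entries could be consumed, assuming the first number is a start offset and the second a byte length. The entry format and directory layout are inferred from the output shown in this thread, not taken from the Hadoop source, and the real log.index also carries extra lines this sketch ignores.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.io.RandomAccessFile;

// Sketch: print the syslog segment that a log.index entry points at.
// args[0] = attempt dir containing log.index
// args[1] = dir containing the shared syslog file (may be the first
//           attempt's dir when JVMs are reused)
public class LogIndexReader {
  public static void main(String[] args) throws IOException {
    try (BufferedReader idx =
             new BufferedReader(new FileReader(args[0] + "/log.index"))) {
      String line;
      while ((line = idx.readLine()) != null) {
        if (!line.startsWith("syslog:")) {
          continue;                            // only demo the syslog entry
        }
        String[] parts = line.substring("syslog:".length()).split(" ");
        long start = Long.parseLong(parts[0]); // assumed: where this task's logs begin
        long len = Long.parseLong(parts[1]);   // assumed: bytes belonging to this task
        try (RandomAccessFile syslog =
                 new RandomAccessFile(args[1] + "/syslog", "r")) {
          syslog.seek(start);
          byte[] buf = new byte[(int) Math.min(len, 4096)];
          int read = syslog.read(buf);
          System.out.write(buf, 0, Math.max(read, 0));
        }
      }
    }
  }
}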

Ajay Srivastava

On 24-Jul-2013, at 12:10 PM, Vinod Kumar Vavilapalli wrote:

Ah, I should've guessed that. You seem to have JVM reuse enabled. When JVMs are reused,
all the tasks that share a JVM write to the same log files; they only have different index
files. The same thing happens for what we call TaskCleanup tasks, which are launched for
failing/killed tasks.
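For reference, JVM reuse in MRv1 is controlled by mapred.job.reuse.jvm.num.tasks, exposed through the old JobConf API. A minimal sketch (the value used here is illustrative):

import org.apache.hadoop.mapred.JobConf;

// -1 means reuse a JVM for an unlimited number of tasks;
// 1 (the default) disables reuse, so each attempt gets its own log files.
public class JvmReuseConfig {
  public static void main(String[] args) {
    JobConf conf = new JobConf();
    conf.setNumTasksToExecutePerJvm(1);  // disable reuse: per-attempt logs
    System.out.println(conf.get("mapred.job.reuse.jvm.num.tasks"));
  }
}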


On Jul 23, 2013, at 10:55 PM, Ajay Srivastava wrote:

Hi Vinod,

Thanks. It seems that something else is going on -

Here is the content of log.index -

ajay-srivastava:userlogs ajay.srivastava$ cat job_201307222115_0188/attempt_201307222115_0188_r_000000_0/log.index
stdout:0 0
stderr:156 0
syslog:995 166247

It looks like log.index is pointing into another attempt's directory.
Is it doing some kind of optimization? What is the purpose of log.index?

Ajay Srivastava

On 24-Jul-2013, at 11:09 AM, Vinod Kumar Vavilapalli wrote:

It could mean either that those task-attempts are crashing before the process itself gets
spawned (check the TT logs), or that the logs are getting deleted after the fact. I suspect
the former.


On Jul 23, 2013, at 9:33 AM, Ajay Srivastava wrote:


I see that most of the tasks have only a log.index created in /opt/hadoop/logs/userlogs/jobId/task_attempt.
When does this happen?
Is there a config setting for this, or is it a bug?

Ajay Srivastava
