maxAllowedJobTimeMilliseconds is supposed to do exactly what
you want; see the code here:
giraph/blob/trunk/giraph-core/src/main/java/org/apache/giraph/job/DefaultJobProgressTrackerService.java#L123
However, I have never tested it with any Hadoop distro other than
Hadoop 1.0, so maybe it doesn't work in your environment.
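For reference, these options are normally passed as custom arguments (`-ca`) when launching the job through GiraphRunner. A minimal sketch, assuming the property keys match the names used in this thread (the exact keys should be verified against GiraphConstants in your Giraph version) and using a hypothetical 5-minute (300000 ms) cap:

```shell
# Hedged sketch: launch a Giraph job with a job-time cap via -ca custom arguments.
# Jar name, computation class, and input path are placeholders for illustration.
hadoop jar giraph-examples.jar org.apache.giraph.GiraphRunner \
  org.apache.giraph.examples.SimpleShortestPathsComputation \
  -vif org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat \
  -vip /user/hadoop/input/graph.json \
  -w 2 \
  -ca giraph.maxAllowedJobTimeMilliseconds=300000 \
  -ca giraph.waitTaskDoneTimeoutMs=300000
```

With DEBUG logging enabled, a working maxAllowedJobTimeMilliseconds setting should produce a log message from DefaultJobProgressTrackerService when the cap is exceeded, which is a quick way to confirm the option is being picked up at all.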
Can you share your exact configuration (job parameters and Hadoop version)
and the messages you see in the log?
On Tue, Jan 24, 2017 at 7:26 PM, José Luis Larroque wrote:
> I have to execute several Giraph processes on AWS. To do that, I have a
> script that launches one process after another until all processes are finished.
> The problem is that sometimes a container gets killed, and I spend a lot
> of time waiting for the entire Giraph app to be killed so the next one can
> start. I'm trying to reduce this time, because I know that a process that
> takes more than 5 minutes isn't going to finish (I'd rather have a few Giraph
> processes killed if the maximum time for executing all of them is
> reduced significantly).
> I already tried setting a "maximum amount of time" with the following options,
> using a really low value (1 millisecond):
> giraph.waitTaskDoneTimeoutMs -> This option makes the container throw an
> IllegalStateException but doesn't stop the Giraph app from running. I know
> that this option has a reported bug, but I hope that is not the case here.
> maxAllowedJobTimeMilliseconds -> With the LOG level at DEBUG, I couldn't
> see any impact from using this option.
> But still, I'm not getting the expected result, and I have Giraph applications
> that take 12000 seconds or more (a big waste of time, resources and
> Any help will be greatly appreciated.
> José Luis Larroque
> University Programmer Analyst - Faculty of Informatics - UNLP
> Java Developer at LIFIA