giraph-user mailing list archives

From José Luis Larroque <larroques...@gmail.com>
Subject Re: RELATION BETWEEN THE NUMBER OF GIRAPH WORKERS AND THE PROBLEM SIZE
Date Sat, 25 Feb 2017 16:00:14 GMT
You are probably looking at your Giraph application manager (gam) logs. You
should look at your workers' logs instead; each worker has its own log (the
container's logs). If you can't find them, check your YARN configuration to
find out where they are stored. See:
http://stackoverflow.com/questions/21621755/where-does-hadoop-store-the-logs-of-yarn-applications
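
If log aggregation is enabled in your YARN setup, you can usually also fetch
the container logs from the command line instead of digging through the
filesystem. A sketch (the application ID is a placeholder; the real one is
shown in the ResourceManager UI or by `yarn application -list`):

```shell
# Fetch the aggregated container logs (including every Giraph worker's log)
# for a finished application. Requires yarn.log-aggregation-enable=true.
yarn logs -applicationId <application_id>
```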

I don't recommend enabling checkpointing until you know the specific error
you are facing. If you are hitting out-of-memory errors, for example,
checkpointing won't help in my experience; the same error will just happen
over and over.
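For reference, checkpointing in Giraph is controlled by configuration
options that can be passed as custom arguments (`-ca`) to the job. A minimal
sketch; the jar name and the checkpoint directory here are placeholders for
whatever your setup uses:

```shell
# Checkpoint every 2 supersteps (the default frequency, 0, disables
# checkpointing) and write checkpoints under an HDFS directory you choose.
hadoop jar giraph-examples.jar org.apache.giraph.GiraphRunner \
  <your usual job arguments> \
  -ca giraph.checkpointFrequency=2 \
  -ca giraph.checkpointDirectory=/tmp/giraph-checkpoints
```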

-- 
*José Luis Larroque*
University Programmer Analyst - Facultad de Informática - UNLP
Java and .NET developer at LIFIA

2017-02-25 12:38 GMT-03:00 Sai Ganesh Muthuraman <saiganeshpsn@gmail.com>:

> Hi Jose,
>
> Which logs do I have to look into exactly? In the application logs, I
> found the error message that I mentioned, and it was also mentioned that
> there was *No good last checkpoint.*
> I am not able to figure out why a worker fails for bigger input files.
> What do I have to look for in the logs?
> Also, how do I enable checkpointing?
>
>
> - Sai Ganesh Muthuraman
>
