hama-user mailing list archives

From Roman Shapovalov <shapova...@graphics.cs.msu.su>
Subject Re: Location of task logs in distributed mode
Date Fri, 01 Nov 2013 14:12:05 GMT
>  I would want to use all three nodes. Is it possible?

I found the answer to that question. I just needed to add the following
property to hama-site.xml on each slave node (this is the bsp.tasks.maximum
property from hama-default.xml; set the value to the per-groom task limit
you want):

  <property>
    <name>bsp.tasks.maximum</name>
    <value>1</value>
    <description>The maximum number of BSP tasks that will be run simultaneously
by a groom server.</description>
  </property>
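[Editor's note: a minimal, stdlib-only sketch of how a groom's effective
task limit could be read back out of a hama-site.xml fragment. The property
name bsp.tasks.maximum and its description come from Hama's hama-default.xml;
the class name, the value of 1, and the inline sample XML are illustrative,
not part of Hama's API.]

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class ConfigCheck {
    // Sample hama-site.xml content; in practice you would parse the real file.
    static final String SAMPLE =
        "<configuration>"
        + "<property>"
        + "<name>bsp.tasks.maximum</name>"
        + "<value>1</value>"
        + "</property>"
        + "</configuration>";

    /** Returns the &lt;value&gt; of the named property, or null if absent. */
    static String lookup(String xml, String property) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder()
            .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        NodeList props = doc.getElementsByTagName("property");
        for (int i = 0; i < props.getLength(); i++) {
            Element p = (Element) props.item(i);
            String name = p.getElementsByTagName("name")
                           .item(0).getTextContent().trim();
            if (property.equals(name)) {
                return p.getElementsByTagName("value")
                        .item(0).getTextContent().trim();
            }
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        // Prints the configured per-groom task limit from the sample fragment.
        System.out.println("bsp.tasks.maximum = "
            + lookup(SAMPLE, "bsp.tasks.maximum"));
    }
}
```

Note that each groom reads its own copy of hama-site.xml, which is why the
setting has to be applied on every slave node, not just the master.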


On Thu, Oct 31, 2013 at 3:48 PM, Roman Shapovalov
<shapovalov@graphics.cs.msu.su> wrote:
> Hi Edward,
> Thanks for your reply.
> I figured out that the logs are stored on the same node the
> corresponding task is run, right?
> So, I have a cluster of 3 nodes, and if I run 3 tasks, all of them are
> executed on a single slave node. Why so? I would want to use all three
> nodes. Is it possible?
> Also, if I run 6 tasks, they occupy both slaves, but not the master.
> If I run 7-8 tasks, the additional tasks run on the master, but the
> tasks cannot send messages due to the following error:
> 13/10/31 11:40:46 ERROR bsp.BSPPeerImpl: Error while sending messages
> java.net.ConnectException: Call to localhost/ failed on
> connection exception: java.net.ConnectException:
> Connection refused
> Regards,
> Roman
> On Thu, Oct 31, 2013 at 3:15 PM, Edward J. Yoon <edwardyoon@apache.org> wrote:
>> No, but we'll provide a web interface for easy debugging in the future.
>> On Thu, Oct 31, 2013 at 6:12 PM, Roman Shapovalov
>> <shapovalov@graphics.cs.msu.su> wrote:
>>> Hi all,
>>> I had managed to run my program in a fully-distributed mode, but I was
>>> surprised to find the task logs on one of the slave nodes. I expected
>>> to see them on the master node (i.e. the one where the BSPMasterRunner
>>> works). If a cluster is large, it might be hard to find the logs.
>>> Is there a way to specify the logger node, or at least find it out postmortem?
>>> Thanks,
>>> Roman
>> --
>> Best Regards, Edward J. Yoon
>> @eddieyoon
