singa-dev mailing list archives
From GitBox <>
Subject [GitHub] [singa-doc] nudles edited a comment on issue #14: rearrange contents in
Date Sat, 04 Apr 2020 07:33:28 GMT
   Can the `DIST` variable be inferred from the number of GPUs?
   For MPI, you do not need to pass `num_gpus` explicitly to `DistOpt`, but for multiprocessing you do?
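   To make the asymmetry between the two launch modes concrete, here is a minimal sketch (the function and argument names are my assumptions for illustration, not SINGA's actual `DistOpt` signature): under MPI, one process is launched per GPU, so the GPU count can be inferred from the communicator's world size; under Python multiprocessing, the parent spawns the workers itself and must be told the count explicitly.

```python
# Hypothetical sketch: how num_gpus could be resolved per launch mode.
# (resolve_num_gpus is illustrative, not a SINGA API.)

def resolve_num_gpus(mode, mpi_world_size=None, num_gpus=None):
    """Return the number of GPUs to use for distributed training."""
    if mode == "mpi":
        # mpirun already launched one process per GPU, so the MPI world
        # size gives the GPU count; no explicit num_gpus is needed.
        if mpi_world_size is None:
            raise ValueError("MPI mode needs the communicator's world size")
        return mpi_world_size
    elif mode == "multiprocessing":
        # The parent process spawns the workers itself, so it must be
        # told up front how many GPUs (i.e. workers) to use.
        if num_gpus is None:
            raise ValueError("multiprocessing mode requires explicit num_gpus")
        return num_gpus
    raise ValueError(f"unknown mode: {mode}")

print(resolve_num_gpus("mpi", mpi_world_size=4))        # inferred from MPI
print(resolve_num_gpus("multiprocessing", num_gpus=2))  # must be explicit
```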
   The format of the docstring is very good!
   Some arguments may need more explanation:
   1. Is `nccl_id` compulsory for multiprocessing, and should it be None for MPI?
   2. How about `num_gpu` and `gpu_per_node`?
   3. Give a concrete example for `rank_in_local` and `rank_in_global`.
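   For item 3, a concrete example could look like this sketch. The rank semantics here are my assumption of the usual convention (to be confirmed against SINGA's code): `rank_in_local` is the GPU index within a node, and `rank_in_global` is the unique rank across all nodes.

```python
# Assumed convention (illustration only, not taken from SINGA's source):
#   rank_in_local  = index of the GPU within its node
#   rank_in_global = node_id * gpu_per_node + rank_in_local

def rank_in_global(node_id, rank_in_local, gpu_per_node):
    return node_id * gpu_per_node + rank_in_local

# Two nodes with 4 GPUs each: global ranks 0..3 on node 0, 4..7 on node 1.
for node_id in range(2):
    for local in range(4):
        print(f"node {node_id}, rank_in_local {local} "
              f"-> rank_in_global {rank_in_global(node_id, local, 4)}")
```

For example, GPU 2 on node 1 would have `rank_in_local = 2` and `rank_in_global = 6`.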
   In addition, we may need to introduce the implementation of the distributed training code in
SINGA at the end of this documentation. We have given an overview of the synchronous training
algorithm at the beginning of this documentation, but what is done on the Python side and the
C++ side, and when the NCCL and MPI APIs are called, is not explained. This part is mainly for
developers (not for end users).
   You can refer to the tensor documentation.

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:

With regards,
Apache Git Services
