singa-dev mailing list archives

From Wang Wei <wang...@comp.nus.edu.sg>
Subject Communication between GPUs
Date Tue, 21 Apr 2015 04:05:19 GMT
As planned in the previous discussion, we are stabilizing the APIs of each
module. One problem I have encountered concerns the communication APIs
needed to support GPUs.

We can use libraries like cudamat (https://code.google.com/p/cudamat/)
for linear algebra computation, so the APIs on computation would be almost
the same as those for CPUs. But I have little knowledge of the communication
between GPU and CPU, or of the communication between GPUs, so I am asking
for your suggestions.

Wangyuan, Wuwei and Haibo: since you are working on deep learning with
GPUs, your feedback would be much appreciated.

As far as I know, messages are traditionally transferred from GPU memory
to CPU memory, sent over TCP/IP to other nodes, and then copied from CPU
memory back into GPU memory. We can easily support such communication with
the current CPU APIs, but the transfers between GPU and CPU would add
extra cost.
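To make the staged path concrete, here is a minimal Python sketch of that
traditional pipeline. It is only an illustration under stated assumptions:
the two cudaMemcpy steps are represented by plain byte copies (this sketch
runs entirely on the CPU), a stdlib socket pair stands in for the TCP/IP
link, and the helper names (send_param, recv_param) are hypothetical, not
part of any existing API.

```python
import socket
import struct

def send_param(sock, device_buf: bytes) -> None:
    # Step 1: in a real system, cudaMemcpy(DeviceToHost) would copy the
    # parameter from GPU memory into this host buffer.
    host_buf = bytes(device_buf)
    # Step 2: ship the host buffer over the wire with a length prefix
    # (a messaging library such as ZeroMQ would do this framing for us).
    sock.sendall(struct.pack("!I", len(host_buf)) + host_buf)

def recv_param(sock) -> bytes:
    # Step 3: receive the length prefix, then the payload.
    (n,) = struct.unpack("!I", _read_exact(sock, 4))
    host_buf = _read_exact(sock, n)
    # Step 4: in a real system, cudaMemcpy(HostToDevice) would copy the
    # received host buffer back into GPU memory here.
    return host_buf

def _read_exact(sock, n: int) -> bytes:
    # Read exactly n bytes from a stream socket.
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("socket closed before message completed")
        data += chunk
    return data

if __name__ == "__main__":
    # A connected socket pair stands in for the TCP/IP hop between nodes.
    a, b = socket.socketpair()
    send_param(a, b"\x01\x02\x03\x04")
    received = recv_param(b)
    assert received == b"\x01\x02\x03\x04"
```

The extra cost mentioned above comes from steps 1 and 4: every message
pays for two host/device copies on top of the network transfer itself.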
NVIDIA provides a technique called GPUDirect, which enables passing
messages directly from GPU memory to the network card (e.g., InfiniBand).
Some MPI variants now use this technique. But since we have switched from
MPI to ZeroMQ, we need to make sure that ZeroMQ supports GPUDirect and
InfiniBand. Have you investigated this? Or how do you implement the
message transfer in your implementations?

Thanks.

regards,
Wei
