singa-dev mailing list archives

From "ASF subversion and git services (JIRA)" <>
Subject [jira] [Commented] (SINGA-100) Implement layers using CUDNN for GPU training
Date Wed, 16 Dec 2015 12:11:46 GMT


ASF subversion and git services commented on SINGA-100:

Commit f9d42687a623e6f010aedeca14692baaadc8241d in incubator-singa's branch refs/heads/master
from [~zhongle]
[;h=f9d4268 ]

SINGA-100 Implement layers using CUDNN for GPU training

Automatically add library path when using `--with-cuda' and `--with-cudnn'.
Users can now pass the library check even if the CUDA/cuDNN library paths are not set in the environment.
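The commit above makes the `--with-cuda' and `--with-cudnn' configure flags add the library paths automatically. A hypothetical invocation might look like the following; the install prefixes shown are common defaults, not paths taken from this commit:

```shell
# Sketch only: configure SINGA with GPU support.
# /usr/local/cuda and /usr/local/cudnn are assumed install prefixes.
./configure --with-cuda=/usr/local/cuda --with-cudnn=/usr/local/cudnn

# With this fix, the configure-time library check should pass even when
# LD_LIBRARY_PATH does not include the CUDA/cuDNN lib directories.
make
```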

> Implement layers using CUDNN for GPU training
> ---------------------------------------------
>                 Key: SINGA-100
>                 URL:
>             Project: Singa
>          Issue Type: New Feature
>            Reporter: wangwei
> NVIDIA has released the cuDNN library, optimized for CNN operations such as convolution,
> pooling, etc. It achieves good overall performance. Hence, it is essential to add
> cuDNN-supported layers in SINGA for efficient GPU training (SINGA-41).
> We will use the cuDNN library to implement CNN layers, namely,
> cudnnConvolutionLayer, cudnnPoolingLayer, cudnnLRNLayer, cudnnSoftmaxLayer, cudnnReLULayer,
> cudnnSigmoidLayer, cudnnTanhLayer, cudnnDivNormLayer.
> Data type float-16 will not be considered in this ticket.
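As a rough illustration of how such layers wrap cuDNN, here is a minimal sketch of a ReLU forward pass using the cuDNN activation API. This is not code from SINGA; it uses the descriptor-based API of cuDNN v5 and later (the cuDNN version current at the time of this ticket had a slightly different signature), and error checking is omitted for brevity. It requires an NVIDIA GPU with the CUDA toolkit and cuDNN installed:

```cpp
#include <cudnn.h>
#include <cuda_runtime.h>
#include <cstdio>

int main() {
  cudnnHandle_t handle;
  cudnnCreate(&handle);

  // Describe a 1x1x2x2 float tensor in NCHW layout; the same
  // descriptor is reused for input and output.
  cudnnTensorDescriptor_t desc;
  cudnnCreateTensorDescriptor(&desc);
  cudnnSetTensor4dDescriptor(desc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT,
                             1, 1, 2, 2);

  // Copy a small input to the device.
  float h_x[4] = {-1.f, 2.f, -3.f, 4.f}, h_y[4];
  float *d_x, *d_y;
  cudaMalloc(&d_x, sizeof(h_x));
  cudaMalloc(&d_y, sizeof(h_y));
  cudaMemcpy(d_x, h_x, sizeof(h_x), cudaMemcpyHostToDevice);

  // Configure a ReLU activation (cuDNN v5+ descriptor API).
  cudnnActivationDescriptor_t act;
  cudnnCreateActivationDescriptor(&act);
  cudnnSetActivationDescriptor(act, CUDNN_ACTIVATION_RELU,
                               CUDNN_NOT_PROPAGATE_NAN, 0.0);

  // y = alpha * relu(x) + beta * y
  float alpha = 1.f, beta = 0.f;
  cudnnActivationForward(handle, act, &alpha, desc, d_x, &beta, desc, d_y);

  cudaMemcpy(h_y, d_y, sizeof(h_y), cudaMemcpyDeviceToHost);
  for (float v : h_y) printf("%g ", v);  // prints: 0 2 0 4
  printf("\n");

  cudaFree(d_x);
  cudaFree(d_y);
  cudnnDestroyActivationDescriptor(act);
  cudnnDestroyTensorDescriptor(desc);
  cudnnDestroy(handle);
  return 0;
}
```

A full cudnnReLULayer would additionally manage the descriptors across batches and implement the corresponding backward pass via cudnnActivationBackward.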

This message was sent by Atlassian JIRA
