singa-dev mailing list archives

From GitBox <...@apache.org>
Subject [GitHub] [singa] chrishkchris edited a comment on issue #552: SINGA-496 Implement softplus and softsign functions for tensor math
Date Thu, 14 Nov 2019 14:04:33 GMT
chrishkchris edited a comment on issue #552: SINGA-496 Implement softplus and softsign functions for tensor math
URL: https://github.com/apache/singa/pull/552#issuecomment-553900186
 
 
   For example, it may look something like this (the following is for reference only and has not been tested):
   
   1. In math_kernel.h, it might look like this:
   
   void softsign(const size_t n, const float *in, float *out, cudaStream_t s);
   
   2. In math_kernel.cu, it might look like this:
   
   __global__ void KernelSoftsign(const size_t n, const float *in, float *out) {
     // Grid-stride loop so any launch configuration covers all n elements.
     for (size_t i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
          i += blockDim.x * gridDim.x) {
       // softsign(x) = x / (1 + |x|)
       out[i] = in[i] / (fabsf(in[i]) + 1.0f);
     }
   }
   
   void softsign(const size_t n, const float *in, float *out, cudaStream_t s) {
     KernelSoftsign<<<ceil(n / CU1DBLOCKF), CU1DBLOCKF, 0, s>>>(n, in, out);
   }
   
   More precisely, the kernel should use fabsf from the CUDA math API (rather than std::fabsf); see:
   https://docs.nvidia.com/cuda/cuda-math-api/group__CUDA__MATH__SINGLE.html#group__CUDA__MATH__SINGLE_1gb00f8593e1bfb1985526020fbec4e0fc
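   
   Since the pull request also covers softplus, an analogous kernel might look like the sketch below. This is a minimal, untested sketch that is not from the original comment; it assumes the same CU1DBLOCKF block-size constant and the same math_kernel.h / math_kernel.cu layout as above, and uses expf and log1pf from the same CUDA math API:
   
   __global__ void KernelSoftplus(const size_t n, const float *in, float *out) {
     // Grid-stride loop, same pattern as KernelSoftsign above.
     for (size_t i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
          i += blockDim.x * gridDim.x) {
       // softplus(x) = ln(1 + exp(x)); log1pf keeps precision when exp(x) is small.
       out[i] = log1pf(expf(in[i]));
     }
   }
   
   void softplus(const size_t n, const float *in, float *out, cudaStream_t s) {
     KernelSoftplus<<<ceil(n / CU1DBLOCKF), CU1DBLOCKF, 0, s>>>(n, in, out);
   }
   
   Note that expf can overflow for large positive inputs; a numerically safer variant is fmaxf(x, 0.0f) + log1pf(expf(-fabsf(x))), which is mathematically equivalent.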

