mxnet-commits mailing list archives

From GitBox <...@apache.org>
Subject [GitHub] [incubator-mxnet] xidulu edited a comment on issue #15928: [RFC] A faster version of Gamma sampling on GPU.
Date Mon, 19 Aug 2019 09:04:30 GMT
xidulu edited a comment on issue #15928: [RFC] A faster version of Gamma sampling on GPU.
URL: https://github.com/apache/incubator-mxnet/issues/15928#issuecomment-522469258
 
 
   @ptrendx 
   
   The device-side API I mentioned is the `RandGenerator` class (the one used in `ndarray.random()`); it generates random numbers with `curand_uniform()`:
   https://github.com/apache/incubator-mxnet/blob/master/include/mxnet/random_generator.h#L111
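
   For context, the device-side pattern that `RandGenerator` wraps looks roughly like the minimal sketch below (a standalone illustration, not MXNet's actual kernel; the function name and launch shape are made up):

```cuda
#include <curand_kernel.h>

// Sketch: each thread owns a Philox state and draws numbers with
// curand_uniform() inside the kernel. RandGenerator keeps a pool of such
// states; this standalone version simply initializes one per thread.
__global__ void sample_uniform_device(float *out, size_t n, unsigned long long seed) {
  size_t tid = blockIdx.x * blockDim.x + threadIdx.x;
  size_t stride = (size_t)gridDim.x * blockDim.x;
  curandStatePhilox4_32_10_t state;
  curand_init(seed, tid, 0, &state);    // per-thread state setup
  for (size_t i = tid; i < n; i += stride) {
    out[i] = curand_uniform(&state);    // one draw in (0, 1] per element
  }
}
```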
   
   The host API (the one I used) can be seen here:
   https://github.com/apache/incubator-mxnet/blob/master/3rdparty/mshadow/mshadow/random.h#L370

   Random numbers are generated with `curandGenerateUniform()`.
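
   For comparison, the host-API path is roughly the following sketch (again a standalone illustration rather than mshadow's actual code; the function name is made up):

```cuda
#include <curand.h>

// Sketch: one generator object on the host fills an entire device buffer
// with a single library call, instead of per-thread device-side states.
void sample_uniform_host(float *d_out, size_t n, unsigned long long seed) {
  curandGenerator_t gen;
  curandCreateGenerator(&gen, CURAND_RNG_PSEUDO_PHILOX4_32_10);
  curandSetPseudoRandomGeneratorSeed(gen, seed);
  curandGenerateUniform(gen, d_out, n);   // fills d_out[0..n) on the GPU
  curandDestroyGenerator(gen);
}
```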
   
   In terms of random number generation, `RandGenerator` (which is basically a wrapper over the CUDA device API, IMO) may be comparable to mshadow/random.
   ~However, is it possible that the overhead of _managing random states_ in `RandGenerator` affects its performance?~
   
   ------------------
   Update:
   
   To find the bottleneck of `ndarray.random()`, I removed the while loop in the kernel:
   https://github.com/apache/incubator-mxnet/blob/fb4f9d55382538fe688638b741830d84ae0d783e/src/operator/random/sampler.h#L183
   
   The new version is about ten times faster than the original one: 160 ms vs. 1600 ms (of course, some samples are not sampled correctly).
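
   For reference, the while loop in that kernel is the rejection step of a Marsaglia-Tsang style gamma sampler; a simplified device-side sketch of its shape (not MXNet's exact code, helper name made up) is:

```cuda
#include <curand_kernel.h>

// Simplified Marsaglia-Tsang gamma sampling step (alpha >= 1), showing the
// kind of rejection loop the experiment removed. Each iteration draws one
// normal and one uniform sample, so the loop makes per-thread work
// data-dependent and is a natural suspect for the slowdown.
__device__ float sample_gamma_mt(curandStatePhilox4_32_10_t *state, float alpha) {
  const float d = alpha - 1.0f / 3.0f;
  const float c = 1.0f / sqrtf(9.0f * d);
  while (true) {                                  // <-- the rejection loop
    float x = curand_normal(state);
    float v = 1.0f + c * x;
    if (v <= 0.0f) continue;
    v = v * v * v;
    float u = curand_uniform(state);
    if (logf(u) < 0.5f * x * x + d - d * v + d * logf(v)) {
      return d * v;                               // accepted sample
    }
  }
}
```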
   
   

