singa-dev mailing list archives

From GitBox <>
Subject [GitHub] [singa] joddiy opened a new issue #710: CUDA failed when call tensor.concatenate several times
Date Wed, 27 May 2020 03:33:48 GMT

joddiy opened a new issue #710:

   Hi @dcslin, I currently have an issue when I run Conv2d with odd_padding. As you can see,
sometimes I need to pad zeros on only one side of an axis, so I wrote this function:
    def handle_odd_pad_fwd(x, odd_padding):
        """
        handle odd padding mode forward
        Args:
            x, the input tensor
            odd_padding, the odd_padding
        Returns:
            tensor, the output
        """
        x_tensor = tensor.from_raw_tensor(x)
        # (axis, left padding if True else right padding)
        flags = [(2, True), (2, False), (3, True), (3, False)]
        for (axis, left), pad in zip(flags, odd_padding):
            if pad == 0:
                continue
            zeros_shape = list(x_tensor.shape)
            zeros_shape[axis] = pad
            zero_padding = np.zeros(zeros_shape).astype(np.float32)
            zero_padding = tensor.Tensor(device=x.device(), data=zero_padding)
            if left:
                x_tensor = tensor.concatenate((zero_padding, x_tensor), axis)
            else:
                x_tensor = tensor.concatenate((x_tensor, zero_padding), axis)
        return x_tensor
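   For reference, the same one-sided padding logic can be sketched in plain NumPy (no SINGA, no GPU), which is a quick way to check the intended output shapes independently of the CUDA path. `odd_pad_numpy` and its `(top, bottom, left, right)` argument order are my assumptions for the sketch, mirroring the flags list above:

    import numpy as np

    def odd_pad_numpy(x, odd_padding):
        # x: NCHW array; odd_padding: (top, bottom, left, right) zero rows/cols
        # (axis, prepend if True else append) -- same flags as the SINGA version
        flags = [(2, True), (2, False), (3, True), (3, False)]
        for (axis, left), pad in zip(flags, odd_padding):
            if pad == 0:
                continue
            zeros_shape = list(x.shape)
            zeros_shape[axis] = pad
            zeros = np.zeros(zeros_shape, dtype=np.float32)
            x = np.concatenate((zeros, x) if left else (x, zeros), axis=axis)
        return x

    x = np.ones((1, 1, 3, 3), dtype=np.float32)
    y = odd_pad_numpy(x, (1, 0, 1, 0))  # one zero row on top, one zero column on the left
    print(y.shape)  # (1, 1, 4, 4)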
   But it seems that calling this function once or twice is fine; if I call it more times, it reports an error:
   > F0526 12:53:40.017063 15641 tensor_math_cuda.h:791] Check failed: status == CURAND_STATUS_SUCCESS
   I guess the reason may be that the GPU memory isn't released in time?

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
