From dev-return-5046-archive-asf-public=cust-asf.ponee.io@singa.apache.org Thu Apr 9 14:16:04 2020
From: GitBox
To: dev@singa.apache.org
Subject: [GitHub] [singa] nudles commented on a change in pull request #662: CUDNN LSTM
Message-ID: <158644176328.1900.5922454486202282154.gitbox@gitbox.apache.org>
Date: Thu, 09 Apr 2020 14:16:03 -0000

nudles commented on a change in pull request #662: CUDNN LSTM
URL: https://github.com/apache/singa/pull/662#discussion_r406223532

##########
File path: python/singa/autograd.py
##########

@@ -3239,95 +3239,176 @@ def __init__(
         bidirectional (bool): If True, becomes a bidirectional RNN.
             Default: False
         """
-        self.nonlinearity = nonlinearity
-
-        Wx_shape = (input_size, hidden_size)
-        self.Wx = []
-        for i in range(4):
-            w = Tensor(shape=Wx_shape, requires_grad=True, stores_grad=True)
-            w.gaussian(0.0, 1.0)
-            self.Wx.append(w)
-
-        Wh_shape = (hidden_size, hidden_size)
-        self.Wh = []
-        for i in range(4):
-            w = Tensor(shape=Wh_shape, requires_grad=True, stores_grad=True)
-            w.gaussian(0.0, 1.0)
-            self.Wh.append(w)
-
-        Bx_shape = (hidden_size,)
-        self.Bx = []
-        for i in range(4):
-            b = Tensor(shape=Bx_shape, requires_grad=True, stores_grad=True)
-            b.set_value(0.0)
-            self.Bx.append(b)
-
-        self.Bh = []
-        for i in range(4):
-            b = Tensor(shape=Bx_shape, requires_grad=True, stores_grad=True)
-            b.set_value(0.0)
-            self.Bh.append(b)
-
-        self.params = self.Wx + self.Wh + self.Bx + self.Bh
+        self.backend = backend
+        if backend == "singa":
+            self.nonlinearity = nonlinearity
+
+            Wx_shape = (input_size, hidden_size)
+            self.Wx = []
+            for i in range(4):
+                w = Tensor(shape=Wx_shape, requires_grad=True, stores_grad=True)
+                w.gaussian(0.0, 1.0)
+                self.Wx.append(w)
+
+            Wh_shape = (hidden_size, hidden_size)
+            self.Wh = []
+            for i in range(4):
+                w = Tensor(shape=Wh_shape, requires_grad=True, stores_grad=True)
+                w.gaussian(0.0, 1.0)
+                self.Wh.append(w)
+
+            Bx_shape = (hidden_size,)
+            self.Bx = []
+            for i in range(4):
+                b = Tensor(shape=Bx_shape, requires_grad=True, stores_grad=True)
+                b.set_value(0.0)
+                self.Bx.append(b)
+
+            self.Bh = []
+            for i in range(4):
+                b = Tensor(shape=Bx_shape, requires_grad=True, stores_grad=True)
+                b.set_value(0.0)
+                self.Bh.append(b)
+
+            self.params = self.Wx + self.Wh + self.Bx + self.Bh
+        elif backend == "cudnn":
+            if not singa.USE_CUDA:
+                raise Exception("Could not use cudnn without cuda compiled.\n")
+            if not inputs:
+                raise Exception("Input is required for init cudnn LSTM.\n")

Review comment:
   do you need the inputs data or just input shape?
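Not part of the PR, but relevant to the reviewer's question: a minimal NumPy sketch of the four-gate parameter layout that the "singa" branch above creates. The function names (`init_lstm_params`, `lstm_step`) and the gate ordering are hypothetical, chosen for illustration only. The point it demonstrates is that building LSTM parameters requires only `input_size` and `hidden_size` (shape information), not the input tensors themselves.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def init_lstm_params(input_size, hidden_size, rng):
    # Same layout as the "singa" branch in the diff: four gates, each with
    # input weights Wx, recurrent weights Wh, and two bias vectors Bx, Bh.
    # Note: only the sizes are needed here, no input data.
    Wx = [rng.normal(0.0, 1.0, (input_size, hidden_size)) for _ in range(4)]
    Wh = [rng.normal(0.0, 1.0, (hidden_size, hidden_size)) for _ in range(4)]
    Bx = [np.zeros(hidden_size) for _ in range(4)]
    Bh = [np.zeros(hidden_size) for _ in range(4)]
    return Wx, Wh, Bx, Bh

def lstm_step(x, h, c, params):
    # One LSTM cell step; the gate order (input, forget, cell, output)
    # is an assumption for this sketch.
    Wx, Wh, Bx, Bh = params
    pre = [x @ Wx[k] + h @ Wh[k] + Bx[k] + Bh[k] for k in range(4)]
    i, f, o = sigmoid(pre[0]), sigmoid(pre[1]), sigmoid(pre[3])
    g = np.tanh(pre[2])
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

params = init_lstm_params(input_size=3, hidden_size=5,
                          rng=np.random.default_rng(0))
x = np.zeros((2, 3))   # batch of 2, feature size 3
h = np.zeros((2, 5))
c = np.zeros((2, 5))
h, c = lstm_step(x, h, c, params)
print(h.shape, c.shape)  # (2, 5) (2, 5)
```

For the cuDNN backend the situation may differ: cuDNN's RNN descriptors are typically configured from sizes and data types, so whether the actual `inputs` tensor (rather than just its shape) is needed at construction time is exactly what the reviewer is probing.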
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org

With regards,
Apache Git Services