singa-dev mailing list archives

From GitBox <...@apache.org>
Subject [GitHub] [singa] Shashankwer opened a new issue #719: Unable to convert maxpool2d from onnx to singa with ceil_mode set as False
Date Tue, 02 Jun 2020 20:57:28 GMT

Shashankwer opened a new issue #719:
URL: https://github.com/apache/singa/issues/719


   Hi,
   
   For converting an ONNX model to SINGA, sonnx.py is used; the individual ONNX operators
are translated through it. The current implementation does not support MaxPool with
**ceil_mode** set to True, nor the **count_include_pad** attribute.
   
   For MaxPool2d in PyTorch, **ceil_mode** is a boolean attribute that defaults to False.
When a PyTorch model is exported to ONNX, this attribute is sometimes transferred to the
ONNX node as:
   
   `onnx_node.attrs["ceil_mode"] = 0` 
    
   which is a valid context for conversion to the SINGA format. The current code in sonnx,
however, checks only for the presence of the **ceil_mode** attribute in **onnx_node.attrs**
before raising an exception, as illustrated below:
   ```
   def _create_max_avg_pool(cls, onnx_node, inputs, opset_version):
           """
           get the max or avg pool operator from onnx node
           Args:
               onnx_node: a given onnx node
           Args:
               inputs: the input tensor
           Args:
               opset_version: the opset version
           Returns: 
               handle, the handle of singa operator
           Returns: 
               forward, the autograd of singa operator
           """
           kernel = tuple(onnx_node.attrs["kernel_shape"])
           padding = tuple(
               onnx_node.attrs["pads"]) if "pads" in onnx_node.attrs else (0, 0)
           stride = tuple(onnx_node.getattr('strides', (1, 1)))
           # default the odd_padding is 0, once there are same pad mode, we modify it
           # for odd_padding, please refer the autegrade.py
           odd_padding = (0, 0, 0, 0)
           if "auto_pad" in onnx_node.attrs:
               auto_pad = utils.force_unicode(onnx_node.attrs['auto_pad'])
               if auto_pad in ('SAME_UPPER', 'SAME_LOWER'):
                   padding, odd_padding = utils.get_padding_shape(
                       auto_pad, inputs[0].shape[2:], kernel, stride)
   
        # not support count_include_pad and auto_pad
        if "count_include_pad" in onnx_node.attrs or "ceil_mode" in onnx_node.attrs:
            raise ValueError(
                "Not implemented yet for count_include_pad or ceil_mode")
   
           # only support 2d
           if len(kernel) != 2:
               raise ValueError("Not implemented yet")
   
           is_max = onnx_node.op_type == 'MaxPool'
           x = inputs[0]
           if x.device.id() == -1:
               handle = singa.PoolingHandle(x.data, kernel, stride, padding,
                                            is_max)
           else:
               handle = singa.CudnnPoolingHandle(x.data, kernel, stride, padding,
                                                 is_max)
   
           _, forward = cls._common_onnx_node_to_singa_op(onnx_node, inputs,
                                                          opset_version)
           return _, forward(handle, odd_padding)
   ```
   
   The code does not consider the case where **ceil_mode** is present but set to **False/0**.
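   A minimal, self-contained sketch (using a plain dict as a hypothetical stand-in for
**onnx_node.attrs**) shows why the presence check misfires when PyTorch exports a model
with ceil_mode=False:

   ```python
   # Hypothetical attrs dict standing in for onnx_node.attrs after a
   # PyTorch export with ceil_mode=False: the attribute is present, value 0.
   attrs = {"kernel_shape": [3, 3], "ceil_mode": 0}

   # Current sonnx check: mere presence of the key triggers the rejection.
   presence_check_rejects = "ceil_mode" in attrs

   # Value-aware check: reject only when ceil_mode is actually enabled.
   value_check_rejects = bool(attrs.get("ceil_mode", 0))

   print(presence_check_rejects, value_check_rejects)  # True False
   ```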
   
   The following change handles this edge case:
   
   ```
   def _create_max_avg_pool(cls, onnx_node, inputs, opset_version):
           """
           get the max or avg pool operator from onnx node
           Args:
               onnx_node: a given onnx node
           Args:
               inputs: the input tensor
           Args:
               opset_version: the opset version
           Returns: 
               handle, the handle of singa operator
           Returns: 
               forward, the autograd of singa operator
           """
           kernel = tuple(onnx_node.attrs["kernel_shape"])
           padding = tuple(
               onnx_node.attrs["pads"]) if "pads" in onnx_node.attrs else (0, 0)
           stride = tuple(onnx_node.getattr('strides', (1, 1)))
           # default the odd_padding is 0, once there are same pad mode, we modify it
           # for odd_padding, please refer the autegrade.py
           odd_padding = (0, 0, 0, 0)
           if "auto_pad" in onnx_node.attrs:
               auto_pad = utils.force_unicode(onnx_node.attrs['auto_pad'])
               if auto_pad in ('SAME_UPPER', 'SAME_LOWER'):
                   padding, odd_padding = utils.get_padding_shape(
                       auto_pad, inputs[0].shape[2:], kernel, stride)
   
        # ceil_mode is unsupported only when it is explicitly enabled;
        # count_include_pad is not supported at all
        if "ceil_mode" in onnx_node.attrs and onnx_node.attrs["ceil_mode"]:
            raise ValueError("Not implemented yet for ceil_mode")
        if "count_include_pad" in onnx_node.attrs:
            raise ValueError("Not implemented yet for count_include_pad")
   
           # only support 2d
           if len(kernel) != 2:
               raise ValueError("Not implemented yet")
   
           is_max = onnx_node.op_type == 'MaxPool'
           x = inputs[0]
           if x.device.id() == -1:
               handle = singa.PoolingHandle(x.data, kernel, stride, padding,
                                            is_max)
           else:
               handle = singa.CudnnPoolingHandle(x.data, kernel, stride, padding,
                                                 is_max)
   
           _, forward = cls._common_onnx_node_to_singa_op(onnx_node, inputs,
                                                          opset_version)
           return _, forward(handle, odd_padding)
   ```
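   The proposed guard can also be factored into a small helper (a sketch only:
`_check_pool_attrs` is a hypothetical name, and `attrs` stands in for `onnx_node.attrs`):

   ```python
   def _check_pool_attrs(attrs):
       """Reject unsupported pooling attributes, mirroring the proposed patch.

       Hypothetical helper; `attrs` stands in for onnx_node.attrs.
       """
       # ceil_mode=0 (e.g. PyTorch exported with ceil_mode=False) is fine;
       # only a truthy value is unsupported.
       if attrs.get("ceil_mode", 0):
           raise ValueError("Not implemented yet for ceil_mode")
       if "count_include_pad" in attrs:
           raise ValueError("Not implemented yet for count_include_pad")

   _check_pool_attrs({"ceil_mode": 0})       # accepted: no exception raised
   try:
       _check_pool_attrs({"ceil_mode": 1})   # still rejected
   except ValueError as e:
       print(e)  # Not implemented yet for ceil_mode
   ```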
   
   This issue was encountered while converting shufflenetv2 from ONNX to SINGA.
   
   Please let us know if this change is possible.
   
   Thanks and Regards,
   Shashank Nigam
   

