tvm-dev mailing list archives

From Zhao Wu <>
Subject Re: [dmlc/tvm] [RFC][Quantization] Support quantized models from TensorflowLite (#2351)
Date Wed, 29 May 2019 02:55:38 GMT

For `q_conv2d`, we will add two more arguments. These will be used to restrict the output range, which can be calculated ahead of time; see
TFLite's `CalculateActivationRangeUint8` function.
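To make the idea concrete, here is a Python sketch of what `CalculateActivationRangeUint8` computes: the clamp bounds for a fused activation, expressed in the uint8 quantized domain of the output tensor. The function name, the string-based activation argument, and the example quantization parameters are illustrative; the real TFLite implementation is in C++ with a different signature.

```python
def calculate_activation_range_uint8(activation, scale, zero_point):
    """Return (act_min, act_max) clamp bounds in the uint8 quantized domain.

    Sketch of TFLite's CalculateActivationRangeUint8: the fused activation's
    real-valued bounds are mapped through the output tensor's quantization
    params, then intersected with the uint8 range.
    """
    qmin, qmax = 0, 255  # uint8 limits

    def quantize(real_value):
        # Map a real value into the output tensor's quantized domain.
        return zero_point + int(round(real_value / scale))

    if activation == "relu":
        return max(qmin, quantize(0.0)), qmax
    elif activation == "relu6":
        return max(qmin, quantize(0.0)), min(qmax, quantize(6.0))
    elif activation == "relu_n1_to_1":
        return max(qmin, quantize(-1.0)), min(qmax, quantize(1.0))
    else:
        # No fused activation: the full uint8 range.
        return qmin, qmax

# Example: output quantized as scale=6/255, zero_point=0 with fused RELU6.
print(calculate_activation_range_uint8("relu6", 6.0 / 255.0, 0))  # (0, 255)
```

Passing these two precomputed bounds into `q_conv2d` lets the kernel clamp its requantized output directly, instead of emitting a separate `q_relu` op.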

From my experience, we don't need `q_relu`, but we do need `q_add` / `q_concat` and so on.
I suggest we use the `MobilenetV2` quant model as the example: it is very widely used and contains
the common ops we should consider, e.g. `depthwise convolution / add / pool` and so on.
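For instance, `q_add` cannot simply add the two uint8 inputs, because each operand and the output generally have different scales and zero points. A float-simulation sketch of the required requantization is below; real TFLite/TVM kernels replace the float math with fixed-point multipliers, so the function name and parameter layout here are illustrative only.

```python
def quantized_add(a, b, s_a, z_a, s_b, z_b, s_out, z_out,
                  act_min=0, act_max=255):
    """Element-wise quantized add, float-simulation form (illustrative sketch).

    a, b and the result are uint8 values; s_* are scales and z_* are
    zero points for the two inputs and the output.
    """
    # Dequantize both operands to real values and add them.
    real = s_a * (a - z_a) + s_b * (b - z_b)
    # Requantize the real sum into the output tensor's quantized domain.
    q = z_out + int(round(real / s_out))
    # Clamp with the precomputed activation range (cf. q_conv2d above).
    return min(max(q, act_min), act_max)

# Example: both inputs represent the real value 1.0 under their own params;
# the real sum 2.0 requantizes to 132 under scale=0.5, zero_point=128.
print(quantized_add(130, 132, 0.5, 128, 0.25, 128, 0.5, 128))  # 132
```

`q_concat` faces the same issue: every input must be requantized to the output's scale and zero point before the tensors can be joined.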
