tvm-commits mailing list archives

From GitBox <...@apache.org>
Subject [GitHub] [incubator-tvm] siju-samuel commented on a change in pull request #5362: [Tutorial - QNN] Prequantized MXNet model compilation.
Date Sat, 18 Apr 2020 04:37:33 GMT
siju-samuel commented on a change in pull request #5362: [Tutorial - QNN] Prequantized MXNet model compilation.
URL: https://github.com/apache/incubator-tvm/pull/5362#discussion_r410610393
 
 

 ##########
 File path: tutorials/frontend/deploy_prequantized_pytorch.py
 ##########
 @@ -15,17 +15,20 @@
 # specific language governing permissions and limitations
 # under the License.
 """
-Deploy a Framework-prequantized Model with TVM
-==============================================
+Deploy a Framework-prequantized Model with TVM - Part 1 (PyTorch)
+=================================================================
 **Author**: `Masahiro Masuda <https://github.com/masahi>`_
 
 This is a tutorial on loading models quantized by deep learning frameworks into TVM.
 Pre-quantized model import is one of the quantization support we have in TVM. More details on
 the quantization story in TVM can be found
 `here <https://discuss.tvm.ai/t/quantization-story/3920>`_.
 
-Here, we demonstrate how to load and run models quantized by PyTorch, MXNet, and TFLite.
-Once loaded, we can run compiled, quantized models on any hardware TVM supports.
 +In this series of tutorials, we demonstrate how to load and run models quantized by PyTorch (Part
 +1), MXNet (Part 2), and TFLite (Part 3). Once loaded, we can run compiled, quantized models on any
 +hardware TVM supports.
+
+This is part 1 of the tutorial, where we will focus on PyTorch-prequantized models.
 
 Review comment:
   Since the three tutorials are in different files, I suggest we remove the references to MXNet and TFLite here. Maybe the line below is enough.
   
   ```
   Here, we demonstrate how to load and run models quantized by PyTorch.
   Once loaded, we can run compiled, quantized models on any hardware TVM supports.
   ```

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services
