tvm-commits mailing list archives

From GitBox <...@apache.org>
Subject [GitHub] [incubator-tvm] Hzfengsy commented on issue #5099: [TOPI][Tensor Core] Conv2d and Dense ops support on Tensor Core
Date Sat, 21 Mar 2020 16:09:55 GMT
Hzfengsy commented on issue #5099: [TOPI][Tensor Core] Conv2d and Dense ops support on Tensor Core
URL: https://github.com/apache/incubator-tvm/pull/5099#issuecomment-602065959
 
 
   Thank you for your interest, @jwfromm! That is a very good question, and I am sure other people share the same confusion, so I would like to tell the story in detail.
   
   PR https://github.com/apache/incubator-tvm/pull/4136 introduced the Tensor Core low-level intrinsics, which form TVM's Tensor Core infrastructure. They enable many different ways to use Tensor Core in TVM; this PR and https://github.com/apache/incubator-tvm/pull/4234 are exactly two of them.
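   To give a flavor of that infrastructure, below is a minimal sketch of one such intrinsic, paraphrased from the opt_conv_tensorcore tutorial rather than taken from this PR; exact namespaces and intrinsic names vary a little between TVM versions, so treat it as illustrative only.

```python
import tvm
from tvm import te


def intrin_wmma_load_matrix(scope):
    """Map a 16x16 copy onto tvm_load_matrix_sync, which loads one
    WMMA fragment (e.g. wmma.matrix_a) from shared memory."""
    n = 16
    A = te.placeholder((n, n), name="A", dtype="float16")
    BA = tvm.tir.decl_buffer(A.shape, A.dtype, scope="shared",
                             data_alignment=32, offset_factor=256)
    C = te.compute((n, n), lambda i, j: A[i, j], name="C")
    BC = tvm.tir.decl_buffer(C.shape, C.dtype, scope=scope,
                             data_alignment=32, offset_factor=256)

    def intrin_func(ins, outs):
        ib = tvm.tir.ir_builder.create()
        ba, bc = ins[0], outs[0]
        ib.emit(tvm.tir.call_intrin(
            "handle", "tir.tvm_load_matrix_sync",
            bc.data, n, n, n, bc.elem_offset // 256,
            ba.access_ptr("r"), n, "row_major"))
        return ib.get()

    return te.decl_tensor_intrin(C.op, intrin_func, binds={A: BA, C: BC})


# A schedule can then tensorize one 16x16 tile of a cache-read stage, e.g.
# (`s` and `AF` are placeholders for a schedule and its fragment stage):
#   s[AF].tensorize(AF.op.axis[-2], intrin_wmma_load_matrix("wmma.matrix_a"))
```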
   
   PR https://github.com/apache/incubator-tvm/pull/4234 uses a pass, `RewriteForTensorCore`, to detect the matmul pattern (please see RFC https://github.com/apache/incubator-tvm/issues/4105 for the details). The good thing is that users can write a normal matmul schedule and the pass will do the rest. However, this algorithm brings too many constraints:
   - The pass can only detect the GEMM pattern; it cannot support conv2d.
   - The algorithm only supports one local fragment per warp, which brings a large performance regression on large-scale workloads. For a better understanding of this point, please see Figure 7 of the CUTLASS introduction (https://devblogs.nvidia.com/cutlass-linear-algebra-cuda/), which uses a 2x4 grid of local fragments for large-scale GEMM; a rough arithmetic sketch follows below.
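   To see why multiple fragments per warp matter, here is a small arithmetic sketch (my own illustration, not code from either PR), assuming 16x16x16 fragments and the 2x4 warp tile from the CUTLASS figure:

```python
# With one fragment per warp, every mma consumes a freshly loaded A and B
# fragment, so there is no register-level reuse of operands.
loads_per_mma_single = 2  # one A fragment + one B fragment per mma

# With a 2x4 grid of accumulator fragments per warp (warp tile = 32x64 outputs),
# each k-step loads 2 A fragments and 4 B fragments but issues 2 * 4 = 8 mmas.
warp_row_tiles, warp_col_tiles = 2, 4
loads_per_mma_multi = (warp_row_tiles + warp_col_tiles) / (warp_row_tiles * warp_col_tiles)

print(loads_per_mma_single)  # 2
print(loads_per_mma_multi)   # 0.75 -> far less shared-memory traffic per mma
```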
   
   As we know, performance is the most important thing for schedules in TOPI. Hence, we use the intrinsics directly rather than the automatic Tensor Core CodeGen pass, in a similar way to the tutorial (https://docs.tvm.ai/tutorials/optimize/opt_conv_tensorcore.html). The major differences are the following two points:
   - It enables the traditional data layouts (NHWC now, and maybe NCHW in the future), while the tutorial requires a packed layout (NHWCnc). We have run a lot of experiments to choose the layout and achieved the best performance we could. If you are interested in the details, @Shawn-Inspur may show them.
   - AutoTVM further boosts the op performance. The search space contains different local fragment numbers, different warp numbers in one block, and different memory layout offsets. Moreover, we can use AutoTVM to search the fragment shape (32 * 16 * 8, 16 * 16 * 16, or 8 * 16 * 32); a sketch of such a search space follows below.
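   To make the last point concrete, here is a minimal sketch of how such a search space could be declared with AutoTVM's config API; the knob names and candidate values are illustrative assumptions, not necessarily the exact ones used in this PR.

```python
from tvm import autotvm


def define_tensorcore_space(cfg):
    """Illustrative AutoTVM knobs for a Tensor Core schedule."""
    # How many warps a thread block holds along each output dimension.
    cfg.define_knob("block_row_warps", [1, 2, 4])
    cfg.define_knob("block_col_warps", [1, 2, 4])
    # How many local (register) fragments each warp accumulates.
    cfg.define_knob("warp_row_tiles", [1, 2, 4])
    cfg.define_knob("warp_col_tiles", [1, 2, 4])
    # Shared-memory padding offset to avoid bank conflicts.
    cfg.define_knob("offset", [0, 8])
    # Fragment shape; picking wmma_m constrains the other two dimensions.
    cfg.define_knob("wmma_m", [8, 16, 32])


# Inside a schedule function one would typically do:
#   cfg = autotvm.get_config()
#   define_tensorcore_space(cfg)
# and then read cfg["warp_row_tiles"].val etc. while building the schedule.
```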

