tvm-commits mailing list archives

From GitBox <...@apache.org>
Subject [GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #5511: [AutoTVM][TOPI] AutoTVM incorrect measurement
Date Fri, 08 May 2020 22:51:06 GMT

kevinthesun commented on a change in pull request #5511:
URL: https://github.com/apache/incubator-tvm/pull/5511#discussion_r422409992



##########
File path: topi/python/topi/mali/conv2d.py
##########
@@ -138,20 +138,15 @@ def _schedule_spatial_pack(cfg, s, output, conv, data_vec, kernel_vec):
         s[data_vec].unroll(vw)
 
     if isinstance(kernel_vec.op, tvm.te.ComputeOp) and kernel_vec.name == 'kernel_vec':
-        if autotvm.GLOBAL_SCOPE.in_tuning:
-            # kernel packing will be pre-computed during compilation, so we skip
-            # this part to make tuning records correct
-            s[kernel_vec].pragma(s[kernel_vec].op.axis[0], 'debug_skip_region')
-        else:
-            max_threads = tvm.target.Target.current(allow_none=False).max_num_threads
-            co, ci, kh, kw, vc = s[kernel_vec].op.axis
-            fused = s[kernel_vec].fuse(co, ci, kh, kw, vc)
-            fused, vec = s[kernel_vec].split(fused, VC)
-            bb, tt = s[kernel_vec].split(fused, max_threads)
-            s[kernel_vec].bind(bb, te.thread_axis("blockIdx.x"))
-            s[kernel_vec].bind(tt, te.thread_axis("threadIdx.x"))
-            if VC in vec_size:
-                s[kernel_vec].vectorize(vec)
+        max_threads = tvm.target.Target.current(allow_none=False).max_num_threads

Review comment:
       While doing autotvm, all the workloads fetched are in the original data/kernel layouts,
for example NCHW/OIHW, which means we need to do a layout conversion first. For spatial_pack
this conversion is done inside the compute. However, we need to skip this layout conversion
stage while autotuning. Previously we used debug_skip_region, which causes inaccurate
measurements. Another way to skip it is to replace the input kernel tensor with a new
placeholder that is already in the converted layout.
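The distinction the comment draws can be illustrated with a minimal pure-Python sketch. This is not TVM code: `pack_kernel`, `conv_like`, and `measure` are all hypothetical stand-ins invented for illustration. The point is only that moving the layout conversion outside the timed region (analogous to swapping the kernel tensor for a placeholder in the converted layout) keeps the measurement focused on the compute being tuned, whereas skipping it with something like debug_skip_region distorts the timing.

```python
import time

def pack_kernel(kernel, vc=4):
    # Stand-in for the OIHW -> packed-layout conversion that spatial_pack
    # does inside its compute; here it just regroups a flat list into
    # VC-wide chunks.
    return [kernel[i:i + vc] for i in range(0, len(kernel), vc)]

def conv_like(data, packed_kernel):
    # Stand-in for the conv compute that consumes the packed kernel.
    total = 0
    for chunk in packed_kernel:
        for k in chunk:
            total += sum(d * k for d in data)
    return total

def measure(data, kernel, in_tuning):
    if in_tuning:
        # Tuning path: hand the compute an already-packed kernel, the way
        # the PR replaces the kernel tensor with a placeholder in the
        # converted layout, so packing cost stays outside the timed region.
        packed = pack_kernel(kernel)
        start = time.perf_counter()
        out = conv_like(data, packed)
        return out, time.perf_counter() - start
    # Deploy path: packing happens inside the workload, as it would be
    # pre-computed during real compilation.
    start = time.perf_counter()
    out = conv_like(data, pack_kernel(kernel))
    return out, time.perf_counter() - start
```

Either path produces the same result; only the timed region differs, which is exactly what makes the tuning records reflect the cost of the conv compute alone.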




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


