tvm-dev mailing list archives

From Junru Shao via TVM Discuss <nore...@discuss.tvm.ai>
Subject [TVM Discuss] [Development] Google lasted work: MLIR Primer
Date Mon, 08 Apr 2019 17:49:40 GMT


Personally, I am not that interested in polyhedral optimization for now, mainly because most kernels
in deep learning can already reach good performance with handcrafted scheduling. For very
compute-intensive kernels we already have solid vendor library support. By comparison, graph-level
optimization is more of a low-hanging fruit.
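To illustrate why graph-level optimization can be a low-hanging fruit, here is a toy sketch (not TVM's actual API; the `Node` class and `fold_constants` pass are hypothetical) of constant folding on a small dataflow graph: a single recursive pass collapses constant subexpressions before any kernel-level scheduling is even considered.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Node:
    """A node in a toy dataflow graph (hypothetical, for illustration only)."""
    op: str                           # "input", "const", "add", or "mul"
    inputs: List["Node"] = field(default_factory=list)
    value: Optional[float] = None     # set when op == "const"


def fold_constants(node: Node) -> Node:
    """Recursively replace any arithmetic node whose inputs are all constants."""
    if node.op not in ("add", "mul"):
        return node
    folded = [fold_constants(i) for i in node.inputs]
    if all(n.op == "const" for n in folded):
        fn = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}[node.op]
        return Node("const", value=fn(folded[0].value, folded[1].value))
    return Node(node.op, inputs=folded)


# (2 + 3) * x: the constant subgraph (2 + 3) folds to 5 in one pass.
x = Node("input")
expr = Node("mul", inputs=[Node("add", inputs=[Node("const", value=2.0),
                                               Node("const", value=3.0)]),
                           x])
opt = fold_constants(expr)
print(opt.inputs[0].value)  # 5.0
```

Real graph-level passes (e.g. constant folding or operator fusion in a deep learning compiler) follow the same shape: a rewrite over the whole graph, independent of how any individual kernel is scheduled.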





---
[Visit Topic](https://discuss.tvm.ai/t/google-lasted-work-mlir-primer/1721/20) to respond.
