tvm-dev mailing list archives

From Mei Ye via TVM Discuss <nore...@discuss.tvm.ai>
Subject [TVM Discuss] [Development] Google's latest work: MLIR Primer
Date Sun, 07 Apr 2019 17:05:10 GMT


In my vision, there could be a vendor-neutral library that implements higher-level MLIR dialect operators in terms of lower-level (algebraic) operators. There could be a graph optimizer, a tensor optimizer, and a traditional compiler optimizer. The graph optimizer performs higher-level graph optimizations such as fusion, and also serves as a driver: it partitions the graph, inlines operators from the vendor-neutral library, and directs selected partitions to the tensor optimizer. It also invokes traditional compilers for traditional global optimizations. It should accommodate vendor-specific libraries as well, keeping their operators as intrinsics to be lowered into function/kernel calls. The tensor compiler will never see the dialects.
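
To make the proposed division of labor concrete, here is a minimal Python sketch of that driver flow. Every name in it (`GraphOptimizer`, `VENDOR_NEUTRAL_LIB`, `Partition`, and so on) is hypothetical and only illustrates the idea; none of this is an existing TVM or MLIR API.

```python
# Hypothetical sketch of the proposed driver flow. All classes and
# names below are illustrative assumptions, not TVM/MLIR APIs.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Op:
    name: str                      # e.g. "softmax", "conv2d"
    vendor_specific: bool = False  # backed by a vendor library?


@dataclass
class Partition:
    ops: List[Op] = field(default_factory=list)


# Vendor-neutral library: higher-level dialect operators expressed
# in lower-level algebraic operators (assumed example definition).
VENDOR_NEUTRAL_LIB: Dict[str, List[Op]] = {
    "softmax": [Op("exp"), Op("reduce_sum"), Op("divide")],
}


class GraphOptimizer:
    """Driver: fuses, partitions, inlines, and dispatches."""

    def run(self, graph: List[Op]) -> None:
        graph = self.fuse(graph)
        for part in self.partition(graph):
            if any(op.vendor_specific for op in part.ops):
                # Keep vendor-specific operators as intrinsics, to be
                # lowered into function/kernel calls.
                self.lower_to_kernel_calls(part)
            else:
                # Inline vendor-neutral definitions so the tensor
                # optimizer sees only algebraic ops, never dialects.
                self.inline(part)
                self.tensor_optimize(part)

    def fuse(self, graph: List[Op]) -> List[Op]:
        return graph  # higher-level graph optimizations, e.g. fusion

    def partition(self, graph: List[Op]) -> List[Partition]:
        return [Partition(graph)]  # trivial single-partition split

    def inline(self, part: Partition) -> None:
        expanded: List[Op] = []
        for op in part.ops:
            expanded.extend(VENDOR_NEUTRAL_LIB.get(op.name, [op]))
        part.ops = expanded

    def tensor_optimize(self, part: Partition) -> None:
        pass  # loop/layout optimization over algebraic ops only

    def lower_to_kernel_calls(self, part: Partition) -> None:
        pass  # emit calls into the vendor-specific library
```

The point of the sketch is the last paragraph of the proposal: by the time a partition reaches `tensor_optimize`, everything has been inlined to algebraic operators, so the tensor compiler never has to understand the dialects themselves.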





---
[Visit Topic](https://discuss.tvm.ai/t/google-lasted-work-mlir-primer/1721/16) to respond.

You are receiving this because you enabled mailing list mode.

To unsubscribe from these emails, [click here](https://discuss.tvm.ai/email/unsubscribe/f938a1ce8eb0719d712d5810a8ab9fcf7c4840189c709e0bc8c44305bb463f3e).

Tianqi Chen, UW, Seattle, WA, 98105, United States
http://tracking.discuss.tvm.ai/tracking/unsubscribe?msgid=5ekiwpWo3VKJEeiX2nKpLQ2