incubator-cvs mailing list archives

From Apache Wiki <>
Subject [Incubator Wiki] Update of "TVMProposal" by MarkusWeimer
Date Sun, 24 Feb 2019 23:30:20 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Incubator Wiki" for change notification.

The "TVMProposal" page has been changed by MarkusWeimer:

Formatting changes

- === Proposal ===
+ = Apache TVM Proposal =
  We propose to incubate the TVM project into the Apache Software Foundation.
  TVM is an open deep learning compiler stack for CPUs, GPUs, and specialized accelerators.
It aims to close the gap between productivity-focused deep learning frameworks and
performance- or efficiency-oriented hardware backends.
- === Background ===
+ == Background ==
  There is an increasing need to bring machine learning to a wide diversity of hardware devices.
  Current frameworks rely on vendor-specific operator libraries and optimize for a narrow
range of server-class GPUs. Deploying workloads to new platforms -- such as mobile phones,
embedded devices, and accelerators (e.g., FPGAs, ASICs) -- requires significant manual effort.
@@ -14, +14 @@

  Moreover, there is increasing interest in designing specialized hardware that accelerates
machine learning. Toward this goal, TVM introduces VTA, an open source deep learning accelerator,
as part of its stack. The open source VTA driver and hardware design are a crucial step toward
building software support for future ASICs. The TVM-VTA flow is a great frontier
for researchers and practitioners to explore specialized hardware designs.
- === Rationale ===
+ == Rationale ==
  Deep learning compilation will be the next frontier of machine learning systems.
  TVM is already one of the leading open source projects pursuing this direction.
@@ -22, +22 @@

  Specifically, TVM provides infrastructure to use machine learning to automatically optimize
deployment of deep learning programs on diverse hardware backends. 
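The idea of search-based optimization can be illustrated with a toy sketch in plain Python. This is not TVM's actual API; it only shows the principle: enumerate candidate schedule parameters (here, tile sizes for a blocked matrix multiply), measure each candidate, and keep the fastest. TVM applies machine learning to guide the same kind of search over a much larger schedule space.

```python
# Toy illustration (NOT TVM's API) of search-based schedule tuning:
# try candidate tile sizes for a blocked matrix multiply and keep
# the configuration with the lowest measured cost.
import time

def matmul_tiled(A, B, n, tile):
    """Blocked matrix multiply over n x n lists-of-lists."""
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for kk in range(0, n, tile):
            for jj in range(0, n, tile):
                for i in range(ii, min(ii + tile, n)):
                    for k in range(kk, min(kk + tile, n)):
                        a = A[i][k]
                        row_c, row_b = C[i], B[k]
                        for j in range(jj, min(jj + tile, n)):
                            row_c[j] += a * row_b[j]
    return C

def tune(n=64, candidates=(4, 8, 16, 32)):
    """Measure each candidate tile size and return the fastest one."""
    A = [[float(i + j) for j in range(n)] for i in range(n)]
    B = [[float(i - j) for j in range(n)] for i in range(n)]
    best, best_cost = None, float("inf")
    for tile in candidates:
        start = time.perf_counter()
        matmul_tiled(A, B, n, tile)
        cost = time.perf_counter() - start
        if cost < best_cost:
            best, best_cost = tile, cost
    return best

best_tile = tune()
print("best tile size:", best_tile)
```

In a real deployment the search space covers loop orders, vectorization, threading, and memory layout, and a learned cost model replaces exhaustive measurement.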
- === VTA: Open Source Hardware Design ===
+ == VTA: Open Source Hardware Design ==
  TVM also contains open source hardware as part of its stack. The VTA hardware design is
a fully open sourced deep learning accelerator that allows us to experiment with the compiler,
driver, and runtime, and to execute the code on FPGA. VTA provides a path to target future ASICs
and to build software-driven solutions to co-design future deep learning accelerators.
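The driver/runtime idea above can be sketched with a small software simulator. All names and the instruction format below are invented for illustration and do not reflect VTA's actual interface: a driver pushes accelerator instructions into a queue, and the "hardware" (here, a Python stand-in) executes them.

```python
# Hypothetical sketch (NOT VTA's driver API) of programming an
# accelerator: the driver queues instructions, the hardware executes
# them. A software simulator stands in for the FPGA.
from dataclasses import dataclass

@dataclass
class GemmInsn:
    """One accelerator instruction: C += A @ B on small tiles."""
    a: list
    b: list
    c: list

class SimAccelerator:
    """Software stand-in for the FPGA fabric."""
    def __init__(self):
        self.queue = []

    def push(self, insn):
        self.queue.append(insn)

    def run(self):
        # Drain the instruction queue, executing each GEMM in place.
        for insn in self.queue:
            rows, inner, cols = len(insn.a), len(insn.b), len(insn.b[0])
            for i in range(rows):
                for j in range(cols):
                    insn.c[i][j] += sum(
                        insn.a[i][k] * insn.b[k][j] for k in range(inner)
                    )
        self.queue.clear()

acc = SimAccelerator()
C = [[0, 0], [0, 0]]
acc.push(GemmInsn(a=[[1, 2], [3, 4]], b=[[5, 6], [7, 8]], c=C))
acc.run()
print(C)  # [[19, 22], [43, 50]]
```

Because the simulator and a real FPGA bitstream can share the same driver-facing instruction format, compiler and runtime work can proceed before silicon exists — the co-design benefit the proposal describes.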
@@ -35, +35 @@

  Finally, we can still view the VTA design as “software”, as its source code is written
in a hardware description language and can generate a “binary” which can run on FPGA and possibly
- === Current Status ===
+ == Current Status ==
  TVM has been open sourced under the Apache License for one and a half years. See the current project
website, GitHub, as well as the TVM Conference.
  TVM has already been used in production; some highlights are AWS (SageMaker Neo), Huawei
(AI chip compilation), and Facebook (mobile optimization). We anticipate the list of adopters
will grow over the next few years.
- === Meritocracy ===
+ == Meritocracy ==
  The TVM stack began as a research project of the SAMPL group at the Paul G. Allen School of
Computer Science & Engineering, University of Washington.
  The project is now driven by an open source community involving multiple industry and academic
institutions. The project is currently governed by the Apache Way.
@@ -51, +51 @@

  The community highly values open collaboration among contributors from different backgrounds. The
current contributors come from UW, Berkeley, Cornell, SJTU, AMD, AWS, Huawei, Google, Facebook,
NTT, Ziosoft, TuSimple, and many other organizations.
- === Community ===
+ == Community ==
  The project currently has 185 contributors. As per the Apache way, all the discussions are
conducted in publicly archivable places.
@@ -70, +70 @@

- ==== Development and Decision Process ====
+ === Development and Decision Process ===
  See the current development guidelines. The key points are:
  Open public roadmap during development, which turns into release notes
@@ -100, +100 @@

  for a full list of RFCs.
- === Alignment ===
+ == Alignment ==
  TVM is useful for building deep learning deployment solutions. It is perhaps also the first
Apache incubator proposal that includes both open source software and hardware system design.
  It has the potential to benefit existing related ML projects such as MXNet, Singa, SystemML,
and Mahout by providing powerful low-level primitives for matrix operations.
- === Known Risks ===
+ == Known Risks ==
- ==== Orphaned products ====
+ === Orphaned products ===
  The project has a diverse contributor base. As an example, the current contributors come
from UW, Berkeley, Cornell, SJTU, AMD, AWS, Huawei, Google, Facebook, NTT, Ziosoft, TuSimple,
and many other organizations. We are actively growing this list. Given that the project has
already been used in production, there is minimal risk of the project being abandoned.
- ==== Inexperience with Open Source ====
+ === Inexperience with Open Source ===
  The TVM community has extensive experience in open source. Three of the current six PMC members
  are already PPMC members of existing Apache (incubating) projects. Over the course of development,
the community has established good practices for bringing in RFCs and discussions and, most importantly,
welcoming new contributors in the Apache way.
- ==== Homogenous Developers ====
+ === Homogenous Developers ===
  The project has a diverse contributor base. As an example, the current contributors come
from UW, Berkeley, Cornell, SJTU, AMD, AWS, Huawei, Google, Facebook, NTT, Ziosoft, TuSimple,
and many other organizations. The community actively seeks to collaborate broadly. The PMC
members follow a principle of *only* nominating committers from outside their own organizations.
- === Reliance on Salaried Developers ===
+ == Reliance on Salaried Developers ==
  Most of the current committers are volunteers.
- === Relationships with Other Apache Products ===
+ == Relationships with Other Apache Products ==
  TVM can serve as a fundamental compiler stack for deep learning and machine learning in
general. We expect it can benefit projects like MXNet, Spark, Flink, Mahout, and SystemML.
- === Documentation ===
+ == Documentation ==
- === Initial Source ===
+ == Initial Source ==
  We plan to move our repository to
- === Source and Intellectual Property Submission Plan ===
+ == Source and Intellectual Property Submission Plan ==
  TVM source code is available under the Apache License, Version 2.0.
  We will work with the committers to get ICLAs signed.
- === External Dependencies ===
+ == External Dependencies ==
  We put all the source level dependencies under
@@ -161, +161 @@

  All of the current dependencies are stable, which means that the current TVM repo is
standalone and main development activities only happen in the TVM repo. The dependencies are
periodically updated, at a rate of about once a month, when necessary. For source level dependencies,
we will always point to a stable release version for software releases in the future.
- === External Dependencies on DMLC projects ===
+ == External Dependencies on DMLC projects ==
  There are three dependencies on DMLC projects in the 3rdparty directory. The current proposal is to
keep these dependencies in 3rdparty. We elaborate on the background of these dependencies
@@ -176, +176 @@

  While it is possible to fork the code into the tvm repo, given that the current tvm repo is
self-contained and community development is stand-alone, we feel that there are enough
justifications to treat these as 3rdparty dependencies.
- === Required Resources ===
+ == Required Resources ==
- ==== Mailing List: ====
+ === Mailing List: ===
  The usual mailing lists are expected to be set up when entering incubation:
@@ -192, +192 @@

- ==== Git Repositories: ====
+ === Git Repositories: ===
  Upon entering incubation, we plan to transfer the existing repo from
- ==== Issue Tracking: ====
+ === Issue Tracking: ===
  TVM currently uses GitHub to track issues. We would like to continue to do so while we discuss
migration possibilities with the ASF Infra team.
- ==== URL: ====
+ === URL: ===
  Current project website: . As we proceed, the website will migrate to
and hopefully
- === Initial Committers and PMC Members ===
+ == Initial Committers and PMC Members ==
  The project has already followed the Apache way of development (in terms of meritocracy,
community, and public archives of discussion). We plan to transition the current PMC members
to PPMC members, and committers to Apache committers. There are also ongoing votes and discussions
on the current TVM PMC private mailing list about new committers/PMC members (we have also invited
our tentative mentors as observers to the mailing list). We plan to migrate the discussions to
private@ after the proposal has been accepted and bring in the new committers/PPMC members
according to the standard Apache community procedure.
@@ -241, +238 @@

  - Lianmin Zheng, Shanghai Jiao Tong University
- === Sponsors: ===
+ == Sponsors: ==
  ==== Champion: ====
    * Markus Weimer, Microsoft
@@ -251, +248 @@

    * Sebastian Schelter, New York University
    * Byung-Gon Chun, Seoul National University
- ==== Sponsoring Entity ====
+ === Sponsoring Entity ===
  We are requesting the Incubator to sponsor this project.
