From: Tianqi Chen
Date: Thu, 19 Oct 2017 12:14:11 -0700
Subject: Re: Request for suggestions- Supporting onnx in mxnet
To: dev@mxnet.incubator.apache.org

Hi Hen,

It is sad to see DMLC characterized adversarially in this matter. DMLC projects have adopted the Apache way of doing things, and we are planning to move more modules into Apache. All of the discussion so far has happened in the Apache manner, and I do think that healthy discussion of critical design issues is important. It is unfair to say something is rotten just because there is a debate going on about technical issues. Our positions are based purely on a technical assessment of what is better for the project in general, not on politics or on tallying detailed credit for, or ownership of, the code.

Tianqi

On Thu, Oct 19, 2017 at 12:03 PM, Hen wrote:

> What I think I'm seeing here is that:
>
> * MXNet moved to Apache.
> * Some of the code it relied on (50% per the last release thread, but that may have been bombastic) remained at DMLC.
> * The MXNet community thinks one thing.
> * The DMLC community (which is a subset of the MXNet community that runs under different community rules) thinks another.
>
> Something is rotten.
>
> One solution: the MXNet community forks the DMLC code it relies on into the MXNet codebase and moves on, without being tied down by the decisions of a non-compatible community.
>
> Hen
>
> On Thu, Oct 19, 2017 at 11:59 AM, Tianqi Chen <tqchen@cs.washington.edu> wrote:
>
> > Here are the detailed points (sorry for re-sending them):
> >
> > Technical reasoning:
> >
> > - Model exchange formats like CoreML and ONNX are not lossless and complete. They are designed to contain a core set of the minimum operators needed to support common inference tasks such as ResNet. So you cannot rely on bi-directional serialization through such a format for all MXNet models. As a simple example, broadcast add/mul is simply not supported in ONNX.
> >
> > - The same problem applies to compilation and in-memory IRs: effectively, only a core set of the most interesting primitives is supported.
> >
> > - Whether we support an exchange format or an in-memory IR, we need to decide which core set of operators we are interested in supporting. We cannot simply say "let us support everything from the beginning", given the limitations of the exchange formats.
> >
> > - It is crucial for us to articulate the core set of operators we care about in MXNet, both to provide guidelines to the community and to influence the design of the model exchange formats themselves in MXNet's favor.
> >
> > - nnvm/top is that initial core set of operators, for both compiler support and exchange purposes. It is modeled on numpy and Gluon, under the supervision of Eric, Mu, and me. It can be exchanged bi-directionally with a current MXNet operator without loss of information.
> >
> > The engineering effort:
> >
> > - Because nnvm/top is modeled on numpy and Gluon, the mxnet <-> nnvm/top conversion is quite easy, and we already have one direction done. I would be very happy to answer any questions about the other. No information loss will happen along this path.
> >
> > - mxnet/symbol or nnvm/symbol (they are essentially the same thing, with slightly different op definitions) <- onnx is harder. There has already been substantial effort to support ONNX 0.1, as Roshani mentioned, contributed by Zhi Zhang, another Apache MXNet committer. Zhi has already provided code that eases this process. Building on the existing effort would actually make the problem easier.
> >
> > On Thu, Oct 19, 2017 at 11:55 AM, Tianqi Chen <tqchen@cs.washington.edu> wrote:
> >
> > > As for where the code should sit: we have seen ONNX's support for Caffe2 sitting in a separate repo. My suggestion would be to put the code under nnvm/top and migrate it into MXNet once the top components land in MXNet, hopefully by the end of next month.
> > >
> > > I have elaborated my point in the last email thread. This (going through nnvm/top) is an important design decision, both technically (compilation, more hardware) and strategically (articulating our core set of operators and influencing the model exchange format).
> > >
> > > I am glad to see the discussion happening, and of course there is doubt, as with every big change. But given the rapidly changing pace of deep learning systems, this is the direction we believe is most promising. We can call a vote among the committers on this design decision if there is still debate, or we can keep the discussion open and start some effort around nnvm/top to see how it goes.
> > >
> > > Tianqi
> > >
> > > On Thu, Oct 19, 2017 at 11:15 AM, Lupesko, Hagay <lupesko@gmail.com> wrote:
> > >
> > > > Mu,
> > > >
> > > > You're mentioning plans for a new model format and compiler, but I don't recall seeing them shared or discussed on the dev list. Can you share these plans, so that folks can more easily understand the plan and vision?
> > > >
> > > > Personally, I think it would be a shame to add ONNX support to MXNet and have it implemented outside of MXNet. At the end of the day, that makes things difficult for MXNet users.
> > > >
> > > > Hagay
> > > >
> > > > On 10/19/17, 10:01, "Mu Li" <muli.cmu@gmail.com> wrote:
> > > >
> > > > > I'm speaking under my "MXNet contributor" hat.
> > > > >
> > > > > It would be sad if our new model format and compiler were not supported by our own contributors. It puts us in a bad position when reaching out to ask for outside support.
> > > > >
> > > > > If you really want to do it the onnx <-> mxnet way, I suggest putting the code under https://github.com/aws.
> > > > >
> > > > > Best,
> > > > > Mu
> > > > >
> > > > > On Thu, Oct 19, 2017 at 9:51 AM, Lupesko, Hagay <lupesko@gmail.com> wrote:
> > > > >
> > > > > > Since there seems to be difficulty reaching a consensus here, and this is a new area, maybe a good compromise would be to contribute this under /contrib as experimental, in whatever way Roshani thinks makes sense. Once there is code in place, and MXNet users and contributors are able to check it out, we can consider future steps.
> > > > > >
> > > > > > Does this proposal make sense to folks?
> > > > > >
> > > > > > On 10/18/17, 23:01, "Tianqi Chen" <workcrow@gmail.com on behalf of tqchen@cs.washington.edu> wrote:
> > > > > >
> > > > > > > I want to offer one last thing in terms of technical details. I mentioned two trends in deep learning systems, but there is one thing I omitted: how should we build a good deployment end for deep learning models?
> > > > > > >
> > > > > > > There is always a paradox in this problem:
> > > > > > >
> > > > > > > - On one hand, the deployment end needs to be lightweight and portable.
> > > > > > > - On the other, we want a lot of optimizations (memory layout, compute) and feature support, which makes the project big.
> > > > > > >
> > > > > > > All the existing systems suffer from this problem. The solution is simple: separate the optimization part from the actual runtime and compile things down to a bare-metal module. This is the solution the nnvm/top compiler pipeline offers, which I believe will become standard practice for deployment and where all systems will go.
> > > > > > >
> > > > > > > Tianqi
> > > > > > >
> > > > > > > On Wed, Oct 18, 2017 at 10:03 PM, Tianqi Chen <tqchen@cs.washington.edu> wrote:
> > > > > > >
> > > > > > > > OK, I guess there is some miscommunication here. We only need a "canonicalization" step in the Python API that performs a symbol-to-symbol translation. It can be done purely in Python; there is no need to go "down" into C++ for this.
> > > > > > > >
> > > > > > > > For example, the current nnvm.from_mxnet API takes a Module or Gluon module and gives you back an nnvm/top graph in Python.
> > > > > > > >
> > > > > > > > All we are asking for is to decompose it into:
> > > > > > > >
> > > > > > > >     def mxnet_to_onnx(module):
> > > > > > > >         nnvm_graph, params = nnvm_from_mxnet(module)
> > > > > > > >         onnx = nnvm_to_onnx(nnvm_graph, params)
> > > > > > > >         return onnx
> > > > > > > >
> > > > > > > > This allows nnvm_from_mxnet to be reused for other purposes, like compiling to deployable modules.
> > > > > > > >
> > > > > > > > Tianqi
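> > > > > > > >
> > > > > > > > (For concreteness, a slightly expanded sketch of that decomposition. It assumes nnvm's existing MXNet frontend, nnvm.frontend.from_mxnet in current nnvm, for the first step, and treats nnvm_to_onnx as the proposed exporter that does not exist yet:)
> > > > > > > >
> > > > > > > >     import nnvm
> > > > > > > >
> > > > > > > >     def nnvm_from_mxnet(module):
> > > > > > > >         # Existing direction: nnvm's MXNet frontend returns an
> > > > > > > >         # nnvm/top symbol plus a dict of parameter arrays.
> > > > > > > >         return nnvm.frontend.from_mxnet(module)
> > > > > > > >
> > > > > > > >     def nnvm_to_onnx(nnvm_graph, params):
> > > > > > > >         # Proposed direction: walk the nnvm/top graph and emit an
> > > > > > > >         # ONNX protobuf. This is the piece under discussion here.
> > > > > > > >         raise NotImplementedError("nnvm/top -> ONNX exporter")
> > > > > > > >
> > > > > > > >     def mxnet_to_onnx(module):
> > > > > > > >         nnvm_graph, params = nnvm_from_mxnet(module)
> > > > > > > >         return nnvm_to_onnx(nnvm_graph, params)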
> > > > > > > >
> > > > > > > > On Wed, Oct 18, 2017 at 9:55 PM, Lupesko, Hagay <lupesko@gmail.com> wrote:
> > > > > > > >
> > > > > > > > > Tianqi:
> > > > > > > > > Thanks for detailing the trends. I fully agree that ONNX is just a graph serialization format – nothing more, nothing less. I also think we all agree that this simple mechanism holds lots of value for DL users, since it allows them to move between frameworks easily (e.g. train with MXNet, deploy on a mobile device with Caffe2, or the other way around).
> > > > > > > > >
> > > > > > > > > As you said, an in-memory IR is different from serialization formats such as ONNX. In-memory IRs are designed to make runtime execution as efficient as possible, leveraging software and hardware optimizations. They are indeed complex, and that is where the "meat" is. (BTW, ONNX regards itself as an "IR" format, but not in the same sense as NNVM.)
> > > > > > > > >
> > > > > > > > > At the end of the day, Roshani is aiming to deliver simple functionality to MXNet users: (1) take an ONNX file and load it into MXNet, so you get a graph+weights you can work with; (2) given a trained model, save it as an ONNX file. Since MXNet users do not interact with NNVM directly, but rather with the MXNet API (MXNet Module), isn't the simplest thing to do just to construct the Module "on the fly" using the MXNet API? Taking the other approach, we would go from the top-level MXNet "load" API, go "down" to NNVM to construct the graph, and come back up to MXNet to expose it as a Module. This seems too complex and does not add any benefit. In whatever way we construct the MXNet Module object, NNVM will always be the underlying in-memory IR that is executed, so why not take the simpler route?
> > > > > > > > >
> > > > > > > > > Hagay
> > > > > > > > >
> > > > > > > > > On 10/18/17, 19:42, "Tianqi Chen" <workcrow@gmail.com on behalf of tqchen@cs.washington.edu> wrote:
> > > > > > > > >
> > > > > > > > > > Hi Chris:
> > > > > > > > > >
> > > > > > > > > > There is no intention to move things away from MXNet. The reduction in lines of code comes from having a better design in general; you usually write less redundant code when you benefit from better design. As the saying goes: "the best design is achieved not when there is nothing left to add, but when there is nothing left to take away."
> > > > > > > > > >
> > > > > > > > > > MXNet has always benefited from this philosophy and improves with new designs and proper modularization. For example, we saw such reduction and convenience when migrating from MXNet's legacy op to the NNVM mechanism. The new mechanism enables things like sparse-aware support and other features that would otherwise be much harder to support.
> > > > > > > > > >
> > > > > > > > > > The nnvm/tvm stack brings the same benefit (if not more), and it will only add features to MXNet itself: offering more hardware backends and optimizations, and allowing us to write less code and spend less time optimizing for each backend by going through TVM.
> > > > > > > > > >
> > > > > > > > > > Tianqi
> > > > > > > > > >
> > > > > > > > > > On Wed, Oct 18, 2017 at 7:15 PM, Chris Olivier <cjolivier01@gmail.com> wrote:
> > > > > > > > > >
> > > > > > > > > > > Reduce the code base of MXNet? By increasing the scope of the DMLC modules? Is the intent to make MXNet a thin language wrapper around a group of DMLC modules?
> > > > > > > > > > >
> > > > > > > > > > > On Wed, Oct 18, 2017 at 6:58 PM Tianqi Chen <tqchen@cs.washington.edu> wrote:
> > > > > > > > > > >
> > > > > > > > > > > > To better answer Hagay's question, I would like to dive a bit deeper into the relation between MXNet, NNVM, and model exchange formats like ONNX.
> > > > > > > > > > > >
> > > > > > > > > > > > There are two major trends in deep learning systems now:
> > > > > > > > > > > >
> > > > > > > > > > > > - Common serializable formats, like ONNX and CoreML, that define the model exchange format.
> > > > > > > > > > > > - In-memory graph IRs for quick optimization and JIT. NNVM and TensorFlow's XLA fall into this category.
> > > > > > > > > > > >
> > > > > > > > > > > > The exchange formats are great; they only impose a layer of conversion, which is good for exchange. The real meat still comes from the compilation and JIT pipeline you have to offer. For that we need an in-memory IR, because the cost of constructing and serializing can be high for exchange formats like protobuf. And the exchange formats are usually designed in a minimalistic fashion, which makes it harder to attach the extra information needed for in-depth optimization such as automatic quantization or accelerator support.
> > > > > > > > > > > >
> > > > > > > > > > > > The current MXNet relies on NNVM for in-memory IR manipulation but does not contain a compilation component that compiles to hardware backends. Exporting to an exchange format and then importing back into NNVM to run the compilation poses more of a burden than a JIT compiler should pay. Using the same in-memory graph IR as the compilation stack gives a much greater advantage here.
> > > > > > > > > > > >
> > > > > > > > > > > > The newly introduced nnvm/top and compiler offer in-memory graph optimization and compilation, and support more hardware backends directly via TVM. We already see promising results in edge deployments, with much lower runtime overhead, and we will quickly benefit from the additional graph optimizations it has to offer.
> > > > > > > > > > > >
> > > > > > > > > > > > Building support around this new paradigm gives us the advantage of being future-compatible and takes full advantage of the points I mentioned above.
> > > > > > > > > > > >
> > > > > > > > > > > > Tianqi
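> > > > > > > > > > > >
> > > > > > > > > > > > (To make the compilation argument concrete, a minimal sketch of the nnvm/tvm compile-and-deploy path. The names and shapes are illustrative: it assumes sym and params come from a frontend converter such as nnvm.frontend.from_mxnet, an LLVM CPU target, and a made-up output shape; exact API details may differ:)
> > > > > > > > > > > >
> > > > > > > > > > > >     import numpy as np
> > > > > > > > > > > >     import nnvm.compiler
> > > > > > > > > > > >     import tvm
> > > > > > > > > > > >     from tvm.contrib import graph_runtime
> > > > > > > > > > > >
> > > > > > > > > > > >     # Optimize the nnvm/top graph and compile it for the target via TVM.
> > > > > > > > > > > >     shape = {"data": (1, 3, 224, 224)}
> > > > > > > > > > > >     graph, lib, params = nnvm.compiler.build(
> > > > > > > > > > > >         sym, target="llvm", shape=shape, params=params)
> > > > > > > > > > > >
> > > > > > > > > > > >     # Deploy with the lightweight graph runtime, independent of the framework.
> > > > > > > > > > > >     module = graph_runtime.create(graph, lib, tvm.cpu(0))
> > > > > > > > > > > >     module.set_input(**params)
> > > > > > > > > > > >     module.set_input("data", np.zeros((1, 3, 224, 224), dtype="float32"))
> > > > > > > > > > > >     module.run()
> > > > > > > > > > > >     out = module.get_output(0, tvm.nd.empty((1, 1000), "float32"))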
> > > > > > > > > > > >
> > > > > > > > > > > > On Wed, Oct 18, 2017 at 4:57 PM, Lupesko, Hagay <lupesko@gmail.com> wrote:
> > > > > > > > > > > >
> > > > > > > > > > > > > Roshani – this is an exciting initiative. ONNX support in MXNet will enable more users to ramp up on MXNet, which is great.
> > > > > > > > > > > > >
> > > > > > > > > > > > > Tianqi – a few questions and thoughts about your note:
> > > > > > > > > > > > > - "More hardware backends to mxnet" – MXNet users get the same benefit of HW support by implementing ONNX import on top of MXNet symbolic, right?
> > > > > > > > > > > > > - "NNVM Compiler now received contributions from AWS, UW and many other folks in MXNet community." – agreed it is ramping up, but when you look at the data, it is clear that it is very early days for NNVM. The repo has 223 commits overall and 0 releases; compare that to MXNet, with 6136 commits and 32 releases. It seems to be still early on for NNVM, and for a more reliable initial implementation, building the import on top of MXNet is easier, faster, and safer. MXNet has lots of users already using the Symbolic API, which hopefully means it is a mature API that is not likely to have breaking changes or major issues.
> > > > > > > > > > > > >
> > > > > > > > > > > > > I'm supportive of option 1 proposed by Roshani (building serde on top of MXNet symbolic), but as an encapsulated implementation detail, so the implementation can be migrated to NNVM or another implementation in the future, if at that point it seems like the right thing to do.
> > > > > > > > > > > > >
> > > > > > > > > > > > > Interested in hearing other opinions though...
> > > > > > > > > > > > >
> > > > > > > > > > > > > Hagay
> > > > > > > > > > > > >
> > > > > > > > > > > > > On 10/18/17, 14:13, "Tianqi Chen" <workcrow@gmail.com on behalf of tqchen@cs.washington.edu> wrote:
> > > > > > > > > > > > >
> > > > > > > > > > > > > > I strongly recommend going through nnvm/top. One major reason is that support at the nnvm/top layer means NOT ONLY compatibility of the model format with ONNX.
> > > > > > > > > > > > > > These are the major benefits:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > - More hardware backends for MXNet, including OpenCL, Metal, Raspberry Pi, and web browsers. These are automatically enabled by going through this layer. In general, we designed the nnvm/tvm stack to resolve the challenge of MXNet's current weakness in deploying to more hardware backends.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > - More frontend capabilities: nnvm's Gluon-style IR now ingests from CoreML and ONNX, and in the future Keras. Supporting those will reduce the amount of engineering effort needed. (See the sketch below.)
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > - Future compatibility. We all agree that the future is a migration to Gluon's API. NNVM/top tries to look ahead by directly adopting the symbolic API to be Gluon-like.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > I would also like to correct some of the facts mentioned with regard to the nnvm/tvm stack:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > 1. "Nascent project with few contributors": the NNVM compiler has now received contributions from AWS, UW, and many other folks in the MXNet community. NNVM itself is already being used by MXNet. MXNet's internal IR is migrating toward Gluon, with its final form being nnvm/top.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > 2. "Does not support all operators that exist in MXNet Symbolic API": neither NNVM/top nor ONNX supports all operators that exist in the MXNet symbolic API. The end goal here is mainly to make nnvm/top ONNX-compatible, which is a more reasonable goal.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > 3. "No CI pipeline and test cases": the NNVM compiler already contains unit tests and integration tests (https://github.com/dmlc/nnvm), with a CI pipeline that covers CPU and GPU cases for the frontends.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Tianqi
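> > > > > > > > > > > > > >
> > > > > > > > > > > > > > (A short sketch of the frontend point above, assuming nnvm's existing ONNX frontend, which at this point targets ONNX 0.1; the exact entry point may differ:)
> > > > > > > > > > > > > >
> > > > > > > > > > > > > >     import onnx
> > > > > > > > > > > > > >     import nnvm
> > > > > > > > > > > > > >
> > > > > > > > > > > > > >     # Load the serialized exchange format and ingest it into the
> > > > > > > > > > > > > >     # nnvm/top in-memory IR as a symbol plus a parameter dict.
> > > > > > > > > > > > > >     onnx_model = onnx.load("model.onnx")
> > > > > > > > > > > > > >     sym, params = nnvm.frontend.from_onnx(onnx_model)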
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > On Wed, Oct 18, 2017 at 1:41 PM, Roshani Nagmote <roshaninagmote2@gmail.com> wrote:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Hi guys,
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > I am working on supporting ONNX <https://github.com/onnx/onnx> pre-trained models in Apache MXNet, and I would like to seek your opinion on the choice of implementation. I have also created a GitHub issue <https://github.com/apache/incubator-mxnet/issues/8319>. Supporting ONNX in MXNet will enable users to move between frameworks with their models; it will also enable the MXNet project to be a part of the ONNX open standard and to steer the direction of ONNX.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > For those who don't know ONNX: ONNX is an open-source format for AI models which enables models to be transferred between frameworks. Refer to https://github.com/onnx/onnx for more details.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > To implement the import/export functionality in MXNet, I propose to expose an MXNet Python module "serde" (name taken from the Apache Hive project) with the following methods supporting different formats:
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > >     sym, params = mxnet.serde.import(other_format_file, other_format='onnx')
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > >     other_format_file = mxnet.serde.export(mxnet_sym, mxnet_params, 'onnx')
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > The implementation under the hood can be done in two ways:
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > 1) Implement at the MXNet layer, by parsing the ONNX model (in protobuf format), turning it into MXNet symbolic operators, and building the MXNet model directly. Similarly, I can convert the MXNet model to the ONNX format at this layer. (A sketch follows below.)
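> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > (A tiny sketch of what approach 1 could look like, assuming the onnx Python package's load API. A real implementation needs a full operator table and attribute handling, and the protobuf layout differs across ONNX versions; 'Relu' is just one case:)
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > >     import onnx
> > > > > > > > > > > > > > >     import mxnet as mx
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > >     def import_onnx_model(model_file):
> > > > > > > > > > > > > > >         graph = onnx.load(model_file).graph
> > > > > > > > > > > > > > >         # Graph inputs become MXNet symbolic variables.
> > > > > > > > > > > > > > >         syms = {i.name: mx.sym.Variable(i.name) for i in graph.input}
> > > > > > > > > > > > > > >         # Rebuild each ONNX node with the matching MXNet symbolic op.
> > > > > > > > > > > > > > >         for node in graph.node:
> > > > > > > > > > > > > > >             if node.op_type == "Relu":
> > > > > > > > > > > > > > >                 syms[node.output[0]] = mx.sym.Activation(
> > > > > > > > > > > > > > >                     syms[node.input[0]], act_type="relu")
> > > > > > > > > > > > > > >             else:
> > > > > > > > > > > > > > >                 raise NotImplementedError("op not mapped: " + node.op_type)
> > > > > > > > > > > > > > >         return syms[graph.output[0].name]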
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > 2) Use the nnvm/tvm compiler and intermediate representation of models that the DMLC community has released; refer to http://www.tvmlang.org/2017/10/06/nnvm-compiler-announcement.html. Based on the conversation on the GitHub issue I opened, Mu mentioned that MXNet would use nnvm/tvm as the backend in the future. We could hook into this layer to implement the import/export functionality. nnvm/tvm already has an ONNX 0.1 import implemented.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > For import:
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > 1. I will need to enhance nnvm/tvm's importer to support ONNX 0.2.
> > > > > > > > > > > > > > > 2. Implement nnvm/tvm -> MXNet symbolic operators.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > For export:
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > 1. mxnet -> nnvm/tvm (nnvm/tvm provides this implementation already).
> > > > > > > > > > > > > > > 2. I will need to implement nnvm/tvm -> onnx.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > These are the pros and cons I see in the above approaches:
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > 1) Import/export at the MXNet layer
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Pros:
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > 1. Stable APIs currently used by users.
> > > > > > > > > > > > > > > 2. Larger Apache MXNet community of contributors.
> > > > > > > > > > > > > > > 3. CI pipeline to catch bugs.
> > > > > > > > > > > > > > > 4. Comparatively less time to implement and put in the hands of users.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Cons:
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > 1. In the future we may have to reimplement at the nnvm/tvm layer, in case MXNet moves to the nnvm/tvm backend (assuming it will move).
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > 2) Import/export at the nnvm/tvm layer
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Pros:
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > 1. Less engineering work in case MXNet moves to nnvm/tvm.
> > > > > > > > > > > > > > > 2. nnvm/tvm would become a hub for converting between different formats.
> > > > > > > > > > > > > > > 3. nnvm operators are more in parity with MXNet's Gluon APIs; this could be useful in case Gluon becomes the only standard that MXNet supports.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Cons:
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > 1. Nascent project with few contributors.
> > > > > > > > > > > > > > > 2. Does not support all operators that exist in the MXNet Symbolic API.
> > > > > > > > > > > > > > > 3. No CI pipeline.
> > > > > > > > > > > > > > > 4. The current Apache MXNet project does not use the nnvm/tvm backend.
> > > > > > > > > > > > > > > 5. The mxnet -> nnvm/tvm path needs more testing and user feedback.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Any suggestions on either of these approaches? From the user's perspective, this will be an implementation detail that is not exposed.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Thanks,
> > > > > > > > > > > > > > > Roshani