mxnet-dev mailing list archives

From sandeep krishnamurthy <sandeep.krishn...@gmail.com>
Subject Re: Nightly/Weekly tests for examples
Date Tue, 13 Nov 2018 00:24:11 GMT
Thanks, Ankit, for bringing this up. @Anirudh - All the concerns you raised
are very valid. Here are my thoughts:
1. Several examples were crashing or had compile errors. That is a very bad
user experience; at a minimum, every example script should be runnable.
2. While I agree the examples are too diverse (python scripts, notebooks,
varying epochs, print statements, etc.), we can always start small, say
with 5 examples. We can use this to streamline all examples into python
scripts with a main-function entry point that accepts params such as
epochs, dataset, etc.
3. We can start with weekly tests to avoid an overly long nightly test
pipeline.
4. One possible issue is that a few examples depend on large or controlled
datasets. I am not sure yet how to solve this, but we can think about it.
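To illustrate point 2, a streamlined example could follow a layout roughly
like the sketch below. All names and flags here (train_example, --epochs,
--dataset) are hypothetical, not an existing MXNet convention:

```python
# Hypothetical skeleton for a streamlined example script.
# A nightly/weekly test can invoke train_example() directly with a tiny
# epoch count to verify the example still runs after API changes.
import argparse


def train_example(epochs, dataset):
    # The example's real training logic would go here; this stub just
    # demonstrates the structure.
    for epoch in range(epochs):
        print("epoch %d on dataset %s" % (epoch, dataset))
    # Returning a value gives tests something to assert on, instead of
    # relying on print output alone.
    return epochs


def main():
    parser = argparse.ArgumentParser(description="Example entry point")
    parser.add_argument("--epochs", type=int, default=10)
    parser.add_argument("--dataset", type=str, default="mnist")
    args = parser.parse_args()
    train_example(args.epochs, args.dataset)


if __name__ == "__main__":
    main()
```

With this shape, a test harness can either import the module and call
train_example(epochs=1, ...) or shell out with `--epochs 1`.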

Any suggestions?
Best,
Sandeep



On Mon, Nov 12, 2018 at 10:38 AM Anirudh Acharya <anirudhkrec@gmail.com>
wrote:

> Hi Ankit,
>
> I have a few concerns about testing examples. Before writing tests for
> examples,
>
>    - you will need to first decide what constitutes a test for an example,
>    because examples are not API calls with return values that a test can
>    simply invoke and assert against. Just testing whether an example is a
>    compilable python script will not add much value, in my opinion.
>    - And testing an example's output and results will require a rewrite of
>    many of the examples, because many of them currently just produce print
>    statements and do not return any values as such. I am not sure it is
>    worth the dev-effort.
>    - the current set of examples in the mxnet repo is very diverse - some
>    are written as python notebooks, some are python scripts with paper
>    implementations, and some are illustrations of certain mxnet features.
>    I am curious to know how you will write tests for these.
>
>
> Looking forward to seeing the design of this test bed/framework.
>
>
> Thanks
> Anirudh Acharya
>
> On Fri, Nov 9, 2018 at 2:39 PM Marco de Abreu
> <marco.g.abreu@googlemail.com.invalid> wrote:
>
> > Hello Ankit,
> >
> > that's a great idea! Using the tutorial tests as reference is a great
> > starting point. If you are interested, please don't hesitate to attend
> the
> > Berlin user group in case you would like to discuss your first thoughts
> > in-person before drafting a design.
> >
> > -Marco
> >
> >
> > On Fri, Nov 9, 2018, 23:23 khedia.ankit@gmail.com <
> > khedia.ankit@gmail.com> wrote:
> >
> > > Hi MXNet community,
> > >
> > > Recently, a few other contributors and I focused on fixing examples in
> > > our repository that were not working out of the box as expected.
> > > https://github.com/apache/incubator-mxnet/issues/12800
> > > https://github.com/apache/incubator-mxnet/issues/11895
> > > https://github.com/apache/incubator-mxnet/pull/13196
> > >
> > > Some of the examples had failed after API changes, and the breakage
> > > went unnoticed until a user reported an issue. While the community is
> > > actively working on fixes, the problem may recur within a few days if
> > > we don't have a proper mechanism to catch regressions.
> > >
> > > So, I would like to propose enabling nightly/weekly tests for the
> > > examples, similar to what we have for tutorials, to catch any such
> > > regressions. The tests could check only the basic functionality of
> > > the examples: small examples can run to completion, while long
> > > training examples can run for only a few epochs.
> > >
> > > Any thoughts from the community? Any other suggestions for addressing
> > > this?
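A minimal sketch of the proposed nightly/weekly runner might look like the
following. The example paths and epoch overrides below are placeholders,
not actual files in the mxnet repo:

```python
# Hypothetical smoke-test runner: executes each example as a subprocess,
# overriding epoch counts for long-running training examples so the whole
# suite stays within a nightly/weekly time budget.
import subprocess
import sys

# (example script, extra CLI args); paths here are illustrative only.
EXAMPLES = [
    (["example/short_demo.py"], []),
    (["example/long_training.py"], ["--epochs", "2"]),
]


def run_examples(examples, python=sys.executable, dry_run=False):
    """Run each example; return a list of the scripts that failed."""
    failures = []
    for script, extra in examples:
        cmd = [python] + script + extra
        if dry_run:
            # Useful for verifying the command lines without running them.
            print("would run:", " ".join(cmd))
            continue
        result = subprocess.run(cmd)
        if result.returncode != 0:
            failures.append(script[0])
    return failures


if __name__ == "__main__":
    failed = run_examples(EXAMPLES)
    sys.exit(1 if failed else 0)
```

This keeps the pass/fail criterion simple (exit code of the script), which
matches the "at least runnable" bar without requiring every example to be
rewritten to return values.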
> > >
> > > Regards,
> > > Ankit Khedia
> > >
> >
>


-- 
Sandeep Krishnamurthy
