mxnet-commits mailing list archives

From GitBox <...@apache.org>
Subject [GitHub] [incubator-mxnet] IvyBazan commented on a change in pull request #15137: 1.5.0 news
Date Mon, 24 Jun 2019 21:35:37 GMT
IvyBazan commented on a change in pull request #15137: 1.5.0 news
URL: https://github.com/apache/incubator-mxnet/pull/15137#discussion_r296868256
 
 

 ##########
 File path: NEWS.md
 ##########
 @@ -17,6 +17,855 @@
 
 MXNet Change Log
 ================
+## 1.5.0
+
+### New Features
+
+#### Automatic Mixed Precision (experimental)
+Training deep learning networks is a very computationally intensive task. Novel model architectures tend to have an increasing number of layers and parameters, which slows down training. Fortunately, new generations of training hardware, as well as software optimizations, make training such networks feasible.
+However, most of the optimization opportunities (in both hardware and software) lie in exploiting lower precision (like FP16) to, for example, utilize the Tensor Cores available on new Volta and Turing GPUs. While training in FP16 has shown great success in image classification tasks, other, more complicated neural networks have typically stayed in FP32 due to difficulties in applying the FP16 training guidelines.
+That is where AMP (Automatic Mixed Precision) comes into play. It automatically applies the guidelines of FP16 training, using FP16 precision where it provides the most benefit, while conservatively keeping operations that are unsafe to perform in FP16 in full FP32 precision. To learn more about AMP, check out this [tutorial](https://github.com/apache/incubator-mxnet/blob/master/docs/tutorials/amp/amp_tutorial.md).
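
As a minimal sketch (not part of the changelog itself) of what enabling AMP looks like in Gluon, following the API described in the tutorial linked above and assuming a GPU context is available:

```python
import mxnet as mx
from mxnet import autograd, gluon
from mxnet.contrib import amp

amp.init()  # patch operators so FP16-safe ones run in reduced precision

net = gluon.nn.Dense(10)
net.initialize(ctx=mx.gpu())
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.01})
amp.init_trainer(trainer)  # enable dynamic loss scaling in the trainer

data = mx.nd.random.uniform(shape=(32, 100), ctx=mx.gpu())
label = mx.nd.random.uniform(shape=(32, 10), ctx=mx.gpu())
loss_fn = gluon.loss.L2Loss()

with autograd.record():
    loss = loss_fn(net(data), label)
    # scale the loss so that small FP16 gradients do not underflow
    with amp.scale_loss(loss, trainer) as scaled_loss:
        autograd.backward(scaled_loss)
trainer.step(batch_size=32)
```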

+
+#### MKL-DNN: Reduced-precision inference and RNN API support
+MKL-DNN introduces two advanced features in its recent releases: fused computation and reduced-precision kernels. These features can significantly speed up inference performance on CPU for a broad range of deep learning topologies. The MXNet MKL-DNN backend provides optimized implementations of various operators, covering a broad range of applications including image classification, object detection, and natural language processing. Refer to the [MKL-DNN operator documentation](https://github.com/apache/incubator-mxnet/blob/v1.5.x/docs/tutorials/mkldnn/operator_list.md) for more information.
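
One way to exercise the fused kernels is sketched below, under the assumption that MXNet was built with MKL-DNN enabled; the `MXNET_SUBGRAPH_BACKEND` environment variable selects the fusion pass and must be set before `mxnet` is imported:

```python
import os
os.environ['MXNET_SUBGRAPH_BACKEND'] = 'MKLDNN'  # enable MKL-DNN operator fusion

import mxnet as mx
from mxnet import gluon

# A tiny conv + relu network; this pattern is a fusion candidate.
net = gluon.nn.HybridSequential()
net.add(gluon.nn.Conv2D(channels=16, kernel_size=3),
        gluon.nn.Activation('relu'))
net.initialize()
net.hybridize(static_alloc=True, static_shape=True)

x = mx.nd.random.uniform(shape=(1, 3, 224, 224))
y = net(x)  # conv + relu runs through fused MKL-DNN kernels on CPU
```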
+
+#### Dynamic Shape (experimental)
+MXNet now supports dynamic shape in both imperative and symbolic mode. MXNet used to require that operators statically infer their output shapes from the input shapes. However, some operators don't meet this requirement. Examples are:
+* while_loop: its output size depends on the number of iterations in the loop.
+* boolean indexing: its output size depends on the values of the input data.
+* many operators can be extended to take a shape symbol as input, and that shape symbol can determine the output shape of these operators (with this extension, the symbol interface of MXNet can fully support shape).
+To support dynamic shape and such operators, we have modified the MXNet backend, including graph binding, the MXNet executor, and the operator interface. MXNet now supports operators with dynamic shape, such as [`contrib.while_loop`](https://mxnet.incubator.apache.org/api/python/ndarray/contrib.html#mxnet.ndarray.contrib.while_loop), [`contrib.cond`](https://mxnet.incubator.apache.org/api/python/ndarray/contrib.html#mxnet.ndarray.contrib.cond), and [`mxnet.ndarray.contrib.boolean_mask`](https://mxnet.incubator.apache.org/api/python/ndarray/contrib.html#contrib); a small example follows below.
+Note: Currently, dynamic shape does not work with Gluon deferred initialization.
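
As a small illustration (a sketch, not part of the changelog), boolean indexing produces an output whose shape depends on the mask values, which static shape inference cannot predict:

```python
import mxnet as mx

data = mx.nd.array([[1, 2, 3],
                    [4, 5, 6],
                    [7, 8, 9]])
mask = mx.nd.array([0, 1, 1])  # keep rows where the mask is non-zero

rows = mx.nd.contrib.boolean_mask(data, mask)
print(rows.shape)  # (2, 3) -- depends on the mask values, not just input shapes
```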
+
+#### Large Tensor Support
+Currently, MXNet only supports a maximum tensor size of around 4 billion (2^32) elements. This is because uint32_t is used as the default data type for tensor size as well as for indexing variables.
+This limitation has created many problems when larger tensors are used in the model. 
+A naive solution to this problem is to replace all uint32_t in the MXNet backend source code by int64_t.
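
For scale, a quick check is sketched below; allocating a tensor past the 2^32-element limit needs a build with 64-bit tensor size support (an assumption about the build configuration) and roughly 4.3 GB of memory even at one byte per element:

```python
import mxnet as mx

# 2**32 + 1 elements of int8 ~= 4.3 GB; fails on builds limited to uint32_t sizes
big = mx.nd.zeros((2**32 + 1,), dtype='int8')
print(big.size)  # 4294967297
```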
 
 Review comment:
   "A naive solution to this problem is to replace all uint32_t in the MXNet backend source
code to int64_t."

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services
