systemml-dev mailing list archives

From "Berthold Reinwald" <reinw...@us.ibm.com>
Subject Re: [DISCUSS] Support for lower precision in SystemML
Date Wed, 25 Oct 2017 17:16:00 GMT
+1.

Staging it to gain experience, and starting with the GPU backend is the 
right approach.

Regards,
Berthold Reinwald
IBM Almaden Research Center
office: (408) 927 2208; T/L: 457 2208
e-mail: reinwald@us.ibm.com



From:   Matthias Boehm <mboehm7@googlemail.com>
To:     dev@systemml.apache.org
Date:   10/24/2017 09:07 PM
Subject:        Re: [DISCUSS] Support for lower precision in SystemML



+1 this is really great. I like the semantics of a best-effort use of
single precision (when configured, we're free to use single precision but
can fall back to double precision if certain operations or backends don't
support it yet). This allows us to add single-precision support
incrementally, one component at a time. For the Java backends, this will
entail creating a good internal abstraction and, well, reimplementing most
of the operations. Hence, it's probably a good idea to create an umbrella
JIRA and add the individual design docs as we go.
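The best-effort semantics described above could be sketched roughly as follows. This is not SystemML code; the op names, the supported-ops set, and `run_op` are all hypothetical, purely to illustrate the "configured but optional" fallback behavior:

```python
# Hypothetical sketch of best-effort single precision: when single
# precision is requested, use it only for ops that already have a
# float32 kernel, and silently fall back to float64 otherwise.
from array import array

# Illustrative set of ops that have float32 kernels so far.
FLOAT32_SUPPORTED_OPS = {"matmult", "relu"}

def run_op(op, data, prefer_single=True):
    """Run `op` in float32 when configured and supported, else float64."""
    if prefer_single and op in FLOAT32_SUPPORTED_OPS:
        values = array("f", data)   # 4 bytes per element (float32)
    else:
        values = array("d", data)   # fall back to 8-byte doubles
    # ... dispatch to the actual kernel for `op` here ...
    return values

out = run_op("matmult", [1.0, 2.0, 3.0])
print(out.typecode)   # float32 path taken for a supported op
out2 = run_op("cholesky", [1.0, 2.0, 3.0])
print(out2.typecode)  # unsupported op fell back to double precision
```

New kernels can then be enabled one component at a time simply by growing the supported-ops set, which is what makes the incremental rollout possible.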

Regards,
Matthias

On Tue, Oct 24, 2017 at 8:26 AM, Niketan Pansare <npansar@us.ibm.com> 
wrote:

> Hi all,
>
> We are in the process of adding support for lower precision and wanted to
> give everyone a heads-up. By lower precision, I mean supporting storage of
> matrices in a float array (or half-precision array) and performing
> operations using float kernels. Initial experiments suggest that we can
> get up to 2x performance improvements for Deep Learning algorithms. Also,
> this reduces the memory requirements by 2x.
>
> Please provide any concerns or suggestions.
>
> The high-level plan is as follows:
> 1. Support lower precision on GPU. Please see
> https://github.com/apache/systemml/pull/688
> 2. Support lower precision with native BLAS.
> 3. Support lower precision on CP/Spark. This includes writing float
> matrices in binary format and updating the memory estimation in hops.
> 4. Extend Python APIs to support lower precision.
>
> The first two steps require converting double arrays to float/half-
> precision arrays.
>
> Thanks
>
> Niketan.
>
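The double-to-float conversion that the first two steps depend on, and the 2x memory saving mentioned above, can be illustrated with a minimal Python sketch using the standard-library `array` module (purely illustrative, not SystemML code):

```python
from array import array

# A double-precision matrix stored as a flat array (8 bytes per element).
doubles = array("d", [0.1, 0.2, 0.3])

# Narrowing copy to single precision (4 bytes per element).
floats = array("f", doubles)

# This is where the 2x memory reduction comes from.
assert doubles.itemsize == 2 * floats.itemsize

# The conversion is lossy: float32 carries roughly 7 decimal digits,
# so the narrowed values only approximate the originals.
print(doubles[0], floats[0])
```

Half precision would halve the footprint again (2 bytes per element) at a further cost in accuracy, which is why gaining experience on the GPU backend first is a sensible staging.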




