mahout-user mailing list archives

From mohsen jadidi <mohsen.jad...@gmail.com>
Subject Re: mahout on GPU
Date Mon, 09 Jul 2012 17:07:51 GMT
Yes, it makes sense.
But I am more interested in getting faster computation by combining Mahout
and GPU capabilities. I just wanted to know whether people involved in Mahout
have thought about it, and whether it is possible at all. For example, speeding
up the Map and Reduce phases by parallelising computations on the nodes. Of
course, I am not aware of the communication cost.

On Mon, Jul 9, 2012 at 6:48 PM, Ted Dunning <ted.dunning@gmail.com> wrote:

> Dot products are an example of something that a GPU can't help with. The
> problem is that there are the same number of flops as memory operations, and
> memory is slow.
>
> To get acceleration you need lots of flops per memory fetch. Usually you
> need at least matrix by matrix multiply with both dense. Scalable
> algorithms depend on sparsity in many cases so you are left with a problem.
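Ted's point is about arithmetic intensity: flops per memory operation. A dot product does one multiply-add per pair of elements fetched, while a dense matrix-matrix multiply reuses each fetched element many times. A minimal back-of-the-envelope sketch (illustrative counts only, not a benchmark; the function names are mine):

```python
def dot_intensity(n):
    """Dot product of two n-vectors: ~2n flops over ~2n element reads."""
    flops = 2 * n           # n multiplies + n adds
    mem_ops = 2 * n         # each element of both vectors read once
    return flops / mem_ops  # ~1 flop per memory op: memory-bound

def gemm_intensity(n):
    """Dense n x n matrix multiply: ~2n^3 flops over ~3n^2 elements
    touched (assuming ideal caching: A and B read once, C written once)."""
    flops = 2 * n ** 3
    mem_ops = 3 * n ** 2
    return flops / mem_ops  # ~2n/3 flops per memory op: grows with n

print(dot_intensity(1_000_000))  # 1.0  -> no data reuse, GPU gains little
print(gemm_intensity(1_000))     # ~667 -> heavy reuse, GPU can shine
```

This is why the sparsity observation matters: sparse formats destroy the dense-block reuse that makes GEMM intensity grow with n.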
>
> Sent from my iPhone
>
> On Jul 9, 2012, at 9:31 AM, mohsen jadidi <mohsen.jadidi@gmail.com> wrote:
>
> > Thanks for clarifications and comments.
> >
> >
> > On Mon, Jul 9, 2012 at 10:18 AM, Sean Owen <srowen@gmail.com> wrote:
> >
> >> The factorization is the heavy number crunching. The client of a
> >> recommender needs to do very little computation in comparison, like a
> >> vector-matrix product. While a GPU might make this happen faster, it's
> >> already on the order of microseconds. Compare with the cost of
> >> downloading the whole factored matrix which may run into gigabytes
> >> though.
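To make Sean's point concrete: once the offline factorization has produced user and item factor vectors, the client-side work is just scoring items against one user vector. A pure-Python sketch under that assumption (the names, data, and rank-2 factors are illustrative, not Mahout's API):

```python
def recommend(user_factors, item_factors, top_k=2):
    """Score every item by dot(user, item_factors) and return the
    top_k item ids, highest score first."""
    scores = {
        item_id: sum(u * f for u, f in zip(user_factors, factors))
        for item_id, factors in item_factors.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Tiny illustrative rank-2 factor model:
user = [0.9, 0.1]
items = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [0.5, 0.5]}
print(recommend(user, items))  # ['a', 'c']  (scores: a=0.9, c=0.5, b=0.1)
```

Each score is a handful of multiply-adds, so even thousands of items cost far less than shipping the factored matrices themselves to the client.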
> >>
> >> On Mon, Jul 9, 2012 at 9:11 AM, Dan Brickley <danbri@danbri.org> wrote:
> >>> Just a quick and possibly innumerate thought re WebGL (which is OpenGL
> >>> exposed as Web browser content via Javascript).
> >>>
> >>> Perhaps the big heavy number-crunching can be done on server-side
> >>> Mahout / Hadoop, but with a role for *delivery* of computed matrices
> >>> to the browser? The memory concerns are still relevant, but if you
> >>> can get data into GPU shaders (via textures) there might be modern
> >>> Web application scenarios where doing some computations locally on
> >>> the GPU is worthwhile. Last time I looked, getting floats back off
> >>> the graphics card wasn't easy with standard WebGL, though there's a
> >>> WebCL looming too.
> >>>
> >>> Dan
> >>
> >
> >
> >
> > --
> > Mohsen Jadidi
>



-- 
Mohsen Jadidi
