mahout-dev mailing list archives

From Ted Dunning <ted.dunn...@gmail.com>
Subject Re: Hadoop should target C++/LLVM not Java
Date Wed, 13 May 2009 17:40:09 GMT
The conclusion as stated was "It is just almost always worthwhile...".  I
think we both agree that by now the conclusion might be "There still
exist a few instances where it is worthwhile...".  The question is when.

My take on the issue is that Hadoop would be completely moribund if it had
been developed in C++ because it would have been non-portable and would now
be stuck in a morass of segmentation faults.  Not to mention that using C++ would
have meant that Hadoop would have had to make do without Doug C.  Java's
virtues in terms of safety are particularly valuable in a community
project.  Conversely, C++'s defects are particularly egregious and dangerous
in the same setting.
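
[Editor's note: the following sketch is not part of the original message. It illustrates the safety property described above — in Java, an out-of-bounds array access is defined behavior that raises a catchable exception at the faulting line, whereas the analogous C++ read is undefined behavior that can silently corrupt memory. The class and method names are invented for this example.]

```java
public class SafetyDemo {
    // Returns true when an out-of-bounds read is caught as an exception
    // at the exact access, rather than proceeding with corrupted state.
    static boolean outOfBoundsIsCaught() {
        int[] buffer = new int[4];
        try {
            return buffer[10] == 0; // out of bounds: throws in Java, UB in C++
        } catch (ArrayIndexOutOfBoundsException e) {
            return true; // the error is detected immediately and safely
        }
    }

    public static void main(String[] args) {
        System.out.println(outOfBoundsIsCaught()); // prints "true"
    }
}
```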

On Wed, May 13, 2009 at 8:49 AM, Sean Owen <srowen@gmail.com> wrote:

> Er, isn't it a right fact and a conclusion that was really right then and
> remains a little right now? It is for the same reason, indeed.
>
> On Wed, May 13, 2009 at 4:01 PM, Ted Dunning <ted.dunning@gmail.com>
> wrote:
> > Right fact (google based their map-reduce on c++), wrong conclusion.
> >
> > A simpler motivating factor was simply when Google did it.  In 2001 or
> so,
> > Java was definitely much less competitive.
> >
> > On Wed, May 13, 2009 at 6:18 AM, Sean Owen <srowen@gmail.com> wrote:
> >
> >> For reference, of course, Google operates at such a scale that they
> >> use a C++-based MapReduce framework. It is just almost always
> >> worthwhile to spend the time to beat Java performance.
> >>
> >
>



-- 
Ted Dunning, CTO
DeepDyve

111 West Evelyn Ave. Ste. 202
Sunnyvale, CA 94086
www.deepdyve.com
858-414-0013 (m)
408-773-0220 (fax)
