hadoop-common-dev mailing list archives

From: Andrzej Bialecki <...@getopt.org>
Subject: Re: [jira] Commented: (HADOOP-849) randomwriter fails with 'java.lang.OutOfMemoryError: Java heap space' in the 'reduce' task
Date: Wed, 03 Jan 2007 18:55:24 GMT
Doug Cutting wrote:
> Andrzej Bialecki (JIRA) wrote:
>> Also, speaking with my Nutch hat on, if there are plans for 
>> substantial API changes in trunk/ it would be good to have a bugfix 
>> release, which is still API compatible, and which Nutch could use - 
>> there have been tons of fixes since 0.9.2 ...
>
> The current plan is to make the Hadoop 0.10.0 release this Friday, 
> barring objections.
>
> There is one significant incompatible change in this release:
>
> https://issues.apache.org/jira/browse/HADOOP-451

Do you think this causes compatibility problems when reading/writing 
existing Nutch data? I.e., if we upgrade Nutch to 0.10, is there an issue 
here (apart from the API changes) that could cause older data to become 
unreadable?
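
(As a minimal sketch, not from this thread: one way to answer this for a 
given dataset would be to open an existing SequenceFile written by the old 
Nutch/Hadoop version using the 0.10 jars and iterate over every record, 
since deserializing each key/value exercises the on-disk format. The path 
argument and the plain record count below are illustrative assumptions.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.util.ReflectionUtils;

public class CheckOldData {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // e.g. a crawldb part file written by the pre-upgrade version
    Path part = new Path(args[0]);

    SequenceFile.Reader reader = new SequenceFile.Reader(fs, part, conf);
    try {
      // instantiate the key/value types recorded in the file header
      Writable key =
          (Writable) ReflectionUtils.newInstance(reader.getKeyClass(), conf);
      Writable value =
          (Writable) ReflectionUtils.newInstance(reader.getValueClass(), conf);
      long records = 0;
      while (reader.next(key, value)) {
        records++; // deserializing every record exercises the on-disk format
      }
      System.out.println("read " + records + " records OK");
    } finally {
      reader.close();
    }
  }
}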

>
> We could make a Hadoop 0.9.3 release containing the patch for 
> HADOOP-849 (more work for me).  Instead, Nutch could simply build & 
> commit a patched version of 0.9.2, or Nutch could upgrade to Hadoop 
> 0.10.0 (less work for me).  Thoughts?

Hmm. Let me check how much work is involved in upgrading Nutch to 0.10 
at the API level ... sooner or later Nutch will have to follow these 
changes anyway; the question is whether we have enough resources to do it now.

-- 
Best regards,
Andrzej Bialecki     <><
 ___. ___ ___ ___ _ _   __________________________________
[__ || __|__/|__||\/|  Information Retrieval, Semantic Web
___|||__||  \|  ||  |  Embedded Unix, System Integration
http://www.sigram.com  Contact: info at sigram dot com