hadoop-common-user mailing list archives

From Oleg Zhurakousky <oleg.zhurakou...@gmail.com>
Subject Re: Why the official Hadoop Documents are so messy?
Date Tue, 08 Jan 2013 13:28:51 GMT
No, I was not talking about wrappers of ASF projects. I was referring to non-ASF Open Source
projects altogether (e.g., those hosted on GitHub, SourceForge, Google Code, etc.).


On Jan 8, 2013, at 8:20 AM, Glen Mazza <gmazza@talend.com> wrote:

> quote: "Obviously in the second case there is a vested interest by such an individual or company
to promote the product, so things like documentation tend to be much crisper than their
ASF counterparts." -- I'm not so sure about that; in cases where companies provide commercial
wraps of products but pool their resources with other companies to maintain the open-source
product they're wrapping, their financial incentive lies in keeping their commercial wrap
documentation top-notch to lure people to their wraps, and less so the Apache website documentation.
> I think the original poster just needs to help out with the documentation: check it out
from SVN and submit patches to improve it (or at least file a JIRA, as Mohammad mentioned).
 I cleaned up much of the Hadoop Wiki as I was learning from it.
> Glen
> On 01/08/2013 07:13 AM, Oleg Zhurakousky wrote:
>> Just a little clarification:
>> This is NOT "how open source works" by any means, as there are many Open Source projects
with well-written and well-maintained documentation.
>> It all comes down to two Open Source models:
>> 1. ASF Open Source - a pure democracy, or maybe even anarchy, without any
governing party (individual or corporate) other than the ASF procedures/guidelines themselves.
>> 2. Stewardship-based Open Source - controlled and managed by an individual or company.
>> Obviously in the second case there is a vested interest by such an individual or company
to promote the product, so things like documentation tend to be much crisper than their
ASF counterparts. However, the Stewardship-based Open Source model is much tighter with regard
to control of what goes in, quality of code, etc., than its ASF counterpart, which allows a
freer flow of ideas from the community. So both are valid, both are open source, and
both need to exist; we developers just need to deal with it. After all, it's Open Source,
and the code is always a good source of documentation.
>> Cheers
>> Oleg
>> On Jan 8, 2013, at 6:59 AM, Mohammad Tariq <dontariq@gmail.com> wrote:
>>> Hello there,
>>>      Thank you for the comments. But, just to let you know,
>>> this is community work, and no one in particular can be held
>>> responsible for these kinds of small things. This is how open
>>> source works. The folks working on Hadoop have a lot
>>> of things to do. In spite of that, they are giving their best, and in
>>> the process these kinds of things can sometimes happen.
>>> I really appreciate your effort. But rather than this, you could
>>> raise a JIRA when you find something wrong somewhere and
>>> fix it, or let somebody else fix it.
>>> Many thanks.
>>> P.S. : Don't take it otherwise.
>>> Best Regards,
>>> Tariq
>>> +91-9741563634
>>> https://mtariq.jux.com/
>>> On Tue, Jan 8, 2013 at 5:05 PM, javaLee <wuaner@gmail.com> wrote:
>>> For example, look at the documents for the HDFS shell guide:
>>> In 0.17, the prefix for HDFS shell commands is "hadoop dfs":
>>> http://hadoop.apache.org/docs/r0.17.2/hdfs_shell.html
>>> In 0.19, the prefix is "hadoop fs":
>>> http://hadoop.apache.org/docs/r0.19.1/hdfs_shell.html#lsr
>>> In 1.0.4, the prefix is "hdfs dfs":
>>> http://hadoop.apache.org/docs/r1.0.4/file_system_shell.html#ls
>>> Reading the official Hadoop documents is such a struggle.
>>> As an end user, I am confused...
> -- 
> Glen Mazza
> Talend Community Coders - coders.talend.com
> blog: www.jroller.com/gmazza
