hadoop-hdfs-dev mailing list archives

From Steve Loughran <ste...@hortonworks.com>
Subject Re: profiling hdfs write path
Date Sat, 08 Dec 2012 12:38:40 GMT
On 8 December 2012 04:39, Radim Kolar <hsn@filez.com> wrote:

>
> if you want to keep code in safe state you need:
> 1. good unit test
> 2. high unit test coverage
> 3. clean code
> 4. documented code
> 5. good javadoc


+ good functional tests, which explore the deployment state of the world,
especially different networks. Once you get into HA you also need the
ability to trigger server failures and network partitions as part of a test
run.
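
For the "trigger server failures as part of a test run" point, a rough
sketch of what that looks like with the MiniDFSCluster test harness that
already ships in the Hadoop test tree -- the test class name, path and
buffer size here are invented for illustration, this isn't a test that
exists today:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.Test;

public class TestWritePipelineFailure {

  @Test
  public void testWriteSurvivesDataNodeFailure() throws Exception {
    Configuration conf = new Configuration();
    // three DataNodes so the write pipeline has somewhere to fail over to
    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(3).build();
    try {
      cluster.waitActive();
      DistributedFileSystem fs = cluster.getFileSystem();
      FSDataOutputStream out = fs.create(new Path("/test/file"));
      byte[] buf = new byte[64 * 1024];
      out.write(buf);
      out.hflush();                // data is now in the pipeline
      cluster.stopDataNode(0);     // simulate a server failure mid-write
      out.write(buf);              // client has to recover the pipeline
      out.close();
    } finally {
      cluster.shutdown();
    }
  }
}

That only covers in-JVM failure injection; the network-partition and
multi-host cases still need something driving real deployments from
outside the test JVM.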
