hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Hadoop Wiki] Update of "HowToUseInjectionFramework" by KonstantinBoudnik
Date Fri, 13 Nov 2009 22:48:58 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The "HowToUseInjectionFramework" page has been changed by KonstantinBoudnik.
http://wiki.apache.org/hadoop/HowToUseInjectionFramework?action=diff&rev1=5&rev2=6

--------------------------------------------------

  The current implementation of the FI framework assumes that the faults it will be emulating are non-deterministic in nature. That is, the moment a fault happens isn't known in advance and is determined by a coin flip.
  
  ==== Architecture of the Injection Framework ====
- [[attachment:arch-view.gif]]
+ {{attachment:arch-view.gif}}
  
  ==== Configuration Management ====
  Currently, configuration is available only for injected faults. Configuration management allows you to set the expected levels of fault occurrence. The settings can be applied either statically (in advance) or at runtime. The desired level of faults in the framework can be configured in two ways:
@@ -22, +22 @@

   * editing the {{{src/aop/fi-site.xml}}} configuration file, which is similar to other Hadoop config files (see the example below)
   * setting JVM system properties through VM startup parameters or in the {{{build.properties}}} file
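  
  As an illustration, a fault level could be set statically in {{{src/aop/fi-site.xml}}} like this; the file follows the usual Hadoop configuration format, and the fault name and level below are made up for the example:
  {{{
  <configuration>
    <property>
      <name>fi.hdfs.datanode.BlockReceiver</name>
      <value>0.12</value>
      <description>Emulate BlockReceiver faults with 12% probability</description>
    </property>
  </configuration>
  }}}
  The same level could instead be passed straight to the JVM as a startup parameter, e.g. {{{-Dfi.hdfs.datanode.BlockReceiver=0.12}}}.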
  
- 
  ==== Probability Model ====
  This is essentially a coin flipper that regulates the occurrence of faults. The methods of this class get a random number between {{{0.0}}} and {{{1.0}}} and check whether it falls between {{{0.0}}} and the configured level for the fault in question. If it does, the fault occurs.
  
  Thus, to guarantee that a fault happens, set its level to {{{1.0}}}; to completely prevent a fault from happening, set its level to {{{0.0}}}. The default probability level is {{{0}}} unless it is changed explicitly through the configuration file or at runtime. The name of the default level's configuration parameter is {{{fi.*}}}.
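  
  The following is a minimal sketch of the coin-flipper idea; the class name, its placement, and the use of a system property for the level lookup are illustrative rather than the framework's actual API:
  {{{#!java
  package org.apache.hadoop.fi;
  
  import java.util.Random;
  
  /** Illustrative coin flipper; not the framework's real probability model. */
  public class CoinFlipper {
    private static final Random GENERATOR = new Random();
  
    /**
     * Returns true when the fault identified by faultName should fire.
     * The level is read from a system property (e.g. set with a -D VM
     * option) and defaults to 0.0, i.e. the fault never fires.
     */
    public static boolean injectCriteria(String faultName) {
      double level = Double.parseDouble(System.getProperty(faultName, "0.0"));
      return GENERATOR.nextDouble() < level;   // the coin flip
    }
  }
  }}}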
- 
  
  ==== Injection mechanism: AOP and AspectJ ====
  The foundation of Hadoop's FI framework is the concept of cross-cutting concerns as implemented by AspectJ. The following basic terms are important to remember:
@@ -36, +34 @@

   * In AOP, the '''aspects''' provide a mechanism by which a cross-cutting concern can be
specified in a modular way
   * '''Advice''' is the code that is executed when an aspect is invoked
   * '''Join point''' (or pointcut) is a specific point within the application that may or may not invoke some advice
- 
  
  ==== Predefined Join Points ====
  The following readily available join points are provided by AspectJ:
@@ -54, +51 @@

   * when a handler is executed
  
  ==== Aspect Example ====
+ This is a fault injection example:
  {{{#!java
  package org.apache.hadoop.hdfs.server.datanode;
  
@@ -97, +95 @@

  
   * A call to the advice - {{{before () throws IOException : callReceivepacket()}}} - will be injected (see ''Putting It All Together'' below) before that specific point in the application's code.
  
- 
  The pointcut identifies an invocation of the {{{java.io.OutputStream write()}}} method with any number of parameters and any return type. This invocation has to take place within the body of the {{{receivepacket()}}} method of the {{{BlockReceiver}}} class (which itself may have any parameters and any return type). Any invocations of the {{{write()}}} method happening anywhere within the aspect {{{BlockReceiverAspects}}} or its heirs will be ignored.
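  
  Condensed into a standalone sketch, the pointcut and advice described above might look as follows. This restates the elided listing above rather than reproducing it; the thrown message and the use of the {{{CoinFlipper}}} sketch from ''Probability Model'' are illustrative:
  {{{#!java
  package org.apache.hadoop.hdfs.server.datanode;
  
  import java.io.IOException;
  import java.io.OutputStream;
  
  import org.apache.hadoop.fi.CoinFlipper;
  
  public aspect BlockReceiverAspects {
    // Any OutputStream.write(..) call made inside BlockReceiver.receivepacket(..);
    // calls issued from this aspect or its heirs are excluded by !within().
    pointcut callReceivepacket() :
         call (* OutputStream.write(..))
      && withincode (* BlockReceiver.receivepacket(..))
      && !within (BlockReceiverAspects+);
  
    // Woven in before every matched join point: flip the coin and, when the
    // configured level says so, emulate a disk error.
    before () throws IOException : callReceivepacket() {
      if (CoinFlipper.injectCriteria("fi.hdfs.DiskError")) {
        throw new IOException("Injected fault: emulated disk error");
      }
    }
  }
  }}}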
  
- '''Note 1''': This short example doesn't illustrate the fact that you can have more than
a single injection point per class. In such a case the names of the faults have to be different
if a developer wants to trigger them separately. 
+ '''Note 1''': This short example doesn't illustrate that you can have more than one injection point per class. In such a case the faults have to be given different names if a developer wants to trigger them separately.
  
  '''Note 2''': After the injection step (see ''Putting It All Together'' below) you can verify
that the faults were properly injected by searching for {{{ajc}}} keywords in a disassembled
class file.
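  
  For instance, assuming the woven classes end up under {{{build/classes}}} (that path is an assumption; adjust it to your build), a standard JDK {{{javap}}} run makes the weaving visible:
  {{{
  % javap -private -classpath build/classes \
      org.apache.hadoop.hdfs.server.datanode.BlockReceiver | grep ajc
  }}}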
  
+ Here's a code injection example:
+ {{{#!java
+ package org.apache.hadoop.security;
+ 
+ import java.io.ByteArrayInputStream;
+ import java.io.DataInputStream;
+ import java.io.IOException;
+ 
+ import org.apache.hadoop.io.WritableUtils;
+ 
+ privileged aspect AccessTokenHandlerAspects {
+   /** check if a token is expired. for unit test only.
+    *  return true when token is expired, false otherwise */
+   static boolean AccessTokenHandler.isTokenExpired(AccessToken token) throws IOException {
+     ByteArrayInputStream buf = new ByteArrayInputStream(token.getTokenID()
+         .getBytes());
+     DataInputStream in = new DataInputStream(buf);
+     long expiryDate = WritableUtils.readVLong(in);
+     return isExpired(expiryDate);
+   }
+   
+   /** set token lifetime. for unit test only */
+   synchronized void AccessTokenHandler.setTokenLifetime(long tokenLifetime) {
+     this.tokenLifetime = tokenLifetime;
+   }
+ }
+ }}}
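  
  After weaving, the injected members behave like ordinary members of {{{AccessTokenHandler}}}. A hypothetical snippet of test code (this class and its method are made up; only the two injected members come from the aspect above, and it compiles only against the woven classes):
  {{{#!java
  package org.apache.hadoop.security;
  
  import java.io.IOException;
  
  public class TokenExpiryProbe {
    /** Shorten the token lifetime, then ask whether the token has expired. */
    public static boolean probe(AccessTokenHandler handler, AccessToken token)
        throws IOException {
      handler.setTokenLifetime(1000L);                  // injected instance method
      return AccessTokenHandler.isTokenExpired(token);  // injected static method
    }
  }
  }}}
  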
  ==== Fault Naming Convention and Namespaces ====
  For the sake of a unified naming convention, the following two types of names are recommended when developing new aspects:
  
   * Activity-specific notation (when we don't care about the particular location where a fault happens). In this case the fault's name is rather abstract, e.g. {{{fi.hdfs.DiskError}}}
   * Location-specific notation. Here, the fault's name is mnemonic, as in {{{fi.hdfs.datanode.BlockReceiver[optional location details]}}}
- 
  
  ==== Development Tools ====
   * The Eclipse [[http://www.eclipse.org/ajdt/|AspectJ Development Toolkit]] may help you
when developing aspects
   * IntelliJ IDEA provides AspectJ weaver and Spring-AOP plugins
  
  <<Anchor(alltogether)>>
- 
  
  ==== Putting It All Together ====
  Faults (aspects) have to be injected (or woven in) before they can be used. Follow these instructions:
@@ -135, +157 @@

  advice defined in org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects
  has not been applied [Xlint:adviceDidNotMatch]
  }}}
- 
  It isn't an error from AspectJ's point of view; however, Hadoop's build treats it as one in order to preserve the integrity of the source code.
  
-    * To prepare dev.jar file with all your faults weaved in place use:
+  * To prepare a dev.jar file with all your faults woven in place, use:
  
  {{{
  % ant jar-fault-inject
  }}}
- 
   * To create test jars use:
  
  {{{
@@ -209, +229 @@

  ==== Additional Information and Contacts ====
  These two sources of information are particularly interesting and worth reading:
  
-  *
-  [[http://www.eclipse.org/aspectj/doc/next/devguide|http://www.eclipse.org/aspectj/doc/next/devguide/]]
+  * [[http://www.eclipse.org/aspectj/doc/next/devguide|http://www.eclipse.org/aspectj/doc/next/devguide/]]
  
   * AspectJ Cookbook (ISBN-13: 978-0-596-00654-9)
  
