hadoop-common-commits mailing list archives

From: w...@apache.org
Subject: svn commit: r1618700 [1/2] - in /hadoop/common/branches/fs-encryption/hadoop-common-project: hadoop-common/ hadoop-common/src/main/java/ hadoop-common/src/main/java/org/apache/hadoop/crypto/key/ hadoop-common/src/main/java/org/apache/hadoop/crypto/key/...
Date: Mon, 18 Aug 2014 18:41:35 GMT
Author: wang
Date: Mon Aug 18 18:41:31 2014
New Revision: 1618700

URL: http://svn.apache.org/r1618700
Log:
Merge from trunk to branch.

Added:
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/DecayRpcScheduler.java
      - copied unchanged from r1618693, hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/DecayRpcScheduler.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/DecayRpcSchedulerMXBean.java
      - copied unchanged from r1618693, hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/DecayRpcSchedulerMXBean.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RpcScheduler.java
      - copied unchanged from r1618693, hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RpcScheduler.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/WhitelistBasedResolver.java
      - copied unchanged from r1618693, hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/WhitelistBasedResolver.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/CacheableIPList.java
      - copied unchanged from r1618693, hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/CacheableIPList.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/CombinedIPWhiteList.java
      - copied unchanged from r1618693, hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/CombinedIPWhiteList.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/FileBasedIPList.java
      - copied unchanged from r1618693, hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/FileBasedIPList.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/IPList.java
      - copied unchanged from r1618693, hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/IPList.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestContentSummary.java
      - copied unchanged from r1618693, hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestContentSummary.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestCount.java
      - copied unchanged from r1618693, hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestCount.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestDecayRpcScheduler.java
      - copied unchanged from r1618693, hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestDecayRpcScheduler.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestNetgroupCache.java
      - copied unchanged from r1618693, hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestNetgroupCache.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestWhitelistBasedResolver.java
      - copied unchanged from r1618693, hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestWhitelistBasedResolver.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestCacheableIPList.java
      - copied unchanged from r1618693, hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestCacheableIPList.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestFileBasedIPList.java
      - copied unchanged from r1618693, hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestFileBasedIPList.java
Modified:
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/CHANGES.txt   (contents, props changed)
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/   (props changed)
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderDelegationTokenExtension.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ContentSummary.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Count.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/RpcMetrics.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopologyWithNodeGroup.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/NetgroupCache.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/ServiceAuthorizationManager.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/DataChecksum.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericOptionsParser.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/MachineList.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeCrc32.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/bulk_crc32.c
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/bulk_crc32.h
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/native/src/test/org/apache/hadoop/util/test_bulk_crc32.c
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/site/apt/CommandsManual.apt.vm
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/site/apt/FileSystemShell.apt.vm
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/site/apt/NativeLibraries.apt.vm
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/site/apt/ServiceLevelAuth.apt.vm
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/authorize/TestServiceAuthorization.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestDataChecksum.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestGenericOptionsParser.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMS.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSAudit.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSAuthenticationFilter.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSExceptionsProvider.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSMDCFilter.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMSACLs.java
    hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMSAudit.java

Modified: hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1618700&r1=1618699&r2=1618700&view=diff
==============================================================================
--- hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/CHANGES.txt (original)
+++ hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/CHANGES.txt Mon Aug 18 18:41:31 2014
@@ -202,6 +202,10 @@ Trunk (Unreleased)
     HADOOP-10224. JavaKeyStoreProvider has to protect against corrupting 
     underlying store. (asuresh via tucu)
 
+    HADOOP-10770. KMS add delegation token support. (tucu)
+
+    HADOOP-10698. KMS, add proxyuser support. (tucu)
+
   BUG FIXES
 
     HADOOP-9451. Fault single-layer config if node group topology is enabled.
@@ -427,6 +431,9 @@ Trunk (Unreleased)
     HADOOP-10862. Miscellaneous trivial corrections to KMS classes. 
     (asuresh via tucu)
 
+    HADOOP-10967. Improve DefaultCryptoExtension#generateEncryptedKey 
+    performance. (hitliuyi via tucu)
+
   OPTIMIZATIONS
 
     HADOOP-7761. Improve the performance of raw comparisons. (todd)
@@ -502,8 +509,31 @@ Release 2.6.0 - UNRELEASED
     HADOOP-10835. Implement HTTP proxyuser support in HTTP authentication 
     client/server libraries. (tucu)
 
+    HADOOP-10820. Throw an exception in GenericOptionsParser when passed
+    an empty Path. (Alex Holmes and Zhihai Xu via wang)
+
+    HADOOP-10281. Create a scheduler, which assigns schedulables a priority
+    level. (Chris Li via Arpit Agarwal)
+
+    HADOOP-8944. Shell command fs -count should include human readable option 
+    (Jonathan Allen via aw)
+
+    HADOOP-10231. Add some components in Native Libraries document (Akira 
+    AJISAKA via aw)
+
+    HADOOP-10650. Add ability to specify a reverse ACL (black list) of users
+    and groups. (Benoy Antony via Arpit Agarwal)
+
+    HADOOP-10335. An ip whitelist based implementation to resolve Sasl
+    properties per connection. (Benoy Antony via Arpit Agarwal)
+
+    HADOOP-10975. org.apache.hadoop.util.DataChecksum should support calculating
+    checksums in native code (James Thomas via Colin Patrick McCabe)
+
   OPTIMIZATIONS
 
+    HADOOP-10838. Byte array native checksumming. (James Thomas via todd)
+
   BUG FIXES
 
     HADOOP-10781. Unportable getgrouplist() usage breaks FreeBSD (Dmitry
@@ -560,6 +590,31 @@ Release 2.6.0 - UNRELEASED
     HADOOP-10402. Configuration.getValByRegex does not substitute for
     variables. (Robert Kanter via kasha)
 
+    HADOOP-10851. NetgroupCache does not remove group memberships. (Benoy
+    Antony via Arpit Agarwal)
+
+    HADOOP-10962. Flags for posix_fadvise are not valid in some architectures
+    (David Villegas via Colin Patrick McCabe)
+
+    HADOOP-10966. Hadoop Common native compilation broken in windows.
+    (David Villegas via Arpit Agarwal)
+
+    HADOOP-10843. TestGridmixRecord unit tests failure on PowerPC (Jinghui Wang
+    via Colin Patrick McCabe)
+
+    HADOOP-10121. Fix javadoc spelling for HadoopArchives#writeTopLevelDirs
+    (Akira AJISAKA via aw)
+
+    HADOOP-10964. Small fix for NetworkTopologyWithNodeGroup#sortByDistance.
+    (Yi Liu via wang)
+
+    HADOOP-10059. RPC authentication and authorization metrics overflow to
+    negative values on busy clusters (Tsuyoshi OZAWA and Akira AJISAKA
+    via jlowe)
+
+    HADOOP-10973. Native Libraries Guide contains format error. (Peter Klavins
+    via Arpit Agarwal)
+
 Release 2.5.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

Propchange: hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/CHANGES.txt
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt:r1617528-1618693

Propchange: hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java:r1617528-1618693

Modified: hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java?rev=1618700&r1=1618699&r2=1618700&view=diff
==============================================================================
--- hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java (original)
+++ hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java Mon Aug 18 18:41:31 2014
@@ -219,6 +219,13 @@ public class KeyProviderCryptoExtension 
   private static class DefaultCryptoExtension implements CryptoExtension {
 
     private final KeyProvider keyProvider;
+    private static final ThreadLocal<SecureRandom> RANDOM = 
+        new ThreadLocal<SecureRandom>() {
+      @Override
+      protected SecureRandom initialValue() {
+        return new SecureRandom();
+      }
+    };
 
     private DefaultCryptoExtension(KeyProvider keyProvider) {
       this.keyProvider = keyProvider;
@@ -233,10 +240,10 @@ public class KeyProviderCryptoExtension 
           "No KeyVersion exists for key '%s' ", encryptionKeyName);
       // Generate random bytes for new key and IV
       Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");
-      SecureRandom random = SecureRandom.getInstance("SHA1PRNG");
       final byte[] newKey = new byte[encryptionKey.getMaterial().length];
-      random.nextBytes(newKey);
-      final byte[] iv = random.generateSeed(cipher.getBlockSize());
+      RANDOM.get().nextBytes(newKey);
+      final byte[] iv = new byte[cipher.getBlockSize()];
+      RANDOM.get().nextBytes(iv);
       // Encryption key IV is derived from new key's IV
       final byte[] encryptionIV = EncryptedKeyVersion.deriveIV(iv);
       // Encrypt the new key

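The hunk above replaces a per-call SecureRandom.getInstance("SHA1PRNG") with one SecureRandom per thread, and fills the IV with nextBytes() instead of the much slower generateSeed(). A minimal standalone sketch of the same pattern (class and method names are illustrative, not part of the commit):

    import java.security.SecureRandom;

    public class PerThreadRandom {
      // One SecureRandom per thread: avoids contention on a shared instance
      // and the entropy-gathering cost of generateSeed() on every call.
      private static final ThreadLocal<SecureRandom> RANDOM =
          new ThreadLocal<SecureRandom>() {
            @Override
            protected SecureRandom initialValue() {
              return new SecureRandom();
            }
          };

      public static byte[] randomBytes(int len) {
        byte[] buf = new byte[len];
        RANDOM.get().nextBytes(buf);
        return buf;
      }
    }
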
Modified: hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderDelegationTokenExtension.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderDelegationTokenExtension.java?rev=1618700&r1=1618699&r2=1618700&view=diff
==============================================================================
--- hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderDelegationTokenExtension.java (original)
+++ hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderDelegationTokenExtension.java Mon Aug 18 18:41:31 2014
@@ -20,6 +20,8 @@ package org.apache.hadoop.crypto.key;
 import org.apache.hadoop.security.Credentials;
 import org.apache.hadoop.security.token.Token;
 
+import java.io.IOException;
+
 /**
  * A KeyProvider extension with the ability to add a renewer's Delegation 
  * Tokens to the provided Credentials.
@@ -45,9 +47,10 @@ public class KeyProviderDelegationTokenE
      * @param renewer the user allowed to renew the delegation tokens
      * @param credentials cache in which to add new delegation tokens
      * @return list of new delegation tokens
+     * @throws IOException if an IO error occurs.
      */
     public Token<?>[] addDelegationTokens(final String renewer, 
-        Credentials credentials);
+        Credentials credentials) throws IOException;
   }
   
   /**
@@ -76,9 +79,10 @@ public class KeyProviderDelegationTokenE
    * @param renewer the user allowed to renew the delegation tokens
    * @param credentials cache in which to add new delegation tokens
    * @return list of new delegation tokens
+   * @throws IOException if an IO error occurs.
    */
   public Token<?>[] addDelegationTokens(final String renewer, 
-      Credentials credentials) {
+      Credentials credentials) throws IOException {
     return getExtension().addDelegationTokens(renewer, credentials);
   }
   

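Since addDelegationTokens now declares IOException, callers have to handle the failure. A hedged caller sketch (kp is an assumed, already-constructed KeyProvider; the "yarn" renewer is invented):

    import java.io.IOException;
    import org.apache.hadoop.security.Credentials;
    import org.apache.hadoop.security.token.Token;

    KeyProviderDelegationTokenExtension ext =
        KeyProviderDelegationTokenExtension
            .createKeyProviderDelegationTokenExtension(kp);
    Credentials creds = new Credentials();
    try {
      Token<?>[] tokens = ext.addDelegationTokens("yarn", creds);
    } catch (IOException e) {
      // e.g. the underlying provider's RPC/HTTP call failed
    }
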
Modified: hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java?rev=1618700&r1=1618699&r2=1618700&view=diff
==============================================================================
--- hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java (original)
+++ hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java Mon Aug 18 18:41:31 2014
@@ -22,15 +22,18 @@ import org.apache.hadoop.classification.
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.crypto.key.KeyProvider;
 import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.EncryptedKeyVersion;
+import org.apache.hadoop.crypto.key.KeyProviderDelegationTokenExtension;
 import org.apache.hadoop.crypto.key.KeyProviderFactory;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.security.Credentials;
 import org.apache.hadoop.security.ProviderUtils;
-import org.apache.hadoop.security.authentication.client.AuthenticatedURL;
+import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.authentication.client.AuthenticationException;
 import org.apache.hadoop.security.authentication.client.ConnectionConfigurator;
-import org.apache.hadoop.security.authentication.client.PseudoAuthenticator;
 import org.apache.hadoop.security.ssl.SSLFactory;
+import org.apache.hadoop.security.token.Token;
+import org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL;
 import org.apache.http.client.utils.URIBuilder;
 import org.codehaus.jackson.map.ObjectMapper;
 
@@ -50,6 +53,7 @@ import java.net.URL;
 import java.net.URLEncoder;
 import java.security.GeneralSecurityException;
 import java.security.NoSuchAlgorithmException;
+import java.security.PrivilegedExceptionAction;
 import java.text.MessageFormat;
 import java.util.ArrayList;
 import java.util.Date;
@@ -69,7 +73,10 @@ import com.google.common.base.Preconditi
  * KMS client <code>KeyProvider</code> implementation.
  */
 @InterfaceAudience.Private
-public class KMSClientProvider extends KeyProvider implements CryptoExtension {
+public class KMSClientProvider extends KeyProvider implements CryptoExtension,
+    KeyProviderDelegationTokenExtension.DelegationTokenExtension {
+
+  public static final String TOKEN_KIND = "kms-dt";
 
   public static final String SCHEME_NAME = "kms";
 
@@ -229,6 +236,8 @@ public class KMSClientProvider extends K
   private String kmsUrl;
   private SSLFactory sslFactory;
   private ConnectionConfigurator configurator;
+  private DelegationTokenAuthenticatedURL.Token authToken;
+  private UserGroupInformation loginUgi;
 
   @Override
   public String toString() {
@@ -309,6 +318,8 @@ public class KMSClientProvider extends K
                 CommonConfigurationKeysPublic.
                     KMS_CLIENT_ENC_KEY_CACHE_NUM_REFILL_THREADS_DEFAULT),
             new EncryptedQueueRefiller());
+    authToken = new DelegationTokenAuthenticatedURL.Token();
+    loginUgi = UserGroupInformation.getCurrentUser();
   }
 
   private String createServiceURL(URL url) throws IOException {
@@ -325,12 +336,14 @@ public class KMSClientProvider extends K
     try {
       StringBuilder sb = new StringBuilder();
       sb.append(kmsUrl);
-      sb.append(collection);
-      if (resource != null) {
-        sb.append("/").append(URLEncoder.encode(resource, UTF8));
-      }
-      if (subResource != null) {
-        sb.append("/").append(subResource);
+      if (collection != null) {
+        sb.append(collection);
+        if (resource != null) {
+          sb.append("/").append(URLEncoder.encode(resource, UTF8));
+          if (subResource != null) {
+            sb.append("/").append(subResource);
+          }
+        }
       }
       URIBuilder uriBuilder = new URIBuilder(sb.toString());
       if (parameters != null) {
@@ -365,14 +378,29 @@ public class KMSClientProvider extends K
     return conn;
   }
 
-  private HttpURLConnection createConnection(URL url, String method)
+  private HttpURLConnection createConnection(final URL url, String method)
       throws IOException {
     HttpURLConnection conn;
     try {
-      AuthenticatedURL authUrl = new AuthenticatedURL(new PseudoAuthenticator(),
-          configurator);
-      conn = authUrl.openConnection(url, new AuthenticatedURL.Token());
-    } catch (AuthenticationException ex) {
+      // if current UGI is different from UGI at constructor time, behave as
+      // proxyuser
+      UserGroupInformation currentUgi = UserGroupInformation.getCurrentUser();
+      final String doAsUser =
+          (loginUgi.getShortUserName().equals(currentUgi.getShortUserName()))
+          ? null : currentUgi.getShortUserName();
+
+      // creating the HTTP connection using the current UGI at constructor time
+      conn = loginUgi.doAs(new PrivilegedExceptionAction<HttpURLConnection>() {
+        @Override
+        public HttpURLConnection run() throws Exception {
+          DelegationTokenAuthenticatedURL authUrl =
+              new DelegationTokenAuthenticatedURL(configurator);
+          return authUrl.openConnection(url, authToken, doAsUser);
+        }
+      });
+    } catch (IOException ex) {
+      throw ex;
+    } catch (Exception ex) {
       throw new IOException(ex);
     }
     conn.setUseCaches(false);
@@ -403,20 +431,27 @@ public class KMSClientProvider extends K
     if (status != expected) {
       InputStream es = null;
       try {
-        es = conn.getErrorStream();
-        ObjectMapper mapper = new ObjectMapper();
-        Map json = mapper.readValue(es, Map.class);
-        String exClass = (String) json.get(
-            KMSRESTConstants.ERROR_EXCEPTION_JSON);
-        String exMsg = (String)
-            json.get(KMSRESTConstants.ERROR_MESSAGE_JSON);
         Exception toThrow;
-        try {
-          ClassLoader cl = KMSClientProvider.class.getClassLoader();
-          Class klass = cl.loadClass(exClass);
-          Constructor constr = klass.getConstructor(String.class);
-          toThrow = (Exception) constr.newInstance(exMsg);
-        } catch (Exception ex) {
+        String contentType = conn.getHeaderField(CONTENT_TYPE);
+        if (contentType != null &&
+            contentType.toLowerCase().startsWith(APPLICATION_JSON_MIME)) {
+          es = conn.getErrorStream();
+          ObjectMapper mapper = new ObjectMapper();
+          Map json = mapper.readValue(es, Map.class);
+          String exClass = (String) json.get(
+              KMSRESTConstants.ERROR_EXCEPTION_JSON);
+          String exMsg = (String)
+              json.get(KMSRESTConstants.ERROR_MESSAGE_JSON);
+          try {
+            ClassLoader cl = KMSClientProvider.class.getClassLoader();
+            Class klass = cl.loadClass(exClass);
+            Constructor constr = klass.getConstructor(String.class);
+            toThrow = (Exception) constr.newInstance(exMsg);
+          } catch (Exception ex) {
+            toThrow = new IOException(MessageFormat.format(
+                "HTTP status [{0}], {1}", status, conn.getResponseMessage()));
+          }
+        } else {
           toThrow = new IOException(MessageFormat.format(
               "HTTP status [{0}], {1}", status, conn.getResponseMessage()));
         }
@@ -729,4 +764,25 @@ public class KMSClientProvider extends K
     }
   }
 
+  @Override
+  public Token<?>[] addDelegationTokens(String renewer,
+      Credentials credentials) throws IOException {
+    Token<?>[] tokens;
+    URL url = createURL(null, null, null, null);
+    DelegationTokenAuthenticatedURL authUrl =
+        new DelegationTokenAuthenticatedURL(configurator);
+    try {
+      Token<?> token = authUrl.getDelegationToken(url, authToken, renewer);
+      if (token != null) {
+        credentials.addToken(token.getService(), token);
+        tokens = new Token<?>[] { token };
+      } else {
+        throw new IOException("Got NULL as delegation token");
+      }
+    } catch (AuthenticationException ex) {
+      throw new IOException(ex);
+    }
+    return tokens;
+  }
+
 }

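With KMSClientProvider now implementing DelegationTokenExtension, a client can pre-fetch a KMS token into a Credentials cache before handing work off. A hedged sketch (provider construction elided; the "yarn" renewer is invented):

    Credentials creds = new Credentials();
    // kmsProvider is a KMSClientProvider built from a kms:// URI
    Token<?>[] tokens = kmsProvider.addDelegationTokens("yarn", creds);
    // the token kind is "kms-dt" (TOKEN_KIND above); it now travels in
    // creds for tasks that later need to reach the KMS
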
Modified: hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java?rev=1618700&r1=1618699&r2=1618700&view=diff
==============================================================================
--- hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java (original)
+++ hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java Mon Aug 18 18:41:31 2014
@@ -134,6 +134,9 @@ public class CommonConfigurationKeys ext
   HADOOP_SECURITY_SERVICE_AUTHORIZATION_DEFAULT_ACL = 
       "security.service.authorization.default.acl";
   public static final String 
+  HADOOP_SECURITY_SERVICE_AUTHORIZATION_DEFAULT_BLOCKED_ACL =
+      "security.service.authorization.default.acl.blocked";
+  public static final String
   HADOOP_SECURITY_SERVICE_AUTHORIZATION_REFRESH_POLICY = 
       "security.refresh.policy.protocol.acl";
   public static final String 

Modified: hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ContentSummary.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ContentSummary.java?rev=1618700&r1=1618699&r2=1618700&view=diff
==============================================================================
--- hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ContentSummary.java (original)
+++ hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ContentSummary.java Mon Aug 18 18:41:31 2014
@@ -24,6 +24,7 @@ import java.io.IOException;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.util.StringUtils;
 
 /** Store the summary of a content (a directory or a file). */
 @InterfaceAudience.Public
@@ -102,7 +103,7 @@ public class ContentSummary implements W
    * <----12----> <----12----> <-------18------->
    *    DIR_COUNT   FILE_COUNT       CONTENT_SIZE FILE_NAME    
    */
-  private static final String STRING_FORMAT = "%12d %12d %18d ";
+  private static final String STRING_FORMAT = "%12s %12s %18s ";
   /** 
    * Output format:
    * <----12----> <----15----> <----15----> <----15----> <----12----> <----12----> <-------18------->
@@ -117,7 +118,7 @@ public class ContentSummary implements W
 
   private static final String QUOTA_HEADER = String.format(
       QUOTA_STRING_FORMAT + SPACE_QUOTA_STRING_FORMAT, 
-      "quota", "remaining quota", "space quota", "reamaining quota") +
+      "name quota", "rem name quota", "space quota", "rem space quota") +
       HEADER;
   
   /** Return the header of the output.
@@ -139,11 +140,25 @@ public class ContentSummary implements W
   /** Return the string representation of the object in the output format.
    * if qOption is false, output directory count, file count, and content size;
    * if qOption is true, output quota and remaining quota as well.
+   *
+   * @param qOption a flag indicating if quota needs to be printed or not
+   * @return the string representation of the object
+  */
+  public String toString(boolean qOption) {
+    return toString(qOption, false);
+  }
+
+  /** Return the string representation of the object in the output format.
+   * if qOption is false, output directory count, file count, and content size;
+   * if qOption is true, output quota and remaining quota as well.
+   * if hOption is false file sizes are returned in bytes
+   * if hOption is true file sizes are returned in human readable format
    * 
    * @param qOption a flag indicating if quota needs to be printed or not
+   * @param hOption a flag indicating if human readable output is to be used
    * @return the string representation of the object
    */
-  public String toString(boolean qOption) {
+  public String toString(boolean qOption, boolean hOption) {
     String prefix = "";
     if (qOption) {
       String quotaStr = "none";
@@ -152,19 +167,32 @@ public class ContentSummary implements W
       String spaceQuotaRem = "inf";
       
       if (quota>0) {
-        quotaStr = Long.toString(quota);
-        quotaRem = Long.toString(quota-(directoryCount+fileCount));
+        quotaStr = formatSize(quota, hOption);
+        quotaRem = formatSize(quota-(directoryCount+fileCount), hOption);
       }
       if (spaceQuota>0) {
-        spaceQuotaStr = Long.toString(spaceQuota);
-        spaceQuotaRem = Long.toString(spaceQuota - spaceConsumed);        
+        spaceQuotaStr = formatSize(spaceQuota, hOption);
+        spaceQuotaRem = formatSize(spaceQuota - spaceConsumed, hOption);
       }
       
       prefix = String.format(QUOTA_STRING_FORMAT + SPACE_QUOTA_STRING_FORMAT, 
                              quotaStr, quotaRem, spaceQuotaStr, spaceQuotaRem);
     }
     
-    return prefix + String.format(STRING_FORMAT, directoryCount, 
-                                  fileCount, length);
+    return prefix + String.format(STRING_FORMAT,
+     formatSize(directoryCount, hOption),
+     formatSize(fileCount, hOption),
+     formatSize(length, hOption));
+  }
+  /**
+   * Formats a size to be human readable or in bytes
+   * @param size value to be formatted
+   * @param humanReadable flag indicating human readable or not
+   * @return String representation of the size
+  */
+  private String formatSize(long size, boolean humanReadable) {
+    return humanReadable
+      ? StringUtils.TraditionalBinaryPrefix.long2String(size, "", 1)
+      : String.valueOf(size);
   }
 }

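formatSize() switches between raw bytes and StringUtils.TraditionalBinaryPrefix.long2String, which renders a long with a binary prefix and the requested number of decimal places. Roughly (exact spacing depends on long2String):

    formatSize(1536L, false);  // "1536"
    formatSize(1536L, true);   // e.g. "1.5 K", via long2String(1536, "", 1)
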
Modified: hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Count.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Count.java?rev=1618700&r1=1618699&r2=1618700&view=diff
==============================================================================
--- hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Count.java (original)
+++ hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Count.java Mon Aug 18 18:41:31 2014
@@ -42,16 +42,22 @@ public class Count extends FsCommand {
     factory.addClass(Count.class, "-count");
   }
 
+  private static final String OPTION_QUOTA = "q";
+  private static final String OPTION_HUMAN = "h";
+
   public static final String NAME = "count";
-  public static final String USAGE = "[-q] <path> ...";
+  public static final String USAGE =
+      "[-" + OPTION_QUOTA + "] [-" + OPTION_HUMAN + "] <path> ...";
   public static final String DESCRIPTION = 
       "Count the number of directories, files and bytes under the paths\n" +
       "that match the specified file pattern.  The output columns are:\n" +
       "DIR_COUNT FILE_COUNT CONTENT_SIZE FILE_NAME or\n" +
       "QUOTA REMAINING_QUOTA SPACE_QUOTA REMAINING_SPACE_QUOTA \n" +
-      "      DIR_COUNT FILE_COUNT CONTENT_SIZE FILE_NAME";
+      "      DIR_COUNT FILE_COUNT CONTENT_SIZE FILE_NAME\n" +
+      "The -h option shows file sizes in human readable format.";
   
   private boolean showQuotas;
+  private boolean humanReadable;
 
   /** Constructor */
   public Count() {}
@@ -70,17 +76,37 @@ public class Count extends FsCommand {
 
   @Override
   protected void processOptions(LinkedList<String> args) {
-    CommandFormat cf = new CommandFormat(1, Integer.MAX_VALUE, "q");
+    CommandFormat cf = new CommandFormat(1, Integer.MAX_VALUE,
+      OPTION_QUOTA, OPTION_HUMAN);
     cf.parse(args);
     if (args.isEmpty()) { // default path is the current working directory
       args.add(".");
     }
-    showQuotas = cf.getOpt("q");
+    showQuotas = cf.getOpt(OPTION_QUOTA);
+    humanReadable = cf.getOpt(OPTION_HUMAN);
   }
 
   @Override
   protected void processPath(PathData src) throws IOException {
     ContentSummary summary = src.fs.getContentSummary(src.path);
-    out.println(summary.toString(showQuotas) + src);
+    out.println(summary.toString(showQuotas, isHumanReadable()) + src);
+  }
+  
+  /**
+   * Should quotas get shown as part of the report?
+   * @return if quotas should be shown then true otherwise false
+   */
+  @InterfaceAudience.Private
+  boolean isShowQuotas() {
+    return showQuotas;
+  }
+  
+  /**
+   * Should sizes be shown in human readable format rather than bytes?
+   * @return true if human readable format
+   */
+  @InterfaceAudience.Private
+  boolean isHumanReadable() {
+    return humanReadable;
   }
 }

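Together with the ContentSummary change above, the two flags compose on the command line; for instance (path invented):

    hadoop fs -count -q -h /user

would print the quota columns with human-readable sizes.
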
Modified: hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/RpcMetrics.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/RpcMetrics.java?rev=1618700&r1=1618699&r2=1618700&view=diff
==============================================================================
--- hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/RpcMetrics.java (original)
+++ hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/RpcMetrics.java Mon Aug 18 18:41:31 2014
@@ -88,13 +88,13 @@ public class RpcMetrics {
   @Metric("Processsing time") MutableRate rpcProcessingTime;
   MutableQuantiles[] rpcProcessingTimeMillisQuantiles;
   @Metric("Number of authentication failures")
-  MutableCounterInt rpcAuthenticationFailures;
+  MutableCounterLong rpcAuthenticationFailures;
   @Metric("Number of authentication successes")
-  MutableCounterInt rpcAuthenticationSuccesses;
+  MutableCounterLong rpcAuthenticationSuccesses;
   @Metric("Number of authorization failures")
-  MutableCounterInt rpcAuthorizationFailures;
+  MutableCounterLong rpcAuthorizationFailures;
   @Metric("Number of authorization sucesses")
-  MutableCounterInt rpcAuthorizationSuccesses;
+  MutableCounterLong rpcAuthorizationSuccesses;
 
   @Metric("Number of open connections") public int numOpenConnections() {
     return server.getNumOpenConnections();

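Widening the counters from MutableCounterInt to MutableCounterLong addresses the HADOOP-10059 symptom: on a busy cluster a 32-bit counter eventually wraps negative. The underlying Java behavior, in two lines:

    int i = Integer.MAX_VALUE;
    i++;  // wraps to -2147483648; a long counter just keeps counting
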
Modified: hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopologyWithNodeGroup.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopologyWithNodeGroup.java?rev=1618700&r1=1618699&r2=1618700&view=diff
==============================================================================
--- hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopologyWithNodeGroup.java (original)
+++ hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopologyWithNodeGroup.java Mon Aug 18 18:41:31 2014
@@ -293,7 +293,7 @@ public class NetworkTopologyWithNodeGrou
         return;
       }
     }
-    super.sortByDistance(reader, nodes, nodes.length, seed,
+    super.sortByDistance(reader, nodes, activeLen, seed,
         randomizeBlockLocationsPerBlock);
   }
 

Modified: hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/NetgroupCache.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/NetgroupCache.java?rev=1618700&r1=1618699&r2=1618700&view=diff
==============================================================================
--- hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/NetgroupCache.java (original)
+++ hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/NetgroupCache.java Mon Aug 18 18:41:31 2014
@@ -27,12 +27,9 @@ import java.util.concurrent.ConcurrentHa
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-
 /**
  * Class that caches the netgroups and inverts group-to-user map
- * to user-to-group map, primarily intented for use with
+ * to user-to-group map, primarily intended for use with
  * netgroups (as returned by getent netgroup) which only returns
  * group to user mapping.
  */
@@ -69,9 +66,7 @@ public class NetgroupCache {
       }
     }
     if(userToNetgroupsMap.containsKey(user)) {
-      for(String netgroup : userToNetgroupsMap.get(user)) {
-        groups.add(netgroup);
-      }
+      groups.addAll(userToNetgroupsMap.get(user));
     }
   }
 
@@ -99,6 +94,7 @@ public class NetgroupCache {
    */
   public static void clear() {
     netgroupToUsersMap.clear();
+    userToNetgroupsMap.clear();
   }
 
   /**
@@ -108,12 +104,7 @@ public class NetgroupCache {
    * @param users list of users for a given group
    */
   public static void add(String group, List<String> users) {
-    if(!isCached(group)) {
-      netgroupToUsersMap.put(group, new HashSet<String>());
-      for(String user: users) {
-        netgroupToUsersMap.get(group).add(user);
-      }
-    }
+    netgroupToUsersMap.put(group, new HashSet<String>(users));
     netgroupToUsersMapUpdated = true; // at the end to avoid race
   }
 }

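The NetgroupCache changes make add() replace a group's membership outright and clear() drop both directions of the mapping. A hedged usage sketch (group and user names invented):

    import java.util.Arrays;
    import java.util.LinkedList;
    import java.util.List;

    NetgroupCache.add("@devs", Arrays.asList("alice", "bob"));
    NetgroupCache.add("@devs", Arrays.asList("alice"));  // bob is dropped
    List<String> groups = new LinkedList<String>();
    NetgroupCache.getNetgroups("alice", groups);         // ["@devs"]
    NetgroupCache.clear();                               // both maps emptied
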
Modified: hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/ServiceAuthorizationManager.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/ServiceAuthorizationManager.java?rev=1618700&r1=1618699&r2=1618700&view=diff
==============================================================================
--- hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/ServiceAuthorizationManager.java (original)
+++ hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/ServiceAuthorizationManager.java Mon Aug 18 18:41:31 2014
@@ -43,10 +43,14 @@ import com.google.common.annotations.Vis
 @InterfaceAudience.LimitedPrivate({"HDFS", "MapReduce"})
 @InterfaceStability.Evolving
 public class ServiceAuthorizationManager {
+  static final String BLOCKED = ".blocked";
+
   private static final String HADOOP_POLICY_FILE = "hadoop-policy.xml";
 
-  private volatile Map<Class<?>, AccessControlList> protocolToAcl =
-    new IdentityHashMap<Class<?>, AccessControlList>();
+  // For each class, first ACL in the array specifies the allowed entries
+  // and second ACL specifies blocked entries.
+  private volatile Map<Class<?>, AccessControlList[]> protocolToAcls =
+    new IdentityHashMap<Class<?>, AccessControlList[]>();
   
   /**
    * Configuration key for controlling service-level authorization for Hadoop.
@@ -80,8 +84,8 @@ public class ServiceAuthorizationManager
                                Configuration conf,
                                InetAddress addr
                                ) throws AuthorizationException {
-    AccessControlList acl = protocolToAcl.get(protocol);
-    if (acl == null) {
+    AccessControlList[] acls = protocolToAcls.get(protocol);
+    if (acls == null) {
       throw new AuthorizationException("Protocol " + protocol + 
                                        " is not known.");
     }
@@ -104,7 +108,7 @@ public class ServiceAuthorizationManager
       }
     }
     if((clientPrincipal != null && !clientPrincipal.equals(user.getUserName())) || 
-        !acl.isUserAllowed(user)) {
+       acls.length != 2  || !acls[0].isUserAllowed(user) || acls[1].isUserAllowed(user)) {
       AUDITLOG.warn(AUTHZ_FAILED_FOR + user + " for protocol=" + protocol
           + ", expected client Kerberos principal is " + clientPrincipal);
       throw new AuthorizationException("User " + user + 
@@ -129,13 +133,16 @@ public class ServiceAuthorizationManager
   @Private
   public void refreshWithLoadedConfiguration(Configuration conf,
       PolicyProvider provider) {
-    final Map<Class<?>, AccessControlList> newAcls =
-        new IdentityHashMap<Class<?>, AccessControlList>();
+    final Map<Class<?>, AccessControlList[]> newAcls =
+      new IdentityHashMap<Class<?>, AccessControlList[]>();
     
     String defaultAcl = conf.get(
         CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_AUTHORIZATION_DEFAULT_ACL,
         AccessControlList.WILDCARD_ACL_VALUE);
 
+    String defaultBlockedAcl = conf.get(
+      CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_AUTHORIZATION_DEFAULT_BLOCKED_ACL, "");
+
     // Parse the config file
     Service[] services = provider.getServices();
     if (services != null) {
@@ -145,21 +152,30 @@ public class ServiceAuthorizationManager
                 conf.get(service.getServiceKey(),
                     defaultAcl)
             );
-        newAcls.put(service.getProtocol(), acl);
+        AccessControlList blockedAcl =
+           new AccessControlList(
+           conf.get(service.getServiceKey() + BLOCKED,
+           defaultBlockedAcl));
+        newAcls.put(service.getProtocol(), new AccessControlList[] {acl, blockedAcl});
       }
     }
 
     // Flip to the newly parsed permissions
-    protocolToAcl = newAcls;
+    protocolToAcls = newAcls;
   }
 
   @VisibleForTesting
   public Set<Class<?>> getProtocolsWithAcls() {
-    return protocolToAcl.keySet();
+    return protocolToAcls.keySet();
   }
 
   @VisibleForTesting
   public AccessControlList getProtocolsAcls(Class<?> className) {
-    return protocolToAcl.get(className);
+    return protocolToAcls.get(className)[0];
+  }
+
+  @VisibleForTesting
+  public AccessControlList getProtocolsBlockedAcls(Class<?> className) {
+    return protocolToAcls.get(className)[1];
   }
 }

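On the configuration side, every service ACL key now has a parallel key with the .blocked suffix, falling back to security.service.authorization.default.acl.blocked. A hypothetical hadoop-policy.xml fragment (the values are invented; a caller must match the allow ACL and not match the blocked one):

    <property>
      <name>security.client.protocol.acl</name>
      <value>*</value>
    </property>
    <property>
      <name>security.client.protocol.acl.blocked</name>
      <value>baduser badgroup</value>
    </property>
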
Modified: hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java?rev=1618700&r1=1618699&r2=1618700&view=diff
==============================================================================
--- hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java (original)
+++ hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java Mon Aug 18 18:41:31 2014
@@ -75,7 +75,7 @@ public abstract class DelegationTokenAut
 
   public static final String PREFIX = "delegation-token.";
 
-  public static final String TOKEN_KIND = PREFIX + "token-kind.sec";
+  public static final String TOKEN_KIND = PREFIX + "token-kind";
 
   public static final String UPDATE_INTERVAL = PREFIX + "update-interval.sec";
   public static final long UPDATE_INTERVAL_DEFAULT = 24 * 60 * 60;

Modified: hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/DataChecksum.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/DataChecksum.java?rev=1618700&r1=1618699&r2=1618700&view=diff
==============================================================================
--- hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/DataChecksum.java (original)
+++ hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/DataChecksum.java Mon Aug 18 18:41:31 2014
@@ -339,6 +339,12 @@ public class DataChecksum implements Che
       byte[] data, int dataOff, int dataLen,
       byte[] checksums, int checksumsOff, String fileName,
       long basePos) throws ChecksumException {
+
+    if (NativeCrc32.isAvailable()) {
+      NativeCrc32.verifyChunkedSumsByteArray(bytesPerChecksum, type.id,
+          checksums, checksumsOff, data, dataOff, dataLen, fileName, basePos);
+      return;
+    }
     
     int remaining = dataLen;
     int dataPos = 0;
@@ -384,6 +390,12 @@ public class DataChecksum implements Che
           checksums.array(), checksums.arrayOffset() + checksums.position());
       return;
     }
+
+    if (NativeCrc32.isAvailable()) {
+      NativeCrc32.calculateChunkedSums(bytesPerChecksum, type.id,
+          checksums, data);
+      return;
+    }
     
     data.mark();
     checksums.mark();
@@ -406,10 +418,16 @@ public class DataChecksum implements Che
    * Implementation of chunked calculation specifically on byte arrays. This
    * is to avoid the copy when dealing with ByteBuffers that have array backing.
    */
-  private void calculateChunkedSums(
+  public void calculateChunkedSums(
       byte[] data, int dataOffset, int dataLength,
       byte[] sums, int sumsOffset) {
 
+    if (NativeCrc32.isAvailable()) {
+      NativeCrc32.calculateChunkedSumsByteArray(bytesPerChecksum, type.id,
+          sums, sumsOffset, data, dataOffset, dataLength);
+      return;
+    }
+
     int remaining = dataLength;
     while (remaining > 0) {
       int n = Math.min(remaining, bytesPerChecksum);

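With the byte-array calculateChunkedSums now public and all three paths dispatching to NativeCrc32 when the native library is loaded, a checksum round trip looks roughly like this (sizes arbitrary; falls back to pure Java when the library is absent):

    DataChecksum sum =
        DataChecksum.newDataChecksum(DataChecksum.Type.CRC32C, 512);
    byte[] data = new byte[4096];
    int chunks = (data.length + 511) / 512;
    byte[] sums = new byte[chunks * sum.getChecksumSize()];
    sum.calculateChunkedSums(data, 0, data.length, sums, 0);
    // throws ChecksumException on mismatch; the name and base position
    // only feed the error message
    sum.verifyChunkedSums(data, 0, data.length, sums, 0, "sketch", 0);
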
Modified: hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericOptionsParser.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericOptionsParser.java?rev=1618700&r1=1618699&r2=1618700&view=diff
==============================================================================
--- hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericOptionsParser.java (original)
+++ hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericOptionsParser.java Mon Aug 18 18:41:31 2014
@@ -378,9 +378,15 @@ public class GenericOptionsParser {
     if (files == null) 
       return null;
     String[] fileArr = files.split(",");
+    if (fileArr.length == 0) {
+      throw new IllegalArgumentException("File name can't be empty string");
+    }
     String[] finalArr = new String[fileArr.length];
     for (int i =0; i < fileArr.length; i++) {
       String tmp = fileArr[i];
+      if (tmp.isEmpty()) {
+        throw new IllegalArgumentException("File name can't be empty string");
+      }
       String finalPath;
       URI pathURI;
       try {

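Both new checks are reachable because of how String.split behaves:

    "a,,b".split(",")  // ["a", "", "b"]  -> tmp.isEmpty() fires
    ",".split(",")     // []              -> fileArr.length == 0 fires
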
Modified: hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/MachineList.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/MachineList.java?rev=1618700&r1=1618699&r2=1618700&view=diff
==============================================================================
--- hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/MachineList.java (original)
+++ hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/MachineList.java Mon Aug 18 18:41:31 2014
@@ -37,7 +37,7 @@ import com.google.common.net.InetAddress
 /**
  * Container class which holds a list of ip/host addresses and 
  * answers membership queries.
- * .
+ *
  * Accepts list of ip addresses, ip addreses in CIDR format and/or 
  * host addresses.
  */
@@ -71,8 +71,15 @@ public class MachineList {
    * @param hostEntries comma separated ip/cidr/host addresses
    */
   public MachineList(String hostEntries) {
-    this(StringUtils.getTrimmedStringCollection(hostEntries),
-        InetAddressFactory.S_INSTANCE);
+    this(StringUtils.getTrimmedStringCollection(hostEntries));
+  }
+
+  /**
+   *
+   * @param hostEntries collection of separated ip/cidr/host addresses
+   */
+  public MachineList(Collection<String> hostEntries) {
+    this(hostEntries, InetAddressFactory.S_INSTANCE);
   }
 
   /**

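The new constructor accepts a pre-split collection; the two forms below should be equivalent (addresses invented):

    import java.util.Arrays;

    MachineList a = new MachineList("10.1.0.0/16,host1.example.com");
    MachineList b = new MachineList(
        Arrays.asList("10.1.0.0/16", "host1.example.com"));
    b.includes("10.1.2.3");  // true: the address falls inside the CIDR
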
Modified: hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeCrc32.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeCrc32.java?rev=1618700&r1=1618699&r2=1618700&view=diff
==============================================================================
--- hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeCrc32.java (original)
+++ hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeCrc32.java Mon Aug 18 18:41:31 2014
@@ -54,17 +54,50 @@ class NativeCrc32 {
   public static void verifyChunkedSums(int bytesPerSum, int checksumType,
       ByteBuffer sums, ByteBuffer data, String fileName, long basePos)
       throws ChecksumException {
-    nativeVerifyChunkedSums(bytesPerSum, checksumType,
+    nativeComputeChunkedSums(bytesPerSum, checksumType,
         sums, sums.position(),
         data, data.position(), data.remaining(),
-        fileName, basePos);
+        fileName, basePos, true);
+  }
+
+  public static void verifyChunkedSumsByteArray(int bytesPerSum,
+      int checksumType, byte[] sums, int sumsOffset, byte[] data,
+      int dataOffset, int dataLength, String fileName, long basePos)
+      throws ChecksumException {
+    nativeComputeChunkedSumsByteArray(bytesPerSum, checksumType,
+        sums, sumsOffset,
+        data, dataOffset, dataLength,
+        fileName, basePos, true);
+  }
+
+  public static void calculateChunkedSums(int bytesPerSum, int checksumType,
+      ByteBuffer sums, ByteBuffer data) {
+    nativeComputeChunkedSums(bytesPerSum, checksumType,
+        sums, sums.position(),
+        data, data.position(), data.remaining(),
+        "", 0, false);
+  }
+
+  public static void calculateChunkedSumsByteArray(int bytesPerSum,
+      int checksumType, byte[] sums, int sumsOffset, byte[] data,
+      int dataOffset, int dataLength) {
+    nativeComputeChunkedSumsByteArray(bytesPerSum, checksumType,
+        sums, sumsOffset,
+        data, dataOffset, dataLength,
+        "", 0, false);
   }
   
-    private static native void nativeVerifyChunkedSums(
+    private static native void nativeComputeChunkedSums(
       int bytesPerSum, int checksumType,
       ByteBuffer sums, int sumsOffset,
       ByteBuffer data, int dataOffset, int dataLength,
-      String fileName, long basePos);
+      String fileName, long basePos, boolean verify);
+
+    private static native void nativeComputeChunkedSumsByteArray(
+      int bytesPerSum, int checksumType,
+      byte[] sums, int sumsOffset,
+      byte[] data, int dataOffset, int dataLength,
+      String fileName, long basePos, boolean verify);
 
   // Copy the constants over from DataChecksum so that javah will pick them up
   // and make them available in the native code header.

Modified: hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
URL: http://svn.apache.org/viewvc/hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c?rev=1618700&r1=1618699&r2=1618700&view=diff
==============================================================================
--- hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c (original)
+++ hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c Mon Aug 18 18:41:31 2014
@@ -172,6 +172,39 @@ static void nioe_deinit(JNIEnv *env) {
 }
 
 /*
+ * Compatibility mapping for fadvise flags. Return the proper value from fcntl.h.
+ * If the value is not known, return the argument unchanged.
+ */
+static int map_fadvise_flag(jint flag) {
+#ifdef HAVE_POSIX_FADVISE
+  switch(flag) {
+    case org_apache_hadoop_io_nativeio_NativeIO_POSIX_POSIX_FADV_NORMAL:
+      return POSIX_FADV_NORMAL;
+      break;
+    case org_apache_hadoop_io_nativeio_NativeIO_POSIX_POSIX_FADV_RANDOM:
+      return POSIX_FADV_RANDOM;
+      break;
+    case org_apache_hadoop_io_nativeio_NativeIO_POSIX_POSIX_FADV_SEQUENTIAL:
+      return POSIX_FADV_SEQUENTIAL;
+      break;
+    case org_apache_hadoop_io_nativeio_NativeIO_POSIX_POSIX_FADV_WILLNEED:
+      return POSIX_FADV_WILLNEED;
+      break;
+    case org_apache_hadoop_io_nativeio_NativeIO_POSIX_POSIX_FADV_DONTNEED:
+      return POSIX_FADV_DONTNEED;
+      break;
+    case org_apache_hadoop_io_nativeio_NativeIO_POSIX_POSIX_FADV_NOREUSE:
+      return POSIX_FADV_NOREUSE;
+      break;
+    default:
+      return flag;
+  }
+#else
+  return flag;
+#endif
+}
+
+/*
  * private static native void initNative();
  *
  * We rely on this function rather than lazy initialization because
@@ -303,7 +336,7 @@ Java_org_apache_hadoop_io_nativeio_Nativ
   PASS_EXCEPTIONS(env);
 
   int err = 0;
-  if ((err = posix_fadvise(fd, (off_t)offset, (off_t)len, flags))) {
+  if ((err = posix_fadvise(fd, (off_t)offset, (off_t)len, map_fadvise_flag(flags)))) {
 #ifdef __FreeBSD__
     throw_ioe(env, errno);
 #else

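map_fadvise_flag above exists because the numeric values of the POSIX_FADV_*
constants are not guaranteed to match the Java-side constants on every
platform. A small illustrative caller, assuming it sits inside NativeIO.c so
the generated JNI header constants and the static helper are in scope
(advise_willneed is a hypothetical name, not part of this patch):

    /* Hint the kernel that `len` bytes at `offset` will be read soon. */
    static int advise_willneed(int fd, off_t offset, off_t len) {
      int flag = map_fadvise_flag(
          org_apache_hadoop_io_nativeio_NativeIO_POSIX_POSIX_FADV_WILLNEED);
      return posix_fadvise(fd, offset, len, flag);
    }
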
Modified: hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
URL: http://svn.apache.org/viewvc/hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c?rev=1618700&r1=1618699&r2=1618700&view=diff
==============================================================================
--- hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c (original)
+++ hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c Mon Aug 18 18:41:31 2014
@@ -34,6 +34,10 @@
 
 #include "bulk_crc32.h"
 
+#define MBYTE 1048576
+#define MIN(X,Y) ((X) < (Y) ? (X) : (Y))
+#define MAX(X,Y) ((X) > (Y) ? (X) : (Y))
+
 static void throw_checksum_exception(JNIEnv *env,
     uint32_t got_crc, uint32_t expected_crc,
     jstring j_filename, jlong pos) {
@@ -113,12 +117,12 @@ static int convert_java_crc_type(JNIEnv 
   }
 }
 
-JNIEXPORT void JNICALL Java_org_apache_hadoop_util_NativeCrc32_nativeVerifyChunkedSums
+JNIEXPORT void JNICALL Java_org_apache_hadoop_util_NativeCrc32_nativeComputeChunkedSums
   (JNIEnv *env, jclass clazz,
     jint bytes_per_checksum, jint j_crc_type,
     jobject j_sums, jint sums_offset,
     jobject j_data, jint data_offset, jint data_len,
-    jstring j_filename, jlong base_pos)
+    jstring j_filename, jlong base_pos, jboolean verify)
 {
   uint8_t *sums_addr;
   uint8_t *data_addr;
@@ -162,19 +166,97 @@ JNIEXPORT void JNICALL Java_org_apache_h
   if (crc_type == -1) return; // exception already thrown
 
   // Setup complete. Actually verify checksums.
-  ret = bulk_verify_crc(data, data_len, sums, crc_type,
-                            bytes_per_checksum, &error_data);
-  if (likely(ret == CHECKSUMS_VALID)) {
+  ret = bulk_crc(data, data_len, sums, crc_type,
+                            bytes_per_checksum, verify ? &error_data : NULL);
+  if (likely((verify && ret == CHECKSUMS_VALID) || (!verify && ret == 0))) {
     return;
-  } else if (unlikely(ret == INVALID_CHECKSUM_DETECTED)) {
+  } else if (unlikely(verify && ret == INVALID_CHECKSUM_DETECTED)) {
     long pos = base_pos + (error_data.bad_data - data);
     throw_checksum_exception(
       env, error_data.got_crc, error_data.expected_crc,
       j_filename, pos);
   } else {
     THROW(env, "java/lang/AssertionError",
-      "Bad response code from native bulk_verify_crc");
+      "Bad response code from native bulk_crc");
+  }
+}
+
+JNIEXPORT void JNICALL Java_org_apache_hadoop_util_NativeCrc32_nativeComputeChunkedSumsByteArray
+  (JNIEnv *env, jclass clazz,
+    jint bytes_per_checksum, jint j_crc_type,
+    jarray j_sums, jint sums_offset,
+    jarray j_data, jint data_offset, jint data_len,
+    jstring j_filename, jlong base_pos, jboolean verify)
+{
+  uint8_t *sums_addr;
+  uint8_t *data_addr;
+  uint32_t *sums;
+  uint8_t *data;
+  int crc_type;
+  crc32_error_t error_data;
+  int ret;
+  int numChecksumsPerIter;
+  int checksumNum;
+
+  if (unlikely(!j_sums || !j_data)) {
+    THROW(env, "java/lang/NullPointerException",
+      "input byte arrays must not be null");
+    return;
   }
+  if (unlikely(sums_offset < 0 || data_offset < 0 || data_len < 0)) {
+    THROW(env, "java/lang/IllegalArgumentException",
+      "bad offsets or lengths");
+    return;
+  }
+  if (unlikely(bytes_per_checksum <= 0)) {
+    THROW(env, "java/lang/IllegalArgumentException",
+      "invalid bytes_per_checksum");
+    return;
+  }
+
+  // Convert to correct internal C constant for CRC type
+  crc_type = convert_java_crc_type(env, j_crc_type);
+  if (crc_type == -1) return; // exception already thrown
+
+  numChecksumsPerIter = MAX(1, MBYTE / bytes_per_checksum);
+  checksumNum = 0;
+  while (checksumNum * bytes_per_checksum < data_len) {
+    // Convert byte arrays to C pointers
+    sums_addr = (*env)->GetPrimitiveArrayCritical(env, j_sums, NULL);
+    data_addr = (*env)->GetPrimitiveArrayCritical(env, j_data, NULL);
+
+    if (unlikely(!sums_addr || !data_addr)) {
+      if (data_addr) (*env)->ReleasePrimitiveArrayCritical(env, j_data, data_addr, 0);
+      if (sums_addr) (*env)->ReleasePrimitiveArrayCritical(env, j_sums, sums_addr, 0);
+      THROW(env, "java/lang/OutOfMemoryError",
+        "not enough memory for byte arrays in JNI code");
+      return;
+    }
+
+    sums = (uint32_t *)(sums_addr + sums_offset) + checksumNum;
+    data = data_addr + data_offset + checksumNum * bytes_per_checksum;
+
+    // Setup complete. Actually verify checksums.
+    ret = bulk_crc(data, MIN(numChecksumsPerIter * bytes_per_checksum,
+                             data_len - checksumNum * bytes_per_checksum),
+                   sums, crc_type, bytes_per_checksum, verify ? &error_data : NULL);
+    (*env)->ReleasePrimitiveArrayCritical(env, j_data, data_addr, 0);
+    (*env)->ReleasePrimitiveArrayCritical(env, j_sums, sums_addr, 0);
+    if (unlikely(verify && ret == INVALID_CHECKSUM_DETECTED)) {
+      long pos = base_pos + (error_data.bad_data - data) + checksumNum *
+        bytes_per_checksum;
+      throw_checksum_exception(
+        env, error_data.got_crc, error_data.expected_crc,
+        j_filename, pos);
+      return;
+    } else if (unlikely((verify && ret != CHECKSUMS_VALID) || (!verify && ret != 0))) {
+      THROW(env, "java/lang/AssertionError",
+        "Bad response code from native bulk_crc");
+      return;
+    }
+    checksumNum += numChecksumsPerIter;
+  }
+
 }
 
 /**

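The byte-array variant above checksums at most about one megabyte per
GetPrimitiveArrayCritical window, because a JNI critical section can delay the
garbage collector; releasing and re-acquiring the arrays between chunks bounds
that pause. The slice arithmetic, restated as a self-contained sketch
(checksums_per_pass is an illustrative name):

    #define MBYTE 1048576
    #define MAX(X,Y) ((X) > (Y) ? (X) : (Y))

    /* Checksums handled per pinned-array pass. With bytes_per_checksum = 512
     * this is 2048 checksums, i.e. exactly 1 MB of data per pass. */
    static int checksums_per_pass(int bytes_per_checksum) {
      return MAX(1, MBYTE / bytes_per_checksum);
    }
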
Modified: hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/bulk_crc32.c
URL: http://svn.apache.org/viewvc/hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/bulk_crc32.c?rev=1618700&r1=1618699&r2=1618700&view=diff
==============================================================================
--- hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/bulk_crc32.c (original)
+++ hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/bulk_crc32.c Mon Aug 18 18:41:31 2014
@@ -55,40 +55,23 @@ static void pipelined_crc32c(uint32_t *c
 static int cached_cpu_supports_crc32; // initialized by constructor below
 static uint32_t crc32c_hardware(uint32_t crc, const uint8_t* data, size_t length);
 
-int bulk_calculate_crc(const uint8_t *data, size_t data_len,
-                    uint32_t *sums, int checksum_type,
-                    int bytes_per_checksum) {
-  uint32_t crc;
-  crc_update_func_t crc_update_func;
-
-  switch (checksum_type) {
-    case CRC32_ZLIB_POLYNOMIAL:
-      crc_update_func = crc32_zlib_sb8;
-      break;
-    case CRC32C_POLYNOMIAL:
-      crc_update_func = crc32c_sb8;
-      break;
-    default:
-      return -EINVAL;
-      break;
+static inline int store_or_verify(uint32_t *sums, uint32_t crc,
+                                   int is_verify) {
+  if (!is_verify) {
+    *sums = crc;
+    return 1;
+  } else {
+    return crc == *sums;
   }
-  while (likely(data_len > 0)) {
-    int len = likely(data_len >= bytes_per_checksum) ? bytes_per_checksum : data_len;
-    crc = CRC_INITIAL_VAL;
-    crc = crc_update_func(crc, data, len);
-    *sums = ntohl(crc_val(crc));
-    data += len;
-    data_len -= len;
-    sums++;
-  }
-  return 0;
 }
 
-int bulk_verify_crc(const uint8_t *data, size_t data_len,
-                    const uint32_t *sums, int checksum_type,
+int bulk_crc(const uint8_t *data, size_t data_len,
+                    uint32_t *sums, int checksum_type,
                     int bytes_per_checksum,
                     crc32_error_t *error_info) {
 
+  int is_verify = error_info != NULL;
+
 #ifdef USE_PIPELINED
   uint32_t crc1, crc2, crc3;
   int n_blocks = data_len / bytes_per_checksum;
@@ -112,7 +95,7 @@ int bulk_verify_crc(const uint8_t *data,
       }
       break;
     default:
-      return INVALID_CHECKSUM_TYPE;
+      return is_verify ? INVALID_CHECKSUM_TYPE : -EINVAL;
   }
 
 #ifdef USE_PIPELINED
@@ -122,16 +105,15 @@ int bulk_verify_crc(const uint8_t *data,
       crc1 = crc2 = crc3 = CRC_INITIAL_VAL;
       pipelined_crc32c(&crc1, &crc2, &crc3, data, bytes_per_checksum, 3);
 
-      crc = ntohl(crc_val(crc1));
-      if ((crc = ntohl(crc_val(crc1))) != *sums)
+      if (unlikely(!store_or_verify(sums, (crc = ntohl(crc_val(crc1))), is_verify)))
         goto return_crc_error;
       sums++;
       data += bytes_per_checksum;
-      if ((crc = ntohl(crc_val(crc2))) != *sums)
+      if (unlikely(!store_or_verify(sums, (crc = ntohl(crc_val(crc2))), is_verify)))
         goto return_crc_error;
       sums++;
       data += bytes_per_checksum;
-      if ((crc = ntohl(crc_val(crc3))) != *sums)
+      if (unlikely(!store_or_verify(sums, (crc = ntohl(crc_val(crc3))), is_verify)))
         goto return_crc_error;
       sums++;
       data += bytes_per_checksum;
@@ -143,12 +125,12 @@ int bulk_verify_crc(const uint8_t *data,
       crc1 = crc2 = crc3 = CRC_INITIAL_VAL;
       pipelined_crc32c(&crc1, &crc2, &crc3, data, bytes_per_checksum, n_blocks);
 
-      if ((crc = ntohl(crc_val(crc1))) != *sums)
+      if (unlikely(!store_or_verify(sums, (crc = ntohl(crc_val(crc1))), is_verify)))
         goto return_crc_error;
       data += bytes_per_checksum;
       sums++;
       if (n_blocks == 2) {
-        if ((crc = ntohl(crc_val(crc2))) != *sums)
+        if (unlikely(!store_or_verify(sums, (crc = ntohl(crc_val(crc2))), is_verify)))
           goto return_crc_error;
         sums++;
         data += bytes_per_checksum;
@@ -160,10 +142,10 @@ int bulk_verify_crc(const uint8_t *data,
       crc1 = crc2 = crc3 = CRC_INITIAL_VAL;
       pipelined_crc32c(&crc1, &crc2, &crc3, data, remainder, 1);
 
-      if ((crc = ntohl(crc_val(crc1))) != *sums)
+      if (unlikely(!store_or_verify(sums, (crc = ntohl(crc_val(crc1))), is_verify)))
         goto return_crc_error;
     }
-    return CHECKSUMS_VALID;
+    return is_verify ? CHECKSUMS_VALID : 0;
   }
 #endif
 
@@ -172,14 +154,14 @@ int bulk_verify_crc(const uint8_t *data,
     crc = CRC_INITIAL_VAL;
     crc = crc_update_func(crc, data, len);
     crc = ntohl(crc_val(crc));
-    if (unlikely(crc != *sums)) {
+    if (unlikely(!store_or_verify(sums, crc, is_verify))) {
       goto return_crc_error;
     }
     data += len;
     data_len -= len;
     sums++;
   }
-  return CHECKSUMS_VALID;
+  return is_verify ? CHECKSUMS_VALID : 0;
 
 return_crc_error:
   if (error_info != NULL) {

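store_or_verify folds the two modes into one predicate: in compute mode it
unconditionally stores the CRC and reports success, so the shared
return_crc_error path can only be reached while verifying. Its contract,
spelled out as assertions (a sketch that assumes visibility inside
bulk_crc32.c, since the helper is file-static):

    #include <assert.h>
    #include <stdint.h>

    static void store_or_verify_demo(void) {
      uint32_t slot = 0;
      assert(store_or_verify(&slot, 0xCAFEBABEu, 0) == 1); /* compute: stores */
      assert(slot == 0xCAFEBABEu);
      assert(store_or_verify(&slot, 0xCAFEBABEu, 1) == 1); /* verify: match */
      assert(store_or_verify(&slot, 0xDEADBEEFu, 1) == 0); /* verify: mismatch */
    }
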
Modified: hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/bulk_crc32.h
URL: http://svn.apache.org/viewvc/hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/bulk_crc32.h?rev=1618700&r1=1618699&r2=1618700&view=diff
==============================================================================
--- hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/bulk_crc32.h (original)
+++ hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/bulk_crc32.h Mon Aug 18 18:41:31 2014
@@ -42,49 +42,32 @@ typedef struct crc32_error {
 
 
 /**
- * Verify a buffer of data which is checksummed in chunks
- * of bytes_per_checksum bytes. The checksums are each 32 bits
- * and are stored in sequential indexes of the 'sums' array.
+ * Either calculates checksums for or verifies a buffer of data.
+ * Checksums are performed in chunks of bytes_per_checksum bytes. The checksums
+ * are each 32 bits and are stored in sequential indexes of the 'sums' array.
+ * Verification is done (sums is assumed to already contain the checksums)
+ * if error_info is non-null; otherwise calculation is done and checksums
+ * are stored into sums.
  *
  * @param data                  The data to checksum
  * @param dataLen               Length of the data buffer
- * @param sums                  (out param) buffer to write checksums into.
- *                              It must contain at least dataLen * 4 bytes.
+ * @param sums                  (out param) buffer to write checksums into or
+ *                              where checksums are already stored.
+ *                              It must contain at least
+ *                              ((dataLen - 1) / bytes_per_checksum + 1) * 4 bytes.
  * @param checksum_type         One of the CRC32 algorithm constants defined 
  *                              above
  * @param bytes_per_checksum    How many bytes of data to process per checksum.
- * @param error_info            If non-NULL, will be filled in if an error
- *                              is detected
+ * @param error_info            If non-NULL, verification will be performed and
+ *                              it will be filled in if an error
+ *                              is detected. Otherwise calculation is performed.
  *
  * @return                      0 for success, non-zero for an error, result codes
- *                              for which are defined above
+ *                              for verification are defined above
  */
-extern int bulk_verify_crc(const uint8_t *data, size_t data_len,
-    const uint32_t *sums, int checksum_type,
+extern int bulk_crc(const uint8_t *data, size_t data_len,
+    uint32_t *sums, int checksum_type,
     int bytes_per_checksum,
     crc32_error_t *error_info);
 
-/**
- * Calculate checksums for some data.
- *
- * The checksums are each 32 bits and are stored in sequential indexes of the
- * 'sums' array.
- *
- * This function is not (yet) optimized.  It is provided for testing purposes
- * only.
- *
- * @param data                  The data to checksum
- * @param dataLen               Length of the data buffer
- * @param sums                  (out param) buffer to write checksums into.
- *                              It must contain at least dataLen * 4 bytes.
- * @param checksum_type         One of the CRC32 algorithm constants defined 
- *                              above
- * @param bytesPerChecksum      How many bytes of data to process per checksum.
- *
- * @return                      0 for success, non-zero for an error
- */
-int bulk_calculate_crc(const uint8_t *data, size_t data_len,
-                    uint32_t *sums, int checksum_type,
-                    int bytes_per_checksum);
-
 #endif

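The tightened sums-buffer requirement above, ((dataLen - 1) / bytes_per_checksum
+ 1) * 4 bytes, is one 32-bit CRC per chunk with the chunk count rounded up (the
old "dataLen * 4 bytes" wording overstated it). A sizing sketch using the
equivalent round-up form for positive lengths (alloc_sums is an illustrative
helper):

    #include <stdint.h>
    #include <stdlib.h>

    /* One uint32_t checksum per bytes_per_checksum chunk, rounding up. */
    static uint32_t *alloc_sums(size_t data_len, int bytes_per_checksum) {
      size_t n_chunks = (data_len + bytes_per_checksum - 1) / bytes_per_checksum;
      return malloc(n_chunks * sizeof(uint32_t));
    }
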
Modified: hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/native/src/test/org/apache/hadoop/util/test_bulk_crc32.c
URL: http://svn.apache.org/viewvc/hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/native/src/test/org/apache/hadoop/util/test_bulk_crc32.c?rev=1618700&r1=1618699&r2=1618700&view=diff
==============================================================================
--- hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/native/src/test/org/apache/hadoop/util/test_bulk_crc32.c (original)
+++ hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/main/native/src/test/org/apache/hadoop/util/test_bulk_crc32.c Mon Aug 18 18:41:31 2014
@@ -48,9 +48,9 @@ static int testBulkVerifyCrc(int dataLen
   sums = calloc(sizeof(uint32_t),
                 (dataLen + bytesPerChecksum - 1) / bytesPerChecksum);
 
-  EXPECT_ZERO(bulk_calculate_crc(data, dataLen, sums, crcType,
-                                 bytesPerChecksum));
-  EXPECT_ZERO(bulk_verify_crc(data, dataLen, sums, crcType,
+  EXPECT_ZERO(bulk_crc(data, dataLen, sums, crcType,
+                                 bytesPerChecksum, NULL));
+  EXPECT_ZERO(bulk_crc(data, dataLen, sums, crcType,
                             bytesPerChecksum, &errorData));
   free(data);
   free(sums);

Modified: hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/site/apt/CommandsManual.apt.vm
URL: http://svn.apache.org/viewvc/hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/site/apt/CommandsManual.apt.vm?rev=1618700&r1=1618699&r2=1618700&view=diff
==============================================================================
--- hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/site/apt/CommandsManual.apt.vm (original)
+++ hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/site/apt/CommandsManual.apt.vm Mon Aug 18 18:41:31 2014
@@ -81,36 +81,15 @@ User Commands
 
 * <<<archive>>>
 
-   Creates a hadoop archive. More information can be found at Hadoop
-   Archives.
-
-   Usage: <<<hadoop archive -archiveName NAME <src>* <dest> >>>
-
-*-------------------+-------------------------------------------------------+
-||COMMAND_OPTION    ||                   Description
-*-------------------+-------------------------------------------------------+
-| -archiveName NAME |  Name of the archive to be created.
-*-------------------+-------------------------------------------------------+
-| src               | Filesystem pathnames which work as usual with regular
-                    | expressions.
-*-------------------+-------------------------------------------------------+
-| dest              | Destination directory which would contain the archive.
-*-------------------+-------------------------------------------------------+
+   Creates a hadoop archive. More information can be found at
+   {{{../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/HadoopArchives.html}
+   Hadoop Archives Guide}}.
 
 * <<<distcp>>>
 
    Copy file or directories recursively. More information can be found at
-   Hadoop DistCp Guide.
-
-   Usage: <<<hadoop distcp <srcurl> <desturl> >>>
-
-*-------------------+--------------------------------------------+
-||COMMAND_OPTION    || Description
-*-------------------+--------------------------------------------+
-| srcurl            | Source Url
-*-------------------+--------------------------------------------+
-| desturl           | Destination Url
-*-------------------+--------------------------------------------+
+   {{{../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/DistCp.html}
+   Hadoop DistCp Guide}}.
 
 * <<<fs>>>
 
@@ -142,103 +121,21 @@ User Commands
 
 * <<<job>>>
 
-   Command to interact with Map Reduce Jobs.
-
-   Usage: <<<hadoop job [GENERIC_OPTIONS] [-submit <job-file>] | [-status <job-id>] | [-counter <job-id> <group-name> <counter-name>] | [-kill <job-id>] | [-events <job-id> <from-event-#> <#-of-events>] | [-history [all] <jobOutputDir>] | [-list [all]] | [-kill-task <task-id>] | [-fail-task <task-id>] | [-set-priority <job-id> <priority>]>>>
-
-*------------------------------+---------------------------------------------+
-|| COMMAND_OPTION              || Description
-*------------------------------+---------------------------------------------+
-| -submit <job-file>           | Submits the job.
-*------------------------------+---------------------------------------------+
-| -status <job-id>             | Prints the map and reduce completion
-                               | percentage and all job counters.
-*------------------------------+---------------------------------------------+
-| -counter <job-id> <group-name> <counter-name> | Prints the counter value.
-*------------------------------+---------------------------------------------+
-| -kill <job-id>               | Kills the job.
-*------------------------------+---------------------------------------------+
-| -events <job-id> <from-event-#> <#-of-events> | Prints the events' details
-                               | received by jobtracker for the given range.
-*------------------------------+---------------------------------------------+
-| -history [all]<jobOutputDir> | Prints job details, failed and killed tip
-                               | details.  More details about the job such as
-                               | successful tasks and task attempts made for
-                               | each task can be viewed by specifying the [all]
-                               | option.
-*------------------------------+---------------------------------------------+
-| -list [all]                  | Displays jobs which are yet to complete.
-                               | <<<-list all>>> displays all jobs.
-*------------------------------+---------------------------------------------+
-| -kill-task <task-id>         | Kills the task. Killed tasks are NOT counted
-                               | against failed attempts.
-*------------------------------+---------------------------------------------+
-| -fail-task <task-id>         | Fails the task. Failed tasks are counted
-                               | against failed attempts.
-*------------------------------+---------------------------------------------+
-| -set-priority <job-id> <priority> | Changes the priority of the job. Allowed
-                               | priority values are VERY_HIGH, HIGH, NORMAL,
-                               | LOW, VERY_LOW
-*------------------------------+---------------------------------------------+
+   Deprecated. Use
+   {{{../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredCommands.html#job}
+   <<<mapred job>>>}} instead.
 
 * <<<pipes>>>
 
-   Runs a pipes job.
-
-   Usage: <<<hadoop pipes [-conf <path>] [-jobconf <key=value>, <key=value>,
-   ...] [-input <path>] [-output <path>] [-jar <jar file>] [-inputformat
-   <class>] [-map <class>] [-partitioner <class>] [-reduce <class>] [-writer
-   <class>] [-program <executable>] [-reduces <num>]>>>
- 
-*----------------------------------------+------------------------------------+
-|| COMMAND_OPTION                        || Description
-*----------------------------------------+------------------------------------+
-| -conf <path>                           | Configuration for job
-*----------------------------------------+------------------------------------+
-| -jobconf <key=value>, <key=value>, ... | Add/override configuration for job
-*----------------------------------------+------------------------------------+
-| -input <path>                          | Input directory
-*----------------------------------------+------------------------------------+
-| -output <path>                         | Output directory
-*----------------------------------------+------------------------------------+
-| -jar <jar file>                        | Jar filename
-*----------------------------------------+------------------------------------+
-| -inputformat <class>                   | InputFormat class
-*----------------------------------------+------------------------------------+
-| -map <class>                           | Java Map class
-*----------------------------------------+------------------------------------+
-| -partitioner <class>                   | Java Partitioner
-*----------------------------------------+------------------------------------+
-| -reduce <class>                        | Java Reduce class
-*----------------------------------------+------------------------------------+
-| -writer <class>                        | Java RecordWriter
-*----------------------------------------+------------------------------------+
-| -program <executable>                  | Executable URI
-*----------------------------------------+------------------------------------+
-| -reduces <num>                         | Number of reduces
-*----------------------------------------+------------------------------------+
+   Deprecated. Use
+   {{{../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredCommands.html#pipes}
+   <<<mapred pipes>>>}} instead.
 
 * <<<queue>>>
 
-   command to interact and view Job Queue information
-
-   Usage: <<<hadoop queue [-list] | [-info <job-queue-name> [-showJobs]] | [-showacls]>>>
-
-*-----------------+-----------------------------------------------------------+
-|| COMMAND_OPTION || Description
-*-----------------+-----------------------------------------------------------+
-| -list           | Gets list of Job Queues configured in the system.
-                  | Along with scheduling information associated with the job queues.
-*-----------------+-----------------------------------------------------------+
-| -info <job-queue-name> [-showJobs] | Displays the job queue information and
-                  | associated scheduling information of particular job queue.
-                  | If <<<-showJobs>>> options is present a list of jobs
-                  | submitted to the particular job queue is displayed.
-*-----------------+-----------------------------------------------------------+
-| -showacls       | Displays the queue name and associated queue operations
-                  | allowed for the current user. The list consists of only
-                  | those queues to which the user has access.
-*-----------------+-----------------------------------------------------------+
+   Deprecated. Use
+   {{{../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredCommands.html#queue}
+   <<<mapred queue>>>}} instead.
 
 * <<<version>>>
 
@@ -314,35 +211,6 @@ Administration Commands
    Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#dfsadmin}
    <<<hdfs dfsadmin>>>}} instead.
 
-* <<<mradmin>>>
-
-   Runs MR admin client
-
-   Usage: <<<hadoop mradmin [ GENERIC_OPTIONS ] [-refreshQueueAcls]>>>
-
-*-------------------+-----------------------------------------------------------+
-|| COMMAND_OPTION   || Description
-*-------------------+-----------------------------------------------------------+
-| -refreshQueueAcls | Refresh the queue acls used by hadoop, to check access
-                    | during submissions and administration of the job by the
-                    | user. The properties present in mapred-queue-acls.xml is
-                    | reloaded by the queue manager.
-*-------------------+-----------------------------------------------------------+
-
-* <<<jobtracker>>>
-
-   Runs the MapReduce job Tracker node.
-
-   Usage: <<<hadoop jobtracker [-dumpConfiguration]>>>
-
-*--------------------+-----------------------------------------------------------+
-|| COMMAND_OPTION    || Description
-*--------------------+-----------------------------------------------------------+
-| -dumpConfiguration | Dumps the configuration used by the JobTracker alongwith
-                     | queue configuration in JSON format into Standard output
-                     | used by the jobtracker and exits.
-*--------------------+-----------------------------------------------------------+
-
 * <<<namenode>>>
 
    Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#namenode}
@@ -352,9 +220,3 @@ Administration Commands
 
    Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#secondarynamenode}
    <<<hdfs secondarynamenode>>>}} instead.
-
-* <<<tasktracker>>>
-
-   Runs a MapReduce task Tracker node.
-
-   Usage: <<<hadoop tasktracker>>>

Modified: hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/site/apt/FileSystemShell.apt.vm
URL: http://svn.apache.org/viewvc/hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/site/apt/FileSystemShell.apt.vm?rev=1618700&r1=1618699&r2=1618700&view=diff
==============================================================================
--- hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/site/apt/FileSystemShell.apt.vm (original)
+++ hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/site/apt/FileSystemShell.apt.vm Mon Aug 18 18:41:31 2014
@@ -138,7 +138,7 @@ copyToLocal
 
 count
 
-   Usage: <<<hdfs dfs -count [-q] <paths> >>>
+   Usage: <<<hdfs dfs -count [-q] [-h] <paths> >>>
 
    Count the number of directories, files and bytes under the paths that match
    the specified file pattern.  The output columns with -count are: DIR_COUNT,
@@ -147,12 +147,16 @@ count
   The output columns with -count -q are: QUOTA, REMAINING_QUOTA, SPACE_QUOTA,
    REMAINING_SPACE_QUOTA, DIR_COUNT, FILE_COUNT, CONTENT_SIZE, FILE_NAME
 
+   The -h option shows sizes in a human-readable format.
+
    Example:
 
      * <<<hdfs dfs -count hdfs://nn1.example.com/file1 hdfs://nn2.example.com/file2>>>
 
      * <<<hdfs dfs -count -q hdfs://nn1.example.com/file1>>>
 
+     * <<<hdfs dfs -count -q -h hdfs://nn1.example.com/file1>>>
+
    Exit Code:
 
    Returns 0 on success and -1 on error.

Modified: hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/site/apt/NativeLibraries.apt.vm
URL: http://svn.apache.org/viewvc/hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/site/apt/NativeLibraries.apt.vm?rev=1618700&r1=1618699&r2=1618700&view=diff
==============================================================================
--- hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/site/apt/NativeLibraries.apt.vm (original)
+++ hadoop/common/branches/fs-encryption/hadoop-common-project/hadoop-common/src/site/apt/NativeLibraries.apt.vm Mon Aug 18 18:41:31 2014
@@ -30,6 +30,8 @@ Native Libraries Guide
    compression" could refer to all *.so's you need to compile that are
    specifically related to compression. Currently, however, this document
    only addresses the native hadoop library (<<<libhadoop.so>>>).
+   Documentation for the libhdfs library (<<<libhdfs.so>>>) can be found
+   {{{../hadoop-hdfs/LibHdfs.html}here}}.
 
 * Native Hadoop Library
 
@@ -54,24 +56,28 @@ Native Libraries Guide
 
     [[4]] Install the compression codec development packages (>zlib-1.2,
        >gzip-1.2):
-          + If you download the library, install one or more development
+
+          * If you download the library, install one or more development
             packages - whichever compression codecs you want to use with
             your deployment.
-          + If you build the library, it is mandatory to install both
+
+          * If you build the library, it is mandatory to install both
             development packages.
 
     [[5]] Check the runtime log files.
 
 * Components
 
-   The native hadoop library includes two components, the zlib and gzip
-   compression codecs:
+   The native hadoop library includes various components:
 
-     * zlib
+   * Compression Codecs (bzip2, lz4, snappy, zlib)
 
-     * gzip
+   * Native IO utilities for {{{../hadoop-hdfs/ShortCircuitLocalReads.html}
+     HDFS Short-Circuit Local Reads}} and
+     {{{../hadoop-hdfs/CentralizedCacheManagement.html}Centralized Cache
+     Management in HDFS}}
 
-   The native hadoop library is imperative for gzip to work.
+   * CRC32 checksum implementation
 
 * Supported Platforms
 


