hadoop-common-commits mailing list archives

From szets...@apache.org
Subject svn commit: r1432796 - in /hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common: ./ dev-support/ src/main/docs/ src/main/java/ src/main/java/org/apache/hadoop/fs/ src/main/java/org/apache/hadoop/fs/ftp/ src/main/java/org/apache/hadoop/f...
Date Mon, 14 Jan 2013 03:44:41 GMT
Author: szetszwo
Date: Mon Jan 14 03:44:35 2013
New Revision: 1432796

URL: http://svn.apache.org/viewvc?rev=1432796&view=rev
Log:
Merge r1426019 through r1432788 from trunk.

Added:
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/IdGenerator.java
      - copied unchanged from r1432788, hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/IdGenerator.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/SequentialNumber.java
      - copied unchanged from r1432788, hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/SequentialNumber.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/proto/Security.proto
      - copied unchanged from r1432788, hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/proto/Security.proto
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCompressionStreamReuse.java
      - copied unchanged from r1432788, hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCompressionStreamReuse.java
Modified:
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/CHANGES.txt   (contents, props changed)
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/pom.xml
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/docs/   (props changed)
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/   (props changed)
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ftp/FTPFileSystem.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/local/RawLocalFs.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/FsPermission.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/GzipCodec.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/core/   (props changed)
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextPermissionBase.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileStatus.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFSFileContextMainOperations.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFileSystemPermission.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/resources/kdc/killKdc.sh

Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1432796&r1=1432795&r2=1432796&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/CHANGES.txt (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/CHANGES.txt Mon Jan 14 03:44:35 2013
@@ -148,6 +148,8 @@ Trunk (Unreleased)
 
   BUG FIXES
 
+    HADOOP-8419. Fixed GzipCodec NPE during reset for IBM JDK. (Yu Li via eyang)
+
     HADOOP-9041. FsUrlStreamHandlerFactory could cause an infinite loop in
     FileSystem initialization. (Yanbo Liang and Radim Kolar via llu)
 
@@ -309,6 +311,8 @@ Trunk (Unreleased)
     HADOOP-8957 AbstractFileSystem#IsValidName should be overridden for
     embedded file systems like ViewFs (Chris Nauroth via Sanjay Radia)
 
+    HADOOP-9139. Improve killKdc.sh. (Ivan A. Veselovsky via bobby)
+
   OPTIMIZATIONS
 
     HADOOP-7761. Improve the performance of raw comparisons. (todd)
@@ -420,6 +424,15 @@ Release 2.0.3-alpha - Unreleased 
 
     HADOOP-8427. Convert Forrest docs to APT, incremental. (adi2 via tucu)
 
+    HADOOP-9173. Add security token protobuf definition to common and
+    use it in hdfs. (suresh)
+
+    HADOOP-9119. Add test to FileSystemContractBaseTest to verify integrity
+    of overwritten files. (Steve Loughran via suresh)
+
+    HADOOP-9192. Move token related request/response messages to common.
+    (suresh)
+
   OPTIMIZATIONS
 
     HADOOP-8866. SampleQuantiles#query is O(N^2) instead of O(N). (Andrew Wang
@@ -516,6 +529,14 @@ Release 2.0.3-alpha - Unreleased 
     HADOOP-9153. Support createNonRecursive in ViewFileSystem.
     (Sandy Ryza via tomwhite)
 
+    HADOOP-9181. Set daemon flag for HttpServer's QueuedThreadPool.
+    (Liang Xie via suresh)
+
+    HADOOP-9155. FsPermission should have different default values, 777 for
+    directories and 666 for files. (Binglin Chang via atm)
+
+    HADOOP-9183. Potential deadlock in ActiveStandbyElector. (tomwhite)
+
 Release 2.0.2-alpha - 2012-09-07 
 
   INCOMPATIBLE CHANGES

Propchange: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/CHANGES.txt
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt:r1426019-1432788

Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml?rev=1432796&r1=1432795&r2=1432796&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml Mon Jan 14 03:44:35 2013
@@ -282,6 +282,10 @@
       <!-- protobuf generated code -->
       <Class name="~org\.apache\.hadoop\.ha\.proto\.ZKFCProtocolProtos.*"/>
     </Match>
+    <Match>
+      <!-- protobuf generated code -->
+      <Class name="~org\.apache\.hadoop\.security\.proto\.SecurityProtos.*"/>
+    </Match>
 
     <!--
        Manually checked, misses child thread manually syncing on parent's intrinsic lock.

Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/pom.xml
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/pom.xml?rev=1432796&r1=1432795&r2=1432796&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/pom.xml (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/pom.xml Mon Jan 14 03:44:35 2013
@@ -381,6 +381,7 @@
                 <argument>src/main/proto/RpcHeader.proto</argument>
                 <argument>src/main/proto/ZKFCProtocol.proto</argument>
                 <argument>src/main/proto/ProtobufRpcEngine.proto</argument>
+                <argument>src/main/proto/Security.proto</argument>
               </arguments>
             </configuration>
           </execution>

Propchange: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/docs/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs:r1426019-1432788

Propchange: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java:r1426019-1432788

Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java?rev=1432796&r1=1432795&r2=1432796&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java Mon Jan 14 03:44:35 2013
@@ -172,7 +172,25 @@ import org.apache.hadoop.util.ShutdownHo
 public final class FileContext {
   
   public static final Log LOG = LogFactory.getLog(FileContext.class);
+  /**
+   * Default permission for directories and symlinks.
+   * In previous versions, this default permission was also used to
+   * create files, so newly created files ended up with ugo+x permission.
+   * See HADOOP-9155 for details.
+   * Two new constants were added to solve this: use
+   * {@link FileContext#DIR_DEFAULT_PERM} for directories and
+   * {@link FileContext#FILE_DEFAULT_PERM} for files.
+   * This constant is kept for backward compatibility.
+   */
   public static final FsPermission DEFAULT_PERM = FsPermission.getDefault();
+  /**
+   * Default permission for directory
+   */
+  public static final FsPermission DIR_DEFAULT_PERM = FsPermission.getDirDefault();
+  /**
+   * Default permission for file
+   */
+  public static final FsPermission FILE_DEFAULT_PERM = FsPermission.getFileDefault();
 
   /**
    * Priority of the FileContext shutdown hook.
@@ -656,7 +674,7 @@ public final class FileContext {
     CreateOpts.Perms permOpt = 
       (CreateOpts.Perms) CreateOpts.getOpt(CreateOpts.Perms.class, opts);
     FsPermission permission = (permOpt != null) ? permOpt.getValue() :
-                                      FsPermission.getDefault();
+                                      FILE_DEFAULT_PERM;
     permission = permission.applyUMask(umask);
 
     final CreateOpts[] updatedOpts = 
@@ -704,7 +722,7 @@ public final class FileContext {
       IOException {
     final Path absDir = fixRelativePart(dir);
     final FsPermission absFerms = (permission == null ? 
-          FsPermission.getDefault() : permission).applyUMask(umask);
+          FsPermission.getDirDefault() : permission).applyUMask(umask);
     new FSLinkResolver<Void>() {
       @Override
       public Void next(final AbstractFileSystem fs, final Path p) 
@@ -2157,7 +2175,7 @@ public final class FileContext {
       FileStatus fs = FileContext.this.getFileStatus(qSrc);
       if (fs.isDirectory()) {
         checkDependencies(qSrc, qDst);
-        mkdir(qDst, FsPermission.getDefault(), true);
+        mkdir(qDst, FsPermission.getDirDefault(), true);
         FileStatus[] contents = listStatus(qSrc);
         for (FileStatus content : contents) {
           copy(makeQualified(content.getPath()), makeQualified(new Path(qDst,

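For illustration, a minimal standalone sketch of how the patched FileContext behaves; the paths and class name below are placeholders, not part of this commit:

    import java.util.EnumSet;
    import org.apache.hadoop.fs.CreateFlag;
    import org.apache.hadoop.fs.FileContext;
    import org.apache.hadoop.fs.Path;

    public class FileContextPermDemo {
      public static void main(String[] args) throws Exception {
        FileContext fc = FileContext.getLocalFSFileContext();
        // With a null permission, mkdir now applies the directory
        // default (777 & ~umask) rather than the old shared default.
        fc.mkdir(new Path("/tmp/fc-demo"), null, true);
        // create() without an explicit Perms option now starts from the
        // file default (666), so new files no longer carry ugo+x.
        fc.create(new Path("/tmp/fc-demo/f"),
            EnumSet.of(CreateFlag.CREATE)).close();
      }
    }
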
Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java?rev=1432796&r1=1432795&r2=1432796&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java Mon Jan 14 03:44:35 2013
@@ -79,8 +79,15 @@ public class FileStatus implements Writa
     this.blocksize = blocksize;
     this.modification_time = modification_time;
     this.access_time = access_time;
-    this.permission = (permission == null) ? 
-                      FsPermission.getDefault() : permission;
+    if (permission != null) {
+      this.permission = permission;
+    } else if (isdir) {
+      this.permission = FsPermission.getDirDefault();
+    } else if (symlink != null) {
+      this.permission = FsPermission.getDefault();
+    } else {
+      this.permission = FsPermission.getFileDefault();
+    }
     this.owner = (owner == null) ? "" : owner;
     this.group = (group == null) ? "" : group;
     this.symlink = symlink;
@@ -217,7 +224,7 @@ public class FileStatus implements Writa
    */
   protected void setPermission(FsPermission permission) {
     this.permission = (permission == null) ? 
-                      FsPermission.getDefault() : permission;
+                      FsPermission.getFileDefault() : permission;
   }
   
   /**

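A minimal sketch of the revised default selection; the constructor arguments below are illustrative only:

    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.Path;

    public class FileStatusPermDemo {
      public static void main(String[] args) {
        // Null permission: a directory status now defaults to 777 ...
        FileStatus dir = new FileStatus(0, true, 0, 0, 0, 0, null,
            null, null, null, new Path("/tmp/d"));
        // ... while a plain file status defaults to 666.
        FileStatus file = new FileStatus(1, false, 1, 64, 0, 0, null,
            null, null, null, new Path("/tmp/f"));
        System.out.println(dir.getPermission());  // rwxrwxrwx
        System.out.println(file.getPermission()); // rw-rw-rw-
      }
    }
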
Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java?rev=1432796&r1=1432795&r2=1432796&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java Mon Jan 14 03:44:35 2013
@@ -850,7 +850,7 @@ public abstract class FileSystem extends
                                             long blockSize,
                                             Progressable progress
                                             ) throws IOException {
-    return this.create(f, FsPermission.getDefault().applyUMask(
+    return this.create(f, FsPermission.getFileDefault().applyUMask(
         FsPermission.getUMask(getConf())), overwrite, bufferSize,
         replication, blockSize, progress);
   }
@@ -1030,7 +1030,7 @@ public abstract class FileSystem extends
       boolean overwrite,
       int bufferSize, short replication, long blockSize,
       Progressable progress) throws IOException {
-    return this.createNonRecursive(f, FsPermission.getDefault(),
+    return this.createNonRecursive(f, FsPermission.getFileDefault(),
         overwrite, bufferSize, replication, blockSize, progress);
   }
 
@@ -1866,7 +1866,7 @@ public abstract class FileSystem extends
    * Call {@link #mkdirs(Path, FsPermission)} with default permission.
    */
   public boolean mkdirs(Path f) throws IOException {
-    return mkdirs(f, FsPermission.getDefault());
+    return mkdirs(f, FsPermission.getDirDefault());
   }
 
   /**

Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ftp/FTPFileSystem.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ftp/FTPFileSystem.java?rev=1432796&r1=1432795&r2=1432796&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ftp/FTPFileSystem.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ftp/FTPFileSystem.java Mon Jan 14 03:44:35 2013
@@ -224,7 +224,7 @@ public class FTPFileSystem extends FileS
     }
     
     Path parent = absolute.getParent();
-    if (parent == null || !mkdirs(client, parent, FsPermission.getDefault())) {
+    if (parent == null || !mkdirs(client, parent, FsPermission.getDirDefault())) {
       parent = (parent == null) ? new Path("/") : parent;
       disconnect(client);
       throw new IOException("create(): Mkdirs failed to create: " + parent);
@@ -484,7 +484,7 @@ public class FTPFileSystem extends FileS
     if (!exists(client, absolute)) {
       Path parent = absolute.getParent();
       created = (parent == null || mkdirs(client, parent, FsPermission
-          .getDefault()));
+          .getDirDefault()));
       if (created) {
         String parentDir = parent.toUri().getPath();
         client.changeWorkingDirectory(parentDir);

Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/local/RawLocalFs.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/local/RawLocalFs.java?rev=1432796&r1=1432795&r2=1432796&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/local/RawLocalFs.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/local/RawLocalFs.java Mon Jan 14 03:44:35 2013
@@ -85,7 +85,7 @@ public class RawLocalFs extends Delegate
                             "system: "+target.toString());
     }
     if (createParent) {
-      mkdir(link.getParent(), FsPermission.getDefault(), true);
+      mkdir(link.getParent(), FsPermission.getDirDefault(), true);
     }
     // NB: Use createSymbolicLink in java.nio.file.Path once available
     try {

Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/FsPermission.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/FsPermission.java?rev=1432796&r1=1432795&r2=1432796&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/FsPermission.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/FsPermission.java Mon Jan 14 03:44:35 2013
@@ -275,12 +275,35 @@ public class FsPermission implements Wri
     conf.setInt(DEPRECATED_UMASK_LABEL, umask.toShort());
   }
 
-  /** Get the default permission. */
+  /**
+   * Get the default permission for directories and symlinks.
+   * In previous versions, this default permission was also used to
+   * create files, so newly created files ended up with ugo+x permission.
+   * See HADOOP-9155 for details.
+   * Two new methods were added to solve this: use
+   * {@link FsPermission#getDirDefault()} for directories and
+   * {@link FsPermission#getFileDefault()} for files.
+   * This method is kept for backward compatibility.
+   */
   public static FsPermission getDefault() {
     return new FsPermission((short)00777);
   }
 
   /**
+   * Get the default permission for directory.
+   */
+  public static FsPermission getDirDefault() {
+    return new FsPermission((short)00777);
+  }
+
+  /**
+   * Get the default permission for file.
+   */
+  public static FsPermission getFileDefault() {
+    return new FsPermission((short)00666);
+  }
+
+  /**
    * Create a FsPermission from a Unix symbolic permission string
    * @param unixSymbolicPermission e.g. "-rw-rw-rw-"
    */

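For illustration, the practical difference between the old shared default and the new pair once the umask is applied; a minimal standalone sketch, where the 0022 umask is just an example value:

    import org.apache.hadoop.fs.permission.FsPermission;

    public class DefaultPermDemo {
      public static void main(String[] args) {
        FsPermission umask = new FsPermission((short) 0022); // example umask
        // Directories keep the old 777 default; files now start at 666.
        System.out.println(FsPermission.getDirDefault().applyUMask(umask));  // rwxr-xr-x
        System.out.println(FsPermission.getFileDefault().applyUMask(umask)); // rw-r--r--
      }
    }
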
Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java?rev=1432796&r1=1432795&r2=1432796&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java Mon Jan 14 03:44:35 2013
@@ -613,7 +613,7 @@ public class ActiveStandbyElector implem
     // Unfortunately, the ZooKeeper constructor connects to ZooKeeper and
     // may trigger the Connected event immediately. So, if we register the
     // watcher after constructing ZooKeeper, we may miss that event. Instead,
-    // we construct the watcher first, and have it queue any events it receives
+    // we construct the watcher first, and have it block any events it receives
     // before we can set its ZooKeeper reference.
     WatcherWithClientRef watcher = new WatcherWithClientRef();
     ZooKeeper zk = new ZooKeeper(zkHostPort, zkSessionTimeout, watcher);
@@ -1002,19 +1002,17 @@ public class ActiveStandbyElector implem
     private CountDownLatch hasReceivedEvent = new CountDownLatch(1);
 
     /**
-     * If any events arrive before the reference to ZooKeeper is set,
-     * they get queued up and later forwarded when the reference is
-     * available.
+     * Latch used to wait until the reference to ZooKeeper is set.
      */
-    private final List<WatchedEvent> queuedEvents = Lists.newLinkedList();
+    private CountDownLatch hasSetZooKeeper = new CountDownLatch(1);
     
     private WatcherWithClientRef() {
     }
 
     private WatcherWithClientRef(ZooKeeper zk) {
-      this.zk = zk;
+      setZooKeeperRef(zk);
     }
-    
+
     /**
      * Waits for the next event from ZooKeeper to arrive.
      * 
@@ -1029,9 +1027,7 @@ public class ActiveStandbyElector implem
         if (!hasReceivedEvent.await(connectionTimeoutMs, TimeUnit.MILLISECONDS)) {
           LOG.error("Connection timed out: couldn't connect to ZooKeeper in "
               + connectionTimeoutMs + " milliseconds");
-          synchronized (this) {
-            zk.close();
-          }
+          zk.close();
           throw KeeperException.create(Code.CONNECTIONLOSS);
         }
       } catch (InterruptedException e) {
@@ -1041,29 +1037,18 @@ public class ActiveStandbyElector implem
       }
     }
 
-    private synchronized void setZooKeeperRef(ZooKeeper zk) {
+    private void setZooKeeperRef(ZooKeeper zk) {
       Preconditions.checkState(this.zk == null,
           "zk already set -- must be set exactly once");
       this.zk = zk;
-      
-      for (WatchedEvent e : queuedEvents) {
-        forwardEvent(e);
-      }
-      queuedEvents.clear();
+      hasSetZooKeeper.countDown();
     }
 
     @Override
-    public synchronized void process(WatchedEvent event) {
-      if (zk != null) {
-        forwardEvent(event);
-      } else {
-        queuedEvents.add(event);
-      }
-    }
-    
-    private void forwardEvent(WatchedEvent event) {
+    public void process(WatchedEvent event) {
       hasReceivedEvent.countDown();
       try {
+        hasSetZooKeeper.await(zkSessionTimeout, TimeUnit.MILLISECONDS);
         ActiveStandbyElector.this.processWatchEvent(
             zk, event);
       } catch (Throwable t) {

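The essence of the change, reduced to a standalone sketch (the class and member names below are illustrative, not the actual elector code): instead of queueing early events under a lock and replaying them, the watcher thread blocks on a latch until the client reference is published, which removes the synchronized sections that could deadlock.

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.TimeUnit;

    class BlockingRefWatcher {
      private final CountDownLatch refSet = new CountDownLatch(1);
      private volatile Object client;

      void setClientRef(Object c) {
        client = c;
        refSet.countDown(); // releases any event threads waiting below
      }

      void process(Object event) throws InterruptedException {
        // Block instead of queueing: no shared list, no synchronized
        // sections, hence no lock ordering to deadlock on.
        refSet.await(5, TimeUnit.SECONDS);
        // ... dispatch the event using 'client' ...
      }
    }
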
Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java?rev=1432796&r1=1432795&r2=1432796&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java Mon Jan 14 03:44:35 2013
@@ -245,6 +245,7 @@ public class HttpServer implements Filte
     // default value (currently 250).
     QueuedThreadPool threadPool = maxThreads == -1 ?
         new QueuedThreadPool() : new QueuedThreadPool(maxThreads);
+    threadPool.setDaemon(true);
     webServer.setThreadPool(threadPool);
 
     final String appDir = getWebAppsPath(name);

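The effect of the daemon flag in isolation, shown with plain java.util.concurrent rather than Jetty's QueuedThreadPool; a minimal sketch, not the committed code:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class DaemonPoolDemo {
      public static void main(String[] args) {
        // Daemon worker threads do not keep the JVM alive, so the
        // process can exit without an explicit pool shutdown -- the
        // same effect the patch achieves with setDaemon(true).
        ExecutorService pool = Executors.newFixedThreadPool(2, r -> {
          Thread t = new Thread(r);
          t.setDaemon(true);
          return t;
        });
        pool.submit(() -> System.out.println("worker runs"));
      }
    }
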
Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/GzipCodec.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/GzipCodec.java?rev=1432796&r1=1432795&r2=1432796&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/GzipCodec.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/GzipCodec.java Mon Jan 14 03:44:35 2013
@@ -40,14 +40,74 @@ public class GzipCodec extends DefaultCo
   protected static class GzipOutputStream extends CompressorStream {
 
     private static class ResetableGZIPOutputStream extends GZIPOutputStream {
-      
+      private static final int TRAILER_SIZE = 8;
+      public static final String JVMVendor = System.getProperty("java.vendor");
+      public static final String JVMVersion = System.getProperty("java.version");
+      private static final boolean HAS_BROKEN_FINISH =
+          (JVMVendor.contains("IBM") && JVMVersion.contains("1.6.0"));
+
       public ResetableGZIPOutputStream(OutputStream out) throws IOException {
         super(out);
       }
-      
+
       public void resetState() throws IOException {
         def.reset();
       }
+
+      /**
+       * Overridden for HADOOP-8419: the IBM JDK implementation calls
+       * def.end(), which causes problems when resetting the stream
+       * for reuse.
+       */
+      @Override
+      public void finish() throws IOException {
+        if (HAS_BROKEN_FINISH) {
+          if (!def.finished()) {
+            def.finish();
+            while (!def.finished()) {
+              int i = def.deflate(this.buf, 0, this.buf.length);
+              if ((def.finished()) && (i <= this.buf.length - TRAILER_SIZE)) {
+                writeTrailer(this.buf, i);
+                i += TRAILER_SIZE;
+                out.write(this.buf, 0, i);
+
+                return;
+              }
+              if (i > 0) {
+                out.write(this.buf, 0, i);
+              }
+            }
+
+            byte[] arrayOfByte = new byte[TRAILER_SIZE];
+            writeTrailer(arrayOfByte, 0);
+            out.write(arrayOfByte);
+          }
+        } else {
+          super.finish();
+        }
+      }
+
+      /** Re-implemented for HADOOP-8419 because the corresponding JDK method is not visible. */
+      private void writeTrailer(byte[] paramArrayOfByte, int paramInt)
+        throws IOException {
+        writeInt((int)this.crc.getValue(), paramArrayOfByte, paramInt);
+        writeInt(this.def.getTotalIn(), paramArrayOfByte, paramInt + 4);
+      }
+
+      /** Re-implemented for HADOOP-8419 because the corresponding JDK method is not visible. */
+      private void writeInt(int paramInt1, byte[] paramArrayOfByte, int paramInt2)
+        throws IOException {
+        writeShort(paramInt1 & 0xFFFF, paramArrayOfByte, paramInt2);
+        writeShort(paramInt1 >> 16 & 0xFFFF, paramArrayOfByte, paramInt2 + 2);
+      }
+
+      /** Re-implemented for HADOOP-8419 because the corresponding JDK method is not visible. */
+      private void writeShort(int paramInt1, byte[] paramArrayOfByte, int paramInt2)
+        throws IOException {
+        paramArrayOfByte[paramInt2] = (byte)(paramInt1 & 0xFF);
+        paramArrayOfByte[(paramInt2 + 1)] = (byte)(paramInt1 >> 8 & 0xFF);
+      }
     }
 
     public GzipOutputStream(OutputStream out) throws IOException {

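For reference, the trailer that the overridden finish() writes by hand is the standard RFC 1952 gzip trailer. A standalone sketch of the layout, not the committed code:

    import java.io.ByteArrayOutputStream;
    import java.util.zip.CRC32;

    public class GzipTrailerSketch {
      // RFC 1952 trailer: CRC32 of the uncompressed data, then ISIZE
      // (total uncompressed length mod 2^32), each 4 bytes little-endian.
      static void writeTrailer(ByteArrayOutputStream out, CRC32 crc, int totalIn) {
        writeIntLE(out, (int) crc.getValue());
        writeIntLE(out, totalIn);
      }

      static void writeIntLE(ByteArrayOutputStream out, int v) {
        out.write(v & 0xFF);
        out.write((v >> 8) & 0xFF);
        out.write((v >> 16) & 0xFF);
        out.write((v >> 24) & 0xFF);
      }
    }
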
Propchange: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/core/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/core:r1426019-1432788

Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextPermissionBase.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextPermissionBase.java?rev=1432796&r1=1432795&r2=1432796&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextPermissionBase.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextPermissionBase.java Mon Jan 14 03:44:35 2013
@@ -95,7 +95,7 @@ public abstract class FileContextPermiss
     String filename = "foo";
     Path f = getTestRootPath(fc, filename);
     createFile(fc, filename);
-    doFilePermissionCheck(FileContext.DEFAULT_PERM.applyUMask(fc.getUMask()),
+    doFilePermissionCheck(FileContext.FILE_DEFAULT_PERM.applyUMask(fc.getUMask()),
                         fc.getFileStatus(f).getPermission());
   }
   

Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java?rev=1432796&r1=1432795&r2=1432796&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java Mon Jan 14 03:44:35 2013
@@ -23,12 +23,9 @@ import java.io.IOException;
 
 import junit.framework.TestCase;
 
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.FSDataInputStream;
-import org.apache.hadoop.fs.FSDataOutputStream;
-import org.apache.hadoop.fs.FileStatus;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.permission.FsPermission;
 
 /**
@@ -45,15 +42,13 @@ import org.apache.hadoop.fs.permission.F
  * </p>
  */
 public abstract class FileSystemContractBaseTest extends TestCase {
+  private static final Log LOG =
+    LogFactory.getLog(FileSystemContractBaseTest.class);
+
   protected final static String TEST_UMASK = "062";
   protected FileSystem fs;
-  protected byte[] data = new byte[getBlockSize() * 2]; // two blocks of data
-  {
-    for (int i = 0; i < data.length; i++) {
-      data[i] = (byte) (i % 10);
-    }
-  }
-  
+  protected byte[] data = dataset(getBlockSize() * 2, 0, 255);
+
   @Override
   protected void tearDown() throws Exception {
     fs.delete(path("/test"), true);
@@ -235,35 +230,16 @@ public abstract class FileSystemContract
   public void testWriteReadAndDeleteTwoBlocks() throws Exception {
     writeReadAndDelete(getBlockSize() * 2);
   }
-  
+
+  /**
+   * Write a dataset, read it back in and verify that they match.
+   * Afterwards, the file is deleted.
+   * @param len length of data
+   * @throws IOException on IO failures
+   */
   protected void writeReadAndDelete(int len) throws IOException {
     Path path = path("/test/hadoop/file");
-    
-    fs.mkdirs(path.getParent());
-
-    FSDataOutputStream out = fs.create(path, false,
-        fs.getConf().getInt("io.file.buffer.size", 4096), 
-        (short) 1, getBlockSize());
-    out.write(data, 0, len);
-    out.close();
-
-    assertTrue("Exists", fs.exists(path));
-    assertEquals("Length", len, fs.getFileStatus(path).getLen());
-
-    FSDataInputStream in = fs.open(path);
-    byte[] buf = new byte[len];
-    in.readFully(0, buf);
-    in.close();
-
-    assertEquals(len, buf.length);
-    for (int i = 0; i < buf.length; i++) {
-      assertEquals("Position " + i, data[i], buf[i]);
-    }
-    
-    assertTrue("Deleted", fs.delete(path, false));
-    
-    assertFalse("No longer exists", fs.exists(path));
-
+    writeAndRead(path, data, len, false, true);
   }
   
   public void testOverwrite() throws IOException {
@@ -494,4 +470,147 @@ public abstract class FileSystemContract
     assertEquals("Source exists", srcExists, fs.exists(src));
     assertEquals("Destination exists", dstExists, fs.exists(dst));
   }
+
+  /**
+   * Verify that if you take an existing file and overwrite it, the new
+   * values get picked up on a subsequent read.
+   * This is particularly relevant for eventually consistent filesystems.
+   *
+   * @throws Exception on any failure
+   */
+
+  public void testOverWriteAndRead() throws Exception {
+    int blockSize = getBlockSize();
+
+    byte[] filedata1 = dataset(blockSize * 2, 'A', 26);
+    byte[] filedata2 = dataset(blockSize * 2, 'a', 26);
+    Path path = path("/test/hadoop/file-overwrite");
+    writeAndRead(path, filedata1, blockSize, true, false);
+    writeAndRead(path, filedata2, blockSize, true, false);
+    writeAndRead(path, filedata1, blockSize * 2, true, false);
+    writeAndRead(path, filedata2, blockSize * 2, true, false);
+    writeAndRead(path, filedata1, blockSize, true, false);
+    writeAndRead(path, filedata2, blockSize * 2, true, false);
+  }
+
+  /**
+   * Write a file and read it in, validating the result. Optional flags
+   * control whether file overwrite operations should be enabled, and
+   * whether the file should be deleted afterwards.
+   *
+   * If there is a mismatch between what was written and what was expected,
+   * a small range of bytes on either side of the first error is logged to
+   * aid in diagnosing the problem: whether it came from a previous file or
+   * from corruption of the current file. This assumes that two sequential
+   * runs against the same path use datasets with different character moduli.
+   *
+   * @param path path to write to
+   * @param src source dataset, which must hold at least len bytes
+   * @param len length of data to write
+   * @param overwrite should the create option allow overwrites?
+   * @param delete should the file be deleted afterwards, with verification
+   * that the deletion worked? Deletion is not attempted if an assertion has
+   * failed earlier; it is not in a <code>finally{}</code> block.
+   * @throws IOException IO problems
+   */
+  protected void writeAndRead(Path path, byte[] src, int len,
+                              boolean overwrite,
+                              boolean delete) throws IOException {
+    assertTrue("Not enough data in source array to write " + len + " bytes",
+               src.length >= len);
+    fs.mkdirs(path.getParent());
+
+    FSDataOutputStream out = fs.create(path, overwrite,
+                                       fs.getConf().getInt("io.file.buffer.size",
+                                                           4096),
+                                       (short) 1, getBlockSize());
+    out.write(src, 0, len);
+    out.close();
+
+    assertTrue("Exists", fs.exists(path));
+    assertEquals("Length", len, fs.getFileStatus(path).getLen());
+
+    FSDataInputStream in = fs.open(path);
+    byte[] buf = new byte[len];
+    in.readFully(0, buf);
+    in.close();
+
+    assertEquals(len, buf.length);
+    int errors = 0;
+    int first_error_byte = -1;
+    for (int i = 0; i < len; i++) {
+      if (src[i] != buf[i]) {
+        if (errors == 0) {
+          first_error_byte = i;
+        }
+        errors++;
+      }
+    }
+
+    if (errors > 0) {
+      String message = String.format(" %d errors in file of length %d",
+                                     errors, len);
+      LOG.warn(message);
+      // the range either side of the first error to print
+      // this is a purely arbitrary number, to aid user debugging
+      final int overlap = 10;
+      for (int i = Math.max(0, first_error_byte - overlap);
+           i < Math.min(first_error_byte + overlap, len);
+           i++) {
+        byte actual = buf[i];
+        byte expected = src[i];
+        String letter = toChar(actual);
+        String line = String.format("[%04d] %2x %s\n", i, actual, letter);
+        if (expected != actual) {
+          line = String.format("[%04d] %2x %s -expected %2x %s\n",
+                               i,
+                               actual,
+                               letter,
+                               expected,
+                               toChar(expected));
+        }
+        LOG.warn(line);
+      }
+      fail(message);
+    }
+
+    if (delete) {
+      boolean deleted = fs.delete(path, false);
+      assertTrue("Deleted", deleted);
+      assertFalse("No longer exists", fs.exists(path));
+    }
+  }
+
+  /**
+   * Convert a byte to a character for printing. If the byte value is
+   * less than 32, and hence unprintable, the byte is returned as a
+   * two-digit hex value.
+   * @param b byte
+   * @return the printable character string
+   */
+  protected String toChar(byte b) {
+    if (b >= 0x20) {
+      return Character.toString((char) b);
+    } else {
+      return String.format("%02x", b);
+    }
+  }
+
+  /**
+   * Create a dataset for use in the tests; all data is in the range
+   * base to (base+modulo-1) inclusive
+   * @param len length of data
+   * @param base base of the data
+   * @param modulo the modulo
+   * @return the newly generated dataset
+   */
+  protected byte[] dataset(int len, int base, int modulo) {
+    byte[] dataset = new byte[len];
+    for (int i = 0; i < len; i++) {
+      dataset[i] = (byte) (base + (i % modulo));
+    }
+    return dataset;
+  }
 }

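A small illustration of why testOverWriteAndRead alternates datasets built from different bases; a standalone sketch repeating the dataset() logic above:

    public class DatasetDemo {
      static byte[] dataset(int len, int base, int modulo) {
        byte[] d = new byte[len];
        for (int i = 0; i < len; i++) {
          d[i] = (byte) (base + (i % modulo));
        }
        return d;
      }

      public static void main(String[] args) {
        // Upper- and lower-case datasets never share a byte value at the
        // same offset, so stale data from an earlier write is detected
        // the moment the comparison loop hits it.
        byte[] a = dataset(16, 'A', 26); // ABCDEFGHIJKLMNOP
        byte[] b = dataset(16, 'a', 26); // abcdefghijklmnop
        System.out.println(a[5] == b[5]); // false: 'F' vs 'f'
      }
    }
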
Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileStatus.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileStatus.java?rev=1432796&r1=1432795&r2=1432796&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileStatus.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileStatus.java Mon Jan 14 03:44:35 2013
@@ -121,7 +121,7 @@ public class TestFileStatus {
     FileStatus fileStatus = new FileStatus(LENGTH, isdir,
         REPLICATION, BLKSIZE, MTIME, PATH);   
     validateAccessors(fileStatus, LENGTH, isdir, REPLICATION, BLKSIZE, MTIME,
-        0, FsPermission.getDefault(), "", "", null, PATH);
+        0, FsPermission.getDirDefault(), "", "", null, PATH);
   }
 
   /**
@@ -131,7 +131,7 @@ public class TestFileStatus {
   public void constructorBlank() throws IOException {
     FileStatus fileStatus = new FileStatus();  
     validateAccessors(fileStatus, 0, false, 0, 0, 0,
-        0, FsPermission.getDefault(), "", "", null, null);
+        0, FsPermission.getFileDefault(), "", "", null, null);
   }
 
   /**

Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFSFileContextMainOperations.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFSFileContextMainOperations.java?rev=1432796&r1=1432795&r2=1432796&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFSFileContextMainOperations.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFSFileContextMainOperations.java Mon Jan 14 03:44:35 2013
@@ -24,6 +24,8 @@ import org.apache.hadoop.conf.Configurat
 import org.junit.Assert;
 import org.junit.Before;
 import org.junit.Test;
+import org.apache.hadoop.fs.FileContextTestHelper;
+import org.apache.hadoop.fs.permission.FsPermission;
 
public class TestLocalFSFileContextMainOperations extends FileContextMainOperationsBaseTest {
 
@@ -47,4 +49,14 @@ public class TestLocalFSFileContextMainO
     FileContext fc1 = FileContext.getLocalFSFileContext();
     Assert.assertTrue(fc1 != fc);
   }
+
+  @Test
+  public void testDefaultFilePermission() throws IOException {
+    Path file = FileContextTestHelper.getTestRootPath(fc,
+        "testDefaultFilePermission");
+    FileContextTestHelper.createFile(fc, file);
+    FsPermission expect = FileContext.FILE_DEFAULT_PERM.applyUMask(fc.getUMask());
+    Assert.assertEquals(expect, fc.getFileStatus(file)
+        .getPermission());
+  }
 }

Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFileSystemPermission.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFileSystemPermission.java?rev=1432796&r1=1432795&r2=1432796&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFileSystemPermission.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFileSystemPermission.java Mon Jan 14 03:44:35 2013
@@ -73,7 +73,7 @@ public class TestLocalFileSystemPermissi
     try {
       FsPermission initialPermission = getPermission(localfs, f);
       System.out.println(filename + ": " + initialPermission);
-      assertEquals(FsPermission.getDefault().applyUMask(FsPermission.getUMask(conf)), initialPermission);
+      assertEquals(FsPermission.getFileDefault().applyUMask(FsPermission.getUMask(conf)), initialPermission);
     }
     catch(Exception e) {
       System.out.println(StringUtils.stringifyException(e));

Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/resources/kdc/killKdc.sh
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/resources/kdc/killKdc.sh?rev=1432796&r1=1432795&r2=1432796&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/resources/kdc/killKdc.sh (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/resources/kdc/killKdc.sh Mon Jan 14 03:44:35 2013
@@ -1,3 +1,19 @@
 #!/bin/sh
-ps -ef | grep apacheds | grep -v grep | cut -f4 -d ' ' |xargs kill -9
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+ 
+ps -ef | grep apacheds | grep -v grep | awk '{printf $2"\n"}' | xargs -t --no-run-if-empty kill -9
 


