hadoop-common-commits mailing list archives

From rake...@apache.org
Subject [04/50] [abbrv] hadoop git commit: Add release notes, changes, jdiff for 3.0.0-alpha4
Date Tue, 11 Jul 2017 16:24:50 GMT
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f10864a8/hadoop-common-project/hadoop-common/src/site/markdown/release/3.0.0-alpha4/RELEASENOTES.3.0.0-alpha4.md
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/release/3.0.0-alpha4/RELEASENOTES.3.0.0-alpha4.md
b/hadoop-common-project/hadoop-common/src/site/markdown/release/3.0.0-alpha4/RELEASENOTES.3.0.0-alpha4.md
new file mode 100644
index 0000000..3ad6cc6
--- /dev/null
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/release/3.0.0-alpha4/RELEASENOTES.3.0.0-alpha4.md
@@ -0,0 +1,492 @@
+
+<!---
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+-->
+# "Apache Hadoop"  3.0.0-alpha4 Release Notes
+
+These release notes cover new developer and user-facing incompatibilities, important issues,
features, and major improvements.
+
+
+---
+
+* [HADOOP-13956](https://issues.apache.org/jira/browse/HADOOP-13956) | *Critical* | **Read
ADLS credentials from Credential Provider**
+
+The hadoop-azure-datalake file system now supports configuration of the Azure Data Lake Store
account credentials using the standard Hadoop Credential Provider API. For details, please
refer to the documentation on hadoop-azure-datalake and the Credential Provider API.
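+
+As an illustrative sketch (the keystore path, account name, and class name below are hypothetical; see the hadoop-azure-datalake documentation for the exact credential entry names), a client can point the standard Credential Provider API at a keystore holding the ADLS secrets:
+
+```java
+import java.net.URI;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+
+public class AdlCredentialProviderExample {
+  public static void main(String[] args) throws Exception {
+    Configuration conf = new Configuration();
+    // Keystore holding the ADL OAuth2 secrets; the path here is only an example.
+    conf.set("hadoop.security.credential.provider.path",
+        "jceks://hdfs@nn.example.com:9001/user/hadoop/adls.jceks");
+    FileSystem fs = FileSystem.get(
+        new URI("adl://myaccount.azuredatalakestore.net/"), conf);
+    for (FileStatus s : fs.listStatus(new Path("/"))) {
+      System.out.println(s.getPath());
+    }
+  }
+}
+```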
+
+
+---
+
+* [MAPREDUCE-6404](https://issues.apache.org/jira/browse/MAPREDUCE-6404) | *Major* | **Allow
AM to specify a port range for starting its webapp**
+
+Added a new configuration, "yarn.app.mapreduce.am.webapp.port-range", to specify the port range for the webapp launched by the MR ApplicationMaster.
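+
+For example, a job submission might set the new key programmatically (a minimal sketch; the port range and job name below are arbitrary):
+
+```java
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.mapreduce.Job;
+
+public class AmWebappPortRangeExample {
+  public static void main(String[] args) throws Exception {
+    Configuration conf = new Configuration();
+    // Restrict the MR ApplicationMaster web UI to a port within this range.
+    conf.set("yarn.app.mapreduce.am.webapp.port-range", "50100-50200");
+    Job job = Job.getInstance(conf, "example-job");
+    // ... configure mapper/reducer, input and output paths, then submit.
+  }
+}
+```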
+
+
+---
+
+* [HDFS-10860](https://issues.apache.org/jira/browse/HDFS-10860) | *Blocker* | **Switch HttpFS
from Tomcat to Jetty**
+
+<!-- markdown -->
+
+The following environment variables are deprecated. Set the corresponding
+configuration properties instead.
+
+Environment Variable        | Configuration Property       | Configuration File
+----------------------------|------------------------------|--------------------
+HTTPFS_TEMP                 | hadoop.http.temp.dir         | httpfs-site.xml
+HTTPFS_HTTP_PORT            | hadoop.httpfs.http.port      | httpfs-site.xml
+HTTPFS_MAX_HTTP_HEADER_SIZE | hadoop.http.max.request.header.size and hadoop.http.max.response.header.size | httpfs-site.xml
+HTTPFS_MAX_THREADS          | hadoop.http.max.threads      | httpfs-site.xml
+HTTPFS_SSL_ENABLED          | hadoop.httpfs.ssl.enabled    | httpfs-site.xml
+HTTPFS_SSL_KEYSTORE_FILE    | ssl.server.keystore.location | ssl-server.xml
+HTTPFS_SSL_KEYSTORE_PASS    | ssl.server.keystore.password | ssl-server.xml
+
+These default HTTP Services have been added.
+
+Name               | Description
+-------------------|------------------------------------
+/conf              | Display configuration properties
+/jmx               | Java JMX management interface
+/logLevel          | Get or set log level per class
+/logs              | Display log files
+/stacks            | Display JVM stacks
+/static/index.html | The static home page
+
+The httpfs.sh script has been deprecated; use `hdfs httpfs` instead. The new scripts are based on the Hadoop shell scripting framework. `hadoop daemonlog` is supported. SSL configurations are read from ssl-server.xml.
+
+
+---
+
+* [HDFS-11210](https://issues.apache.org/jira/browse/HDFS-11210) | *Major* | **Enhance key
rolling to guarantee new KeyVersion is returned from generateEncryptedKeys after a key is
rolled**
+
+<!-- markdown --> 
+
+An `invalidateCache` command has been added to the KMS.
+The `rollNewVersion` semantics of the KMS has been improved so that after a key's version
is rolled, `generateEncryptedKey` of that key guarantees to return the `EncryptedKeyVersion`
based on the new key version.
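+
+A rough sketch of the improved semantics from a client's perspective (the key name is a placeholder, and this assumes the KMS is configured as the key provider for the Configuration in use):
+
+```java
+import java.util.List;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.crypto.key.KeyProvider;
+import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;
+import org.apache.hadoop.crypto.key.KeyProviderFactory;
+
+public class RollAndGenerateExample {
+  public static void main(String[] args) throws Exception {
+    Configuration conf = new Configuration();
+    List<KeyProvider> providers = KeyProviderFactory.getProviders(conf);
+    KeyProviderCryptoExtension kp =
+        KeyProviderCryptoExtension.createKeyProviderCryptoExtension(providers.get(0));
+    kp.rollNewVersion("myEzKey");
+    // After HDFS-11210, this EEK is guaranteed to be based on the new key version.
+    KeyProviderCryptoExtension.EncryptedKeyVersion eek = kp.generateEncryptedKey("myEzKey");
+    System.out.println(eek.getEncryptionKeyVersionName());
+  }
+}
+```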
+
+
+---
+
+* [HADOOP-13075](https://issues.apache.org/jira/browse/HADOOP-13075) | *Major* | **Add support
for SSE-KMS and SSE-C in s3a filesystem**
+
+The new encryption options SSE-KMS and especially SSE-C must be considered experimental at
present. If you are using SSE-C, problems may arise if the bucket mixes encrypted and unencrypted
files. For SSE-KMS, there may be extra throttling of IO, especially with the fadvise=random
option. You may wish to request an increase in your KMS IOPs limits.
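+
+A minimal sketch of opting in to SSE-KMS on S3A (the property names follow the S3A documentation for this release and the key ARN is a placeholder; verify both against your deployment):
+
+```java
+import org.apache.hadoop.conf.Configuration;
+
+public class S3aSseKmsExample {
+  public static void main(String[] args) {
+    Configuration conf = new Configuration();
+    conf.set("fs.s3a.server-side-encryption-algorithm", "SSE-KMS");
+    // Optional: the KMS key to use; if unset, the account/bucket default KMS key applies.
+    conf.set("fs.s3a.server-side-encryption.key",
+        "arn:aws:kms:us-west-2:111122223333:key/example-key-id");
+    // Pass this Configuration when creating the S3A FileSystem.
+  }
+}
+```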
+
+
+---
+
+* [HDFS-11026](https://issues.apache.org/jira/browse/HDFS-11026) | *Major* | **Convert BlockTokenIdentifier
to use Protobuf**
+
+Changed the serialized format of BlockTokenIdentifier to protocol buffers. Includes logic
to decode both the old Writable format and the new PB format to support existing clients.
Client implementations in other languages will require similar functionality.
+
+
+---
+
+* [HADOOP-13929](https://issues.apache.org/jira/browse/HADOOP-13929) | *Major* | **ADLS connector
should not check in contract-test-options.xml**
+
+To run live unit tests, create src/test/resources/auth-keys.xml with the same properties
as in the deprecated contract-test-options.xml.
+
+
+---
+
+* [HDFS-11100](https://issues.apache.org/jira/browse/HDFS-11100) | *Critical* | **Recursively
deleting file protected by sticky bit should fail**
+
+Changed the behavior of removing directories with sticky bits, so that it is closer to what
most Unix/Linux users would expect.
+
+
+---
+
+* [YARN-6177](https://issues.apache.org/jira/browse/YARN-6177) | *Major* | **Yarn client
should exit with an informative error message if an incompatible Jersey library is used at
client**
+
+The YARN client now exits with an informative error message if an incompatible Jersey library is used on the client side.
+
+
+---
+
+* [HADOOP-13805](https://issues.apache.org/jira/browse/HADOOP-13805) | *Major* | **UGI.getCurrentUser()
fails if user does not have a keytab associated**
+
+Due to an issue remaining after HADOOP-13558, a UGI may still try to renew the TGT even though the UGI is created from an existing Subject. The renewal fails because there is no keytab.
+
+Fixing the issue changes behavior in an incompatible way; therefore, the configuration property "hadoop.treat.subject.external" is introduced to enable the fix (disabled by default). When the fix is not enabled, the behavior is the same as before.
+
+
+---
+
+* [HDFS-11405](https://issues.apache.org/jira/browse/HDFS-11405) | *Blocker* | **Rename "erasurecode"
CLI subcommand to "ec"**
+
+The "hdfs erasurecode" CLI command has been renamed to "hdfs ec" for ease-of-use.
+
+
+---
+
+* [HDFS-11426](https://issues.apache.org/jira/browse/HDFS-11426) | *Major* | **Refactor EC
CLI to be similar to storage policies CLI**
+
+The \`hdfs ec\` CLI command has been substantially reworked to make the calling patterns
more similar to the \`hdfs storagepolicies\` command. See \`hdfs ec -help\` and the HDFS erasure
coding documentation for more information.
+
+
+---
+
+* [HADOOP-13817](https://issues.apache.org/jira/browse/HADOOP-13817) | *Minor* | **Add a
finite shell command timeout to ShellBasedUnixGroupsMapping**
+
+A newly introduced configuration key, "hadoop.security.groups.shell.command.timeout", applies a finite wait timeout to the 'id' commands launched by the ShellBasedUnixGroupsMapping plugin. Values may be specified in any valid time duration unit: https://hadoop.apache.org/docs/current/api/org/apache/hadoop/conf/Configuration.html#getTimeDuration-java.lang.String-long-java.util.concurrent.TimeUnit-
+
+The value defaults to 0, indicating an infinite wait (preserving existing behaviour).
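+
+For example (a sketch; the 45-second value is arbitrary, and any unit accepted by getTimeDuration works):
+
+```java
+import org.apache.hadoop.conf.Configuration;
+
+public class GroupsShellTimeoutExample {
+  public static void main(String[] args) {
+    Configuration conf = new Configuration();
+    // Abort 'id' lookups that take longer than 45 seconds instead of waiting forever.
+    conf.set("hadoop.security.groups.shell.command.timeout", "45s");
+  }
+}
+```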
+
+
+---
+
+* [HDFS-11427](https://issues.apache.org/jira/browse/HDFS-11427) | *Major* | **Rename "rs-default"
to "rs"**
+
+The "rs-default" codec has been renamed to simply "rs" for simplicity. Previous configuration
keys like "io.erasurecode.codec.rs-default" have also been renamed to match.
+
+
+---
+
+* [HDFS-11382](https://issues.apache.org/jira/browse/HDFS-11382) | *Major* | **Persist Erasure
Coding Policy ID in a new optional field in INodeFile in FSImage**
+
+The FSImage on-disk format for INodeFile is changed to additionally include a field for erasure-coded files. This optional uint32 field, 'erasureCodingPolicyID', is present for all erasure-coded files and represents the erasure coding policy ID. Previously, the 'replication' field in the INodeFile disk format was overloaded to represent the same erasure coding policy ID.
+
+
+---
+
+* [HDFS-11428](https://issues.apache.org/jira/browse/HDFS-11428) | *Major* | **Change setErasureCodingPolicy
to take a required string EC policy name**
+
+{{HdfsAdmin#setErasureCodingPolicy}} now takes a String {{ecPolicyName}} rather than an ErasureCodingPolicy
object. The corresponding RPC's wire format has also been modified.
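+
+A sketch of the new call pattern (the NameNode address, path, and policy name are examples only):
+
+```java
+import java.net.URI;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.client.HdfsAdmin;
+
+public class SetEcPolicyByNameExample {
+  public static void main(String[] args) throws Exception {
+    Configuration conf = new Configuration();
+    HdfsAdmin admin = new HdfsAdmin(new URI("hdfs://namenode.example.com:8020"), conf);
+    // The policy is now identified by name instead of by an ErasureCodingPolicy object.
+    admin.setErasureCodingPolicy(new Path("/archive"), "RS-6-3-64k");
+  }
+}
+```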
+
+
+---
+
+* [HADOOP-14138](https://issues.apache.org/jira/browse/HADOOP-14138) | *Critical* | **Remove
S3A ref from META-INF service discovery, rely on existing core-default entry**
+
+The classpath implementing the s3a filesystem is now defined in core-default.xml. Attempting to instantiate an S3A filesystem instance using a Configuration instance which has not included the default resources will fail. Applications should not be doing this anyway, as it would lose other critical configuration options needed by the filesystem.
+
+
+---
+
+* [HADOOP-6801](https://issues.apache.org/jira/browse/HADOOP-6801) | *Minor* | **io.sort.mb
and io.sort.factor were renamed and moved to mapreduce but are still in CommonConfigurationKeysPublic.java
and used in SequenceFile.java**
+
+Two new configuration keys, seq.io.sort.mb and seq.io.sort.factor, have been introduced for the SequenceFile's Sorter feature to replace the older, deprecated property keys io.sort.mb and io.sort.factor.
+
+This only affects direct users of the org.apache.hadoop.io.SequenceFile.Sorter Java class.
For controlling MR2's internal sorting instead, use the existing config keys of mapreduce.task.io.sort.mb
and mapreduce.task.io.sort.factor.
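+
+A sketch of a direct Sorter user picking up the new keys (the paths and values are placeholders):
+
+```java
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.IntWritable;
+import org.apache.hadoop.io.SequenceFile;
+import org.apache.hadoop.io.Text;
+
+public class SeqFileSorterExample {
+  public static void main(String[] args) throws Exception {
+    Configuration conf = new Configuration();
+    // Replaces the deprecated io.sort.mb / io.sort.factor for the Sorter only.
+    conf.set("seq.io.sort.mb", "200");
+    conf.set("seq.io.sort.factor", "50");
+    FileSystem fs = FileSystem.get(conf);
+    SequenceFile.Sorter sorter =
+        new SequenceFile.Sorter(fs, Text.class, IntWritable.class, conf);
+    sorter.sort(new Path[] {new Path("/tmp/input.seq")}, new Path("/tmp/sorted.seq"), true);
+  }
+}
+```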
+
+
+---
+
+* [HDFS-8112](https://issues.apache.org/jira/browse/HDFS-8112) | *Blocker* | **Relax permission
checking for EC related operations**
+
+The HdfsAdmin erasure coding APIs (set, unset, get) are now usable by non-superusers based
on appropriate file and directory permissions.
+
+
+---
+
+* [HDFS-11498](https://issues.apache.org/jira/browse/HDFS-11498) | *Major* | **Make RestCsrfPreventionHandler
and WebHdfsHandler compatible with Netty 4.0**
+
+This JIRA sets the Netty 4 dependency to 4.0.23. This is an incompatible change for the 3.0
release line, as 3.0.0-alpha1 and 3.0.0-alpha2 depended on Netty 4.1.0.Beta5.
+
+
+---
+
+* [HDFS-11152](https://issues.apache.org/jira/browse/HDFS-11152) | *Blocker* | **Start erasure
coding policy ID number from 1 instead of 0 to void potential unexpected errors**
+
+The NameNode metadata for storing erasure coding policies has changed.
+
+
+---
+
+* [HDFS-11314](https://issues.apache.org/jira/browse/HDFS-11314) | *Blocker* | **Enforce
set of enabled EC policies on the NameNode**
+
+HDFS will now restrict the set of erasure coding policies that can be set by users. The set
of allowed policies can be configured via "dfs.namenode.ec.policies.enabled" on the NameNode.
Please see the documentation for more details.
+
+
+---
+
+* [HDFS-11499](https://issues.apache.org/jira/browse/HDFS-11499) | *Major* | **Decommissioning
stuck because of failing recovery**
+
+Allow a block to complete if the number of replicas on live nodes, decommissioning nodes, and nodes in maintenance mode satisfies the minimum replication factor.
+The fix prevents block recovery from failing when a replica of the last block is being decommissioned, and likewise prevents decommissioning from getting stuck waiting for the last block to be completed. In addition, the file close() operation will not fail because the last block is being decommissioned.
+
+
+---
+
+* [HDFS-11505](https://issues.apache.org/jira/browse/HDFS-11505) | *Major* | **Do not enable
any erasure coding policies by default**
+
+By default, none of the built-in erasure coding policies are enabled. Users have to explicitly
enable the erasure coding policy via the hdfs configuration 'dfs.namenode.ec.policies.enabled'
before setting the policy on any directories.
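+
+This key is normally set in hdfs-site.xml on the NameNode; a programmatic sketch for illustration (the policy names are examples only):
+
+```java
+import org.apache.hadoop.conf.Configuration;
+
+public class EnableEcPoliciesExample {
+  public static void main(String[] args) {
+    Configuration conf = new Configuration();
+    // Comma-separated list of erasure coding policies that users may set on directories.
+    conf.set("dfs.namenode.ec.policies.enabled", "RS-6-3-64k,XOR-2-1-64k");
+  }
+}
+```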
+
+
+---
+
+* [HADOOP-14213](https://issues.apache.org/jira/browse/HADOOP-14213) | *Major* | **Move Configuration
runtime check for hadoop-site.xml to initialization**
+
+Move the check for hadoop-site.xml to static initialization of the Configuration class.
+
+
+---
+
+* [HADOOP-10101](https://issues.apache.org/jira/browse/HADOOP-10101) | *Major* | **Update
guava dependency to the latest version**
+
+Guava is updated to version 21.0. 
+
+In the background of merging this patch into trunk, work on shaded Hadoop client artifacts and a minicluster is underway in HADOOP-11804. The shaded hadoop-client bundles its own relocated Guava, so the dependency can be updated with minimal impact compared to the situation before HADOOP-11804.
+
+See also HADOOP-14238 for a related problem.
+
+
+---
+
+* [HADOOP-14038](https://issues.apache.org/jira/browse/HADOOP-14038) | *Minor* | **Rename
ADLS credential properties**
+
+<!-- markdown --> 
+
+* Properties {{dfs.adls.*}} are renamed {{fs.adl.*}}
+* Property {{adl.dfs.enable.client.latency.tracker}} is renamed {{adl.enable.client.latency.tracker}}
+* Old properties are still supported
+
+
+---
+
+* [HADOOP-14267](https://issues.apache.org/jira/browse/HADOOP-14267) | *Major* | **Make DistCpOptions
class immutable**
+
+DistCpOptions has been changed to be constructed with a Builder pattern. This potentially
affects applications that invoke DistCp with the Java API.
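+
+A sketch of the new builder-based construction (the paths are placeholders, and the exact set of with* builder methods should be checked against the 3.0 API):
+
+```java
+import java.util.Collections;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.tools.DistCp;
+import org.apache.hadoop.tools.DistCpOptions;
+
+public class DistCpBuilderExample {
+  public static void main(String[] args) throws Exception {
+    DistCpOptions options = new DistCpOptions.Builder(
+            Collections.singletonList(new Path("hdfs://src-cluster/data")),
+            new Path("hdfs://dst-cluster/data"))
+        .withSyncFolder(true)   // the builder equivalent of -update
+        .withOverwrite(false)
+        .build();
+    new DistCp(new Configuration(), options).execute();
+  }
+}
+```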
+
+
+---
+
+* [HDFS-11596](https://issues.apache.org/jira/browse/HDFS-11596) | *Critical* | **hadoop-hdfs-client
jar is in the wrong directory in release tarball**
+
+The scope of hadoop-hdfs's dependency on hadoop-hdfs-client has changed from "compile" to
"provided". This may affect users who directly consume hadoop-hdfs, which is a private API.
These users need to add a new dependency on hadoop-hdfs-client, or better yet, switch from
hadoop-hdfs to hadoop-hdfs-client.
+
+
+---
+
+* [HADOOP-14202](https://issues.apache.org/jira/browse/HADOOP-14202) | *Major* | **fix jsvc/secure
user var inconsistencies**
+
+<!-- markdown -->
+
+The secure user variables have been changed to be consistent with the rest of the environment
variable changes:
+
+| Old | New |
+|:---- |:---- | 
+| HADOOP\_SECURE\_DN\_USER  | HDFS\_DATANODE\_SECURE\_USER |
+| HADOOP\_PRIVILEGED\_NFS\_USER | HDFS\_NFS3\_SECURE\_USER |
+
+
+---
+
+* [HADOOP-14174](https://issues.apache.org/jira/browse/HADOOP-14174) | *Major* | **Set default
ADLS access token provider type to ClientCredential**
+
+Switch the default ADLS access token provider type from Custom to ClientCredential.
+
+
+---
+
+* [YARN-6298](https://issues.apache.org/jira/browse/YARN-6298) | *Blocker* | **Metric preemptCall
is not used in new preemption**
+
+Metric preemptCall in FSOpDurations is no longer supported.
+
+
+---
+
+* [HADOOP-14285](https://issues.apache.org/jira/browse/HADOOP-14285) | *Major* | **Update
minimum version of Maven from 3.0 to 3.3**
+
+Minimum version of Apache Maven has been updated from 3.0 to 3.3.
+
+
+---
+
+* [HADOOP-14225](https://issues.apache.org/jira/browse/HADOOP-14225) | *Minor* | **Remove
xmlenc dependency**
+
+The xmlenc dependency has been removed. If you relied on it as a transitive dependency, you now need to declare the dependency explicitly in your own build.
+
+
+---
+
+* [HADOOP-13665](https://issues.apache.org/jira/browse/HADOOP-13665) | *Blocker* | **Erasure
Coding codec should support fallback coder**
+
+Use the configuration properties io.erasurecode.codec.{rs-legacy,rs,xor}.rawcoders to control the erasure coding codec implementations. These properties support falling back to the next coder in the list if the preferred one cannot be loaded.
+
+
+---
+
+* [HADOOP-14248](https://issues.apache.org/jira/browse/HADOOP-14248) | *Major* | **Retire
SharedInstanceProfileCredentialsProvider in trunk.**
+
+SharedInstanceProfileCredentialsProvider is removed after this change. Users should use InstanceProfileCredentialsProvider
provided by AWS SDK instead, which itself enforces a singleton instance to reduce calls to
AWS EC2 Instance Metadata Service.
+
+
+---
+
+* [HDFS-11565](https://issues.apache.org/jira/browse/HDFS-11565) | *Blocker* | **Use compact
identifiers for built-in ECPolicies in HdfsFileStatus**
+
+Some of the existing fields in ErasureCodingPolicyProto have changed from required to optional.
For system EC policies, these fields are populated from hardcoded values.
+
+
+---
+
+* [HADOOP-11794](https://issues.apache.org/jira/browse/HADOOP-11794) | *Major* | **Enable
distcp to copy blocks in parallel**
+
+If a positive value is passed to the command line switch -blocksperchunk, files with more blocks than this value will be split into chunks of \`\<blocksperchunk\>\` blocks to be transferred in parallel and reassembled on the destination. By default, \`\<blocksperchunk\>\` is 0, and the files will be transmitted in their entirety without splitting. This switch is only applicable when the source file system supports getBlockLocations and the target file system supports concat.
+
+
+---
+
+* [YARN-3427](https://issues.apache.org/jira/browse/YARN-3427) | *Blocker* | **Remove deprecated
methods from ResourceCalculatorProcessTree**
+
+The deprecated ProcessTree methods getCumulativeVmem and getCumulativeRssmem have been removed.
+
+
+---
+
+* [HDFS-11402](https://issues.apache.org/jira/browse/HDFS-11402) | *Major* | **HDFS Snapshots
should capture point-in-time copies of OPEN files**
+
+When the config param "dfs.namenode.snapshot.capture.openfiles" is enabled, HDFS snapshots taken will additionally capture point-in-time copies of the open files that have valid leases. Even as the currently open files grow or shrink in size, the snapshot will always retain the immutable versions of these open files, just as for all other closed files. Note: the file length captured for open files in the snapshot is the one recorded in the NameNode at the time of the snapshot, and it may be shorter than what the client has written so far. To capture the latest length, the client can call hflush/hsync with the flag SyncFlag.UPDATE\_LENGTH on the open file handles.
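+
+A sketch of a writer persisting its current length so that a snapshot taken afterwards captures it (the file path is a placeholder, and the default FileSystem is assumed to be HDFS):
+
+```java
+import java.util.EnumSet;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.client.HdfsDataOutputStream;
+import org.apache.hadoop.hdfs.client.HdfsDataOutputStream.SyncFlag;
+
+public class SnapshotOpenFileExample {
+  public static void main(String[] args) throws Exception {
+    FileSystem fs = FileSystem.get(new Configuration());
+    try (FSDataOutputStream out = fs.create(new Path("/logs/app.log"))) {
+      out.writeBytes("partial record\n");
+      // Update the length recorded in the NameNode so a snapshot sees the bytes written so far.
+      ((HdfsDataOutputStream) out).hsync(EnumSet.of(SyncFlag.UPDATE_LENGTH));
+    }
+  }
+}
+```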
+
+
+---
+
+* [HDFS-6708](https://issues.apache.org/jira/browse/HDFS-6708) | *Major* | **StorageType
should be encoded in the block token**
+
+StorageTypes are now encoded in the BlockTokenIdentifier to ensure that the intended StorageType for writes is not tampered with on its way from the client to the DataNode.
+
+
+---
+
+* [HADOOP-10105](https://issues.apache.org/jira/browse/HADOOP-10105) | *Blocker* | **remove
httpclient dependency**
+
+Apache Httpclient has been removed as a dependency. This library is End of Life: people using
it should move to its {{httpcore}} successor. If you cannot do that, you must add an explicit
dependency on {{httpclient}} in your classpath.
+
+
+---
+
+* [HADOOP-13200](https://issues.apache.org/jira/browse/HADOOP-13200) | *Blocker* | **Implement
customizable and configurable erasure coders**
+
+CodecRegistry uses ServiceLoader to dynamically load all implementations of RawErasureCoderFactory. In Hadoop 3.0 there are several built-in implementations, and users can also provide their own implementations with the corresponding resource files.
+For each codec, users can configure the order of the implementations with the configuration keys:
+\`io.erasurecode.codec.rs.rawcoders\` for the default RS codec,
+\`io.erasurecode.codec.rs-legacy.rawcoders\` for the legacy RS codec,
+\`io.erasurecode.codec.xor.rawcoders\` for the XOR codec.
+A self-defined codec can be configured with a key of the same form, e.g. \`io.erasurecode.codec.self-defined.rawcoders\`.
+For each codec, Hadoop uses the implementations in the configured order; if one implementation fails, it falls back to the next one in the list. The order is a comma-separated list of coder names. The names of the built-in implementations are:
+\`rs\_native\` and \`rs\_java\` for the default RS codec, where the former is the default implementation and leverages the native Intel ISA-L library, and the latter is a pure-Java implementation,
+\`rs-legacy\_java\` for the legacy RS codec, its default pure-Java implementation,
+\`xor\_native\` and \`xor\_java\` for the XOR codec, where the former is the default Intel ISA-L implementation and the latter is pure Java.
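+
+As an illustrative sketch, preferring the native coders and falling back to the pure-Java ones:
+
+```java
+import org.apache.hadoop.conf.Configuration;
+
+public class RawCoderOrderExample {
+  public static void main(String[] args) {
+    Configuration conf = new Configuration();
+    // Try the ISA-L native coders first; fall back to pure Java if they cannot be loaded.
+    conf.set("io.erasurecode.codec.rs.rawcoders", "rs_native,rs_java");
+    conf.set("io.erasurecode.codec.xor.rawcoders", "xor_native,xor_java");
+  }
+}
+```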
+
+
+---
+
+* [YARN-2962](https://issues.apache.org/jira/browse/YARN-2962) | *Critical* | **ZKRMStateStore:
Limit the number of znodes under a znode**
+
+**WARNING: No release note provided for this change.**
+
+
+---
+
+* [HADOOP-14386](https://issues.apache.org/jira/browse/HADOOP-14386) | *Blocker* | **Rewind
trunk from Guava 21.0 back to Guava 11.0.2**
+
+YARN application tags can no longer contain non-printable ASCII characters.
+
+
+---
+
+* [HADOOP-14401](https://issues.apache.org/jira/browse/HADOOP-14401) | *Major* | **maven-project-info-reports-plugin
can be removed**
+
+hadoop-auth and hadoop-hdfs-httpfs modules no longer generate dependencies.html via maven-project-info-reports-plugin.
+
+
+---
+
+* [HADOOP-14375](https://issues.apache.org/jira/browse/HADOOP-14375) | *Minor* | **Remove
tomcat support from hadoop-functions.sh**
+
+This change removes the support in the shell scripts for Tomcat that was added in 3.0.0-alpha1.
+
+
+---
+
+* [HADOOP-14419](https://issues.apache.org/jira/browse/HADOOP-14419) | *Minor* | **Remove
findbugs report from docs profile**
+
+Findbugs report is no longer part of the documentation.
+
+
+---
+
+* [HDFS-11661](https://issues.apache.org/jira/browse/HDFS-11661) | *Blocker* | **GetContentSummary
uses excessive amounts of memory**
+
+Reverted HDFS-10797 to fix a scalability regression introduced by that commit.
+
+
+---
+
+* [HADOOP-14426](https://issues.apache.org/jira/browse/HADOOP-14426) | *Blocker* | **Upgrade
Kerby version from 1.0.0-RC2 to 1.0.0**
+
+**WARNING: No release note provided for this change.**
+
+
+---
+
+* [HADOOP-14407](https://issues.apache.org/jira/browse/HADOOP-14407) | *Major* | **DistCp
- Introduce a configurable copy buffer size**
+
+The copy buffer size can be configured via the new parameter \<copybuffersize\>. By default, \<copybuffersize\> is set to 8 KB.
+
+
+---
+
+* [HADOOP-13921](https://issues.apache.org/jira/browse/HADOOP-13921) | *Critical* | **Remove
Log4j classes from JobConf**
+
+Changes the type of JobConf.DEFAULT\_LOG\_LEVEL from a Log4J Level to a String. Clients that
referenced this field will need to be recompiled and may need to alter their source to account
for the type change. The level itself remains conceptually at "INFO".
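+
+A sketch of the source-level impact (the property name mapreduce.map.log.level is used only for illustration):
+
+```java
+import org.apache.hadoop.mapred.JobConf;
+
+public class DefaultLogLevelExample {
+  public static void main(String[] args) {
+    JobConf conf = new JobConf();
+    // Previously an org.apache.log4j.Level; now a plain String ("INFO").
+    String level = JobConf.DEFAULT_LOG_LEVEL;
+    conf.set("mapreduce.map.log.level", level);
+  }
+}
+```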
+
+
+---
+
+* [HADOOP-8143](https://issues.apache.org/jira/browse/HADOOP-8143) | *Minor* | **Change distcp
to have -pb on by default**
+
+If the -p option of the distcp command is unspecified, the block size is preserved.
+
+
+---
+
+* [HADOOP-14502](https://issues.apache.org/jira/browse/HADOOP-14502) | *Minor* | **Confusion/name
conflict between NameNodeActivity#BlockReportNumOps and RpcDetailedActivity#BlockReportNumOps**
+
+**WARNING: No release note provided for this change.**
+
+
+---
+
+* [HDFS-11067](https://issues.apache.org/jira/browse/HDFS-11067) | *Major* | **DFS#listStatusIterator(..)
should throw FileNotFoundException if the directory deleted before fetching next batch of
entries**
+
+DistributedFileSystem#listStatusIterator(..) now throws FileNotFoundException if the directory is deleted while iterating over a large listing beyond the ls limit.
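+
+A sketch of a caller handling the new exception (the directory path is a placeholder):
+
+```java
+import java.io.FileNotFoundException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RemoteIterator;
+
+public class ListStatusIteratorExample {
+  public static void main(String[] args) throws Exception {
+    FileSystem fs = FileSystem.get(new Configuration());
+    RemoteIterator<FileStatus> it = fs.listStatusIterator(new Path("/big/dir"));
+    try {
+      while (it.hasNext()) {
+        System.out.println(it.next().getPath());
+      }
+    } catch (FileNotFoundException e) {
+      // The directory was deleted while a later batch of entries was being fetched.
+    }
+  }
+}
+```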
+
+
+---
+
+* [HDFS-11956](https://issues.apache.org/jira/browse/HDFS-11956) | *Blocker* | **Do not require
a storage ID or target storage IDs when writing a block**
+
+Hadoop 2.x clients do not pass the storage ID or target storage IDs when writing a block.
For backwards compatibility, the DataNode will not require the presence of these fields. This
means older clients are unable to write to a particular storage as chosen by the NameNode
(e.g. HDFS-9806).
+
+
+---
+
+* [HADOOP-14536](https://issues.apache.org/jira/browse/HADOOP-14536) | *Major* | **Update
azure-storage sdk to version 5.3.0**
+
+The WASB FileSystem now uses version 5.3.0 of the Azure Storage SDK.
+
+
+---
+
+* [HADOOP-14546](https://issues.apache.org/jira/browse/HADOOP-14546) | *Major* | **Azure:
Concurrent I/O does not work when secure.mode is enabled**
+
+Fixed the wasb:// (Azure) file system so that the concurrent I/O feature can be used together with the secure mode feature.
+
+
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f10864a8/hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff/Apache_Hadoop_HDFS_3.0.0-alpha4.xml
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff/Apache_Hadoop_HDFS_3.0.0-alpha4.xml
b/hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff/Apache_Hadoop_HDFS_3.0.0-alpha4.xml
new file mode 100644
index 0000000..286b5fe
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff/Apache_Hadoop_HDFS_3.0.0-alpha4.xml
@@ -0,0 +1,322 @@
+<?xml version="1.0" encoding="iso-8859-1" standalone="no"?>
+<!-- Generated by the JDiff Javadoc doclet -->
+<!-- (http://www.jdiff.org) -->
+<!-- on Fri Jun 30 01:55:19 UTC 2017 -->
+
+<api
+  xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
+  xsi:noNamespaceSchemaLocation='api.xsd'
+  name="Apache Hadoop HDFS 3.0.0-alpha4"
+  jdversion="1.0.9">
+
+<!--  Command line arguments =  -doclet org.apache.hadoop.classification.tools.IncludePublicAnnotationsJDiffDoclet
-docletpath /build/source/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-annotations.jar:/build/source/hadoop-hdfs-project/hadoop-hdfs/target/jdiff.jar
-verbose -classpath /build/source/hadoop-hdfs-project/hadoop-hdfs/target/classes:/build/source/hadoop-common-project/hadoop-annotations/target/hadoop-annotations-3.0.0-alpha4.jar:/usr/lib/jvm/java-8-oracle/lib/tools.jar:/build/source/hadoop-common-project/hadoop-auth/target/hadoop-auth-3.0.0-alpha4.jar:/maven/org/slf4j/slf4j-api/1.7.25/slf4j-api-1.7.25.jar:/maven/org/apache/httpcomponents/httpclient/4.5.2/httpclient-4.5.2.jar:/maven/org/apache/httpcomponents/httpcore/4.4.4/httpcore-4.4.4.jar:/maven/com/nimbusds/nimbus-jose-jwt/3.9/nimbus-jose-jwt-3.9.jar:/maven/net/jcip/jcip-annotations/1.0/jcip-annotations-1.0.jar:/maven/net/minidev/json-smart/1.1.1/json-smart-1.1.1.jar:/maven/org/apache/zookeeper/zookeeper/3.4.9/zookeep
 er-3.4.9.jar:/maven/jline/jline/0.9.94/jline-0.9.94.jar:/maven/org/apache/curator/curator-framework/2.12.0/curator-framework-2.12.0.jar:/maven/org/apache/kerby/kerb-simplekdc/1.0.0/kerb-simplekdc-1.0.0.jar:/maven/org/apache/kerby/kerb-client/1.0.0/kerb-client-1.0.0.jar:/maven/org/apache/kerby/kerby-config/1.0.0/kerby-config-1.0.0.jar:/maven/org/apache/kerby/kerb-core/1.0.0/kerb-core-1.0.0.jar:/maven/org/apache/kerby/kerby-pkix/1.0.0/kerby-pkix-1.0.0.jar:/maven/org/apache/kerby/kerby-asn1/1.0.0/kerby-asn1-1.0.0.jar:/maven/org/apache/kerby/kerby-util/1.0.0/kerby-util-1.0.0.jar:/maven/org/apache/kerby/kerb-common/1.0.0/kerb-common-1.0.0.jar:/maven/org/apache/kerby/kerb-crypto/1.0.0/kerb-crypto-1.0.0.jar:/maven/org/apache/kerby/kerb-util/1.0.0/kerb-util-1.0.0.jar:/maven/org/apache/kerby/kerb-admin/1.0.0/kerb-admin-1.0.0.jar:/maven/org/apache/kerby/kerb-server/1.0.0/kerb-server-1.0.0.jar:/maven/org/apache/kerby/kerb-identity/1.0.0/kerb-identity-1.0.0.jar:/maven/org/apache/kerby/kerby-xdr
 /1.0.0/kerby-xdr-1.0.0.jar:/build/source/hadoop-common-project/hadoop-common/target/hadoop-common-3.0.0-alpha4.jar:/maven/org/apache/commons/commons-math3/3.1.1/commons-math3-3.1.1.jar:/maven/commons-net/commons-net/3.1/commons-net-3.1.jar:/maven/commons-collections/commons-collections/3.2.2/commons-collections-3.2.2.jar:/maven/org/eclipse/jetty/jetty-servlet/9.3.11.v20160721/jetty-servlet-9.3.11.v20160721.jar:/maven/org/eclipse/jetty/jetty-security/9.3.11.v20160721/jetty-security-9.3.11.v20160721.jar:/maven/org/eclipse/jetty/jetty-webapp/9.3.11.v20160721/jetty-webapp-9.3.11.v20160721.jar:/maven/org/eclipse/jetty/jetty-xml/9.3.11.v20160721/jetty-xml-9.3.11.v20160721.jar:/maven/javax/servlet/jsp/jsp-api/2.1/jsp-api-2.1.jar:/maven/com/sun/jersey/jersey-servlet/1.19/jersey-servlet-1.19.jar:/maven/com/sun/jersey/jersey-json/1.19/jersey-json-1.19.jar:/maven/org/codehaus/jettison/jettison/1.1/jettison-1.1.jar:/maven/com/sun/xml/bind/jaxb-impl/2.2.3-1/jaxb-impl-2.2.3-1.jar:/maven/javax/xml
 /bind/jaxb-api/2.2.11/jaxb-api-2.2.11.jar:/maven/org/codehaus/jackson/jackson-core-asl/1.9.13/jackson-core-asl-1.9.13.jar:/maven/org/codehaus/jackson/jackson-mapper-asl/1.9.13/jackson-mapper-asl-1.9.13.jar:/maven/org/codehaus/jackson/jackson-jaxrs/1.9.13/jackson-jaxrs-1.9.13.jar:/maven/org/codehaus/jackson/jackson-xc/1.9.13/jackson-xc-1.9.13.jar:/maven/commons-beanutils/commons-beanutils/1.9.3/commons-beanutils-1.9.3.jar:/maven/org/apache/commons/commons-configuration2/2.1/commons-configuration2-2.1.jar:/maven/org/apache/commons/commons-lang3/3.3.2/commons-lang3-3.3.2.jar:/maven/org/apache/avro/avro/1.7.4/avro-1.7.4.jar:/maven/com/thoughtworks/paranamer/paranamer/2.3/paranamer-2.3.jar:/maven/org/xerial/snappy/snappy-java/1.0.4.1/snappy-java-1.0.4.1.jar:/maven/com/google/re2j/re2j/1.0/re2j-1.0.jar:/maven/com/google/code/gson/gson/2.2.4/gson-2.2.4.jar:/maven/com/jcraft/jsch/0.1.54/jsch-0.1.54.jar:/maven/org/apache/curator/curator-client/2.12.0/curator-client-2.12.0.jar:/maven/org/apac
 he/curator/curator-recipes/2.12.0/curator-recipes-2.12.0.jar:/maven/com/google/code/findbugs/jsr305/3.0.0/jsr305-3.0.0.jar:/maven/org/apache/commons/commons-compress/1.4.1/commons-compress-1.4.1.jar:/maven/org/tukaani/xz/1.0/xz-1.0.jar:/maven/org/codehaus/woodstox/stax2-api/3.1.4/stax2-api-3.1.4.jar:/maven/com/fasterxml/woodstox/woodstox-core/5.0.3/woodstox-core-5.0.3.jar:/build/source/hadoop-hdfs-project/hadoop-hdfs-client/target/hadoop-hdfs-client-3.0.0-alpha4.jar:/maven/com/squareup/okhttp/okhttp/2.4.0/okhttp-2.4.0.jar:/maven/com/squareup/okio/okio/1.4.0/okio-1.4.0.jar:/maven/com/fasterxml/jackson/core/jackson-annotations/2.7.8/jackson-annotations-2.7.8.jar:/maven/com/google/guava/guava/11.0.2/guava-11.0.2.jar:/maven/org/eclipse/jetty/jetty-server/9.3.11.v20160721/jetty-server-9.3.11.v20160721.jar:/maven/org/eclipse/jetty/jetty-http/9.3.11.v20160721/jetty-http-9.3.11.v20160721.jar:/maven/org/eclipse/jetty/jetty-io/9.3.11.v20160721/jetty-io-9.3.11.v20160721.jar:/maven/org/eclipse/
 jetty/jetty-util/9.3.11.v20160721/jetty-util-9.3.11.v20160721.jar:/maven/org/eclipse/jetty/jetty-util-ajax/9.3.11.v20160721/jetty-util-ajax-9.3.11.v20160721.jar:/maven/com/sun/jersey/jersey-core/1.19/jersey-core-1.19.jar:/maven/javax/ws/rs/jsr311-api/1.1.1/jsr311-api-1.1.1.jar:/maven/com/sun/jersey/jersey-server/1.19/jersey-server-1.19.jar:/maven/commons-cli/commons-cli/1.2/commons-cli-1.2.jar:/maven/commons-codec/commons-codec/1.4/commons-codec-1.4.jar:/maven/commons-io/commons-io/2.4/commons-io-2.4.jar:/maven/commons-lang/commons-lang/2.6/commons-lang-2.6.jar:/maven/commons-logging/commons-logging/1.1.3/commons-logging-1.1.3.jar:/maven/commons-daemon/commons-daemon/1.0.13/commons-daemon-1.0.13.jar:/maven/log4j/log4j/1.2.17/log4j-1.2.17.jar:/maven/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar:/maven/javax/servlet/javax.servlet-api/3.1.0/javax.servlet-api-3.1.0.jar:/maven/org/slf4j/slf4j-log4j12/1.7.25/slf4j-log4j12-1.7.25.jar:/maven/io/netty/netty/3.10.5.Final/net
 ty-3.10.5.Final.jar:/maven/io/netty/netty-all/4.0.23.Final/netty-all-4.0.23.Final.jar:/maven/xerces/xercesImpl/2.9.1/xercesImpl-2.9.1.jar:/maven/xml-apis/xml-apis/1.3.04/xml-apis-1.3.04.jar:/maven/org/apache/htrace/htrace-core4/4.1.0-incubating/htrace-core4-4.1.0-incubating.jar:/maven/org/fusesource/leveldbjni/leveldbjni-all/1.8/leveldbjni-all-1.8.jar:/maven/com/fasterxml/jackson/core/jackson-databind/2.7.8/jackson-databind-2.7.8.jar:/maven/com/fasterxml/jackson/core/jackson-core/2.7.8/jackson-core-2.7.8.jar
-sourcepath /build/source/hadoop-hdfs-project/hadoop-hdfs/src/main/java -doclet org.apache.hadoop.classification.tools.IncludePublicAnnotationsJDiffDoclet
-docletpath /build/source/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-annotations.jar:/build/source/hadoop-hdfs-project/hadoop-hdfs/target/jdiff.jar
-apidir /build/source/hadoop-hdfs-project/hadoop-hdfs/target/site/jdiff/xml -apiname Apache
Hadoop HDFS 3.0.0-alpha4 -->
+<package name="org.apache.hadoop.hdfs">
+  <doc>
+  <![CDATA[<p>A distributed implementation of {@link
+org.apache.hadoop.fs.FileSystem}.  This is loosely modelled after
+Google's <a href="http://research.google.com/archive/gfs.html">GFS</a>.</p>
+
+<p>The most important difference is that unlike GFS, Hadoop DFS files 
+have strictly one writer at any one time.  Bytes are always appended 
+to the end of the writer's stream.  There is no notion of "record appends"
+or "mutations" that are then checked or reordered.  Writers simply emit 
+a byte stream.  That byte stream is guaranteed to be stored in the 
+order written.</p>]]>
+  </doc>
+</package>
+<package name="org.apache.hadoop.hdfs.net">
+</package>
+<package name="org.apache.hadoop.hdfs.protocol">
+</package>
+<package name="org.apache.hadoop.hdfs.protocol.datatransfer">
+</package>
+<package name="org.apache.hadoop.hdfs.protocol.datatransfer.sasl">
+</package>
+<package name="org.apache.hadoop.hdfs.protocolPB">
+</package>
+<package name="org.apache.hadoop.hdfs.qjournal.client">
+</package>
+<package name="org.apache.hadoop.hdfs.qjournal.protocol">
+</package>
+<package name="org.apache.hadoop.hdfs.qjournal.protocolPB">
+</package>
+<package name="org.apache.hadoop.hdfs.qjournal.server">
+  <!-- start interface org.apache.hadoop.hdfs.qjournal.server.JournalNodeMXBean -->
+  <interface name="JournalNodeMXBean"    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <method name="getJournalsStatus" return="java.lang.String"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get status information (e.g., whether formatted) of JournalNode's journals.
+ 
+ @return A string presenting status for each journal]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[This is the JMX management interface for JournalNode information]]>
+    </doc>
+  </interface>
+  <!-- end interface org.apache.hadoop.hdfs.qjournal.server.JournalNodeMXBean -->
+</package>
+<package name="org.apache.hadoop.hdfs.security.token.block">
+</package>
+<package name="org.apache.hadoop.hdfs.security.token.delegation">
+</package>
+<package name="org.apache.hadoop.hdfs.server.balancer">
+</package>
+<package name="org.apache.hadoop.hdfs.server.blockmanagement">
+</package>
+<package name="org.apache.hadoop.hdfs.server.common">
+</package>
+<package name="org.apache.hadoop.hdfs.server.datanode">
+</package>
+<package name="org.apache.hadoop.hdfs.server.datanode.fsdataset">
+</package>
+<package name="org.apache.hadoop.hdfs.server.datanode.fsdataset.impl">
+</package>
+<package name="org.apache.hadoop.hdfs.server.datanode.metrics">
+</package>
+<package name="org.apache.hadoop.hdfs.server.datanode.web">
+</package>
+<package name="org.apache.hadoop.hdfs.server.datanode.web.webhdfs">
+</package>
+<package name="org.apache.hadoop.hdfs.server.diskbalancer">
+</package>
+<package name="org.apache.hadoop.hdfs.server.diskbalancer.command">
+</package>
+<package name="org.apache.hadoop.hdfs.server.diskbalancer.connectors">
+</package>
+<package name="org.apache.hadoop.hdfs.server.diskbalancer.datamodel">
+</package>
+<package name="org.apache.hadoop.hdfs.server.diskbalancer.planner">
+</package>
+<package name="org.apache.hadoop.hdfs.server.mover">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode">
+  <!-- start interface org.apache.hadoop.hdfs.server.namenode.AuditLogger -->
+  <interface name="AuditLogger"    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <method name="initialize"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+      <doc>
+      <![CDATA[Called during initialization of the logger.
+
+ @param conf The configuration object.]]>
+      </doc>
+    </method>
+    <method name="logAuditEvent"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="succeeded" type="boolean"/>
+      <param name="userName" type="java.lang.String"/>
+      <param name="addr" type="java.net.InetAddress"/>
+      <param name="cmd" type="java.lang.String"/>
+      <param name="src" type="java.lang.String"/>
+      <param name="dst" type="java.lang.String"/>
+      <param name="stat" type="org.apache.hadoop.fs.FileStatus"/>
+      <doc>
+      <![CDATA[Called to log an audit event.
+ <p>
+ This method must return as quickly as possible, since it's called
+ in a critical section of the NameNode's operation.
+
+ @param succeeded Whether authorization succeeded.
+ @param userName Name of the user executing the request.
+ @param addr Remote address of the request.
+ @param cmd The requested command.
+ @param src Path of affected source file.
+ @param dst Path of affected destination file (if any).
+ @param stat File information for operations that change the file's
+             metadata (permissions, owner, times, etc).]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[Interface defining an audit logger.]]>
+    </doc>
+  </interface>
+  <!-- end interface org.apache.hadoop.hdfs.server.namenode.AuditLogger -->
+  <!-- start class org.apache.hadoop.hdfs.server.namenode.HdfsAuditLogger -->
+  <class name="HdfsAuditLogger" extends="java.lang.Object"
+    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.hdfs.server.namenode.AuditLogger"/>
+    <constructor name="HdfsAuditLogger"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="logAuditEvent"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="succeeded" type="boolean"/>
+      <param name="userName" type="java.lang.String"/>
+      <param name="addr" type="java.net.InetAddress"/>
+      <param name="cmd" type="java.lang.String"/>
+      <param name="src" type="java.lang.String"/>
+      <param name="dst" type="java.lang.String"/>
+      <param name="status" type="org.apache.hadoop.fs.FileStatus"/>
+    </method>
+    <method name="logAuditEvent"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="succeeded" type="boolean"/>
+      <param name="userName" type="java.lang.String"/>
+      <param name="addr" type="java.net.InetAddress"/>
+      <param name="cmd" type="java.lang.String"/>
+      <param name="src" type="java.lang.String"/>
+      <param name="dst" type="java.lang.String"/>
+      <param name="stat" type="org.apache.hadoop.fs.FileStatus"/>
+      <param name="callerContext" type="org.apache.hadoop.ipc.CallerContext"/>
+      <param name="ugi" type="org.apache.hadoop.security.UserGroupInformation"/>
+      <param name="dtSecretManager" type="org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSecretManager"/>
+      <doc>
+      <![CDATA[Same as
+ {@link #logAuditEvent(boolean, String, InetAddress, String, String, String,
+ FileStatus)} with additional parameters related to logging delegation token
+ tracking IDs.
+ 
+ @param succeeded Whether authorization succeeded.
+ @param userName Name of the user executing the request.
+ @param addr Remote address of the request.
+ @param cmd The requested command.
+ @param src Path of affected source file.
+ @param dst Path of affected destination file (if any).
+ @param stat File information for operations that change the file's metadata
+          (permissions, owner, times, etc).
+ @param callerContext Context information of the caller
+ @param ugi UserGroupInformation of the current user, or null if not logging
+          token tracking information
+ @param dtSecretManager The token secret manager, or null if not logging
+          token tracking information]]>
+      </doc>
+    </method>
+    <method name="logAuditEvent"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="succeeded" type="boolean"/>
+      <param name="userName" type="java.lang.String"/>
+      <param name="addr" type="java.net.InetAddress"/>
+      <param name="cmd" type="java.lang.String"/>
+      <param name="src" type="java.lang.String"/>
+      <param name="dst" type="java.lang.String"/>
+      <param name="stat" type="org.apache.hadoop.fs.FileStatus"/>
+      <param name="ugi" type="org.apache.hadoop.security.UserGroupInformation"/>
+      <param name="dtSecretManager" type="org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSecretManager"/>
+      <doc>
+      <![CDATA[Same as
+ {@link #logAuditEvent(boolean, String, InetAddress, String, String,
+ String, FileStatus, CallerContext, UserGroupInformation,
+ DelegationTokenSecretManager)} without {@link CallerContext} information.]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[Extension of {@link AuditLogger}.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.hdfs.server.namenode.HdfsAuditLogger -->
+  <!-- start class org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider -->
+  <class name="INodeAttributeProvider" extends="java.lang.Object"
+    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="INodeAttributeProvider"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="start"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Initialize the provider. This method is called at NameNode startup
+ time.]]>
+      </doc>
+    </method>
+    <method name="stop"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Shutdown the provider. This method is called at NameNode shutdown time.]]>
+      </doc>
+    </method>
+    <method name="getAttributes" return="org.apache.hadoop.hdfs.server.namenode.INodeAttributes"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="fullPath" type="java.lang.String"/>
+      <param name="inode" type="org.apache.hadoop.hdfs.server.namenode.INodeAttributes"/>
+    </method>
+    <method name="getAttributes" return="org.apache.hadoop.hdfs.server.namenode.INodeAttributes"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="pathElements" type="java.lang.String[]"/>
+      <param name="inode" type="org.apache.hadoop.hdfs.server.namenode.INodeAttributes"/>
+    </method>
+    <method name="getAttributes" return="org.apache.hadoop.hdfs.server.namenode.INodeAttributes"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="components" type="byte[][]"/>
+      <param name="inode" type="org.apache.hadoop.hdfs.server.namenode.INodeAttributes"/>
+    </method>
+    <method name="getExternalAccessControlEnforcer" return="org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider.AccessControlEnforcer"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="defaultEnforcer" type="org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider.AccessControlEnforcer"/>
+      <doc>
+      <![CDATA[Can be over-ridden by implementations to provide a custom Access Control
+ Enforcer that can provide an alternate implementation of the
+ default permission checking logic.
+ @param defaultEnforcer The Default AccessControlEnforcer
+ @return The AccessControlEnforcer to use]]>
+      </doc>
+    </method>
+  </class>
+  <!-- end class org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider -->
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.ha">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.metrics">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.snapshot">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.top">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.top.metrics">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.top.window">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.web.resources">
+</package>
+<package name="org.apache.hadoop.hdfs.server.protocol">
+</package>
+<package name="org.apache.hadoop.hdfs.tools">
+</package>
+<package name="org.apache.hadoop.hdfs.tools.offlineEditsViewer">
+</package>
+<package name="org.apache.hadoop.hdfs.tools.offlineImageViewer">
+</package>
+<package name="org.apache.hadoop.hdfs.tools.snapshot">
+</package>
+<package name="org.apache.hadoop.hdfs.util">
+</package>
+<package name="org.apache.hadoop.hdfs.web">
+</package>
+<package name="org.apache.hadoop.hdfs.web.resources">
+</package>
+
+</api>


---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org

