hadoop-common-commits mailing list archives

From whe...@apache.org
Subject [07/19] hadoop git commit: HDFS-9170. Move libhdfs / fuse-dfs / libwebhdfs to hdfs-client. Contributed by Haohui Mai.
Date Wed, 07 Oct 2015 07:16:09 GMT
http://git-wip-us.apache.org/repos/asf/hadoop/blob/960b19ed/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/doc/README
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/doc/README b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/doc/README
deleted file mode 100644
index 1744892..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/doc/README
+++ /dev/null
@@ -1,131 +0,0 @@
-#
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-Fuse-DFS
-
-Supports reads, writes, and directory operations (e.g., cp, ls, more, cat, find, less, rm, mkdir, mv, rmdir).  Things like touch, chmod, chown, and permissions are in the works. Fuse-dfs currently shows all files as owned by nobody.
-
-Contributing
-
-It's pretty straightforward to add functionality to fuse-dfs, since fuse makes things relatively simple. Some tasks also require augmenting libhdfs to expose more hdfs functionality to C. See [http://issues.apache.org/jira/secure/IssueNavigator.jspa?reset=true&mode=hide&pid=12310240&sorter/order=DESC&sorter/field=priority&resolution=-1&component=12312376 contrib/fuse-dfs JIRAs]
-
-Requirements
-
- * Hadoop with compiled libhdfs.so
- * Linux kernel > 2.6.9 with FUSE built in (the default), or FUSE 2.7.x/2.8.x installed separately. See: [http://fuse.sourceforge.net/]
- * modprobe fuse to load it
- * fuse-dfs executable (see below)
- * fuse_dfs_wrapper.sh installed in /bin or other appropriate location (see below)
-
-
-BUILDING
-
-   1. in HADOOP_PREFIX: `ant compile-libhdfs -Dlibhdfs=1`
-   2. in HADOOP_PREFIX: `ant package` to deploy libhdfs
-   3. in HADOOP_PREFIX: `ant compile-contrib -Dlibhdfs=1 -Dfusedfs=1`
-
-NOTE: for amd64 architecture, libhdfs will not compile unless you edit
-the Makefile in src/c++/libhdfs/Makefile and set OS_ARCH=amd64
-(probably the same for others too). See [https://issues.apache.org/jira/browse/HADOOP-3344 HADOOP-3344]
-
-Common build problems include not finding the libjvm.so in JAVA_HOME/jre/lib/OS_ARCH/server or not finding fuse in FUSE_HOME or /usr/local.
-
-
-CONFIGURING
-
-Look at all the paths in fuse_dfs_wrapper.sh and either correct them or set them in your environment before running. (Note: for automount and mounting as root, you probably cannot control the environment, so it is best to set them in the wrapper.)
-
-INSTALLING
-
-1. `mkdir /export/hdfs` (or wherever you want to mount it)
-
-2. `fuse_dfs_wrapper.sh dfs://hadoop_server1.foo.com:9000 /export/hdfs -d` and from another terminal, try `ls /export/hdfs`
-
-If step 2 works, try again without debug mode, i.e., drop -d
-
-(note - common problems are that you don't have libhdfs.so, libjvm.so, or libfuse.so on your LD_LIBRARY_PATH, or that your CLASSPATH does not contain hadoop and other required jars.)
-
-Also note, fuse-dfs will write error/warn messages to the syslog - typically in /var/log/messages
-
-You can use fuse-dfs to mount multiple hdfs instances by just changing the server/port name and directory mount point above.
-
-DEPLOYING
-
-In a root shell, do the following:
-
-1. add the following to /etc/fstab
-
-fuse_dfs#dfs://hadoop_server.foo.com:9000 /export/hdfs fuse -oallow_other,rw,-ousetrash,-oinitchecks 0 0
-
-
-2. Mount using: `mount /export/hdfs`. Expect problems finding fuse_dfs; you will probably need to add it to /sbin. Then expect problems finding the three libraries mentioned above; make them visible with ldconfig.
-
-
-Fuse DFS takes the following mount options (i.e., on the command line or in the comma-separated list of options in /etc/fstab):
-
--oserver=%s  (optional place to specify the server but in fstab use the format above)
--oport=%d (optional port see comment on server option)
--oentry_timeout=%d (how long directory entries are cached by fuse in seconds - see fuse docs)
--oattribute_timeout=%d (how long attributes are cached by fuse in seconds - see fuse docs)
--oprotected=%s (a colon separated list of directories that fuse-dfs should not allow to be deleted or moved - e.g., /user:/tmp)
--oprivate (not often used but means only the person who does the mount can use the filesystem - aka not setting allow_other, in fuse speak)
--ordbuffer=%d (in KBs how large a buffer should fuse-dfs use when doing hdfs reads)
-ro 
-rw
--ousetrash (should fuse dfs throw things in /Trash when deleting them)
--onotrash (opposite of usetrash)
--odebug (do not daemonize - aka -d in fuse speak)
--obig_writes (use fuse big_writes option so as to allow better performance of writes on kernels >= 2.6.26)
--oinitchecks (have fuse-dfs try to connect to hdfs on startup to ensure all is ok; recommended)
-The defaults are:
-
-entry,attribute_timeouts = 60 seconds
-rdbuffer = 10 MB
-protected = null
-debug = 0
-notrash
-private = 0
-
-EXPORTING
-
-Add the following to /etc/exports:
-
-/export/hdfs *.foo.com(no_root_squash,rw,fsid=1,sync)
-
-NOTE - you cannot export this with a FUSE module built into the kernel
-- e.g., kernel 2.6.17. For info on this, refer to the FUSE wiki.
-
-
-RECOMMENDATIONS
-
-1. From /bin, `ln -s $HADOOP_PREFIX/contrib/fuse-dfs/fuse_dfs* .`
-
-2. Always start with debug on so you can see if you are missing a classpath or something like that.
-
-3. use -obig_writes
-
-4. use -initchecks
-
-KNOWN ISSUES 
-
-1. If you alias `ls` to `ls --color=auto` and list a directory with thousands of files, expect it to be slow; with tens of thousands, very slow. This is because `--color=auto` causes ls to stat every file in the directory, and fuse-dfs does not cache attribute
-entries when doing a readdir, so every stat goes to hdfs. See [https://issues.apache.org/jira/browse/HADOOP-3797 HADOOP-3797]
-
-2. Writes are approximately 33% slower than the DFSClient. TBD how to optimize this. See: [https://issues.apache.org/jira/browse/HADOOP-3805 HADOOP-3805] - try using -obig_writes on a > 2.6.26 kernel; it should perform much better, since bigger writes imply less context switching.
-
-3. Reads are ~20-30% slower even with the read buffering. 
-
-4. fuse-dfs and underlying libhdfs have no support for permissions. See [https://issues.apache.org/jira/browse/HADOOP-3536 HADOOP-3536] 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/960b19ed/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_connect.c
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_connect.c b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_connect.c
deleted file mode 100644
index 8a2a00b..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_connect.c
+++ /dev/null
@@ -1,644 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#include "fuse_connect.h"
-#include "fuse_dfs.h"
-#include "fuse_users.h" 
-#include "libhdfs/hdfs.h"
-#include "util/tree.h"
-
-#include <inttypes.h>
-#include <limits.h>
-#include <poll.h>
-#include <search.h>
-#include <stdio.h>
-#include <stdlib.h>
-#include <string.h>
-#include <sys/time.h>
-#include <sys/types.h>
-#include <utime.h>
-
-#define FUSE_CONN_DEFAULT_TIMER_PERIOD      5
-#define FUSE_CONN_DEFAULT_EXPIRY_PERIOD     (5 * 60)
-#define HADOOP_SECURITY_AUTHENTICATION      "hadoop.security.authentication"
-#define HADOOP_FUSE_CONNECTION_TIMEOUT      "hadoop.fuse.connection.timeout"
-#define HADOOP_FUSE_TIMER_PERIOD            "hadoop.fuse.timer.period"
-
-/** Length of the buffer needed by asctime_r */
-#define TIME_STR_LEN 26
-
-struct hdfsConn;
-
-static int hdfsConnCompare(const struct hdfsConn *a, const struct hdfsConn *b);
-static void hdfsConnExpiry(void);
-static void* hdfsConnExpiryThread(void *v);
-
-RB_HEAD(hdfsConnTree, hdfsConn);
-
-enum authConf {
-    AUTH_CONF_UNKNOWN,
-    AUTH_CONF_KERBEROS,
-    AUTH_CONF_OTHER,
-};
-
-struct hdfsConn {
-  RB_ENTRY(hdfsConn) entry;
-  /** How many threads are currently using this hdfsConnection object */
-  int64_t refcnt;
-  /** The username used to make this connection.  Dynamically allocated. */
-  char *usrname;
-  /** Kerberos ticket cache path, or NULL if this is not a kerberized
-   * connection.  Dynamically allocated. */
-  char *kpath;
-  /** mtime of the kpath, if the kpath is non-NULL */
-  time_t kPathMtime;
-  /** nanosecond component of the mtime of the kpath, if the kpath is non-NULL */
-  long kPathMtimeNs;
-  /** The cached libhdfs fs instance */
-  hdfsFS fs;
-  /** Nonzero if this hdfs connection needs to be closed as soon as possible.
-   * If this is true, the connection has been removed from the tree. */
-  int condemned;
-  /** Number of times we should run the expiration timer on this connection
-   * before removing it. */
-  int expirationCount;
-};
-
-RB_GENERATE(hdfsConnTree, hdfsConn, entry, hdfsConnCompare);
-
-/** Current cached libhdfs connections */
-static struct hdfsConnTree gConnTree;
-
-/** The URI used to make our connections.  Dynamically allocated. */
-static char *gUri;
-
-/** The port used to make our connections, or 0. */
-static int gPort;
-
-/** Lock which protects gConnTree and gConnectTimer->active */
-static pthread_mutex_t gConnMutex;
-
-/** Type of authentication configured */
-static enum authConf gHdfsAuthConf;
-
-/** FUSE connection timer expiration period */
-static int32_t gTimerPeriod;
-
-/** FUSE connection expiry period */
-static int32_t gExpiryPeriod;
-
-/** FUSE timer expiration thread */
-static pthread_t gTimerThread;
-
-/** 
- * Find out what type of authentication the system administrator
- * has configured.
- *
- * @return     the type of authentication, or AUTH_CONF_UNKNOWN on error.
- */
-static enum authConf discoverAuthConf(void)
-{
-  int ret;
-  char *val = NULL;
-  enum authConf authConf;
-
-  ret = hdfsConfGetStr(HADOOP_SECURITY_AUTHENTICATION, &val);
-  if (ret)
-    authConf = AUTH_CONF_UNKNOWN;
-  else if (!val)
-    authConf = AUTH_CONF_OTHER;
-  else if (!strcmp(val, "kerberos"))
-    authConf = AUTH_CONF_KERBEROS;
-  else
-    authConf = AUTH_CONF_OTHER;
-  free(val);
-  return authConf;
-}
-
-int fuseConnectInit(const char *nnUri, int port)
-{
-  int ret;
-
-  gTimerPeriod = FUSE_CONN_DEFAULT_TIMER_PERIOD;
-  ret = hdfsConfGetInt(HADOOP_FUSE_TIMER_PERIOD, &gTimerPeriod);
-  if (ret) {
-    fprintf(stderr, "Unable to determine the configured value for %s.\n",
-          HADOOP_FUSE_TIMER_PERIOD);
-    return -EINVAL;
-  }
-  if (gTimerPeriod < 1) {
-    fprintf(stderr, "Invalid value %d given for %s.\n",
-          gTimerPeriod, HADOOP_FUSE_TIMER_PERIOD);
-    return -EINVAL;
-  }
-  gExpiryPeriod = FUSE_CONN_DEFAULT_EXPIRY_PERIOD;
-  ret = hdfsConfGetInt(HADOOP_FUSE_CONNECTION_TIMEOUT, &gExpiryPeriod);
-  if (ret) {
-    fprintf(stderr, "Unable to determine the configured value for %s.\n",
-          HADOOP_FUSE_CONNECTION_TIMEOUT);
-    return -EINVAL;
-  }
-  if (gExpiryPeriod < 1) {
-    fprintf(stderr, "Invalid value %d given for %s.\n",
-          gExpiryPeriod, HADOOP_FUSE_CONNECTION_TIMEOUT);
-    return -EINVAL;
-  }
-  gHdfsAuthConf = discoverAuthConf();
-  if (gHdfsAuthConf == AUTH_CONF_UNKNOWN) {
-    fprintf(stderr, "Unable to determine the configured value for %s.\n",
-          HADOOP_SECURITY_AUTHENTICATION);
-    return -EINVAL;
-  }
-  gPort = port;
-  gUri = strdup(nnUri);
-  if (!gUri) {
-    fprintf(stderr, "fuseConnectInit: OOM allocating nnUri\n");
-    return -ENOMEM;
-  }
-  ret = pthread_mutex_init(&gConnMutex, NULL);
-  if (ret) {
-    free(gUri);
-    fprintf(stderr, "fuseConnectInit: pthread_mutex_init failed with error %d\n",
-            ret);
-    return -ret;
-  }
-  RB_INIT(&gConnTree);
-  ret = pthread_create(&gTimerThread, NULL, hdfsConnExpiryThread, NULL);
-  if (ret) {
-    free(gUri);
-    pthread_mutex_destroy(&gConnMutex);
-    fprintf(stderr, "fuseConnectInit: pthread_create failed with error %d\n",
-            ret);
-    return -ret;
-  }
-  fprintf(stderr, "fuseConnectInit: initialized with timer period %d, "
-          "expiry period %d\n", gTimerPeriod, gExpiryPeriod);
-  return 0;
-}
-
-/**
- * Compare two libhdfs connections by username
- *
- * @param a                The first libhdfs connection
- * @param b                The second libhdfs connection
- *
- * @return                 negative, zero, or positive depending on whether a < b, a == b, or a > b
- */
-static int hdfsConnCompare(const struct hdfsConn *a, const struct hdfsConn *b)
-{
-  return strcmp(a->usrname, b->usrname);
-}
-
-/**
- * Find a libhdfs connection by username
- *
- * @param usrname         The username to look up
- *
- * @return                The connection, or NULL if none could be found
- */
-static struct hdfsConn* hdfsConnFind(const char *usrname)
-{
-  struct hdfsConn exemplar;
-
-  memset(&exemplar, 0, sizeof(exemplar));
-  exemplar.usrname = (char*)usrname;
-  return RB_FIND(hdfsConnTree, &gConnTree, &exemplar);
-}
-
-/**
- * Free the resource associated with a libhdfs connection.
- *
- * You must remove the connection from the tree before calling this function.
- *
- * @param conn            The libhdfs connection
- */
-static void hdfsConnFree(struct hdfsConn *conn)
-{
-  int ret;
-
-  ret = hdfsDisconnect(conn->fs);
-  if (ret) {
-    fprintf(stderr, "hdfsConnFree(username=%s): "
-      "hdfsDisconnect failed with error %d\n",
-      (conn->usrname ? conn->usrname : "(null)"), ret);
-  }
-  free(conn->usrname);
-  free(conn->kpath);
-  free(conn);
-}
-
-/**
- * Convert a time_t to a string.
- *
- * @param sec           time in seconds since the epoch
- * @param buf           (out param) output buffer
- * @param bufLen        length of output buffer
- *
- * @return              0 on success; -ENAMETOOLONG if the provided buffer
- *                      was too short
- */
-static int timeToStr(time_t sec, char *buf, size_t bufLen)
-{
-  struct tm tm, *out;
-  size_t l;
-
-  if (bufLen < TIME_STR_LEN) {
-    return -ENAMETOOLONG;
-  }
-  out = localtime_r(&sec, &tm);
-  asctime_r(out, buf);
-  // strip trailing newline
-  l = strlen(buf);
-  if (l != 0)
-    buf[l - 1] = '\0';
-  return 0;
-}
-
-/** 
- * Check an HDFS connection's Kerberos path.
- *
- * If the mtime of the Kerberos ticket cache file has changed since we first
- * opened the connection, the connection should be condemned and removed from
- * the hdfs connection tree.  The caller (hdfsConnExpiry) performs the removal.
- *
- * @param conn      The HDFS connection
- *
- * @return          0 if the cache file is unchanged; negative error code if
- *                  the connection should be condemned
- */
-static int hdfsConnCheckKpath(const struct hdfsConn *conn)
-{
-  int ret;
-  struct stat st;
-  char prevTimeBuf[TIME_STR_LEN], newTimeBuf[TIME_STR_LEN];
-
-  if (stat(conn->kpath, &st) < 0) {
-    ret = errno;
-    if (ret == ENOENT) {
-      fprintf(stderr, "hdfsConnCheckKpath(conn.usrname=%s): the kerberos "
-              "ticket cache file '%s' has disappeared.  Condemning the "
-              "connection.\n", conn->usrname, conn->kpath);
-    } else {
-      fprintf(stderr, "hdfsConnCheckKpath(conn.usrname=%s): stat(%s) "
-              "failed with error code %d.  Pessimistically condemning the "
-              "connection.\n", conn->usrname, conn->kpath, ret);
-    }
-    return -ret;
-  }
-  if ((st.st_mtim.tv_sec != conn->kPathMtime) ||
-      (st.st_mtim.tv_nsec != conn->kPathMtimeNs)) {
-    timeToStr(conn->kPathMtime, prevTimeBuf, sizeof(prevTimeBuf));
-    timeToStr(st.st_mtim.tv_sec, newTimeBuf, sizeof(newTimeBuf));
-    fprintf(stderr, "hdfsConnCheckKpath(conn.usrname=%s): mtime on '%s' "
-            "has changed from '%s' to '%s'.  Condemning the connection "
-            "because our cached Kerberos credentials have probably "
-            "changed.\n", conn->usrname, conn->kpath, prevTimeBuf, newTimeBuf);
-    return -EINTERNAL;
-  }
-  return 0;
-}
-
-/**
- * Cache expiration logic.
- *
- * This function is called periodically by the cache expiration thread.  For
- * each FUSE connection not currently in use (refcnt == 0) it will decrement the
- * expirationCount for that connection.  Once the expirationCount reaches 0 for
- * a connection, it can be garbage collected.
- *
- * We also check to see if the Kerberos credentials have changed.  If so, the
- * connection is immediately condemned, even if it is currently in use.
- */
-static void hdfsConnExpiry(void)
-{
-  struct hdfsConn *conn, *tmpConn;
-
-  pthread_mutex_lock(&gConnMutex);
-  RB_FOREACH_SAFE(conn, hdfsConnTree, &gConnTree, tmpConn) {
-    if (conn->kpath) {
-      if (hdfsConnCheckKpath(conn)) {
-        conn->condemned = 1;
-        RB_REMOVE(hdfsConnTree, &gConnTree, conn);
-        if (conn->refcnt == 0) {
-          /* If the connection is not in use by any threads, delete it
-           * immediately.  If it is still in use by some threads, the last
-           * thread using it will clean it up later inside hdfsConnRelease. */
-          hdfsConnFree(conn);
-          continue;
-        }
-      }
-    }
-    if (conn->refcnt == 0) {
-      /* If the connection is not currently in use by a thread, check to see if
-       * it ought to be removed because it's too old. */
-      conn->expirationCount--;
-      if (conn->expirationCount <= 0) {
-        if (conn->condemned) {
-          fprintf(stderr, "hdfsConnExpiry: LOGIC ERROR: condemned connection "
-                  "as %s is still in the tree!\n", conn->usrname);
-        }
-        fprintf(stderr, "hdfsConnExpiry: freeing and removing connection as "
-                "%s because it's now too old.\n", conn->usrname);
-        RB_REMOVE(hdfsConnTree, &gConnTree, conn);
-        hdfsConnFree(conn);
-      }
-    }
-  }
-  pthread_mutex_unlock(&gConnMutex);
-}
-
-// The Kerberos FILE: prefix.  This indicates that the kerberos ticket cache
-// specifier is a file.  (Note that we also assume that the specifier is a file
-// if no prefix is present.)
-#define KRB_FILE_PREFIX "FILE:"
-
-// Length of the Kerberos file prefix, which is equal to the string size in
-// bytes minus 1 (because we don't count the null terminator in the length.)
-#define KRB_FILE_PREFIX_LEN (sizeof(KRB_FILE_PREFIX) - 1)
-
-/**
- * Find the Kerberos ticket cache path.
- *
- * This function finds the Kerberos ticket cache path from the thread ID and
- * user ID of the process making the request.
- *
- * Normally, the ticket cache path is in a well-known location in /tmp.
- * However, it's possible that the calling process could set the KRB5CCNAME
- * environment variable, indicating that its Kerberos ticket cache is at a
- * non-default location.  We try to handle this possibility by reading the
- * process' environment here.  This will be allowed if we have root
- * capabilities, or if our UID is the same as the remote process' UID.
- *
- * Note that we don't check to see if the cache file actually exists or not.
- * We're just trying to find out where it would be if it did exist. 
- *
- * @param path          (out param) the path to the ticket cache file
- * @param pathLen       length of the path buffer
- */
-static void findKerbTicketCachePath(struct fuse_context *ctx,
-                                    char *path, size_t pathLen)
-{
-  FILE *fp = NULL;
-  /* The leading '\0' matches the NUL that terminates the previous
-   * environment entry (c is initialized to '\0' for the first entry),
-   * so we only match "KRB5CCNAME=" at the start of an entry. */
-  static const char * const KRB5CCNAME = "\0KRB5CCNAME=";
-  int c = '\0', pathIdx = 0, keyIdx = 0;
-  size_t KRB5CCNAME_LEN = strlen(KRB5CCNAME + 1) + 1;
-
-  // /proc/<tid>/environ contains the remote process' environment.  It is
-  // exposed to us as a series of KEY=VALUE pairs, separated by NULL bytes.
-  snprintf(path, pathLen, "/proc/%d/environ", ctx->pid);
-  fp = fopen(path, "r");
-  if (!fp)
-    goto done;
-  while (1) {
-    if (c == EOF)
-      goto done;
-    if (keyIdx == KRB5CCNAME_LEN) {
-      if (pathIdx >= pathLen - 1)
-        goto done;
-      if (c == '\0')
-        goto done;
-      path[pathIdx++] = c;
-    } else if (KRB5CCNAME[keyIdx++] != c) {
-      keyIdx = 0;
-    }
-    c = fgetc(fp);
-  }
-
-done:
-  if (fp)
-    fclose(fp);
-  if (pathIdx == 0) {
-    snprintf(path, pathLen, "/tmp/krb5cc_%d", ctx->uid);
-  } else {
-    path[pathIdx] = '\0';
-  }
-  if (strncmp(path, KRB_FILE_PREFIX, KRB_FILE_PREFIX_LEN) == 0) {
-    fprintf(stderr, "stripping " KRB_FILE_PREFIX " from the front of "
-            "KRB5CCNAME.\n");
-    memmove(path, path + KRB_FILE_PREFIX_LEN,
-            strlen(path + KRB_FILE_PREFIX_LEN) + 1);
-  }
-}
-
-/**
- * Create a new libhdfs connection.
- *
- * @param usrname       Username to use for the new connection
- * @param ctx           FUSE context to use for the new connection
- * @param out           (out param) the new libhdfs connection
- *
- * @return              0 on success; error code otherwise
- */
-static int fuseNewConnect(const char *usrname, struct fuse_context *ctx,
-        struct hdfsConn **out)
-{
-  struct hdfsBuilder *bld = NULL;
-  char kpath[PATH_MAX] = { 0 };
-  struct hdfsConn *conn = NULL;
-  int ret;
-  struct stat st;
-
-  conn = calloc(1, sizeof(struct hdfsConn));
-  if (!conn) {
-    fprintf(stderr, "fuseNewConnect: OOM allocating struct hdfsConn\n");
-    ret = -ENOMEM;
-    goto error;
-  }
-  bld = hdfsNewBuilder();
-  if (!bld) {
-    fprintf(stderr, "Unable to create hdfs builder\n");
-    ret = -ENOMEM;
-    goto error;
-  }
-  /* We always want to get a new FileSystem instance here-- that's why we call
-   * hdfsBuilderSetForceNewInstance.  Otherwise the 'cache condemnation' logic
-   * in hdfsConnExpiry will not work correctly, since FileSystem might re-use the
-   * existing cached connection which we wanted to get rid of.
-   */
-  hdfsBuilderSetForceNewInstance(bld);
-  hdfsBuilderSetNameNode(bld, gUri);
-  if (gPort) {
-    hdfsBuilderSetNameNodePort(bld, gPort);
-  }
-  hdfsBuilderSetUserName(bld, usrname);
-  if (gHdfsAuthConf == AUTH_CONF_KERBEROS) {
-    findKerbTicketCachePath(ctx, kpath, sizeof(kpath));
-    if (stat(kpath, &st) < 0) {
-      fprintf(stderr, "fuseNewConnect: failed to find Kerberos ticket cache "
-        "file '%s'.  Did you remember to kinit for UID %d?\n",
-        kpath, ctx->uid);
-      ret = -EACCES;
-      goto error;
-    }
-    conn->kPathMtime = st.st_mtim.tv_sec;
-    conn->kPathMtimeNs = st.st_mtim.tv_nsec;
-    hdfsBuilderSetKerbTicketCachePath(bld, kpath);
-    conn->kpath = strdup(kpath);
-    if (!conn->kpath) {
-      fprintf(stderr, "fuseNewConnect: OOM allocating kpath\n");
-      ret = -ENOMEM;
-      goto error;
-    }
-  }
-  conn->usrname = strdup(usrname);
-  if (!conn->usrname) {
-    fprintf(stderr, "fuseNewConnect: OOM allocating usrname\n");
-    ret = -ENOMEM;
-    goto error;
-  }
-  conn->fs = hdfsBuilderConnect(bld);
-  bld = NULL;
-  if (!conn->fs) {
-    ret = errno;
-    fprintf(stderr, "fuseNewConnect(usrname=%s): Unable to create fs: "
-            "error code %d\n", usrname, ret);
-    goto error;
-  }
-  RB_INSERT(hdfsConnTree, &gConnTree, conn);
-  *out = conn;
-  return 0;
-
-error:
-  if (bld) {
-    hdfsFreeBuilder(bld);
-  }
-  if (conn) {
-    free(conn->kpath);
-    free(conn->usrname);
-    free(conn);
-  }
-  return ret;
-}
-
-int fuseConnect(const char *usrname, struct fuse_context *ctx,
-                struct hdfsConn **out)
-{
-  int ret;
-  struct hdfsConn* conn;
-
-  pthread_mutex_lock(&gConnMutex);
-  conn = hdfsConnFind(usrname);
-  if (!conn) {
-    ret = fuseNewConnect(usrname, ctx, &conn);
-    if (ret) {
-      pthread_mutex_unlock(&gConnMutex);
-      fprintf(stderr, "fuseConnect(usrname=%s): fuseNewConnect failed with "
-              "error code %d\n", usrname, ret);
-      return ret;
-    }
-  }
-  conn->refcnt++;
-  conn->expirationCount = (gExpiryPeriod + gTimerPeriod - 1) / gTimerPeriod;
-  if (conn->expirationCount < 2)
-    conn->expirationCount = 2;
-  pthread_mutex_unlock(&gConnMutex);
-  *out = conn;
-  return 0;
-}
-
-int fuseConnectAsThreadUid(struct hdfsConn **conn)
-{
-  struct fuse_context *ctx;
-  char *usrname;
-  int ret;
-  
-  ctx = fuse_get_context();
-  usrname = getUsername(ctx->uid);
-  ret = fuseConnect(usrname, ctx, conn);
-  free(usrname);
-  return ret;
-}
-
-int fuseConnectTest(void)
-{
-  int ret;
-  struct hdfsConn *conn;
-
-  if (gHdfsAuthConf == AUTH_CONF_KERBEROS) {
-    // TODO: call some method which can tell us whether the FS exists.  In order
-    // to implement this, we have to add a method to FileSystem in order to do
-    // this without valid Kerberos authentication.  See HDFS-3674 for details.
-    return 0;
-  }
-  ret = fuseNewConnect("root", NULL, &conn);
-  if (ret) {
-    fprintf(stderr, "fuseConnectTest failed with error code %d\n", ret);
-    return ret;
-  }
-  hdfsConnRelease(conn);
-  return 0;
-}
-
-struct hdfs_internal* hdfsConnGetFs(struct hdfsConn *conn)
-{
-  return conn->fs;
-}
-
-void hdfsConnRelease(struct hdfsConn *conn)
-{
-  pthread_mutex_lock(&gConnMutex);
-  conn->refcnt--;
-  if ((conn->refcnt == 0) && (conn->condemned)) {
-    fprintf(stderr, "hdfsConnRelease(usrname=%s): freeing condemned FS!\n",
-      conn->usrname);
-    /* Notice that we're not removing the connection from gConnTree here.
-     * If the connection is condemned, it must have already been removed from
-     * the tree, so that no other threads start using it.
-     */
-    hdfsConnFree(conn);
-  }
-  pthread_mutex_unlock(&gConnMutex);
-}
-
-/**
- * Get the monotonic time.
- *
- * Unlike the wall-clock time, monotonic time only ever goes forward.  If the
- * user adjusts the time, the monotonic time will not be affected.
- *
- * @return        The monotonic time
- */
-static time_t getMonotonicTime(void)
-{
-  int res;
-  struct timespec ts;
-       
-  res = clock_gettime(CLOCK_MONOTONIC, &ts);
-  if (res)
-    abort();
-  return ts.tv_sec;
-}
-
-/**
- * FUSE connection expiration thread
- *
- */
-static void* hdfsConnExpiryThread(void *v)
-{
-  time_t nextTime, curTime;
-  int waitTime;
-
-  nextTime = getMonotonicTime() + gTimerPeriod;
-  while (1) {
-    curTime = getMonotonicTime();
-    if (curTime >= nextTime) {
-      hdfsConnExpiry();
-      nextTime = curTime + gTimerPeriod;
-    }
-    waitTime = (nextTime - curTime) * 1000;
-    poll(NULL, 0, waitTime);
-  }
-  return NULL;
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/960b19ed/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_connect.h
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_connect.h b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_connect.h
deleted file mode 100644
index 35645c6..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_connect.h
+++ /dev/null
@@ -1,90 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef __FUSE_CONNECT_H__
-#define __FUSE_CONNECT_H__
-
-struct fuse_context;
-struct hdfsConn;
-struct hdfs_internal;
-
-/**
- * Initialize the fuse connection subsystem.
- *
- * This must be called before any of the other functions in this module.
- *
- * @param nnUri      The NameNode URI
- * @param port       The NameNode port
- *
- * @return           0 on success; error code otherwise
- */
-int fuseConnectInit(const char *nnUri, int port);
-
-/**
- * Get a libhdfs connection.
- *
- * If there is an existing connection, it will be reused.  If not, a new one
- * will be created.
- *
- * You must call hdfsConnRelease on the connection you get back!
- *
- * @param usrname    The username to use
- * @param ctx        The FUSE context to use (contains UID, PID of requestor)
- * @param conn       (out param) The HDFS connection
- *
- * @return           0 on success; error code otherwise
- */
-int fuseConnect(const char *usrname, struct fuse_context *ctx,
-                struct hdfsConn **out);
-
-/**
- * Get a libhdfs connection.
- *
- * The same as fuseConnect, except the username will be determined from the FUSE
- * thread context.
- *
- * @param conn       (out param) The HDFS connection
- *
- * @return           0 on success; error code otherwise
- */
-int fuseConnectAsThreadUid(struct hdfsConn **conn);
-
-/**
- * Test whether we can connect to the HDFS cluster
- *
- * @return           0 on success; error code otherwise
- */
-int fuseConnectTest(void);
-
-/**
- * Get the hdfsFS associated with an hdfsConn.
- *
- * @param conn       The hdfsConn
- *
- * @return           the hdfsFS
- */
-struct hdfs_internal* hdfsConnGetFs(struct hdfsConn *conn);
-
-/**
- * Release an hdfsConn when we're done with it.
- *
- * @param conn       The hdfsConn
- */
-void hdfsConnRelease(struct hdfsConn *conn);
-
-#endif

http://git-wip-us.apache.org/repos/asf/hadoop/blob/960b19ed/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_context_handle.h
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_context_handle.h b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_context_handle.h
deleted file mode 100644
index 6929062..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_context_handle.h
+++ /dev/null
@@ -1,40 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef __FUSE_CONTEXT_HANDLE_H__
-#define __FUSE_CONTEXT_HANDLE_H__
-
-#include <hdfs.h>
-#include <stddef.h>
-#include <sys/types.h>
-
-//
-// Structure to store fuse_dfs-specific data.
-// It is created and passed to FUSE at startup,
-// and FUSE passes it back to us via fuse_get_context()
-// on every operation.
-//
-typedef struct dfs_context_struct {
-  int debug;
-  int usetrash;
-  int direct_io;
-  char **protectedpaths;
-  size_t rdbuffer_size;
-} dfs_context;
-
-#endif

http://git-wip-us.apache.org/repos/asf/hadoop/blob/960b19ed/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_dfs.c
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_dfs.c b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_dfs.c
deleted file mode 100644
index f693032..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_dfs.c
+++ /dev/null
@@ -1,136 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#include "fuse_dfs.h"
-#include "fuse_options.h"
-#include "fuse_impls.h"
-#include "fuse_init.h"
-#include "fuse_connect.h"
-
-#include <string.h>
-#include <stdlib.h>
-#include <unistd.h>
-
-int is_protected(const char *path) {
-
-  dfs_context *dfs = (dfs_context*)fuse_get_context()->private_data;
-  assert(dfs != NULL);
-  assert(dfs->protectedpaths);
-
-  int i;
-  for (i = 0; dfs->protectedpaths[i]; i++) {
-    if (strcmp(path, dfs->protectedpaths[i]) == 0) {
-      return 1;
-    }
-  }
-  return 0;
-}
-
-static struct fuse_operations dfs_oper = {
-  .getattr  = dfs_getattr,
-  .access   = dfs_access,
-  .readdir  = dfs_readdir,
-  .destroy  = dfs_destroy,
-  .init     = dfs_init,
-  .open     = dfs_open,
-  .read     = dfs_read,
-  .symlink  = dfs_symlink,
-  .statfs   = dfs_statfs,
-  .mkdir    = dfs_mkdir,
-  .rmdir    = dfs_rmdir,
-  .rename   = dfs_rename,
-  .unlink   = dfs_unlink,
-  .release  = dfs_release,
-  .create   = dfs_create,
-  .write    = dfs_write,
-  .flush    = dfs_flush,
-  .mknod    = dfs_mknod,
-  .utimens  = dfs_utimens,
-  .chmod    = dfs_chmod,
-  .chown    = dfs_chown,
-  .truncate = dfs_truncate,
-};
-
-int main(int argc, char *argv[])
-{
-  int ret;
-
-  umask(0);
-
-  extern const char *program;  
-  program = argv[0];
-  struct fuse_args args = FUSE_ARGS_INIT(argc, argv);
-
-  memset(&options, 0, sizeof(struct options));
-
-  options.rdbuffer_size = 10*1024*1024; 
-  options.attribute_timeout = 60; 
-  options.entry_timeout = 60;
-
-  if (-1 == fuse_opt_parse(&args, &options, dfs_opts, dfs_options)) {
-    return -1;
-  }
-
-  if (!options.private) {
-    fuse_opt_add_arg(&args, "-oallow_other");
-  }
-
-  if (!options.no_permissions) {
-    fuse_opt_add_arg(&args, "-odefault_permissions");
-  }
-  /*
-   * FUSE already has a built-in parameter for mounting the filesystem as
-   * read-only, -r.  We defined our own parameter for doing this called -oro.
-   * We support it by translating it into -r internally.
-   * The kernel intercepts and returns an error message for any "write"
-   * operations that the user attempts to perform on a read-only filesystem.
-   * That means that we don't have to write any code to handle read-only mode.
-   * See HDFS-4139 for more details.
-   */
-  if (options.read_only) {
-    fuse_opt_add_arg(&args, "-r");
-  }
-
-  {
-    char buf[80];
-
-    snprintf(buf, sizeof buf, "-oattr_timeout=%d",options.attribute_timeout);
-    fuse_opt_add_arg(&args, buf);
-
-    snprintf(buf, sizeof buf, "-oentry_timeout=%d",options.entry_timeout);
-    fuse_opt_add_arg(&args, buf);
-  }
-
-  if (options.nn_uri == NULL) {
-    print_usage(argv[0]);
-    exit(EXIT_SUCCESS);
-  }
-
-  /* Note: do not call any libhdfs functions until fuse_main has been invoked.
-   *
-   * fuse_main will daemonize this process, by calling fork().  This will cause
-   * any extant threads to be destroyed, which could cause problems if 
-   * libhdfs has started some Java threads.
-   *
-   * Most initialization code should go in dfs_init, which is invoked after the
-   * fork.  See HDFS-3808 for details.
-   */
-  ret = fuse_main(args.argc, args.argv, &dfs_oper, NULL);
-  fuse_opt_free_args(&args);
-  return ret;
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/960b19ed/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_dfs.h
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_dfs.h b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_dfs.h
deleted file mode 100644
index 4554dbd..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_dfs.h
+++ /dev/null
@@ -1,81 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef __FUSE_DFS_H__
-#define __FUSE_DFS_H__
-
-#define FUSE_USE_VERSION 26
-
-#include <stdio.h>
-#include <string.h>
-#include <errno.h>
-#include <assert.h>
-#include <strings.h>
-#include <syslog.h>
-
-#include <fuse.h>
-#include <fuse/fuse_opt.h>
-
-#include <sys/xattr.h>
-
-#include "config.h"
-
-//
-// Check whether a path is one of the protected paths supplied via the mount options.
-//
-int is_protected(const char *path);
-
-#undef INFO
-#define INFO(_fmt, ...) {                       \
-  fprintf(stdout, "INFO %s:%d " _fmt "\n",      \
-          __FILE__, __LINE__, ## __VA_ARGS__);  \
-  syslog(LOG_INFO, "INFO %s:%d " _fmt "\n",     \
-          __FILE__, __LINE__, ## __VA_ARGS__);  \
-}
-
-#undef DEBUG
-#define DEBUG(_fmt, ...) {                      \
-  fprintf(stdout, "DEBUG %s:%d " _fmt "\n",     \
-          __FILE__, __LINE__, ## __VA_ARGS__);  \
-  syslog(LOG_DEBUG, "DEBUG %s:%d " _fmt "\n",   \
-          __FILE__, __LINE__, ## __VA_ARGS__);  \
-}
-
-#undef ERROR
-#define ERROR(_fmt, ...) {                      \
-  fprintf(stderr, "ERROR %s:%d " _fmt "\n",     \
-          __FILE__, __LINE__, ## __VA_ARGS__);  \
-  syslog(LOG_ERR, "ERROR %s:%d " _fmt "\n",     \
-          __FILE__, __LINE__, ## __VA_ARGS__);  \
-}
-
-//#define DOTRACE
-#ifdef DOTRACE
-#define TRACE(x) {        \
-    DEBUG("TRACE %s", x); \
-}
-
-#define TRACE1(x,y) {             \
-    DEBUG("TRACE %s %s\n", x, y); \
-}
-#else
-#define TRACE(x) ; 
-#define TRACE1(x,y) ; 
-#endif
-
-#endif // __FUSE_DFS_H__

http://git-wip-us.apache.org/repos/asf/hadoop/blob/960b19ed/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_dfs_wrapper.sh
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_dfs_wrapper.sh b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_dfs_wrapper.sh
deleted file mode 100755
index 97239cc..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_dfs_wrapper.sh
+++ /dev/null
@@ -1,46 +0,0 @@
-#!/usr/bin/env bash
-#
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-export HADOOP_PREFIX=${HADOOP_PREFIX:-/usr/local/share/hadoop}
-
-if [ "$OS_ARCH" = "" ]; then
-export OS_ARCH=amd64
-fi
-
-if [ "$JAVA_HOME" = "" ]; then
-export  JAVA_HOME=/usr/local/java
-fi
-
-if [ "$LD_LIBRARY_PATH" = "" ]; then
-export LD_LIBRARY_PATH=$JAVA_HOME/jre/lib/$OS_ARCH/server:/usr/local/lib
-fi
-
-# If dev build set paths accordingly
-if [ -d $HADOOP_PREFIX/build ]; then
-  export HADOOP_PREFIX=$HADOOP_PREFIX
-  for f in ${HADOOP_PREFIX}/build/*.jar ; do
-    export CLASSPATH=$CLASSPATH:$f
-  done
-  for f in $HADOOP_PREFIX/build/ivy/lib/hadoop-hdfs/common/*.jar ; do
-    export CLASSPATH=$CLASSPATH:$f
-  done
-  export PATH=$HADOOP_PREFIX/build/contrib/fuse-dfs:$PATH
-  export LD_LIBRARY_PATH=$HADOOP_PREFIX/build/c++/lib:$JAVA_HOME/jre/lib/$OS_ARCH/server
-fi
-
-fuse_dfs "$@"

http://git-wip-us.apache.org/repos/asf/hadoop/blob/960b19ed/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_file_handle.h
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_file_handle.h b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_file_handle.h
deleted file mode 100644
index 7f9346c..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_file_handle.h
+++ /dev/null
@@ -1,46 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef __FUSE_FILE_HANDLE_H__
-#define __FUSE_FILE_HANDLE_H__
-
-#include <hdfs.h>
-#include <pthread.h>
-
-struct hdfsConn;
-
-/**
- * dfs_fh_struct is passed around for open files. FUSE provides a hook (the
- * context) for storing file-specific data.
- *
- * It carries two kinds of information:
- * a) a read buffer, kept for performance, since FUSE typically issues
- *    reads in 4K chunks only
- * b) the HDFS file handle and its connection
- *
- */
-typedef struct dfs_fh_struct {
-  hdfsFile hdfsFH;
-  struct hdfsConn *conn;
-  char *buf;
-  tSize bufferSize;         // number of valid bytes currently held in buf
-  off_t buffersStartOffset; // file offset at which buf begins
-  pthread_mutex_t mutex;
-} dfs_fh;
-
-#endif

http://git-wip-us.apache.org/repos/asf/hadoop/blob/960b19ed/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls.h
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls.h b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls.h
deleted file mode 100644
index d0d93e2..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls.h
+++ /dev/null
@@ -1,63 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-#ifndef __FUSE_IMPLS_H__
-#define __FUSE_IMPLS_H__
-
-#include <fuse.h>
-#include <syslog.h>
-
-#include "fuse_context_handle.h"
-
-/**
- * Implementations of the various FUSE hooks.
- * All of these are intended to be thread-safe.
- *
- */
-
-int dfs_mkdir(const char *path, mode_t mode);
-int dfs_rename(const char *from, const char *to);
-int dfs_getattr(const char *path, struct stat *st);
-int dfs_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
-                off_t offset, struct fuse_file_info *fi);
-int dfs_read(const char *path, char *buf, size_t size, off_t offset,
-                    struct fuse_file_info *fi);
-int dfs_statfs(const char *path, struct statvfs *st);
-int dfs_mkdir(const char *path, mode_t mode);
-int dfs_rename(const char *from, const char *to);
-int dfs_rmdir(const char *path);
-int dfs_unlink(const char *path);
-int dfs_utimens(const char *path, const struct timespec ts[2]);
-int dfs_chmod(const char *path, mode_t mode);
-int dfs_chown(const char *path, uid_t uid, gid_t gid);
-int dfs_open(const char *path, struct fuse_file_info *fi);
-int dfs_write(const char *path, const char *buf, size_t size,
-              off_t offset, struct fuse_file_info *fi);
-int dfs_release (const char *path, struct fuse_file_info *fi);
-int dfs_mknod(const char *path, mode_t mode, dev_t rdev) ;
-int dfs_create(const char *path, mode_t mode, struct fuse_file_info *fi);
-int dfs_flush(const char *path, struct fuse_file_info *fi);
-int dfs_access(const char *path, int mask);
-int dfs_truncate(const char *path, off_t size);
-int dfs_symlink(const char *from, const char *to);
-
-#endif
-
-
-

http://git-wip-us.apache.org/repos/asf/hadoop/blob/960b19ed/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_access.c
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_access.c b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_access.c
deleted file mode 100644
index 033a1c3..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_access.c
+++ /dev/null
@@ -1,29 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#include "fuse_dfs.h"
-#include "fuse_impls.h"
-#include "fuse_connect.h"
-
-int dfs_access(const char *path, int mask)
-{
-  TRACE1("access", path)
-  assert(path != NULL);
-  // TODO: HDFS-428
-  return 0;
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/960b19ed/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_chmod.c
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_chmod.c b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_chmod.c
deleted file mode 100644
index 8c25f53..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_chmod.c
+++ /dev/null
@@ -1,57 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#include "fuse_dfs.h"
-#include "fuse_impls.h"
-#include "fuse_users.h"
-#include "fuse_connect.h"
-
-int dfs_chmod(const char *path, mode_t mode)
-{
-  struct hdfsConn *conn = NULL;
-  hdfsFS fs;
-  TRACE1("chmod", path)
-  int ret = 0;
-  dfs_context *dfs = (dfs_context*)fuse_get_context()->private_data;
-
-  assert(path);
-  assert(dfs);
-  assert('/' == *path);
-
-  ret = fuseConnectAsThreadUid(&conn);
-  if (ret) {
-    fprintf(stderr, "fuseConnectAsThreadUid: failed to open a libhdfs "
-            "connection!  error %d.\n", ret);
-    ret = -EIO;
-    goto cleanup;
-  }
-  fs = hdfsConnGetFs(conn);
-
-  if (hdfsChmod(fs, path, (short)mode)) {
-    ERROR("Could not chmod %s to %d", path, (int)mode);
-    ret = (errno > 0) ? -errno : -EIO;
-    goto cleanup;
-  }
-
-cleanup:
-  if (conn) {
-    hdfsConnRelease(conn);
-  }
-
-  return ret;
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/960b19ed/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_chown.c
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_chown.c b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_chown.c
deleted file mode 100644
index 2a6b61c..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_chown.c
+++ /dev/null
@@ -1,87 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#include "fuse_dfs.h"
-#include "fuse_users.h"
-#include "fuse_impls.h"
-#include "fuse_connect.h"
-
-#include <stdlib.h>
-
-int dfs_chown(const char *path, uid_t uid, gid_t gid)
-{
-  struct hdfsConn *conn = NULL;
-  int ret = 0;
-  char *user = NULL;
-  char *group = NULL;
-
-  TRACE1("chown", path)
-
-  // retrieve dfs specific data
-  dfs_context *dfs = (dfs_context*)fuse_get_context()->private_data;
-
-  // check params and the context var
-  assert(path);
-  assert(dfs);
-  assert('/' == *path);
-
-  if ((uid == -1) && (gid == -1)) {
-    ret = 0;
-    goto cleanup;
-  }
-  if (uid != -1) {
-    user = getUsername(uid);
-    if (NULL == user) {
-      ERROR("Could not lookup the user id string %d",(int)uid);
-      ret = -EIO;
-      goto cleanup;
-    }
-  }
-  if (gid != -1) {
-    group = getGroup(gid);
-    if (group == NULL) {
-      ERROR("Could not lookup the group id string %d",(int)gid);
-      ret = -EIO;
-      goto cleanup;
-    }
-  }
-
-  ret = fuseConnect(user, fuse_get_context(), &conn);
-  if (ret) {
-    fprintf(stderr, "fuseConnect: failed to open a libhdfs connection!  "
-            "error %d.\n", ret);
-    ret = -EIO;
-    goto cleanup;
-  }
-
-  if (hdfsChown(hdfsConnGetFs(conn), path, user, group)) {
-    ret = errno;
-    ERROR("Could not chown %s to %d:%d: error %d", path, (int)uid, gid, ret);
-    ret = (ret > 0) ? -ret : -EIO;
-    goto cleanup;
-  }
-
-cleanup:
-  if (conn) {
-    hdfsConnRelease(conn);
-  }
-  free(user);
-  free(group);
-
-  return ret;
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/960b19ed/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_create.c
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_create.c b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_create.c
deleted file mode 100644
index 256e383..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_create.c
+++ /dev/null
@@ -1,27 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#include "fuse_dfs.h"
-#include "fuse_impls.h"
-
-int dfs_create(const char *path, mode_t mode, struct fuse_file_info *fi)
-{
-  TRACE1("create", path)
-  fi->flags |= mode;
-  return dfs_open(path, fi);
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/960b19ed/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_flush.c
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_flush.c b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_flush.c
deleted file mode 100644
index adb065b..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_flush.c
+++ /dev/null
@@ -1,54 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#include "fuse_connect.h"
-#include "fuse_dfs.h"
-#include "fuse_impls.h"
-#include "fuse_file_handle.h"
-
-int dfs_flush(const char *path, struct fuse_file_info *fi) {
-  TRACE1("flush", path)
-
-  // retrieve dfs specific data
-  dfs_context *dfs = (dfs_context*)fuse_get_context()->private_data;
-
-  // check params and the context var
-  assert(path);
-  assert(dfs);
-  assert('/' == *path);
-  assert(fi);
-
-  if (NULL == (void*)fi->fh) {
-    return 0;
-  }
-
-  // Note that FUSE calls flush on read-only files too; HDFS rejects that with an error.
-  if (fi->flags & O_WRONLY) {
-
-    dfs_fh *fh = (dfs_fh*)fi->fh;
-    assert(fh);
-    hdfsFile file_handle = (hdfsFile)fh->hdfsFH;
-    assert(file_handle);
-    if (hdfsFlush(hdfsConnGetFs(fh->conn), file_handle) != 0) {
-      ERROR("Could not flush %lx for %s\n",(long)file_handle, path);
-      return -EIO;
-    }
-  }
-
-  return 0;
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/960b19ed/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_getattr.c
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_getattr.c b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_getattr.c
deleted file mode 100644
index 2e43518..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_getattr.c
+++ /dev/null
@@ -1,75 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#include "fuse_dfs.h"
-#include "fuse_impls.h"
-#include "fuse_stat_struct.h"
-#include "fuse_connect.h"
-
-int dfs_getattr(const char *path, struct stat *st)
-{
-  struct hdfsConn *conn = NULL;
-  hdfsFS fs;
-  int ret;
-  hdfsFileInfo *info;
-
-  TRACE1("getattr", path)
-  dfs_context *dfs = (dfs_context*)fuse_get_context()->private_data;
-  assert(dfs);
-  assert(path);
-  assert(st);
-
-  ret = fuseConnectAsThreadUid(&conn);
-  if (ret) {
-    fprintf(stderr, "fuseConnectAsThreadUid: failed to open a libhdfs "
-            "connection!  error %d.\n", ret);
-    ret = -EIO;
-    goto cleanup;
-  }
-  fs = hdfsConnGetFs(conn);
-  
-  info = hdfsGetPathInfo(fs,path);
-  if (NULL == info) {
-    ret = -ENOENT;
-    goto cleanup;
-  }
-  fill_stat_structure(&info[0], st);
-
-  // Set up hard link info: 1 for a file; for a directory, the number of entries plus 2 (for . and ..).
-  if (info[0].mKind == kObjectKindDirectory) {
-    int numEntries = 0;
-    hdfsFileInfo *info = hdfsListDirectory(fs,path,&numEntries);
-
-    if (info) {
-      hdfsFreeFileInfo(info,numEntries);
-    }
-    st->st_nlink = numEntries + 2;
-  } else {
-    // not a directory
-    st->st_nlink = 1;
-  }
-
-  // free the info pointer
-  hdfsFreeFileInfo(info,1);
-
-cleanup:
-  if (conn) {
-    hdfsConnRelease(conn);
-  }
-  return ret;
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/960b19ed/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_mkdir.c
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_mkdir.c b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_mkdir.c
deleted file mode 100644
index b05551f..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_mkdir.c
+++ /dev/null
@@ -1,70 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#include "fuse_dfs.h"
-#include "fuse_impls.h"
-#include "fuse_trash.h"
-#include "fuse_connect.h"
-
-int dfs_mkdir(const char *path, mode_t mode)
-{
-  struct hdfsConn *conn = NULL;
-  hdfsFS fs;
-  dfs_context *dfs = (dfs_context*)fuse_get_context()->private_data;
-  int ret;
-
-  TRACE1("mkdir", path)
-
-  assert(path);
-  assert(dfs);
-  assert('/' == *path);
-
-  if (is_protected(path)) {
-    ERROR("HDFS trying to create directory %s", path);
-    return -EACCES;
-  }
-
-  ret = fuseConnectAsThreadUid(&conn);
-  if (ret) {
-    fprintf(stderr, "fuseConnectAsThreadUid: failed to open a libhdfs "
-            "connection!  error %d.\n", ret);
-    ret = -EIO;
-    goto cleanup;
-  }
-  fs = hdfsConnGetFs(conn);
-
-  // In theory the create and chmod should be atomic.
-
-  if (hdfsCreateDirectory(fs, path)) {
-    ERROR("HDFS could not create directory %s", path);
-    ret = (errno > 0) ? -errno : -EIO;
-    goto cleanup;
-  }
-
-  if (hdfsChmod(fs, path, (short)mode)) {
-    ERROR("Could not chmod %s to %d", path, (int)mode);
-    ret = (errno > 0) ? -errno : -EIO;
-  }
-  ret = 0;
-
-cleanup:
-  if (conn) {
-    hdfsConnRelease(conn);
-  }
-  return ret;
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/960b19ed/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_mknod.c
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_mknod.c b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_mknod.c
deleted file mode 100644
index c745cf1..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_mknod.c
+++ /dev/null
@@ -1,27 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#include "fuse_dfs.h"
-#include "fuse_impls.h"
-
-int dfs_mknod(const char *path, mode_t mode, dev_t rdev)
-{
-  TRACE1("mknod", path);
-  DEBUG("dfs_mknod");
-  return 0;
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/960b19ed/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_open.c
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_open.c b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_open.c
deleted file mode 100644
index ca670ce..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_open.c
+++ /dev/null
@@ -1,172 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#include "fuse_dfs.h"
-#include "fuse_impls.h"
-#include "fuse_connect.h"
-#include "fuse_file_handle.h"
-
-#include <stdio.h>
-#include <stdlib.h>
-
-/**
- * Given a set of FUSE flags, determine the libhdfs flags we need.
- *
- * This is complicated by two things:
- * 1. libhdfs doesn't support O_RDWR at all;
- * 2. when given O_WRONLY, libhdfs will truncate the file unless O_APPEND is
- * also given.  In other words, there is an implicit O_TRUNC.
- *
- * Probably the next iteration of the libhdfs interface should not use the POSIX
- * flags at all, since, as you can see, they don't really match up very closely
- * to the POSIX meaning.  However, for the time being, this is the API.
- *
- * @param fs               The libhdfs object
- * @param path             The path we're opening
- * @param flags            The FUSE flags
- *
- * @return                 negative error code on failure; flags otherwise.
- */
-static int64_t get_hdfs_open_flags(hdfsFS fs, const char *path, int flags)
-{
-  int64_t ret;
-  hdfsFileInfo *info;
-
-  if ((flags & O_ACCMODE) == O_RDONLY) {
-    return O_RDONLY;
-  }
-  if (flags & O_TRUNC) {
-    /* If we're opening for write or read/write, O_TRUNC means we should blow
-     * away the file which is there and create our own file.
-     * */
-    return O_WRONLY;
-  }
-  info = hdfsGetPathInfo(fs, path);
-  if (info) {
-    if (info->mSize == 0) {
-      // If the file has zero length, we shouldn't feel bad about blowing it
-      // away.
-      ret = O_WRONLY;
-    } else if ((flags & O_ACCMODE) == O_RDWR) {
-      // HACK: translate O_RDWR requests into O_RDONLY if the file already
-      // exists and has non-zero length.
-      ret = O_RDONLY;
-    } else { // O_WRONLY
-      // HACK: translate O_WRONLY requests into append if the file already
-      // exists.
-      ret = O_WRONLY | O_APPEND;
-    }
-  } else { // !info
-    if (flags & O_CREAT) {
-      ret = O_WRONLY;
-    } else {
-      ret = -ENOENT;
-    }
-  }
-  if (info) {
-    hdfsFreeFileInfo(info, 1);
-  }
-  return ret;
-}
-
-int dfs_open(const char *path, struct fuse_file_info *fi)
-{
-  hdfsFS fs = NULL;
-  dfs_context *dfs = (dfs_context*)fuse_get_context()->private_data;
-  dfs_fh *fh = NULL;
-  int mutexInit = 0, ret, flags = 0;
-  int64_t flagRet;
-
-  TRACE1("open", path)
-
-  // check params and the context var
-  assert(path);
-  assert('/' == *path);
-  assert(dfs);
-
-  // retrieve dfs specific data
-  fh = (dfs_fh*)calloc(1, sizeof (dfs_fh));
-  if (!fh) {
-    ERROR("Malloc of new file handle failed");
-    ret = -EIO;
-    goto error;
-  }
-  ret = fuseConnectAsThreadUid(&fh->conn);
-  if (ret) {
-    fprintf(stderr, "fuseConnectAsThreadUid: failed to open a libhdfs "
-            "connection!  error %d.\n", ret);
-    ret = -EIO;
-    goto error;
-  }
-  fs = hdfsConnGetFs(fh->conn);
-  flagRet = get_hdfs_open_flags(fs, path, fi->flags);
-  if (flagRet < 0) {
-    // flagRet already holds a negative errno; pass it through unchanged
-    // so FUSE sees a negative error code.
-    ret = (int)flagRet;
-    goto error;
-  }
-  flags = flagRet;
-  if ((fh->hdfsFH = hdfsOpenFile(fs, path, flags,  0, 0, 0)) == NULL) {
-    ERROR("Could not open file %s (errno=%d)", path, errno);
-    if (errno == 0 || errno == EINTERNAL) {
-      ret = -EIO;
-      goto error;
-    }
-    ret = -errno;
-    goto error;
-  }
-
-  ret = pthread_mutex_init(&fh->mutex, NULL);
-  if (ret) {
-    fprintf(stderr, "dfs_open: error initializing mutex: error %d\n", ret); 
-    ret = -EIO;
-    goto error;
-  }
-  mutexInit = 1;
-
-  if ((flags & O_ACCMODE) == O_WRONLY) {
-    fh->buf = NULL;
-  } else  {
-    assert(dfs->rdbuffer_size > 0);
-    fh->buf = (char*)malloc(dfs->rdbuffer_size * sizeof(char));
-    if (NULL == fh->buf) {
-      ERROR("Could not allocate memory for a read for file %s\n", path);
-      ret = -EIO;
-      goto error;
-    }
-    fh->buffersStartOffset = 0;
-    fh->bufferSize = 0;
-  }
-  fi->fh = (uint64_t)fh;
-  return 0;
-
-error:
-  if (fh) {
-    if (mutexInit) {
-      pthread_mutex_destroy(&fh->mutex);
-    }
-    free(fh->buf);
-    if (fh->hdfsFH) {
-      hdfsCloseFile(fs, fh->hdfsFH);
-    }
-    if (fh->conn) {
-      hdfsConnRelease(fh->conn);
-    }
-    free(fh);
-  }
-  return ret;
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/960b19ed/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_read.c
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_read.c b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_read.c
deleted file mode 100644
index feade45..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_read.c
+++ /dev/null
@@ -1,163 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#include "fuse_connect.h"
-#include "fuse_dfs.h"
-#include "fuse_file_handle.h"
-#include "fuse_impls.h"
-
-static size_t min(const size_t x, const size_t y) {
-  return x < y ? x : y;
-}
-
-/**
- * dfs_read
- *
- * Reads from DFS or the open file's read buffer.  Note that FUSE requires
- * that either the entire read is satisfied, EOF is reached, or direct_io
- * is enabled.
- */
-int dfs_read(const char *path, char *buf, size_t size, off_t offset,
-                   struct fuse_file_info *fi)
-{
-  TRACE1("read",path)
-  
-  // retrieve dfs specific data
-  dfs_context *dfs = (dfs_context*)fuse_get_context()->private_data;
-
-  // check params and the context var
-  assert(dfs);
-  assert(path);
-  assert(buf);
-  assert(offset >= 0);
-  assert(size >= 0);
-  assert(fi);
-
-  dfs_fh *fh = (dfs_fh*)fi->fh;
-  assert(fh != NULL);
-  assert(fh->hdfsFH != NULL);
-
-  hdfsFS fs = hdfsConnGetFs(fh->conn);
-
-  // Special-case zero-length reads; this lets the rest of the logic assume
-  // the caller wanted > 0 bytes.
-  if (size == 0)
-    return 0;
-
-  // If size is bigger than the read buffer, then just read right into the user supplied buffer
-  if ( size >= dfs->rdbuffer_size) {
-    int num_read;
-    size_t total_read = 0;
-    while (size - total_read > 0 && (num_read = hdfsPread(fs, fh->hdfsFH, offset + total_read, buf + total_read, size - total_read)) > 0) {
-      total_read += num_read;
-    }
-    // if there was an error before satisfying the current read, this logic declares it an error
-    // and does not try to return any of the bytes read. Don't think it matters, so the code
-    // is just being conservative.
-    if (total_read < size && num_read < 0) {
-      total_read = -EIO;
-    }
-    return total_read;
-  }
-
-  //
-  // Critical section - protect from multiple reads in different threads accessing the read buffer
-  // (no returns until end)
-  //
-
-  pthread_mutex_lock(&fh->mutex);
-
-  // used only to check the postcondition of this function - namely that we satisfy
-  // the entire read or EOF is hit.
-  int isEOF = 0;
-  int ret = 0;
-
-  // check if the buffer is empty or
-  // the read starts before the buffer starts or
-  // the read ends after the buffer ends
-
-  if (fh->bufferSize == 0  || 
-      offset < fh->buffersStartOffset || 
-      offset + size > fh->buffersStartOffset + fh->bufferSize) 
-    {
-      // Read into the buffer from DFS
-      int num_read = 0;
-      size_t total_read = 0;
-
-      while (dfs->rdbuffer_size  - total_read > 0 &&
-             (num_read = hdfsPread(fs, fh->hdfsFH, offset + total_read, fh->buf + total_read, dfs->rdbuffer_size - total_read)) > 0) {
-        total_read += num_read;
-      }
-
-      // if there was an error before satisfying the current read, this logic declares it an error
-      // and does not try to return any of the bytes read. Don't think it matters, so the code
-      // is just being conservative.
-      if (total_read < size && num_read < 0) {
-        // invalidate the buffer 
-        fh->bufferSize = 0; 
-        ERROR("pread failed for %s with return code %d", path, (int)num_read);
-        ret = -EIO;
-      } else {
-        // Either EOF, all read or read beyond size, but then there was an error
-        fh->bufferSize = total_read;
-        fh->buffersStartOffset = offset;
-
-        if (dfs->rdbuffer_size - total_read > 0) {
-          // assert(num_read == 0); this should be true since if num_read < 0 handled above.
-          isEOF = 1;
-        }
-      }
-    }
-
-  //
-  // NOTE on EOF, fh->bufferSize == 0 and ret = 0 ,so the logic for copying data into the caller's buffer is bypassed, and
-  //  the code returns 0 as required
-  //
-  if (ret == 0 && fh->bufferSize > 0) {
-
-    assert(offset >= fh->buffersStartOffset);
-    assert(fh->buf);
-
-    const size_t bufferReadIndex = offset - fh->buffersStartOffset;
-    assert(bufferReadIndex >= 0 && bufferReadIndex < fh->bufferSize);
-
-    const size_t amount = min(fh->buffersStartOffset + fh->bufferSize - offset, size);
-    assert(amount >= 0 && amount <= fh->bufferSize);
-
-    const char *offsetPtr = fh->buf + bufferReadIndex;
-    assert(offsetPtr >= fh->buf);
-    assert(offsetPtr + amount <= fh->buf + fh->bufferSize);
-    
-    memcpy(buf, offsetPtr, amount);
-
-    ret = amount;
-  }
-
-  //
-  // Critical section end 
-  //
-  pthread_mutex_unlock(&fh->mutex);
- 
-  // fuse requires the below and the code should guarantee this assertion
-  // 3 cases on return:
-  //   1. entire read satisfied
-  //   2. partial read and isEOF - including 0 size read
-  //   3. error 
-  assert(ret == size || isEOF || ret < 0);
-
- return ret;
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/960b19ed/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_readdir.c
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_readdir.c b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_readdir.c
deleted file mode 100644
index 326f573..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_readdir.c
+++ /dev/null
@@ -1,122 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#include "fuse_dfs.h"
-#include "fuse_impls.h"
-#include "fuse_stat_struct.h"
-#include "fuse_connect.h"
-
-int dfs_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
-                       off_t offset, struct fuse_file_info *fi)
-{
-  int ret;
-  struct hdfsConn *conn = NULL;
-  hdfsFS fs;
-  dfs_context *dfs = (dfs_context*)fuse_get_context()->private_data;
-
-  TRACE1("readdir", path)
-
-  assert(dfs);
-  assert(path);
-  assert(buf);
-
-  ret = fuseConnectAsThreadUid(&conn);
-  if (ret) {
-    fprintf(stderr, "fuseConnectAsThreadUid: failed to open a libhdfs "
-            "connection!  error %d.\n", ret);
-    ret = -EIO;
-    goto cleanup;
-  }
-  fs = hdfsConnGetFs(conn);
-
-  // Read dirents. Calling a variant that just returns the final path
-  // component (HDFS-975) would save us from parsing it out below.
-  int numEntries = 0;
-  hdfsFileInfo *info = hdfsListDirectory(fs, path, &numEntries);
-
-  // NULL means either the directory doesn't exist or maybe IO error.
-  if (NULL == info) {
-    ret = (errno > 0) ? -errno : -ENOENT;
-    goto cleanup;
-  }
-
-  int i;
-  for (i = 0; i < numEntries; i++) {
-    if (NULL == info[i].mName) {
-      ERROR("Path %s info[%d].mName is NULL", path, i);
-      continue;
-    }
-
-    struct stat st;
-    fill_stat_structure(&info[i], &st);
-
-    // Find the final path component
-    const char *str = strrchr(info[i].mName, '/');
-    if (NULL == str) {
-      ERROR("Invalid URI %s", info[i].mName);
-      continue;
-    }
-    str++;
-
-    // pack this entry into the fuse buffer
-    int res = 0;
-    if ((res = filler(buf,str,&st,0)) != 0) {
-      ERROR("Readdir filler failed: %d\n",res);
-    }
-  }
-
-  // insert '.' and '..'
-  const char *const dots [] = { ".",".."};
-  for (i = 0 ; i < 2 ; i++)
-    {
-      struct stat st;
-      memset(&st, 0, sizeof(struct stat));
-
-      // set to 0 to indicate not supported for directory because we cannot (efficiently) get this info for every subdirectory
-      st.st_nlink =  0;
-
-      // setup stat size and acl meta data
-      st.st_size    = 512;
-      st.st_blksize = 512;
-      st.st_blocks  =  1;
-      st.st_mode    = (S_IFDIR | 0777);
-      st.st_uid     = default_id;
-      st.st_gid     = default_id;
-      // todo fix below times
-      st.st_atime   = 0;
-      st.st_mtime   = 0;
-      st.st_ctime   = 0;
-
-      const char *const str = dots[i];
-
-      // flatten the info using fuse's function into a buffer
-      int res = 0;
-      if ((res = filler(buf,str,&st,0)) != 0) {
-        ERROR("Readdir filler failed: %d\n", res);
-      }
-    }
-  // free the info pointers
-  hdfsFreeFileInfo(info,numEntries);
-  ret = 0;
-
-cleanup:
-  if (conn) {
-    hdfsConnRelease(conn);
-  }
-  return ret;
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/960b19ed/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_release.c
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_release.c b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_release.c
deleted file mode 100644
index 0316de6..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_release.c
+++ /dev/null
@@ -1,66 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#include "fuse_dfs.h"
-#include "fuse_impls.h"
-#include "fuse_file_handle.h"
-#include "fuse_connect.h"
-
-#include <stdlib.h>
-
-/**
- * release a fuse_file_info structure.
- *
- * When this function is invoked, there are no more references to our
- * fuse_file_info structure that exist anywhere.  So there is no need for
- * locking to protect this structure here.
- *
- * Another thread could open() the same file, and get a separate, different file
- * descriptor with a different, separate fuse_file_info structure.  In HDFS,
- * this results in one writer winning and overwriting everything the other
- * writer has done.
- */
-
-int dfs_release (const char *path, struct fuse_file_info *fi) {
-  TRACE1("release", path)
-
-  // retrieve dfs specific data
-  dfs_context *dfs = (dfs_context*)fuse_get_context()->private_data;
-
-  // check params and the context var
-  assert(path);
-  assert(dfs);
-  assert('/' == *path);
-
-  int ret = 0;
-  dfs_fh *fh = (dfs_fh*)fi->fh;
-  assert(fh);
-  hdfsFile file_handle = (hdfsFile)fh->hdfsFH;
-  if (NULL != file_handle) {
-    if (hdfsCloseFile(hdfsConnGetFs(fh->conn), file_handle) != 0) {
-      ERROR("Could not close handle %ld for %s\n",(long)file_handle, path);
-      ret = -EIO;
-    }
-  }
-  free(fh->buf);
-  hdfsConnRelease(fh->conn);
-  pthread_mutex_destroy(&fh->mutex);
-  free(fh);
-  fi->fh = 0;
-  return ret;
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/960b19ed/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_rename.c
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_rename.c b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_rename.c
deleted file mode 100644
index ad7c7e5..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_rename.c
+++ /dev/null
@@ -1,66 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#include "fuse_dfs.h"
-#include "fuse_impls.h"
-#include "fuse_trash.h"
-#include "fuse_connect.h"
-
-int dfs_rename(const char *from, const char *to)
-{
-  struct hdfsConn *conn = NULL;
-  hdfsFS fs;
-  dfs_context *dfs = (dfs_context*)fuse_get_context()->private_data;
-  int ret;
-
-  TRACE1("rename", from) 
-
-  // check params and the context var
-  assert(from);
-  assert(to);
-  assert(dfs);
-
-  assert('/' == *from);
-  assert('/' == *to);
-
-  if (is_protected(from) || is_protected(to)) {
-    ERROR("Cannot rename %s to %s: protected path", from, to);
-    return -EACCES;
-  }
-
-  ret = fuseConnectAsThreadUid(&conn);
-  if (ret) {
-    fprintf(stderr, "fuseConnectAsThreadUid: failed to open a libhdfs "
-            "connection!  error %d.\n", ret);
-    ret = -EIO;
-    goto cleanup;
-  }
-  fs = hdfsConnGetFs(conn);
-  if (hdfsRename(fs, from, to)) {
-    ERROR("Rename %s to %s failed", from, to);
-    ret = (errno > 0) ? -errno : -EIO;
-    goto cleanup;
-  }
-  ret = 0;
-
-cleanup:
-  if (conn) {
-    hdfsConnRelease(conn);
-  }
-  return ret;
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/960b19ed/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_rmdir.c
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_rmdir.c b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_rmdir.c
deleted file mode 100644
index 493807f..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_rmdir.c
+++ /dev/null
@@ -1,76 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#include "fuse_dfs.h"
-#include "fuse_impls.h"
-#include "fuse_trash.h"
-#include "fuse_connect.h"
-
-extern const char *const TrashPrefixDir;
-
-int dfs_rmdir(const char *path)
-{
-  struct hdfsConn *conn = NULL;
-  hdfsFS fs;
-  int ret;
-  dfs_context *dfs = (dfs_context*)fuse_get_context()->private_data;
-  int numEntries = 0;
-  hdfsFileInfo *info = NULL;
-
-  TRACE1("rmdir", path)
-
-  assert(path);
-  assert(dfs);
-  assert('/' == *path);
-
-  if (is_protected(path)) {
-    ERROR("Trying to delete protected directory %s", path);
-    ret = -EACCES;
-    goto cleanup;
-  }
-
-  ret = fuseConnectAsThreadUid(&conn);
-  if (ret) {
-    fprintf(stderr, "fuseConnectAsThreadUid: failed to open a libhdfs "
-            "connection!  error %d.\n", ret);
-    ret = -EIO;
-    goto cleanup;
-  }
-  fs = hdfsConnGetFs(conn);
-  info = hdfsListDirectory(fs, path, &numEntries);
-  if (numEntries) {
-    ret = -ENOTEMPTY;
-    goto cleanup;
-  }
-
-  if (hdfsDeleteWithTrash(fs, path, dfs->usetrash)) {
-    ERROR("Error trying to delete directory %s", path);
-    ret = -EIO;
-    goto cleanup;
-  }
-  ret = 0;
-
-cleanup:
-  if (info) {
-    hdfsFreeFileInfo(info, numEntries);
-  }
-  if (conn) {
-    hdfsConnRelease(conn);
-  }
-  return ret;
-}

