From: wheat9@apache.org
To: common-commits@hadoop.apache.org
Reply-To: common-dev@hadoop.apache.org
Date: Wed, 07 Oct 2015 07:16:21 -0000
In-Reply-To: <573e33ce62b6453a90950917c77551d5@git.apache.org>
References: <573e33ce62b6453a90950917c77551d5@git.apache.org>
X-Mailer: ASF-Git Admin Mailer
Subject: [19/19] hadoop git commit: HDFS-9170. Move libhdfs / fuse-dfs / libwebhdfs to hdfs-client. Contributed by Haohui Mai.

HDFS-9170. Move libhdfs / fuse-dfs / libwebhdfs to hdfs-client. Contributed by Haohui Mai.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/960b19ed
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/960b19ed
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/960b19ed

Branch: refs/heads/branch-2
Commit: 960b19edf633cd09779da226346691970b7d97e5
Parents: a4b8dc4
Author: Haohui Mai
Authored: Tue Sep 29 17:45:53 2015 -0700
Committer: Haohui Mai
Committed: Wed Oct 7 00:15:52 2015 -0700

----------------------------------------------------------------------
 .../hadoop-hdfs-native-client/pom.xml | 230 ++
 .../src/CMakeLists.txt | 92 +
 .../src/config.h.cmake | 27 +
 .../src/contrib/libwebhdfs/CMakeLists.txt | 87 +
 .../libwebhdfs/resources/FindJansson.cmake | 43 +
 .../contrib/libwebhdfs/src/hdfs_http_client.c | 490 +++
 .../contrib/libwebhdfs/src/hdfs_http_client.h | 294 ++
 .../contrib/libwebhdfs/src/hdfs_http_query.c | 402 +++
 .../contrib/libwebhdfs/src/hdfs_http_query.h | 240 ++
 .../contrib/libwebhdfs/src/hdfs_json_parser.c | 654 ++++
 .../contrib/libwebhdfs/src/hdfs_json_parser.h | 178 +
 .../src/contrib/libwebhdfs/src/hdfs_web.c | 1538 ++++++++
 .../libwebhdfs/src/test_libwebhdfs_ops.c | 552 +++
 .../libwebhdfs/src/test_libwebhdfs_read.c | 78 +
 .../libwebhdfs/src/test_libwebhdfs_threaded.c | 247 ++
 .../libwebhdfs/src/test_libwebhdfs_write.c | 111 +
 .../src/main/native/fuse-dfs/CMakeLists.txt | 88 +
 .../src/main/native/fuse-dfs/doc/README | 131 +
 .../src/main/native/fuse-dfs/fuse_connect.c | 644 ++++
 .../src/main/native/fuse-dfs/fuse_connect.h | 90 +
 .../main/native/fuse-dfs/fuse_context_handle.h | 40 +
 .../src/main/native/fuse-dfs/fuse_dfs.c | 136 +
 .../src/main/native/fuse-dfs/fuse_dfs.h | 81 +
 .../main/native/fuse-dfs/fuse_dfs_wrapper.sh | 46 +
 .../src/main/native/fuse-dfs/fuse_file_handle.h | 46 +
 .../src/main/native/fuse-dfs/fuse_impls.h | 63 +
 .../main/native/fuse-dfs/fuse_impls_access.c | 29 +
 .../src/main/native/fuse-dfs/fuse_impls_chmod.c | 57 +
 .../src/main/native/fuse-dfs/fuse_impls_chown.c | 87 +
 .../main/native/fuse-dfs/fuse_impls_create.c | 27 +
 .../src/main/native/fuse-dfs/fuse_impls_flush.c | 54 +
 .../main/native/fuse-dfs/fuse_impls_getattr.c | 75 +
 .../src/main/native/fuse-dfs/fuse_impls_mkdir.c | 70 +
 .../src/main/native/fuse-dfs/fuse_impls_mknod.c | 27 +
 .../src/main/native/fuse-dfs/fuse_impls_open.c | 172 +
 .../src/main/native/fuse-dfs/fuse_impls_read.c | 163 +
 .../main/native/fuse-dfs/fuse_impls_readdir.c | 122 +
 .../main/native/fuse-dfs/fuse_impls_release.c | 66 +
 .../main/native/fuse-dfs/fuse_impls_rename.c | 66 +
 .../src/main/native/fuse-dfs/fuse_impls_rmdir.c | 76 +
 .../main/native/fuse-dfs/fuse_impls_statfs.c | 70 +
 .../main/native/fuse-dfs/fuse_impls_symlink.c | 30 +
 .../main/native/fuse-dfs/fuse_impls_truncate.c | 79 +
 .../main/native/fuse-dfs/fuse_impls_unlink.c | 65 +
 .../main/native/fuse-dfs/fuse_impls_utimens.c | 70 +
 .../src/main/native/fuse-dfs/fuse_impls_write.c | 83 +
 .../src/main/native/fuse-dfs/fuse_init.c | 192 +
 .../src/main/native/fuse-dfs/fuse_init.h | 33 +
 .../src/main/native/fuse-dfs/fuse_options.c | 188 +
 .../src/main/native/fuse-dfs/fuse_options.h | 44 +
 .../src/main/native/fuse-dfs/fuse_stat_struct.c | 112 +
 .../src/main/native/fuse-dfs/fuse_stat_struct.h | 36 +
 .../src/main/native/fuse-dfs/fuse_trash.c | 244 ++
 .../src/main/native/fuse-dfs/fuse_trash.h | 26 +
 .../src/main/native/fuse-dfs/fuse_users.c | 213 ++
 .../src/main/native/fuse-dfs/fuse_users.h | 70 +
 .../main/native/fuse-dfs/test/TestFuseDFS.java | 410 +++
 .../main/native/fuse-dfs/test/fuse_workload.c | 403 +++
 .../main/native/fuse-dfs/test/fuse_workload.h | 36 +
 .../main/native/fuse-dfs/test/test_fuse_dfs.c | 378 ++
 .../src/main/native/fuse-dfs/util/posix_util.c | 155 +
 .../src/main/native/fuse-dfs/util/posix_util.h | 58 +
 .../src/main/native/fuse-dfs/util/tree.h | 765 ++++
 .../src/main/native/libhdfs/CMakeLists.txt | 141 +
 .../src/main/native/libhdfs/common/htable.c | 287 ++
 .../src/main/native/libhdfs/common/htable.h | 161 +
 .../src/main/native/libhdfs/exception.c | 239 ++
 .../src/main/native/libhdfs/exception.h | 157 +
 .../src/main/native/libhdfs/expect.c | 68 +
 .../src/main/native/libhdfs/expect.h | 179 +
 .../src/main/native/libhdfs/hdfs.c | 3342 ++++++++++++++++++
 .../src/main/native/libhdfs/hdfs.h | 939 +++++
 .../src/main/native/libhdfs/hdfs_test.h | 64 +
 .../src/main/native/libhdfs/jni_helper.c | 595 ++++
 .../src/main/native/libhdfs/jni_helper.h | 161 +
 .../src/main/native/libhdfs/native_mini_dfs.c | 375 ++
 .../src/main/native/libhdfs/native_mini_dfs.h | 129 +
 .../src/main/native/libhdfs/os/mutexes.h | 55 +
 .../src/main/native/libhdfs/os/posix/mutexes.c | 43 +
 .../src/main/native/libhdfs/os/posix/platform.h | 34 +
 .../src/main/native/libhdfs/os/posix/thread.c | 52 +
 .../libhdfs/os/posix/thread_local_storage.c | 80 +
 .../src/main/native/libhdfs/os/thread.h | 54 +
 .../native/libhdfs/os/thread_local_storage.h | 75 +
 .../main/native/libhdfs/os/windows/inttypes.h | 28 +
 .../main/native/libhdfs/os/windows/mutexes.c | 52 +
 .../main/native/libhdfs/os/windows/platform.h | 86 +
 .../src/main/native/libhdfs/os/windows/thread.c | 66 +
 .../libhdfs/os/windows/thread_local_storage.c | 172 +
 .../src/main/native/libhdfs/os/windows/unistd.h | 29 +
 .../src/main/native/libhdfs/test/test_htable.c | 100 +
 .../main/native/libhdfs/test/test_libhdfs_ops.c | 540 +++
 .../native/libhdfs/test/test_libhdfs_read.c | 72 +
 .../native/libhdfs/test/test_libhdfs_write.c | 99 +
 .../native/libhdfs/test/test_libhdfs_zerocopy.c | 280 ++
 .../src/main/native/libhdfs/test/vecsum.c | 825 +++++
 .../main/native/libhdfs/test_libhdfs_threaded.c | 360 ++
 .../main/native/libhdfs/test_native_mini_dfs.c | 41 +
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 2 +
 hadoop-hdfs-project/hadoop-hdfs/pom.xml | 167 -
 .../hadoop-hdfs/src/CMakeLists.txt | 206 --
 .../hadoop-hdfs/src/config.h.cmake | 27 -
 .../src/contrib/libwebhdfs/CMakeLists.txt | 78 -
 .../libwebhdfs/resources/FindJansson.cmake | 43 -
 .../contrib/libwebhdfs/src/hdfs_http_client.c | 490 ---
 .../contrib/libwebhdfs/src/hdfs_http_client.h | 294 --
 .../contrib/libwebhdfs/src/hdfs_http_query.c | 402 ---
 .../contrib/libwebhdfs/src/hdfs_http_query.h | 240 --
 .../contrib/libwebhdfs/src/hdfs_json_parser.c | 654 ----
 .../contrib/libwebhdfs/src/hdfs_json_parser.h | 178 -
 .../src/contrib/libwebhdfs/src/hdfs_web.c | 1538 --------
 .../libwebhdfs/src/test_libwebhdfs_ops.c | 552 ---
 .../libwebhdfs/src/test_libwebhdfs_read.c | 78 -
 .../libwebhdfs/src/test_libwebhdfs_threaded.c | 247 --
 .../libwebhdfs/src/test_libwebhdfs_write.c | 111 -
 .../src/main/native/fuse-dfs/CMakeLists.txt | 102 -
 .../src/main/native/fuse-dfs/doc/README | 131 -
 .../src/main/native/fuse-dfs/fuse_connect.c | 644 ----
 .../src/main/native/fuse-dfs/fuse_connect.h | 90 -
 .../main/native/fuse-dfs/fuse_context_handle.h | 40 -
 .../src/main/native/fuse-dfs/fuse_dfs.c | 136 -
 .../src/main/native/fuse-dfs/fuse_dfs.h | 81 -
 .../main/native/fuse-dfs/fuse_dfs_wrapper.sh | 46 -
 .../src/main/native/fuse-dfs/fuse_file_handle.h | 46 -
 .../src/main/native/fuse-dfs/fuse_impls.h | 63 -
 .../main/native/fuse-dfs/fuse_impls_access.c | 29 -
 .../src/main/native/fuse-dfs/fuse_impls_chmod.c | 57 -
 .../src/main/native/fuse-dfs/fuse_impls_chown.c | 87 -
 .../main/native/fuse-dfs/fuse_impls_create.c | 27 -
 .../src/main/native/fuse-dfs/fuse_impls_flush.c | 54 -
 .../main/native/fuse-dfs/fuse_impls_getattr.c | 75 -
 .../src/main/native/fuse-dfs/fuse_impls_mkdir.c | 70 -
 .../src/main/native/fuse-dfs/fuse_impls_mknod.c | 27 -
 .../src/main/native/fuse-dfs/fuse_impls_open.c | 172 -
 .../src/main/native/fuse-dfs/fuse_impls_read.c | 163 -
 .../main/native/fuse-dfs/fuse_impls_readdir.c | 122 -
 .../main/native/fuse-dfs/fuse_impls_release.c | 66 -
 .../main/native/fuse-dfs/fuse_impls_rename.c | 66 -
 .../src/main/native/fuse-dfs/fuse_impls_rmdir.c | 76 -
 .../main/native/fuse-dfs/fuse_impls_statfs.c | 70 -
 .../main/native/fuse-dfs/fuse_impls_symlink.c | 30 -
 .../main/native/fuse-dfs/fuse_impls_truncate.c | 79 -
 .../main/native/fuse-dfs/fuse_impls_unlink.c | 65 -
 .../main/native/fuse-dfs/fuse_impls_utimens.c | 70 -
 .../src/main/native/fuse-dfs/fuse_impls_write.c | 83 -
 .../src/main/native/fuse-dfs/fuse_init.c | 192 -
 .../src/main/native/fuse-dfs/fuse_init.h | 33 -
 .../src/main/native/fuse-dfs/fuse_options.c | 188 -
 .../src/main/native/fuse-dfs/fuse_options.h | 44 -
 .../src/main/native/fuse-dfs/fuse_stat_struct.c | 112 -
 .../src/main/native/fuse-dfs/fuse_stat_struct.h | 36 -
 .../src/main/native/fuse-dfs/fuse_trash.c | 244 --
 .../src/main/native/fuse-dfs/fuse_trash.h | 26 -
 .../src/main/native/fuse-dfs/fuse_users.c | 213 --
 .../src/main/native/fuse-dfs/fuse_users.h | 70 -
 .../main/native/fuse-dfs/test/TestFuseDFS.java | 410 ---
 .../main/native/fuse-dfs/test/fuse_workload.c | 403 ---
 .../main/native/fuse-dfs/test/fuse_workload.h | 36 -
 .../main/native/fuse-dfs/test/test_fuse_dfs.c | 378 --
 .../src/main/native/libhdfs/common/htable.c | 287 --
 .../src/main/native/libhdfs/common/htable.h | 161 -
 .../src/main/native/libhdfs/exception.c | 239 --
 .../src/main/native/libhdfs/exception.h | 157 -
 .../src/main/native/libhdfs/expect.c | 68 -
 .../src/main/native/libhdfs/expect.h | 179 -
 .../hadoop-hdfs/src/main/native/libhdfs/hdfs.c | 3342 ------------------
 .../hadoop-hdfs/src/main/native/libhdfs/hdfs.h | 939 -----
 .../src/main/native/libhdfs/hdfs_test.h | 64 -
 .../src/main/native/libhdfs/jni_helper.c | 595 ----
 .../src/main/native/libhdfs/jni_helper.h | 161 -
 .../src/main/native/libhdfs/native_mini_dfs.c | 375 --
 .../src/main/native/libhdfs/native_mini_dfs.h | 129 -
 .../src/main/native/libhdfs/os/mutexes.h | 55 -
 .../src/main/native/libhdfs/os/posix/mutexes.c | 43 -
 .../src/main/native/libhdfs/os/posix/platform.h | 34 -
 .../src/main/native/libhdfs/os/posix/thread.c | 52 -
 .../libhdfs/os/posix/thread_local_storage.c | 80 -
 .../src/main/native/libhdfs/os/thread.h | 54 -
 .../native/libhdfs/os/thread_local_storage.h | 75 -
 .../main/native/libhdfs/os/windows/inttypes.h | 28 -
 .../main/native/libhdfs/os/windows/mutexes.c | 52 -
 .../main/native/libhdfs/os/windows/platform.h | 86 -
 .../src/main/native/libhdfs/os/windows/thread.c | 66 -
 .../libhdfs/os/windows/thread_local_storage.c | 172 -
 .../src/main/native/libhdfs/os/windows/unistd.h | 29 -
 .../src/main/native/libhdfs/test/test_htable.c | 100 -
 .../main/native/libhdfs/test/test_libhdfs_ops.c | 540 ---
 .../native/libhdfs/test/test_libhdfs_read.c | 72 -
 .../native/libhdfs/test/test_libhdfs_write.c | 99 -
 .../native/libhdfs/test/test_libhdfs_zerocopy.c | 280 --
 .../src/main/native/libhdfs/test/vecsum.c | 825 -----
 .../main/native/libhdfs/test_libhdfs_threaded.c | 360 --
 .../main/native/libhdfs/test_native_mini_dfs.c | 41 -
 .../src/main/native/util/posix_util.c | 155 -
 .../src/main/native/util/posix_util.h | 58 -
 .../hadoop-hdfs/src/main/native/util/tree.h | 765 ----
 hadoop-hdfs-project/pom.xml | 1 +
 197 files changed, 21462 insertions(+), 21374 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/960b19ed/hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml b/hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml
new file mode 100644
index 0000000..5632c48
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml
@@ -0,0 +1,230 @@

[The XML markup of the added pom.xml did not survive archiving; only the element text remains, reproduced unchanged below.]

+ + + 4.0.0 + + org.apache.hadoop + hadoop-project-dist + 3.0.0-SNAPSHOT + ../../hadoop-project-dist + + org.apache.hadoop + hadoop-hdfs-native-client + 3.0.0-SNAPSHOT + Apache Hadoop HDFS Native Client + Apache Hadoop HDFS Native Client + jar + + + false + false + + + + + org.apache.hadoop + hadoop-common + test + + + org.apache.hadoop + hadoop-common + test-jar + test + + + org.apache.hadoop + hadoop-hdfs + test + + + org.apache.hadoop + hadoop-hdfs + test-jar + test + + + org.mockito + mockito-all + test + + + + + + native-win + + false + + windows + + + + true + + + + + org.apache.maven.plugins + maven-enforcer-plugin + + + enforce-os + + enforce + + + + + windows + native-win build only supported on Windows + + + true + + + + + + org.apache.maven.plugins + maven-antrun-plugin + + + make + compile + + run + + + + + + + + + + + + + + + + + + + + + + native_tests + test + run + + ${skipTests} + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + native + + false + + + true + + + + + org.apache.maven.plugins + maven-antrun-plugin + + + make + compile + run + + + + + + + + + + + + + native_tests + test + run + + ${skipTests} + + + + + + + + + + + + + + + + + + + + + + + + + + + +

http://git-wip-us.apache.org/repos/asf/hadoop/blob/960b19ed/hadoop-hdfs-project/hadoop-hdfs-native-client/src/CMakeLists.txt
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/CMakeLists.txt b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/CMakeLists.txt
new file mode 100644
index 0000000..9dacec7
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/CMakeLists.txt
@@ -0,0 +1,92 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+cmake_minimum_required(VERSION 2.8 FATAL_ERROR)
+
+list(APPEND CMAKE_MODULE_PATH ${CMAKE_SOURCE_DIR}/../../../hadoop-common-project/hadoop-common)
+include(HadoopCommon)
+
+# Check to see if our compiler and linker support the __thread attribute.
+# On Linux and some other operating systems, this is a more efficient
+# alternative to POSIX thread local storage.
+include(CheckCSourceCompiles)
+check_c_source_compiles("int main(void) { static __thread int i = 0; return 0; }" HAVE_BETTER_TLS)
+
+# Check to see if we have Intel SSE intrinsics.
+check_c_source_compiles("#include <emmintrin.h>\nint main(void) { __m128d sum0 = _mm_set_pd(0.0,0.0); return 0; }" HAVE_INTEL_SSE_INTRINSICS)
+
+set(_FUSE_DFS_VERSION 0.1.0)
+configure_file(${CMAKE_SOURCE_DIR}/config.h.cmake ${CMAKE_BINARY_DIR}/config.h)
+
+# Check if we need to link dl library to get dlopen.
+# dlopen on Linux is in separate library but on FreeBSD its in libc
+include(CheckLibraryExists)
+check_library_exists(dl dlopen "" NEED_LINK_DL)
+
+if(WIN32)
+    # Set the optimizer level.
+    set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} /O2")
+    # Set warning level 4.
+    set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} /W4")
+    # Skip "unreferenced formal parameter".
+    set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} /wd4100")
+    # Skip "conditional expression is constant".
+    set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} /wd4127")
+    # Skip deprecated POSIX function warnings.
+    set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -D_CRT_NONSTDC_NO_DEPRECATE")
+    # Skip CRT non-secure function warnings. If we can convert usage of
+    # strerror, getenv and ctime to their secure CRT equivalents, then we can
+    # re-enable the CRT non-secure function warnings.
+    set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -D_CRT_SECURE_NO_WARNINGS")
+    # Omit unneeded headers.
+    set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DWIN32_LEAN_AND_MEAN")
+    set(OS_DIR ${CMAKE_SOURCE_DIR}/main/native/libhdfs/os/windows)
+    set(OUT_DIR target/bin)
+else()
+    set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fvisibility=hidden")
+    set(OS_DIR ${CMAKE_SOURCE_DIR}/main/native/libhdfs/os/posix)
+    set(OUT_DIR target/usr/local/lib)
+endif()
+
+# Configure JNI.
+include(HadoopJNI)
+
+add_subdirectory(main/native/libhdfs)
+
+if(REQUIRE_LIBWEBHDFS)
+    add_subdirectory(contrib/libwebhdfs)
+endif()
+
+# Find Linux FUSE
+if(${CMAKE_SYSTEM_NAME} MATCHES "Linux")
+    find_package(PkgConfig REQUIRED)
+    pkg_check_modules(FUSE fuse)
+    if(FUSE_FOUND)
+        add_subdirectory(main/native/fuse-dfs)
+    else()
+        message(STATUS "Failed to find Linux FUSE libraries or include files. Will not build FUSE client.")
+        if(REQUIRE_FUSE)
+            message(FATAL_ERROR "Required component fuse_dfs could not be built.")
+        endif()
+    endif(FUSE_FOUND)
+else()
+    message(STATUS "Non-Linux system detected. Will not build FUSE client.")
+    if(REQUIRE_FUSE)
+        message(FATAL_ERROR "Required component fuse_dfs could not be built.")
+    endif()
+endif()

http://git-wip-us.apache.org/repos/asf/hadoop/blob/960b19ed/hadoop-hdfs-project/hadoop-hdfs-native-client/src/config.h.cmake
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/config.h.cmake b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/config.h.cmake
new file mode 100644
index 0000000..0d11fc4
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/config.h.cmake
@@ -0,0 +1,27 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements. See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership. The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+#ifndef CONFIG_H
+#define CONFIG_H
+
+#cmakedefine _FUSE_DFS_VERSION "@_FUSE_DFS_VERSION@"
+
+#cmakedefine HAVE_BETTER_TLS
+
+#cmakedefine HAVE_INTEL_SSE_INTRINSICS
+
+#endif

http://git-wip-us.apache.org/repos/asf/hadoop/blob/960b19ed/hadoop-hdfs-project/hadoop-hdfs-native-client/src/contrib/libwebhdfs/CMakeLists.txt
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/contrib/libwebhdfs/CMakeLists.txt b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/contrib/libwebhdfs/CMakeLists.txt
new file mode 100644
index 0000000..009dfd6
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/contrib/libwebhdfs/CMakeLists.txt
@@ -0,0 +1,87 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+find_package(CURL REQUIRED)
+
+set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH}
+    "${CMAKE_SOURCE_DIR}/contrib/libwebhdfs/resources/")
+
+find_package(Jansson REQUIRED)
+include_directories(
+    ${JNI_INCLUDE_DIRS}
+    ${CMAKE_BINARY_DIR}
+    ${CMAKE_SOURCE_DIR}/main/native
+    ${CMAKE_SOURCE_DIR}/main/native/libhdfs
+    ${OS_DIR}
+    ${JANSSON_INCLUDE_DIR}
+)
+
+add_definitions(-DLIBHDFS_DLL_EXPORT)
+
+hadoop_add_dual_library(webhdfs
+    src/hdfs_web.c
+    src/hdfs_http_client.c
+    src/hdfs_http_query.c
+    src/hdfs_json_parser.c
+    ../../main/native/libhdfs/exception.c
+    ../../main/native/libhdfs/jni_helper.c
+    ../../main/native/libhdfs/common/htable.c
+    ${OS_DIR}/mutexes.c
+    ${OS_DIR}/thread_local_storage.c
+)
+hadoop_target_link_dual_libraries(webhdfs
+    ${JAVA_JVM_LIBRARY}
+    ${CURL_LIBRARY}
+    ${JANSSON_LIBRARY}
+    pthread
+)
+hadoop_dual_output_directory(webhdfs target)
+set(LIBWEBHDFS_VERSION "0.0.0")
+set_target_properties(webhdfs PROPERTIES
+    SOVERSION ${LIBWEBHDFS_VERSION})
+
+add_executable(test_libwebhdfs_ops
+    src/test_libwebhdfs_ops.c
+)
+target_link_libraries(test_libwebhdfs_ops
+    webhdfs
+    native_mini_dfs
+)
+
+add_executable(test_libwebhdfs_read
+    src/test_libwebhdfs_read.c
+)
+target_link_libraries(test_libwebhdfs_read
+    webhdfs
+)
+
+add_executable(test_libwebhdfs_write
+    src/test_libwebhdfs_write.c
+)
+target_link_libraries(test_libwebhdfs_write
+    webhdfs
+)
+
+add_executable(test_libwebhdfs_threaded
+    src/test_libwebhdfs_threaded.c
+)
+target_link_libraries(test_libwebhdfs_threaded
+    webhdfs
+    native_mini_dfs
+    pthread
+)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/960b19ed/hadoop-hdfs-project/hadoop-hdfs-native-client/src/contrib/libwebhdfs/resources/FindJansson.cmake
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/contrib/libwebhdfs/resources/FindJansson.cmake b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/contrib/libwebhdfs/resources/FindJansson.cmake
new file mode 100644
index 0000000..b8c67ea
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/contrib/libwebhdfs/resources/FindJansson.cmake
@@ -0,0 +1,43 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+
+# - Try to find Jansson
+# Once done this will define
+#  JANSSON_FOUND - System has Jansson
+#  JANSSON_INCLUDE_DIRS - The Jansson include directories
+#  JANSSON_LIBRARIES - The libraries needed to use Jansson
+#  JANSSON_DEFINITIONS - Compiler switches required for using Jansson
+
+find_path(JANSSON_INCLUDE_DIR jansson.h
+          /usr/include
+          /usr/include/jansson
+          /usr/local/include )
+
+find_library(JANSSON_LIBRARY NAMES jansson
+             PATHS /usr/lib /usr/local/lib )
+
+set(JANSSON_LIBRARIES ${JANSSON_LIBRARY} )
+set(JANSSON_INCLUDE_DIRS ${JANSSON_INCLUDE_DIR} )
+
+include(FindPackageHandleStandardArgs)
+# handle the QUIETLY and REQUIRED arguments and set JANSSON_FOUND to TRUE
+# if all listed variables are TRUE
+find_package_handle_standard_args(Jansson DEFAULT_MSG
+                                  JANSSON_LIBRARY JANSSON_INCLUDE_DIR)
+
+mark_as_advanced(JANSSON_INCLUDE_DIR JANSSON_LIBRARY )

http://git-wip-us.apache.org/repos/asf/hadoop/blob/960b19ed/hadoop-hdfs-project/hadoop-hdfs-native-client/src/contrib/libwebhdfs/src/hdfs_http_client.c
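The finder module above follows the conventional CMake find-module contract: it sets `JANSSON_FOUND`, `JANSSON_INCLUDE_DIRS`, and `JANSSON_LIBRARIES` for consumers. A minimal sketch of how a consuming CMakeLists.txt would use it, assuming the module's directory is already on `CMAKE_MODULE_PATH` (as the libwebhdfs CMakeLists.txt above arranges); the `myjson` target name is purely illustrative:

```cmake
# Hypothetical consumer of FindJansson.cmake.
find_package(Jansson REQUIRED)

include_directories(${JANSSON_INCLUDE_DIRS})
add_executable(myjson main.c)
target_link_libraries(myjson ${JANSSON_LIBRARIES})
```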
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/contrib/libwebhdfs/src/hdfs_http_client.c b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/contrib/libwebhdfs/src/hdfs_http_client.c
new file mode 100644
index 0000000..dc5ca41
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/contrib/libwebhdfs/src/hdfs_http_client.c
@@ -0,0 +1,490 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include
+#include
+#include
+
+#include "hdfs_http_client.h"
+#include "exception.h"
+
+static pthread_mutex_t curlInitMutex = PTHREAD_MUTEX_INITIALIZER;
+static volatile int curlGlobalInited = 0;
+
+const char *hdfs_strerror(int errnoval)
+{
+#if defined(__sun)
+// MT-Safe under Solaris which doesn't support sys_errlist/sys_nerr
+    return strerror(errnoval);
+#else
+    if ((errnoval < 0) || (errnoval >= sys_nerr)) {
+        return "unknown error.";
+    }
+    return sys_errlist[errnoval];
+#endif
+}
+
+int initResponseBuffer(struct ResponseBuffer **buffer)
+{
+    struct ResponseBuffer *info = NULL;
+    int ret = 0;
+    info = calloc(1, sizeof(struct ResponseBuffer));
+    if (!info) {
+        ret = ENOMEM;
+    }
+    *buffer = info;
+    return ret;
+}
+
+void freeResponseBuffer(struct ResponseBuffer *buffer)
+{
+    if (buffer) {
+        if (buffer->content) {
+            free(buffer->content);
+        }
+        free(buffer);
+        buffer = NULL;
+    }
+}
+
+void freeResponse(struct Response *resp)
+{
+    if (resp) {
+        freeResponseBuffer(resp->body);
+        freeResponseBuffer(resp->header);
+        free(resp);
+        resp = NULL;
+    }
+}
+
+/**
+ * Callback used by libcurl for allocating local buffer and
+ * reading data to local buffer
+ */
+static size_t writefunc(void *ptr, size_t size,
+                        size_t nmemb, struct ResponseBuffer *rbuffer)
+{
+    void *temp = NULL;
+    if (size * nmemb < 1) {
+        return 0;
+    }
+    if (!rbuffer) {
+        fprintf(stderr,
+                "ERROR: ResponseBuffer is NULL for the callback writefunc.\n");
+        return 0;
+    }
+
+    if (rbuffer->remaining < size * nmemb) {
+        temp = realloc(rbuffer->content, rbuffer->offset + size * nmemb + 1);
+        if (temp == NULL) {
+            fprintf(stderr, "ERROR: fail to realloc in callback writefunc.\n");
+            return 0;
+        }
+        rbuffer->content = temp;
+        rbuffer->remaining = size * nmemb;
+    }
+    memcpy(rbuffer->content + rbuffer->offset, ptr, size * nmemb);
+    rbuffer->offset += size * nmemb;
+    (rbuffer->content)[rbuffer->offset] = '\0';
+    rbuffer->remaining -= size * nmemb;
+    return size * nmemb;
+}
+
+/**
+ * Callback used by libcurl for reading data into buffer provided by user,
+ * thus no need to reallocate buffer.
+ */
+static size_t writeFuncWithUserBuffer(void *ptr, size_t size,
+                                      size_t nmemb, struct ResponseBuffer *rbuffer)
+{
+    size_t toCopy = 0;
+    if (size * nmemb < 1) {
+        return 0;
+    }
+    if (!rbuffer || !rbuffer->content) {
+        fprintf(stderr,
+                "ERROR: buffer to read is NULL for the "
+                "callback writeFuncWithUserBuffer.\n");
+        return 0;
+    }
+
+    toCopy = rbuffer->remaining < (size * nmemb) ?
+             rbuffer->remaining : (size * nmemb);
+    memcpy(rbuffer->content + rbuffer->offset, ptr, toCopy);
+    rbuffer->offset += toCopy;
+    rbuffer->remaining -= toCopy;
+    return toCopy;
+}
+
+/**
+ * Callback used by libcurl for writing data to remote peer
+ */
+static size_t readfunc(void *ptr, size_t size, size_t nmemb, void *stream)
+{
+    struct webhdfsBuffer *wbuffer = NULL;
+    if (size * nmemb < 1) {
+        return 0;
+    }
+
+    wbuffer = stream;
+    pthread_mutex_lock(&wbuffer->writeMutex);
+    while (wbuffer->remaining == 0) {
+        /*
+         * The current remainning bytes to write is 0,
+         * check closeFlag to see whether need to finish the transfer.
+         * if yes, return 0; else, wait
+         */
+        if (wbuffer->closeFlag) { // We can close the transfer now
+            //For debug
+            fprintf(stderr, "CloseFlag is set, ready to close the transfer\n");
+            pthread_mutex_unlock(&wbuffer->writeMutex);
+            return 0;
+        } else {
+            // remaining == 0 but closeFlag is not set
+            // indicates that user's buffer has been transferred
+            pthread_cond_signal(&wbuffer->transfer_finish);
+            pthread_cond_wait(&wbuffer->newwrite_or_close,
+                              &wbuffer->writeMutex);
+        }
+    }
+
+    if (wbuffer->remaining > 0 && !wbuffer->closeFlag) {
+        size_t copySize = wbuffer->remaining < size * nmemb ?
+ wbuffer->remaining : size * nmemb; + memcpy(ptr, wbuffer->wbuffer + wbuffer->offset, copySize); + wbuffer->offset += copySize; + wbuffer->remaining -= copySize; + pthread_mutex_unlock(&wbuffer->writeMutex); + return copySize; + } else { + fprintf(stderr, "ERROR: webhdfsBuffer's remaining is %ld, " + "it should be a positive value!\n", wbuffer->remaining); + pthread_mutex_unlock(&wbuffer->writeMutex); + return 0; + } +} + +/** + * Initialize the global libcurl environment + */ +static void initCurlGlobal() +{ + if (!curlGlobalInited) { + pthread_mutex_lock(&curlInitMutex); + if (!curlGlobalInited) { + curl_global_init(CURL_GLOBAL_ALL); + curlGlobalInited = 1; + } + pthread_mutex_unlock(&curlInitMutex); + } +} + +/** + * Launch simple commands (commands without file I/O) and return response + * + * @param url Target URL + * @param method HTTP method (GET/PUT/POST) + * @param followloc Whether or not need to set CURLOPT_FOLLOWLOCATION + * @param response Response from remote service + * @return 0 for success and non-zero value to indicate error + */ +static int launchCmd(const char *url, enum HttpHeader method, + enum Redirect followloc, struct Response **response) +{ + CURL *curl = NULL; + CURLcode curlCode; + int ret = 0; + struct Response *resp = NULL; + + resp = calloc(1, sizeof(struct Response)); + if (!resp) { + return ENOMEM; + } + ret = initResponseBuffer(&(resp->body)); + if (ret) { + goto done; + } + ret = initResponseBuffer(&(resp->header)); + if (ret) { + goto done; + } + initCurlGlobal(); + curl = curl_easy_init(); + if (!curl) { + ret = ENOMEM; // curl_easy_init does not return error code, + // and most of its errors are caused by malloc() + fprintf(stderr, "ERROR in curl_easy_init.\n"); + goto done; + } + /* Set callback function for reading data from remote service */ + curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, writefunc); + curl_easy_setopt(curl, CURLOPT_WRITEDATA, resp->body); + curl_easy_setopt(curl, CURLOPT_HEADERFUNCTION, writefunc); + 
curl_easy_setopt(curl, CURLOPT_WRITEHEADER, resp->header); + curl_easy_setopt(curl, CURLOPT_URL, url); + switch(method) { + case GET: + break; + case PUT: + curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "PUT"); + break; + case POST: + curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "POST"); + break; + case DELETE: + curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "DELETE"); + break; + default: + ret = EINVAL; + fprintf(stderr, "ERROR: Invalid HTTP method\n"); + goto done; + } + if (followloc == YES) { + curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1); + } + /* Now run the curl handler */ + curlCode = curl_easy_perform(curl); + if (curlCode != CURLE_OK) { + ret = EIO; + fprintf(stderr, "ERROR: performing the request to URL %s failed, <%d>: %s\n", + url, curlCode, curl_easy_strerror(curlCode)); + } +done: + if (curl != NULL) { + curl_easy_cleanup(curl); + } + if (ret) { + free(resp); + resp = NULL; + } + *response = resp; + return ret; +} + +/** + * Launch the read request. The request is sent to the NameNode and then + * redirected to the corresponding DataNode + * + * @param url The URL for the read request + * @param resp The response containing the buffer provided by the user + * @return 0 for success and non-zero value to indicate error + */ +static int launchReadInternal(const char *url, struct Response* resp) +{ + CURL *curl; + CURLcode curlCode; + int ret = 0; + + if (!resp || !resp->body || !resp->body->content) { + fprintf(stderr, + "ERROR: invalid user-provided buffer!\n"); + return EINVAL; + } + + initCurlGlobal(); + /* get a curl handle */ + curl = curl_easy_init(); + if (!curl) { + fprintf(stderr, "ERROR in curl_easy_init.\n"); + return ENOMEM; + } + curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, writeFuncWithUserBuffer); + curl_easy_setopt(curl, CURLOPT_WRITEDATA, resp->body); + curl_easy_setopt(curl, CURLOPT_HEADERFUNCTION, writefunc); + curl_easy_setopt(curl, CURLOPT_WRITEHEADER, resp->header); + curl_easy_setopt(curl, CURLOPT_URL, url); + curl_easy_setopt(curl,
CURLOPT_FOLLOWLOCATION, 1); + + curlCode = curl_easy_perform(curl); + if (curlCode != CURLE_OK && curlCode != CURLE_PARTIAL_FILE) { + ret = EIO; + fprintf(stderr, "ERROR: performing the request to URL %s failed, <%d>: %s\n", + url, curlCode, curl_easy_strerror(curlCode)); + } + + curl_easy_cleanup(curl); + return ret; +} + +/** + * The function does the write operation by connecting to a DataNode. + * The function keeps the connection with the DataNode until + * the closeFlag is set. Whenever the current data has been sent out, + * the function blocks, waiting for further input from the user or a close. + * + * @param url URL of the remote DataNode + * @param method PUT for create and POST for append + * @param uploadBuffer Buffer storing the user's data to write + * @param response Response from the remote service + * @return 0 for success and non-zero value to indicate error + */ +static int launchWrite(const char *url, enum HttpHeader method, + struct webhdfsBuffer *uploadBuffer, + struct Response **response) +{ + CURLcode curlCode; + struct Response* resp = NULL; + struct curl_slist *chunk = NULL; + CURL *curl = NULL; + int ret = 0; + + if (!uploadBuffer) { + fprintf(stderr, "ERROR: upload buffer is NULL!\n"); + return EINVAL; + } + + initCurlGlobal(); + resp = calloc(1, sizeof(struct Response)); + if (!resp) { + return ENOMEM; + } + ret = initResponseBuffer(&(resp->body)); + if (ret) { + goto done; + } + ret = initResponseBuffer(&(resp->header)); + if (ret) { + goto done; + } + + // Connect to the datanode in order to create the lease in the namenode + curl = curl_easy_init(); + if (!curl) { + fprintf(stderr, "ERROR: failed to initialize the curl handle.\n"); + ret = ENOMEM; // go through done: so resp is not leaked + goto done; + } + curl_easy_setopt(curl, CURLOPT_URL, url); + + curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, writefunc); + curl_easy_setopt(curl, CURLOPT_WRITEDATA, resp->body); + curl_easy_setopt(curl, CURLOPT_HEADERFUNCTION, writefunc); + curl_easy_setopt(curl, CURLOPT_WRITEHEADER, resp->header); + curl_easy_setopt(curl,
CURLOPT_READFUNCTION, readfunc); + curl_easy_setopt(curl, CURLOPT_READDATA, uploadBuffer); + curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L); + + chunk = curl_slist_append(chunk, "Transfer-Encoding: chunked"); + curl_easy_setopt(curl, CURLOPT_HTTPHEADER, chunk); + chunk = curl_slist_append(chunk, "Expect:"); + curl_easy_setopt(curl, CURLOPT_HTTPHEADER, chunk); + + switch(method) { + case PUT: + curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "PUT"); + break; + case POST: + curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "POST"); + break; + default: + ret = EINVAL; + fprintf(stderr, "ERROR: Invalid HTTP method\n"); + goto done; + } + curlCode = curl_easy_perform(curl); + if (curlCode != CURLE_OK) { + ret = EIO; + fprintf(stderr, "ERROR: performing the request to URL %s failed, <%d>: %s\n", + url, curlCode, curl_easy_strerror(curlCode)); + } + +done: + if (chunk != NULL) { + curl_slist_free_all(chunk); + } + if (curl != NULL) { + curl_easy_cleanup(curl); + } + if (ret) { + free(resp); + resp = NULL; + } + *response = resp; + return ret; +} + +int launchMKDIR(const char *url, struct Response **resp) +{ + return launchCmd(url, PUT, NO, resp); +} + +int launchRENAME(const char *url, struct Response **resp) +{ + return launchCmd(url, PUT, NO, resp); +} + +int launchGFS(const char *url, struct Response **resp) +{ + return launchCmd(url, GET, NO, resp); +} + +int launchLS(const char *url, struct Response **resp) +{ + return launchCmd(url, GET, NO, resp); +} + +int launchCHMOD(const char *url, struct Response **resp) +{ + return launchCmd(url, PUT, NO, resp); +} + +int launchCHOWN(const char *url, struct Response **resp) +{ + return launchCmd(url, PUT, NO, resp); +} + +int launchDELETE(const char *url, struct Response **resp) +{ + return launchCmd(url, DELETE, NO, resp); +} + +int launchOPEN(const char *url, struct Response* resp) +{ + return launchReadInternal(url, resp); +} + +int launchUTIMES(const char *url, struct Response **resp) +{ + return launchCmd(url, PUT, NO, resp); +} + +int
launchNnWRITE(const char *url, struct Response **resp) +{ + return launchCmd(url, PUT, NO, resp); +} + +int launchNnAPPEND(const char *url, struct Response **resp) +{ + return launchCmd(url, POST, NO, resp); +} + +int launchDnWRITE(const char *url, struct webhdfsBuffer *buffer, + struct Response **resp) +{ + return launchWrite(url, PUT, buffer, resp); +} + +int launchDnAPPEND(const char *url, struct webhdfsBuffer *buffer, + struct Response **resp) +{ + return launchWrite(url, POST, buffer, resp); +} + +int launchSETREPLICATION(const char *url, struct Response **resp) +{ + return launchCmd(url, PUT, NO, resp); +} http://git-wip-us.apache.org/repos/asf/hadoop/blob/960b19ed/hadoop-hdfs-project/hadoop-hdfs-native-client/src/contrib/libwebhdfs/src/hdfs_http_client.h ---------------------------------------------------------------------- diff --git a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/contrib/libwebhdfs/src/hdfs_http_client.h b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/contrib/libwebhdfs/src/hdfs_http_client.h new file mode 100644 index 0000000..8d1c3db --- /dev/null +++ b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/contrib/libwebhdfs/src/hdfs_http_client.h @@ -0,0 +1,294 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ + + + +#ifndef _HDFS_HTTP_CLIENT_H_ +#define _HDFS_HTTP_CLIENT_H_ + +#include "hdfs.h" /* for tSize */ + +#include <pthread.h> /* for pthread_t */ +#include <stddef.h> /* for size_t */ + +/** enum indicating the type of hdfs stream */ +enum hdfsStreamType +{ + UNINITIALIZED = 0, + INPUT = 1, + OUTPUT = 2, +}; + +/** + * webhdfsBuffer - used to hold the data for read/write from/to the http connection + */ +struct webhdfsBuffer { + const char *wbuffer; /* The user's buffer for uploading */ + size_t remaining; /* Length of content */ + size_t offset; /* offset for reading */ + /* Check whether hdfsOpenFile has been called before */ + int openFlag; + /* Whether to close the http connection for writing */ + int closeFlag; + /* Synchronization between the curl and hdfsWrite threads */ + pthread_mutex_t writeMutex; + /* + * The transferring thread waits on this condition + * when there is no more content for transferring in the buffer + */ + pthread_cond_t newwrite_or_close; + /* Condition used to indicate finishing transferring (one buffer) */ + pthread_cond_t transfer_finish; +}; + +/** File handle for webhdfs */ +struct webhdfsFileHandle { + char *absPath; /* Absolute path of file */ + int bufferSize; /* Size of buffer */ + short replication; /* Replication factor */ + tSize blockSize; /* Block size */ + char *datanode; /* URL of the DataNode */ + /* webhdfsBuffer handle used to store the upload data */ + struct webhdfsBuffer *uploadBuffer; + /* The thread used for data transfer */ + pthread_t connThread; +}; + +/** HTTP request method */ +enum HttpHeader { + GET, + PUT, + POST, + DELETE +}; + +/** Whether to follow a redirect */ +enum Redirect { + YES, + NO +}; + +/** Buffer used for holding the response */ +struct ResponseBuffer { + char *content; + size_t remaining; + size_t offset; +}; + +/** + * The response received through webhdfs + */ +struct Response { + struct ResponseBuffer *body; +
struct ResponseBuffer *header; +}; + +/** + * Create and initialize a ResponseBuffer + * + * @param buffer Pointer to the newly created ResponseBuffer handle + * @return 0 for success, non-zero value to indicate error + */ +int initResponseBuffer(struct ResponseBuffer **buffer) __attribute__ ((warn_unused_result)); + +/** + * Free the given ResponseBuffer + * + * @param buffer The ResponseBuffer to free + */ +void freeResponseBuffer(struct ResponseBuffer *buffer); + +/** + * Free the given Response + * + * @param resp The Response to free + */ +void freeResponse(struct Response *resp); + +/** + * Send the MKDIR request to NameNode using the given URL. + * The NameNode will execute the operation and return the result as response. + * + * @param url The URL for MKDIR operation + * @param response Response handle to store response returned from the NameNode + * @return 0 for success, non-zero value to indicate error + */ +int launchMKDIR(const char *url, + struct Response **response) __attribute__ ((warn_unused_result)); + +/** + * Send the RENAME request to NameNode using the given URL. + * The NameNode will execute the operation and return the result as response. + * + * @param url The URL for RENAME operation + * @param response Response handle to store response returned from the NameNode + * @return 0 for success, non-zero value to indicate error + */ +int launchRENAME(const char *url, + struct Response **response) __attribute__ ((warn_unused_result)); + +/** + * Send the CHMOD request to NameNode using the given URL. + * The NameNode will execute the operation and return the result as response. + * + * @param url The URL for CHMOD operation + * @param response Response handle to store response returned from the NameNode + * @return 0 for success, non-zero value to indicate error + */ +int launchCHMOD(const char *url, + struct Response **response) __attribute__ ((warn_unused_result)); + +/** + * Send the GetFileStatus request to NameNode using the given URL.
+ * The NameNode will execute the operation and return the result as response. + * + * @param url The URL for GetFileStatus operation + * @param response Response handle to store response returned from the NameNode, + * containing either file status or exception information + * @return 0 for success, non-zero value to indicate error + */ +int launchGFS(const char *url, + struct Response **response) __attribute__ ((warn_unused_result)); + +/** + * Send the LS (LISTSTATUS) request to NameNode using the given URL. + * The NameNode will execute the operation and return the result as response. + * + * @param url The URL for LISTSTATUS operation + * @param response Response handle to store response returned from the NameNode + * @return 0 for success, non-zero value to indicate error + */ +int launchLS(const char *url, + struct Response **response) __attribute__ ((warn_unused_result)); + +/** + * Send the DELETE request to NameNode using the given URL. + * The NameNode will execute the operation and return the result as response. + * + * @param url The URL for DELETE operation + * @param response Response handle to store response returned from the NameNode + * @return 0 for success, non-zero value to indicate error + */ +int launchDELETE(const char *url, + struct Response **response) __attribute__ ((warn_unused_result)); + +/** + * Send the CHOWN request to NameNode using the given URL. + * The NameNode will execute the operation and return the result as response. + * + * @param url The URL for CHOWN operation + * @param response Response handle to store response returned from the NameNode + * @return 0 for success, non-zero value to indicate error + */ +int launchCHOWN(const char *url, + struct Response **response) __attribute__ ((warn_unused_result)); + +/** + * Send the OPEN request to NameNode using the given URL, + * asking for reading a file (within a range). 
+ * The NameNode first redirects the request to the datanode + * that holds the corresponding first block of the file (within a range), + * and the datanode returns the content of the file through the HTTP connection. + * + * @param url The URL for OPEN operation + * @param resp The response holding user's buffer. + * The file content will be written into the buffer. + * @return 0 for success, non-zero value to indicate error + */ +int launchOPEN(const char *url, + struct Response* resp) __attribute__ ((warn_unused_result)); + +/** + * Send the SETTIMES request to NameNode using the given URL. + * The NameNode will execute the operation and return the result as response. + * + * @param url The URL for SETTIMES operation + * @param response Response handle to store response returned from the NameNode + * @return 0 for success, non-zero value to indicate error + */ +int launchUTIMES(const char *url, + struct Response **response) __attribute__ ((warn_unused_result)); + +/** + * Send the WRITE/CREATE request to NameNode using the given URL. + * The NameNode will choose the writing target datanodes + * and return the first datanode in the pipeline as the response. + * + * @param url The URL for WRITE/CREATE operation connecting to NameNode + * @param response Response handle to store response returned from the NameNode + * @return 0 for success, non-zero value to indicate error + */ +int launchNnWRITE(const char *url, + struct Response **response) __attribute__ ((warn_unused_result)); + +/** + * Send the WRITE request along with to-write content to + * the corresponding DataNode using the given URL. + * The DataNode will write the data and return the response.
+ * + * @param url The URL for WRITE operation connecting to DataNode + * @param buffer The webhdfsBuffer containing data to be written to hdfs + * @param response Response handle to store response returned from the NameNode + * @return 0 for success, non-zero value to indicate error + */ +int launchDnWRITE(const char *url, struct webhdfsBuffer *buffer, + struct Response **response) __attribute__ ((warn_unused_result)); + +/** + * Send the WRITE (APPEND) request to NameNode using the given URL. + * The NameNode determines the DataNode for appending and + * sends its URL back as response. + * + * @param url The URL for APPEND operation + * @param response Response handle to store response returned from the NameNode + * @return 0 for success, non-zero value to indicate error + */ +int launchNnAPPEND(const char *url, struct Response **response) __attribute__ ((warn_unused_result)); + +/** + * Send the SETREPLICATION request to NameNode using the given URL. + * The NameNode will execute the operation and return the result as response. + * + * @param url The URL for SETREPLICATION operation + * @param response Response handle to store response returned from the NameNode + * @return 0 for success, non-zero value to indicate error + */ +int launchSETREPLICATION(const char *url, + struct Response **response) __attribute__ ((warn_unused_result)); + +/** + * Send the APPEND request along with the content to DataNode. + * The DataNode will do the appending and return the result as response. + * + * @param url The URL for APPEND operation connecting to DataNode + * @param buffer The webhdfsBuffer containing data to be appended + * @param response Response handle to store response returned from the NameNode + * @return 0 for success, non-zero value to indicate error + */ +int launchDnAPPEND(const char *url, struct webhdfsBuffer *buffer, + struct Response **response) __attribute__ ((warn_unused_result)); + +/** + * Thread-safe strerror alternative. 
+ * + * @param errnoval The error code value + * @return The error message string mapped to the given error code + */ +const char *hdfs_strerror(int errnoval); + +#endif //_HDFS_HTTP_CLIENT_H_ http://git-wip-us.apache.org/repos/asf/hadoop/blob/960b19ed/hadoop-hdfs-project/hadoop-hdfs-native-client/src/contrib/libwebhdfs/src/hdfs_http_query.c ---------------------------------------------------------------------- diff --git a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/contrib/libwebhdfs/src/hdfs_http_query.c b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/contrib/libwebhdfs/src/hdfs_http_query.c new file mode 100644 index 0000000..b082c08 --- /dev/null +++ b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/contrib/libwebhdfs/src/hdfs_http_query.c @@ -0,0 +1,402 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +#include "hdfs_http_query.h" +#include <errno.h> /* for EINVAL, ENOMEM, EIO */ +#include <stdio.h> /* for snprintf */ +#include <stdlib.h> /* for malloc, free */ +#include <string.h> /* for strlen, memset */ +#include <stdint.h> /* for int16_t */ + +#define PERM_STR_LEN 4 // "644" + one byte for NUL +#define SHORT_STR_LEN 6 // 65535 + NUL +#define LONG_STR_LEN 21 // 2^64-1 = 18446744073709551615 + NUL + +/** + * Create the query URL based on NameNode hostname, + * NameNode port, path, operation and other parameters + * + * @param host NameNode hostname + * @param nnPort Port of the NameNode + * @param path Absolute path of the corresponding file + * @param op The operation + * @param paraNum Number of remaining parameters + * @param paraNames Names of remaining parameters + * @param paraValues Values of remaining parameters + * @param queryUrl Holding the created URL + * @return 0 on success and non-zero value to indicate error + */ +static int createQueryURL(const char *host, unsigned int nnPort, + const char *path, const char *op, int paraNum, + const char **paraNames, const char **paraValues, + char **queryUrl) +{ + size_t length = 0; + int i = 0, offset = 0, ret = 0; + char *url = NULL; + const char *protocol = "http://"; + const char *prefix = "/webhdfs/v1"; + + if (!paraNames || !paraValues) { + return EINVAL; + } + length = strlen(protocol) + strlen(host) + strlen(":") + + SHORT_STR_LEN + strlen(prefix) + strlen(path) + + strlen("?op=") + strlen(op); + for (i = 0; i < paraNum; i++) { + if (paraNames[i] && paraValues[i]) { + length += 2 + strlen(paraNames[i]) + strlen(paraValues[i]); + } + } + url = malloc(length); // The '\0' has already been included + // when using SHORT_STR_LEN + if (!url) { + return ENOMEM; + } + + offset = snprintf(url, length, "%s%s:%u%s%s?op=%s", + protocol, host, nnPort, prefix, path, op); + if (offset >= length || offset < 0) { + ret = EIO; + goto done; + } + for (i = 0; i < paraNum; i++) { + if (!paraNames[i] || !paraValues[i] || paraNames[i][0] == '\0' || + paraValues[i][0] == '\0') { + continue; + } + offset += snprintf(url + offset, length - offset, + "&%s=%s", paraNames[i], paraValues[i]); + if (offset
>= length || offset < 0) { + ret = EIO; + goto done; + } + } +done: + if (ret) { + free(url); + return ret; + } + *queryUrl = url; + return 0; +} + +int createUrlForMKDIR(const char *host, int nnPort, + const char *path, const char *user, char **url) +{ + const char *userPara = "user.name"; + return createQueryURL(host, nnPort, path, "MKDIRS", 1, + &userPara, &user, url); +} + +int createUrlForGetFileStatus(const char *host, int nnPort, const char *path, + const char *user, char **url) +{ + const char *userPara = "user.name"; + return createQueryURL(host, nnPort, path, "GETFILESTATUS", 1, + &userPara, &user, url); +} + +int createUrlForLS(const char *host, int nnPort, const char *path, + const char *user, char **url) +{ + const char *userPara = "user.name"; + return createQueryURL(host, nnPort, path, "LISTSTATUS", + 1, &userPara, &user, url); +} + +int createUrlForNnAPPEND(const char *host, int nnPort, const char *path, + const char *user, char **url) +{ + const char *userPara = "user.name"; + return createQueryURL(host, nnPort, path, "APPEND", + 1, &userPara, &user, url); +} + +int createUrlForMKDIRwithMode(const char *host, int nnPort, const char *path, + int mode, const char *user, char **url) +{ + int strlength; + char permission[PERM_STR_LEN]; + const char *paraNames[2], *paraValues[2]; + + paraNames[0] = "permission"; + paraNames[1] = "user.name"; + memset(permission, 0, PERM_STR_LEN); + strlength = snprintf(permission, PERM_STR_LEN, "%o", mode); + if (strlength < 0 || strlength >= PERM_STR_LEN) { + return EIO; + } + paraValues[0] = permission; + paraValues[1] = user; + + return createQueryURL(host, nnPort, path, "MKDIRS", 2, + paraNames, paraValues, url); +} + +int createUrlForRENAME(const char *host, int nnPort, const char *srcpath, + const char *destpath, const char *user, char **url) +{ + const char *paraNames[2], *paraValues[2]; + paraNames[0] = "destination"; + paraNames[1] = "user.name"; + paraValues[0] = destpath; + paraValues[1] = user; + + return 
createQueryURL(host, nnPort, srcpath, + "RENAME", 2, paraNames, paraValues, url); +} + +int createUrlForCHMOD(const char *host, int nnPort, const char *path, + int mode, const char *user, char **url) +{ + int strlength; + char permission[PERM_STR_LEN]; + const char *paraNames[2], *paraValues[2]; + + paraNames[0] = "permission"; + paraNames[1] = "user.name"; + memset(permission, 0, PERM_STR_LEN); + strlength = snprintf(permission, PERM_STR_LEN, "%o", mode); + if (strlength < 0 || strlength >= PERM_STR_LEN) { + return EIO; + } + paraValues[0] = permission; + paraValues[1] = user; + + return createQueryURL(host, nnPort, path, "SETPERMISSION", + 2, paraNames, paraValues, url); +} + +int createUrlForDELETE(const char *host, int nnPort, const char *path, + int recursive, const char *user, char **url) +{ + const char *paraNames[2], *paraValues[2]; + paraNames[0] = "recursive"; + paraNames[1] = "user.name"; + if (recursive) { + paraValues[0] = "true"; + } else { + paraValues[0] = "false"; + } + paraValues[1] = user; + + return createQueryURL(host, nnPort, path, "DELETE", + 2, paraNames, paraValues, url); +} + +int createUrlForCHOWN(const char *host, int nnPort, const char *path, + const char *owner, const char *group, + const char *user, char **url) +{ + const char *paraNames[3], *paraValues[3]; + paraNames[0] = "owner"; + paraNames[1] = "group"; + paraNames[2] = "user.name"; + paraValues[0] = owner; + paraValues[1] = group; + paraValues[2] = user; + + return createQueryURL(host, nnPort, path, "SETOWNER", + 3, paraNames, paraValues, url); +} + +int createUrlForOPEN(const char *host, int nnPort, const char *path, + const char *user, size_t offset, size_t length, char **url) +{ + int strlength; + char offsetStr[LONG_STR_LEN], lengthStr[LONG_STR_LEN]; + const char *paraNames[3], *paraValues[3]; + + paraNames[0] = "offset"; + paraNames[1] = "length"; + paraNames[2] = "user.name"; + memset(offsetStr, 0, LONG_STR_LEN); + memset(lengthStr, 0, LONG_STR_LEN); + strlength = 
snprintf(offsetStr, LONG_STR_LEN, "%lu", offset); + if (strlength < 0 || strlength >= LONG_STR_LEN) { + return EIO; + } + strlength = snprintf(lengthStr, LONG_STR_LEN, "%lu", length); + if (strlength < 0 || strlength >= LONG_STR_LEN) { + return EIO; + } + paraValues[0] = offsetStr; + paraValues[1] = lengthStr; + paraValues[2] = user; + + return createQueryURL(host, nnPort, path, "OPEN", + 3, paraNames, paraValues, url); +} + +int createUrlForUTIMES(const char *host, int nnPort, const char *path, + long unsigned mTime, long unsigned aTime, + const char *user, char **url) +{ + int strlength; + char modTime[LONG_STR_LEN], acsTime[LONG_STR_LEN]; + const char *paraNames[3], *paraValues[3]; + + memset(modTime, 0, LONG_STR_LEN); + memset(acsTime, 0, LONG_STR_LEN); + strlength = snprintf(modTime, LONG_STR_LEN, "%lu", mTime); + if (strlength < 0 || strlength >= LONG_STR_LEN) { + return EIO; + } + strlength = snprintf(acsTime, LONG_STR_LEN, "%lu", aTime); + if (strlength < 0 || strlength >= LONG_STR_LEN) { + return EIO; + } + paraNames[0] = "modificationtime"; + paraNames[1] = "accesstime"; + paraNames[2] = "user.name"; + paraValues[0] = modTime; + paraValues[1] = acsTime; + paraValues[2] = user; + + return createQueryURL(host, nnPort, path, "SETTIMES", + 3, paraNames, paraValues, url); +} + +int createUrlForNnWRITE(const char *host, int nnPort, + const char *path, const char *user, + int16_t replication, size_t blockSize, char **url) +{ + int strlength; + char repStr[SHORT_STR_LEN], blockSizeStr[LONG_STR_LEN]; + const char *paraNames[4], *paraValues[4]; + + memset(repStr, 0, SHORT_STR_LEN); + memset(blockSizeStr, 0, LONG_STR_LEN); + if (replication > 0) { + strlength = snprintf(repStr, SHORT_STR_LEN, "%u", replication); + if (strlength < 0 || strlength >= SHORT_STR_LEN) { + return EIO; + } + } + if (blockSize > 0) { + strlength = snprintf(blockSizeStr, LONG_STR_LEN, "%lu", blockSize); + if (strlength < 0 || strlength >= LONG_STR_LEN) { + return EIO; + } + } + paraNames[0] = 
"overwrite"; + paraNames[1] = "replication"; + paraNames[2] = "blocksize"; + paraNames[3] = "user.name"; + paraValues[0] = "true"; + paraValues[1] = repStr; + paraValues[2] = blockSizeStr; + paraValues[3] = user; + + return createQueryURL(host, nnPort, path, "CREATE", + 4, paraNames, paraValues, url); +} + +int createUrlForSETREPLICATION(const char *host, int nnPort, + const char *path, int16_t replication, + const char *user, char **url) +{ + char repStr[SHORT_STR_LEN]; + const char *paraNames[2], *paraValues[2]; + int strlength; + + memset(repStr, 0, SHORT_STR_LEN); + if (replication > 0) { + strlength = snprintf(repStr, SHORT_STR_LEN, "%u", replication); + if (strlength < 0 || strlength >= SHORT_STR_LEN) { + return EIO; + } + } + paraNames[0] = "replication"; + paraNames[1] = "user.name"; + paraValues[0] = repStr; + paraValues[1] = user; + + return createQueryURL(host, nnPort, path, "SETREPLICATION", + 2, paraNames, paraValues, url); +} + +int createUrlForGetBlockLocations(const char *host, int nnPort, + const char *path, size_t offset, + size_t length, const char *user, char **url) +{ + char offsetStr[LONG_STR_LEN], lengthStr[LONG_STR_LEN]; + const char *paraNames[3], *paraValues[3]; + int strlength; + + memset(offsetStr, 0, LONG_STR_LEN); + memset(lengthStr, 0, LONG_STR_LEN); + if (offset > 0) { + strlength = snprintf(offsetStr, LONG_STR_LEN, "%lu", offset); + if (strlength < 0 || strlength >= LONG_STR_LEN) { + return EIO; + } + } + if (length > 0) { + strlength = snprintf(lengthStr, LONG_STR_LEN, "%lu", length); + if (strlength < 0 || strlength >= LONG_STR_LEN) { + return EIO; + } + } + paraNames[0] = "offset"; + paraNames[1] = "length"; + paraNames[2] = "user.name"; + paraValues[0] = offsetStr; + paraValues[1] = lengthStr; + paraValues[2] = user; + + return createQueryURL(host, nnPort, path, "GET_BLOCK_LOCATIONS", + 3, paraNames, paraValues, url); +} + +int createUrlForReadFromDatanode(const char *dnHost, int dnPort, + const char *path, size_t offset, + 
size_t length, const char *user, + const char *namenodeRpcAddr, char **url) +{ + char offsetStr[LONG_STR_LEN], lengthStr[LONG_STR_LEN]; + const char *paraNames[4], *paraValues[4]; + int strlength; + + memset(offsetStr, 0, LONG_STR_LEN); + memset(lengthStr, 0, LONG_STR_LEN); + if (offset > 0) { + strlength = snprintf(offsetStr, LONG_STR_LEN, "%lu", offset); + if (strlength < 0 || strlength >= LONG_STR_LEN) { + return EIO; + } + } + if (length > 0) { + strlength = snprintf(lengthStr, LONG_STR_LEN, "%lu", length); + if (strlength < 0 || strlength >= LONG_STR_LEN) { + return EIO; + } + } + + paraNames[0] = "offset"; + paraNames[1] = "length"; + paraNames[2] = "user.name"; + paraNames[3] = "namenoderpcaddress"; + paraValues[0] = offsetStr; + paraValues[1] = lengthStr; + paraValues[2] = user; + paraValues[3] = namenodeRpcAddr; + + return createQueryURL(dnHost, dnPort, path, "OPEN", + 4, paraNames, paraValues, url); +} \ No newline at end of file http://git-wip-us.apache.org/repos/asf/hadoop/blob/960b19ed/hadoop-hdfs-project/hadoop-hdfs-native-client/src/contrib/libwebhdfs/src/hdfs_http_query.h ---------------------------------------------------------------------- diff --git a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/contrib/libwebhdfs/src/hdfs_http_query.h b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/contrib/libwebhdfs/src/hdfs_http_query.h new file mode 100644 index 0000000..432797b --- /dev/null +++ b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/contrib/libwebhdfs/src/hdfs_http_query.h @@ -0,0 +1,240 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + + +#ifndef _HDFS_HTTP_QUERY_H_ +#define _HDFS_HTTP_QUERY_H_ + +#include <stddef.h> /* for size_t */ +#include <stdint.h> /* for int16_t */ + +/** + * Create the URL for a MKDIR request + * + * @param host The hostname of the NameNode + * @param nnPort Port of the NameNode + * @param path Path of the dir to create + * @param user User name + * @param url Holding the generated URL for MKDIR request + * @return 0 on success and non-zero value on errors + */ +int createUrlForMKDIR(const char *host, int nnPort, + const char *path, const char *user, + char **url) __attribute__ ((warn_unused_result)); + +/** + * Create the URL for a MKDIR (with mode) request + * + * @param host The hostname of the NameNode + * @param nnPort Port of the NameNode + * @param path Path of the dir to create + * @param mode Mode of MKDIR + * @param user User name + * @param url Holding the generated URL for MKDIR request + * @return 0 on success and non-zero value on errors + */ +int createUrlForMKDIRwithMode(const char *host, int nnPort, const char *path, + int mode, const char *user, + char **url) __attribute__ ((warn_unused_result)); + +/** + * Create the URL for a RENAME request + * + * @param host The hostname of the NameNode + * @param nnPort Port of the NameNode + * @param srcpath Source path + * @param dstpath Destination path + * @param user User name + * @param url Holding the generated URL for RENAME request + * @return 0 on success and non-zero value on errors + */ +int createUrlForRENAME(const char *host, int nnPort, const char *srcpath, + const char *dstpath, const char *user, +
char **url) __attribute__ ((warn_unused_result)); + +/** + * Create the URL for a CHMOD request + * + * @param host The hostname of the NameNode + * @param nnPort Port of the NameNode + * @param path Target path + * @param mode New mode for the file + * @param user User name + * @param url Holding the generated URL for CHMOD request + * @return 0 on success and non-zero value on errors + */ +int createUrlForCHMOD(const char *host, int nnPort, const char *path, + int mode, const char *user, + char **url) __attribute__ ((warn_unused_result)); + +/** + * Create the URL for a GETFILESTATUS request + * + * @param host The hostname of the NameNode + * @param nnPort Port of the NameNode + * @param path Path of the target file + * @param user User name + * @param url Holding the generated URL for GETFILESTATUS request + * @return 0 on success and non-zero value on errors + */ +int createUrlForGetFileStatus(const char *host, int nnPort, + const char *path, const char *user, + char **url) __attribute__ ((warn_unused_result)); + +/** + * Create the URL for a LISTSTATUS request + * + * @param host The hostname of the NameNode + * @param nnPort Port of the NameNode + * @param path Path of the directory for listing + * @param user User name + * @param url Holding the generated URL for LISTSTATUS request + * @return 0 on success and non-zero value on errors + */ +int createUrlForLS(const char *host, int nnPort, + const char *path, const char *user, + char **url) __attribute__ ((warn_unused_result)); + +/** + * Create the URL for a DELETE request + * + * @param host The hostname of the NameNode + * @param nnPort Port of the NameNode + * @param path Path of the file to be deleted + * @param recursive Whether to delete recursively + * @param user User name + * @param url Holding the generated URL for DELETE request + * @return 0 on success and non-zero value on errors + */ +int createUrlForDELETE(const char *host, int nnPort, const char *path, + int recursive, const
char *user, + char **url) __attribute__ ((warn_unused_result)); + +/** + * Create the URL for a CHOWN request + * + * @param host The hostname of the NameNode + * @param nnPort Port of the NameNode + * @param path Path of the target + * @param owner New owner + * @param group New group + * @param user User name + * @param url Holding the generated URL for CHOWN request + * @return 0 on success and non-zero value on errors + */ +int createUrlForCHOWN(const char *host, int nnPort, const char *path, + const char *owner, const char *group, const char *user, + char **url) __attribute__ ((warn_unused_result)); + +/** + * Create the URL for an OPEN/READ request + * + * @param host The hostname of the NameNode + * @param nnPort Port of the NameNode + * @param path Path of the file to read + * @param user User name + * @param offset Offset for reading (the start position for this read) + * @param length Length of the file to read + * @param url Holding the generated URL for OPEN/READ request + * @return 0 on success and non-zero value on errors + */ +int createUrlForOPEN(const char *host, int nnPort, const char *path, + const char *user, size_t offset, size_t length, + char **url) __attribute__ ((warn_unused_result)); + +/** + * Create the URL for a UTIMES (update time) request + * + * @param host The hostname of the NameNode + * @param nnPort Port of the NameNode + * @param path Path of the file for updating time + * @param mTime Modified time to set + * @param aTime Access time to set + * @param user User name + * @param url Holding the generated URL for UTIMES request + * @return 0 on success and non-zero value on errors + */ +int createUrlForUTIMES(const char *host, int nnPort, const char *path, + long unsigned mTime, long unsigned aTime, + const char *user, + char **url) __attribute__ ((warn_unused_result)); + +/** + * Create the URL for a WRITE/CREATE request (sent to NameNode) + * + * @param host The hostname of the NameNode + * @param nnPort Port of the NameNode + *
@param path Path of the file to create + * @param user User name + * @param replication Replication factor of the file + * @param blockSize Size of the block for the file + * @param url Holding the generated URL for WRITE request + * @return 0 on success and non-zero value on errors + */ +int createUrlForNnWRITE(const char *host, int nnPort, const char *path, + const char *user, int16_t replication, size_t blockSize, + char **url) __attribute__ ((warn_unused_result)); + +/** + * Create the URL for an APPEND request (sent to NameNode) + * + * @param host The hostname of the NameNode + * @param nnPort Port of the NameNode + * @param path Path of the file for appending + * @param user User name + * @param url Holding the generated URL for APPEND request + * @return 0 on success and non-zero value on errors + */ +int createUrlForNnAPPEND(const char *host, int nnPort, + const char *path, const char *user, + char **url) __attribute__ ((warn_unused_result)); + +/** + * Create the URL for a SETREPLICATION request + * + * @param host The hostname of the NameNode + * @param nnPort Port of the NameNode + * @param path Path of the target file + * @param replication New replication number + * @param user User name + * @param url Holding the generated URL for SETREPLICATION request + * @return 0 on success and non-zero value on errors + */ +int createUrlForSETREPLICATION(const char *host, int nnPort, const char *path, + int16_t replication, const char *user, + char **url) __attribute__ ((warn_unused_result)); + +/** + * Create the URL for a GET_BLOCK_LOCATIONS request + * + * @param host The hostname of the NameNode + * @param nnPort Port of the NameNode + * @param path Path of the target file + * @param offset The offset in the file + * @param length Length of the file content + * @param user User name + * @param url Holding the generated URL for GET_BLOCK_LOCATIONS request + * @return 0 on success and non-zero value on errors + */ +int createUrlForGetBlockLocations(const
char *host, int nnPort, + const char *path, size_t offset, + size_t length, const char *user, + char **url) __attribute__ ((warn_unused_result)); + + +#endif //_HDFS_HTTP_QUERY_H_
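For reviewers unfamiliar with libwebhdfs: every builder declared in this header assembles a WebHDFS REST URL of the form http://host:port/webhdfs/v1PATH?op=OP&param=value..., and the hdfs_http_query.c hunk above shows the OPEN case delegating to createQueryURL. The following is a standalone sketch of that URL shape only; build_open_url is a hypothetical helper for illustration, not the library's createUrlForOPEN (which also handles the namenoderpcaddress parameter and omits offset/length when zero).

```c
#include <stdio.h>
#include <string.h>

/* Illustrative sketch: the kind of URL createUrlForOPEN assembles via
 * createQueryURL. Not part of libwebhdfs. */
static int build_open_url(const char *host, int port, const char *path,
                          unsigned long offset, unsigned long length,
                          const char *user, char *url, size_t urlLen)
{
    int n = snprintf(url, urlLen,
                     "http://%s:%d/webhdfs/v1%s?op=OPEN"
                     "&offset=%lu&length=%lu&user.name=%s",
                     host, port, path, offset, length, user);
    /* Match the header's convention: 0 on success, non-zero on error
     * (here, a formatting failure or a truncated buffer). */
    return (n < 0 || (size_t)n >= urlLen) ? -1 : 0;
}
```

Note the bounds check mirrors the `strlength < 0 || strlength >= LONG_STR_LEN` pattern in the OPEN hunk above; a production builder additionally needs to percent-encode the path and parameter values.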