hadoop-common-issues mailing list archives

From "Alejandro Abdelnur (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-10741) A lightweight WebHDFS client library
Date Tue, 24 Jun 2014 00:28:25 GMT

    https://issues.apache.org/jira/browse/HADOOP-10741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041515#comment-14041515

Alejandro Abdelnur commented on HADOOP-10741:

Unless you plan to reimplement the Hadoop {{FileSystem}} API and {{UserGroupInformation}},
which are part of hadoop-common and hadoop-auth, you need to depend on hadoop-common.

HttpFS has a barebones {{FileSystem}} implementation, {{org.apache.hadoop.fs.http.client.HttpFSFileSystem}},
used for testing, and it requires the following imports:

import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.ContentSummary;
import org.apache.hadoop.fs.DelegationTokenRenewer;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PositionedReadable;
import org.apache.hadoop.fs.Seekable;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.AclStatus;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.net.NetUtils;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.authentication.client.AuthenticatedURL;
import org.apache.hadoop.security.authentication.client.Authenticator;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenIdentifier;
import org.apache.hadoop.util.Progressable;
import org.apache.hadoop.util.ReflectionUtils;
import org.apache.hadoop.util.StringUtils;
import org.json.simple.JSONArray;
import org.json.simple.JSONObject;

import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.DataInput;
import java.io.DataOutput;
import java.io.FileNotFoundException;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.URISyntaxException;
import java.net.URL;
import java.security.PrivilegedExceptionAction;
import java.text.MessageFormat;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.Callable;

You could make it a bit simpler, but I don't think by much, unless (as said before) you
end up re-implementing a bunch of stuff from hadoop-common & hadoop-auth.
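
To give a sense of what that re-implementation would involve: a client with no hadoop-common dependency has to assemble the WebHDFS REST URLs itself. A minimal sketch using only JDK classes (the host, port, and user below are made-up placeholders; pseudo-authentication via {{user.name}} only works on unsecured clusters, which is exactly where hadoop-auth's SPNEGO support would otherwise come in):

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class WebHdfsUrls {
    // WebHDFS REST endpoints follow http://<host>:<port>/webhdfs/v1/<path>?op=<OP>&...
    static String buildOpUrl(String host, int port, String path, String op, String user) {
        return String.format("http://%s:%d/webhdfs/v1%s?op=%s&user.name=%s",
                host, port, path, op, user);
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical cluster coordinates, for illustration only.
        String url = buildOpUrl("namenode.example.com", 50070,
                "/user/alice/data.txt", "GETFILESTATUS", "alice");
        System.out.println(url);
        // Issuing the request would then be:
        //   HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        //   conn.setRequestMethod("GET");
        // ...followed by hand-rolled JSON parsing, redirects for OPEN/CREATE,
        // delegation-token handling, etc.
    }
}
```

This covers only URL construction; authentication, token renewal, and error handling are the parts that hadoop-common and hadoop-auth currently provide.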

> A lightweight WebHDFS client library
> ------------------------------------
>                 Key: HADOOP-10741
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10741
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: tools
>            Reporter: Tsz Wo Nicholas Sze
>            Assignee: Mohammad Kamrul Islam
> One of the motivations for creating WebHDFS is for applications connecting to HDFS from
outside the cluster.  In order to do so, users have to either
> # install Hadoop and use WebHdfsFileSystem, or
> # develop their own client using the WebHDFS REST API.
> For #1, it is very difficult to manage and unnecessarily complicated for other applications
since Hadoop is not a lightweight library.  For #2, it is not easy to deal with security and
handle transient errors.
> Therefore, we propose adding a lightweight WebHDFS client as a separated library which
does not depend on Common and HDFS.  The client can be packaged as a standalone jar.  Other
applications simply add the jar to their classpath for using it.
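
On the "handle transient errors" point above: the kind of retry logic such a standalone client would need can be sketched in plain Java (the parameter values are illustrative, not from any proposed API):

```java
import java.util.concurrent.Callable;

public class SimpleRetry {
    // Retries a call up to maxAttempts times, sleeping backoffMillis between
    // failures; rethrows the last exception if every attempt fails.
    static <T> T withRetries(Callable<T> call, int maxAttempts, long backoffMillis)
            throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(backoffMillis);
                }
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        // Simulate a call that fails twice with a transient error, then succeeds.
        int[] calls = {0};
        String result = withRetries(() -> {
            if (++calls[0] < 3) {
                throw new java.io.IOException("transient");
            }
            return "ok";
        }, 5, 10L);
        System.out.println(result + " after " + calls[0] + " attempts");
        // prints: ok after 3 attempts
    }
}
```

A real client would likely also distinguish retryable failures (connect timeouts, 5xx responses) from permanent ones (4xx responses) before retrying.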

This message was sent by Atlassian JIRA
