hadoop-user mailing list archives

From Marco Shaw <marco.s...@gmail.com>
Subject Re: Urgent Requirement: How to copy File from One cluster to another cluster using java client (through java Program)
Date Fri, 01 Mar 2013 19:59:33 GMT
Hi Samir,

I may be alone here, but I would prefer you not use "urgent" when
asking for free help from a mailing list.

My recommendation: if this is really urgent and you need instant
support for your Hadoop installation, consider getting a proper
support contract to help you when you get stuck and need help right
away.

Again, it might just be me, but free support is... free and usually
volunteer-based.

Marco

On Fri, Mar 1, 2013 at 3:43 PM, samir das mohapatra
<samir.helpdoc@gmail.com> wrote:
> Hi All,
>     Any one had gone through scenario to  copy file from one cluster to
> another cluster using java application program (Using Hadoop FileSystem API)
> .
>
>   I have done some thing using java application but within the same cluster
> it is working file while I am copying the file from one cluster to another
> cluster I am getting the error.
>
> File not found org.apache.hadoop.security.AccessControlException: Permission denied: user=hadoop, access=WRITE, inode="/user/dasmohap/samir_tmp":dasmohap:dasmohap:drwxr-xr-x
>     at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>     at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:186)
>     at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:135)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4547)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4518)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1755)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:1690)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1669)
>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:409)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:205)
>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44068)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:396)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
>
>
> Regards,
> samir.
>
> --
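For reference, the usual pattern for a cross-cluster copy with the Hadoop FileSystem API is to open a separate FileSystem handle for each cluster's NameNode URI and copy between them, e.g. with FileUtil.copy. The sketch below makes assumptions not stated in the thread: the NameNode hosts (source-nn, dest-nn), the port, and the file paths are all hypothetical placeholders. Note also that the AccessControlException above says the client is running as user=hadoop, while the destination directory /user/dasmohap/samir_tmp is owned by dasmohap with mode drwxr-xr-x, so no code change alone will fix it: the client must run as a user with WRITE access on the destination directory (or the directory's permissions/ownership must be adjusted).

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class CrossClusterCopy {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Hypothetical NameNode URIs; replace with your clusters' addresses.
        FileSystem srcFs = FileSystem.get(URI.create("hdfs://source-nn:8020"), conf);
        FileSystem dstFs = FileSystem.get(URI.create("hdfs://dest-nn:8020"), conf);

        // Hypothetical paths; the user running this client needs READ access
        // on src and WRITE access on dst's parent directory on the *destination*
        // cluster (this is exactly what the AccessControlException checks).
        Path src = new Path("/user/hadoop/input.txt");
        Path dst = new Path("/user/dasmohap/samir_tmp/input.txt");

        // FileUtil.copy streams from the source FileSystem to the destination
        // FileSystem; deleteSource=false leaves the original file in place.
        FileUtil.copy(srcFs, src, dstFs, dst, false, conf);

        srcFs.close();
        dstFs.close();
    }
}
```

For larger transfers, the stock `hadoop distcp hdfs://source-nn:8020/path hdfs://dest-nn:8020/path` tool does the same thing as a parallel MapReduce job and handles retries for you.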
