hadoop-hdfs-issues mailing list archives

From "Chao Sun (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-14062) WebHDFS: Uploading a file again with the same naming convention
Date Sat, 10 Nov 2018 06:21:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-14062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16682231#comment-16682231
] 

Chao Sun commented on HDFS-14062:
---------------------------------

[~arpitkhare04] have you tried the {{overwrite}} parameter? See [here|https://hadoop.apache.org/docs/r1.2.1/webhdfs.html#CREATE].
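For example, a minimal sketch of the same upload with {{overwrite=true}} added to the CREATE call (the NameNode host/port and paths are placeholders taken from the reproduction steps below):

```shell
# Hypothetical NameNode address; substitute your cluster's host and port.
NN="http://<NAMENODE_IP>:<PORT>"

# overwrite=true asks CREATE to replace an existing file instead of
# failing with FileAlreadyExistsException (the default is overwrite=false).
curl -iL -X PUT -T "/tmp/file1.txt" \
  "$NN/webhdfs/v1/user/admin/Test/file1.txt?op=CREATE&overwrite=true&user.name=admin"
```

This mirrors what {{hdfs dfs -put -f}} does on the command line, so no new WebHDFS feature should be needed.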

> WebHDFS: Uploading a file again with the same naming convention
> ---------------------------------------------------------------
>
>                 Key: HDFS-14062
>                 URL: https://issues.apache.org/jira/browse/HDFS-14062
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: webhdfs
>    Affects Versions: 3.1.1
>            Reporter: Arpit Khare
>            Priority: Major
>
> *PROBLEM STATEMENT:*
> If we want to re-upload a file with the same name to HDFS using the WebHDFS API, WebHDFS does not allow it, returning the error:
> {code:java}
> "exception":"FileAlreadyExistsException","javaClassName":"org.apache.hadoop.fs.FileAlreadyExistsException"
> {code}
> But from the HDFS command line we can force-upload (overwrite) a file with the same name:
> {code:java}
> hdfs dfs -put -f /tmp/file1.txt /user/ambari-test {code}
>  
> Can we enable this feature via WebHDFS APIs also?
>  
> *STEPS TO REPRODUCE:*
> 1. Create a directory in HDFS using WebHDFS API:
> {code:java}
>  # curl -iL -X PUT "http://<NAMENODE_IP>:<PORT>/webhdfs/v1/user/admin/Test?op=MKDIRS&user.name=admin"{code}
> 2. Upload a file called /tmp/file1.txt:
> {code:java}
>  # curl -iL -X PUT -T "/tmp/file1.txt" "http://<NAMENODE_IP>:<PORT>/webhdfs/v1/user/admin/Test/file1.txt?op=CREATE&user.name=admin"
{code}
> 3. Now edit this file and then try uploading it back:
> {code}
>  # curl -iL -X PUT -T "/tmp/file1.txt" "http://<NAMENODE_IP>:<PORT>/webhdfs/v1/user/admin/Test/file1.txt?op=CREATE&user.name=admin"
{code}
> 4. We get the following error:
> {code:java}
> HTTP/1.1 100 Continue
> HTTP/1.1 403 Forbidden
>  Content-Type: application/json; charset=utf-8
>  Content-Length: 1465
>  Connection: close
> {"RemoteException":\{"exception":"FileAlreadyExistsException","javaClassName":"org.apache.hadoop.fs.FileAlreadyExistsException","message":"/user/admin/Test/file1.txt
for client 172.26.123.95 already exists\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2815)\n\tat
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2702)\n\tat
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2586)\n\tat
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:736)\n\tat
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:409)\n\tat
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)\n\tat
org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)\n\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)\n\tat
org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)\n\tat java.security.AccessController.doPrivileged(Native
Method)\n\tat javax.security.auth.Subject.doAs(Subject.java:422)\n\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)\n\tat
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)\n"}}
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

