hbase-user mailing list archives

From Shahab Yunus <shahab.yu...@gmail.com>
Subject Re: test compression in hbase
Date Tue, 25 Mar 2014 12:45:15 GMT
The error says:
RemoteException(java.io.IOException): /hbase/test is non empty

Is the directory actually empty, or are there files left over from some
previous runs? Does the user running the test have permission to delete
data there?
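A quick sketch of those checks, assuming you run it on a cluster node with
the hdfs and hbase CLIs on PATH (the /hbase/test path is the one from your
log; elsewhere the script just prints a note and stops):

```shell
#!/bin/sh
# Guard: these commands only make sense on a Hadoop/HBase cluster node.
if ! command -v hdfs >/dev/null 2>&1; then
    echo "hdfs CLI not found; run these steps on a cluster node"
else
    # 1. See whether /hbase/test has leftover files from a previous run
    hdfs dfs -ls /hbase/test
    # 2. If it is safe to do so, remove the leftovers (needs write permission)
    hdfs dfs -rm -r /hbase/test
    # 3. Re-run CompressionTest now that the path no longer exists
    hbase org.apache.hadoop.hbase.util.CompressionTest /hbase/test snappy
fi
```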

Regards,
Shahab


On Tue, Mar 25, 2014 at 7:42 AM, Mohamed Ghareb <m.ghareeb@tedata.net> wrote:

> How can I test snappy compression in HBase?
> I ran the command below:
> hbase org.apache.hadoop.hbase.util.CompressionTest /hbase/test snappy
>
> The test path exists and is empty, but I get an error:
>
> 14/03/25 13:12:01 DEBUG util.FSUtils: Creating file=/hbase/test with
> permission=rwxrwxrwx
> 14/03/25 13:12:01 INFO hbase.HBaseFileSystem: Create Path with Perms,
> sleeping 1000 times 1
> 14/03/25 13:12:02 INFO hbase.HBaseFileSystem: Create Path with Perms,
> sleeping 1000 times 2
> 14/03/25 13:12:04 INFO hbase.HBaseFileSystem: Create Path with Perms,
> sleeping 1000 times 3
> 14/03/25 13:12:07 INFO hbase.HBaseFileSystem: Create Path with Perms,
> sleeping 1000 times 4
> 14/03/25 13:12:11 INFO hbase.HBaseFileSystem: Create Path with Perms,
> sleeping 1000 times 5
> 14/03/25 13:12:16 INFO hbase.HBaseFileSystem: Create Path with Perms,
> sleeping 1000 times 6
> 14/03/25 13:12:22 INFO hbase.HBaseFileSystem: Create Path with Perms,
> sleeping 1000 times 7
> 14/03/25 13:12:29 INFO hbase.HBaseFileSystem: Create Path with Perms,
> sleeping 1000 times 8
> 14/03/25 13:12:37 INFO hbase.HBaseFileSystem: Create Path with Perms,
> sleeping 1000 times 9
> 14/03/25 13:12:46 INFO hbase.HBaseFileSystem: Create Path with Perms,
> sleeping 1000 times 10
> 14/03/25 13:12:56 WARN hbase.HBaseFileSystem: Create Path with Perms,
> retries exhausted
> Exception in thread "main"
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): /hbase/test is
> non empty
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:2908)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:2872)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:2859)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:642)
>         at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:408)
>         at
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44968)
>         at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1752)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1748)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1746)
>
> How can I test the compression?
