hbase-user mailing list archives

From apratim sharma <apratim.sha...@gmail.com>
Subject Re: Hbase major compaction question
Date Fri, 24 Jul 2015 22:13:23 GMT
Hi Ted,

Please find my answers below.

*Release of HBase:* 1.0.0-cdh5.4.1
*Configuration change before restart:* Changed block-cache-related
configuration (mainly increased the off-heap bucket cache size).
*"Compaction gone" means:* Yes, data locality became poor after the restart.

Please find a log snippet pasted below from when compaction was running after
the restart.
Looking at this log, I guess it has something to do with the HFile
permissions: it seems the region server cannot delete or modify the files
during compaction. In spite of that it reports the compaction as 100%
complete, and only after a restart does it have to redo the work, because it
failed to archive the old HFiles.

I will try once again after changing the file permissions and update you.
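
To illustrate what I mean by changing the permissions, here is a rough,
untested sketch of my own (not something from HBase itself) that uses the
Hadoop FileSystem API to recursively hand ownership of everything under the
table directory seen in the log back to the hbase user, so the archiver can
rename and delete the store files. The hbase:hbase owner/group is an
assumption about our cluster setup, the table path is taken from the log
below, and FileSystem.setOwner has to be run with HDFS superuser privileges.
A recursive chown with the hdfs CLI, run as the HDFS superuser, would achieve
the same thing.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ChownTableHFiles {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // Table directory taken from the log below; adjust for your cluster.
    chownRecursive(fs, fs.getFileStatus(new Path("/hbase/data/apratim/sdp")));
    fs.close();
  }

  // Give every file and directory under the table back to hbase:hbase so the
  // region server can archive (rename/delete) store files during compaction.
  // Note: FileSystem.setOwner requires HDFS superuser privileges.
  private static void chownRecursive(FileSystem fs, FileStatus status) throws Exception {
    fs.setOwner(status.getPath(), "hbase", "hbase");
    if (status.isDirectory()) {
      for (FileStatus child : fs.listStatus(status.getPath())) {
        chownRecursive(fs, child);
      }
    }
  }
}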

I have thousands of occurrences of the log entry below in the region server
log file; I am pasting just one.


Thanks a lot for the help
Apratim

1:35:55.086 PM WARN org.apache.hadoop.hbase.backup.HFileArchiver
Failed to archive class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://lnxcdh03.emeter.com:8020/hbase/data/apratim/sdp/f5bbbf1ff78935dab7093517dffa44f6/m/3aff94a0594345968ac373179c629126_SeqId_6_ on try #0
org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/hbase/data/apratim/sdp/f5bbbf1ff78935dab7093517dffa44f6/m/3aff94a0594345968ac373179c629126_SeqId_6_":aparsh:aparsh:-rw-r--r--
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:151)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6596)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6578)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6503)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimesInt(FSNamesystem.java:2209)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimes(FSNamesystem.java:2187)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setTimes(NameNodeRpcServer.java:1088)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.setTimes(AuthorizationProviderProxyClientProtocol.java:600)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setTimes(ClientNamenodeProtocolServerSideTranslatorPB.java:892)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)

    at sun.reflect.GeneratedConstructorAccessor35.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSClient.setTimes(DFSClient.java:2829)
    at org.apache.hadoop.hdfs.DistributedFileSystem$26.doCall(DistributedFileSystem.java:1343)
    at org.apache.hadoop.hdfs.DistributedFileSystem$26.doCall(DistributedFileSystem.java:1339)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.setTimes(DistributedFileSystem.java:1339)
    at org.apache.hadoop.fs.FilterFileSystem.setTimes(FilterFileSystem.java:484)
    at org.apache.hadoop.hbase.util.FSUtils.renameAndSetModifyTime(FSUtils.java:1719)
    at org.apache.hadoop.hbase.backup.HFileArchiver$File.moveAndClose(HFileArchiver.java:586)
    at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile(HFileArchiver.java:425)
    at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:335)
    at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:284)
    at org.apache.hadoop.hbase.backup.HFileArchiver.archiveStoreFiles(HFileArchiver.java:231)
    at org.apache.hadoop.hbase.regionserver.HRegionFileSystem.removeStoreFiles(HRegionFileSystem.java:424)
    at org.apache.hadoop.hbase.regionserver.HStore.completeCompaction(HStore.java:1736)
    at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1256)
    at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1724)
    at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:511)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=hbase, access=WRITE, inode="/hbase/data/apratim/sdp/f5bbbf1ff78935dab7093517dffa44f6/m/3aff94a0594345968ac373179c629126_SeqId_6_":aparsh:aparsh:-rw-r--r--
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:151)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6596)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6578)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6503)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimesInt(FSNamesystem.java:2209)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimes(FSNamesystem.java:2187)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setTimes(NameNodeRpcServer.java:1088)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.setTimes(AuthorizationProviderProxyClientProtocol.java:600)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setTimes(ClientNamenodeProtocolServerSideTranslatorPB.java:892)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)



On Fri, Jul 24, 2015 at 1:05 PM, Ted Yu <yuzhihong@gmail.com> wrote:

>
> Can you provide us more information:
> Release of HBase you use
> Configuration change you made prior to restarting
> By 'compaction is gone', do you mean that locality became poor again?
>
> Can you pastebin the region server log from when compaction got stuck?
>
> Thanks
>
> Saturday, July 25, 2015, 2:20 AM +0800 from apratim sharma  <
> apratim.sharma@gmail.com>:
> >I have an HBase table with a wide row, almost 2K columns per row. Each
> >KV size is approx 2.1 KB.
> >I have populated this table with generated HFiles using an MR job.
> >There are no write or mutate operations performed on this table.
> >
> >So once I am done with a major compaction on this table, ideally we should
> >not require another major or minor compaction if the table is not modified.
> >What I observe is that if I make a configuration change that needs a restart
> >of my HBase service, then after the restart the compaction on the table is
> >gone.
> >And if I start a major compaction on the table again, it again takes a long
> >time to compact the table.
> >
> >Is this expected behavior? I am curious what causes the major compaction to
> >take a long time if nothing has changed on the table.
> >
> >
> >I would really appreciate any help.
> >
> >
> >Thanks
> >
> >Apratim
>
