accumulo-dev mailing list archives

From Josh Elser <>
Subject Re: Fwd: Data authorization/visibility limit in Accumulo
Date Fri, 08 Apr 2016 22:13:03 GMT
Hi Fikri,

Welcome! You're the first Accumulo enthusiast I've heard from in 
Indonesia :)

Responses inline:

Fikri Akbar wrote:
> Hi Guys,
> We're a group of accumulo enthusiasts from Indonesia. We've been trying to
> implement accumulo for several different type of data processing purposes.
> We've got several questions regarding Accumulo, which you might help us
> with. We encounter these issues when we're trying to process heavy amount
> of data, our questions are as follows:
> 1. Let's say that I have a file in HDFS that's about 300 GB with a total
> 1.6 Billion rows, and each line are separated by "^". The question is, what
> is the most effective way to move the data to Accumulo (with assumption
> that the structure of each cell is [rowkey cf:cq vis value] =>  [lineNumber
> raw:columnName fileName columnValue])?

For a 300GB file, you likely want to use MapReduce to ingest it into 
Accumulo. You can use the AccumuloOutputFormat to write to Accumulo 
directly from a MapReduce job.

Reading data whose lines are separated by a '^' will likely require some 
custom InputFormat. I'm not sure if one already exists that you can 
build from. If you can convert the '^' to a standard newline character, 
you can probably leverage the existing TextInputFormat or similar.
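
If converting the delimiter is inconvenient, note that Hadoop 2's LineRecordReader honors the "textinputformat.record.delimiter" property, so TextInputFormat can split records on '^' directly. Below is a rough, untested driver sketch along those lines; the instance name, ZooKeeper hosts, credentials, table name, and the IngestMapper class are all illustrative placeholders, not anything from your setup:

```java
import org.apache.accumulo.core.client.ClientConfiguration;
import org.apache.accumulo.core.client.mapreduce.AccumuloOutputFormat;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class IngestDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Treat '^' as the record delimiter instead of '\n'
    conf.set("textinputformat.record.delimiter", "^");

    Job job = Job.getInstance(conf, "accumulo-ingest");
    job.setJarByClass(IngestDriver.class);
    job.setMapperClass(IngestMapper.class); // your Mapper, emitting <Text,Mutation>
    job.setNumReduceTasks(0);               // map-only ingest
    job.setInputFormatClass(TextInputFormat.class);
    TextInputFormat.addInputPath(job, new Path(args[0]));

    // Write Mutations straight into Accumulo
    job.setOutputFormatClass(AccumuloOutputFormat.class);
    AccumuloOutputFormat.setConnectorInfo(job, "ingestUser", new PasswordToken("secret"));
    AccumuloOutputFormat.setZooKeeperInstance(job,
        ClientConfiguration.loadDefault().withInstance("myInstance").withZkHosts("zk1:2181"));
    AccumuloOutputFormat.setDefaultTableName(job, "mytable");
    AccumuloOutputFormat.setCreateTables(job, true);

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```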

> 2. What is the most effective way to ingest data, if we're receiving data
> with the size of>1 TB on a daily basis?

If latency is not a primary concern, creating Accumulo RFiles and 
performing a bulk ingest (bulk loading) is by far the most efficient way 
to get data into Accumulo. This is often done with a MapReduce job that 
processes your incoming data and creates Accumulo RFiles, which are then 
bulk-loaded into Accumulo. If you have low-latency requirements for 
getting data into Accumulo, waiting for a MapReduce job to complete may 
take too long to meet them.
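
As a sketch of that two-step flow (again untested, with placeholder paths, names, and credentials): the job writes sorted Key-Value pairs out as RFiles via AccumuloFileOutputFormat, and the client then imports the whole directory in one operation.

```java
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.mapreduce.AccumuloFileOutputFormat;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Value;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;

public class BulkIngest {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance();
    job.setJarByClass(BulkIngest.class);
    // ... set your Mapper/Reducer here; the job must emit <Key,Value>
    // pairs in sorted order for the RFiles to be valid
    job.setOutputKeyClass(Key.class);
    job.setOutputValueClass(Value.class);
    job.setOutputFormatClass(AccumuloFileOutputFormat.class);
    AccumuloFileOutputFormat.setOutputPath(job, new Path("/tmp/bulk/files"));

    if (job.waitForCompletion(true)) {
      // Import the generated RFiles; files that fail are moved to the failures dir
      Connector conn = new ZooKeeperInstance("myInstance", "zk1:2181")
          .getConnector("ingestUser", new PasswordToken("secret"));
      conn.tableOperations().importDirectory("mytable", "/tmp/bulk/files",
          "/tmp/bulk/failures", false);
    }
  }
}
```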

> 3. We're currently testing the ability of Accumulo for its data-level
> access control, however the issue regarding the limit of dataset
> authorization occurred when the datasets reached>20,000.
> For example, lets say user X has a data called one.txt. This will make user
> X has authorization to one.txt (let's call it Now, what if X
> has more than that (one.txt, two.xt, three.txt...n.txt), this will result
> in user X having multiple authorization (as much as the data or n
> authorization) and apparently when we tried it for datasets>20,000 (which
> user will have>20,000 authorization), we're not able to execute "get
> auth". We find that this is a very crucial issue, especially if (in one
> case) there's>20,000 datasets that is being granted authorization at once.

Accumulo's column visibilities don't directly work well in the situation 
you describe; this is likely why you are having problems. Specifically, 
because the ColumnVisibility is a part of the Accumulo Key, you cannot 
update it without removing the old Key-Value and adding a new one.

As such, ColumnVisibilities work much better as a labelling system than 
a direct authorization mechanism. Does that make sense? They are a 
building block to help you build authorization, not a complete 
authorization system on their own.
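
To make the "labels, not authorization" point concrete, here's a small hedged example (table and role names are made up): cells are labeled with role expressions at write time, and a scan only returns cells whose expression is satisfied by the authorizations passed to the scanner.

```java
import java.util.Map;
import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.client.BatchWriterConfig;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.Scanner;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Mutation;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.security.Authorizations;
import org.apache.accumulo.core.security.ColumnVisibility;

public class VisibilityLabels {
  static void example(Connector conn) throws Exception {
    // Label the cell with a role expression, not a per-user/per-file token
    BatchWriter bw = conn.createBatchWriter("docs", new BatchWriterConfig());
    Mutation m = new Mutation("doc1");
    m.put("raw", "body", new ColumnVisibility("analyst|admin"),
        new Value("contents".getBytes()));
    bw.addMutation(m);
    bw.close();

    // Only cells whose expression is satisfied by these auths come back
    Scanner scanner = conn.createScanner("docs", new Authorizations("analyst"));
    for (Map.Entry<Key,Value> entry : scanner) {
      System.out.println(entry.getKey() + " -> " + entry.getValue());
    }
  }
}
```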

Authorizations for users are stored in ZooKeeper by default, which is 
probably why you were having problems with 20k+ authorizations.

Can you go into some detail on what your access control requirements 
are? For example, are documents only visible to one user known at ingest 
time? Do the set of allowed users for a file change over time?

Commonly, an external system that manages the current roles for a user 
is a better approach here. For some $user, your application can query 
that system for the set of authorizations that $user currently has and 
pass those along with each query. With some more specifics, we can try 
to give you a better recommendation.
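
The pattern looks roughly like this; fetchRolesFor() is a hypothetical hook standing in for whatever system (LDAP, a database, etc.) actually holds the roles, and the table name is a placeholder:

```java
import java.util.Arrays;
import java.util.List;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.Scanner;
import org.apache.accumulo.core.security.Authorizations;

public class RoleLookupScan {
  // Hypothetical hook: in practice this would ask your external role
  // system for the roles the user holds right now
  static List<String> fetchRolesFor(String user) {
    return Arrays.asList("analyst", "hr"); // stubbed for illustration
  }

  static Scanner scanAs(Connector conn, String user) throws Exception {
    List<String> roles = fetchRolesFor(user);
    // Pass the externally managed roles as this query's authorizations;
    // they must still be a subset of the auths granted to the connecting
    // Accumulo user
    Authorizations auths = new Authorizations(roles.toArray(new String[0]));
    return conn.createScanner("docs", auths);
  }
}
```

This keeps the per-user role churn out of ZooKeeper entirely: the label vocabulary in the table stays small and stable, and only the role lookup changes over time.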

> The following are error logs from our system.
> *Error log in shell:*
> org.apache.accumulo.core.client.AccumuloException:
> org.apache.thrift.TApplicationException: Internal error processing
> getUserAuthorizations
>          at
> org.apache.accumulo.core.client.impl.SecurityOperationsImpl.execute(
>          at
> org.apache.accumulo.core.client.impl.SecurityOperationsImpl.getUserAuthorizations(
>          at com.msk.auxilium.table.AuxUser.setUserAuth(
>          at
> com.msk.auxilium.commons.UserSystem.getAuxUser(
>          at com.msk.auxilium.tester.HDFSTest.main(
> Caused by: org.apache.thrift.TApplicationException: Internal error
> processing getUserAuthorizations
>          at
>          at
> org.apache.thrift.TServiceClient.receiveBase(
>          at
> org.apache.accumulo.core.client.impl.thrift.ClientService$Client.recv_getUserAuthorizations(
>          at
> org.apache.accumulo.core.client.impl.thrift.ClientService$Client.getUserAuthorizations(
>          at
> org.apache.accumulo.core.client.impl.SecurityOperationsImpl$6.execute(
>          at
> org.apache.accumulo.core.client.impl.SecurityOperationsImpl$6.execute(
>          at
> org.apache.accumulo.core.client.impl.ServerClient.executeRaw(
>          at
> org.apache.accumulo.core.client.impl.SecurityOperationsImpl.execute(
>          ... 4 more
> *Error log in accumulo master (web)*
> tserver:
> Zookeeper error, will retry
> 	org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for
> /accumulo/281c3ac0-74eb-4135-bc63-3158eabe2c47/tables/1a/conf/table.split.threshold
> 		at org.apache.zookeeper.KeeperException.create(
> 		at org.apache.zookeeper.KeeperException.create(
> 		at org.apache.zookeeper.ZooKeeper.exists(
> 		at org.apache.accumulo.fate.zookeeper.ZooCache$
> 		at org.apache.accumulo.fate.zookeeper.ZooCache.retry(
> 		at org.apache.accumulo.fate.zookeeper.ZooCache.get(
> 		at org.apache.accumulo.fate.zookeeper.ZooCache.get(
> 		at org.apache.accumulo.server.conf.TableConfiguration.get(
> 		at org.apache.accumulo.server.conf.TableConfiguration.get(
> 		at org.apache.accumulo.core.conf.AccumuloConfiguration.getMemoryInBytes(
> 		at org.apache.accumulo.tserver.Tablet.findSplitRow(
> 		at org.apache.accumulo.tserver.Tablet.needsSplit(
> 		at org.apache.accumulo.tserver.TabletServer$
> 		at
> 		at
> *garbage collector:*
> Zookeeper error, will retry
> 	org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for
> /accumulo/281c3ac0-74eb-4135-bc63-3158eabe2c47/tables
> 		at org.apache.zookeeper.KeeperException.create(
> 		at org.apache.zookeeper.KeeperException.create(
> 		at org.apache.zookeeper.ZooKeeper.getChildren(
> 		at org.apache.accumulo.fate.zookeeper.ZooCache$
> 		at org.apache.accumulo.fate.zookeeper.ZooCache.retry(
> 		at org.apache.accumulo.fate.zookeeper.ZooCache.getChildren(
> 		at org.apache.accumulo.core.client.impl.Tables.getMap(
> 		at org.apache.accumulo.core.client.impl.Tables.getNameToIdMap(
> 		at org.apache.accumulo.core.client.impl.Tables._getTableId(
> 		at org.apache.accumulo.core.client.impl.Tables.getTableId(
> 		at org.apache.accumulo.core.client.impl.ConnectorImpl.getTableId(
> 		at org.apache.accumulo.core.client.impl.ConnectorImpl.createScanner(
> 		at org.apache.accumulo.gc.SimpleGarbageCollector$GCEnv.getCandidates(
> 		at org.apache.accumulo.gc.GarbageCollectionAlgorithm.getCandidates(
> 		at org.apache.accumulo.gc.GarbageCollectionAlgorithm.collect(
> 		at
> 		at org.apache.accumulo.gc.SimpleGarbageCollector.main(
> 		at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 		at sun.reflect.NativeMethodAccessorImpl.invoke(
> 		at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> 		at java.lang.reflect.Method.invoke(
> 		at org.apache.accumulo.start.Main$
> 		at
> we tried finding some resources regarding this issue, but couldn't find any
> that mention the limit of authorizations per user and FYI we're using
> accumulo version 1.6.

Can you give the Accumulo processes more Java heap space? The ZooKeeper 
client needs to maintain a heartbeat with the ZooKeeper servers to stay 
connected. These error messages imply that the Accumulo process cannot 
run in a timely manner, which causes it to be disconnected from 
ZooKeeper (and the client will keep erroring until it can reconnect).
Also, make sure that swappiness on your nodes is set to a value less 
than 10, ideally 1 or 0. Otherwise, the operating system may swap out 
pages in memory to disk and cause you to have pauses.
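
For example (the heap size here is purely illustrative -- size it to your hardware):

```shell
# In conf/accumulo-env.sh: raise the tserver heap so the JVM isn't starved
export ACCUMULO_TSERVER_OPTS="-Xmx4g -Xms4g"

# Lower swappiness now, and persist it across reboots
sudo sysctl -w vm.swappiness=1
echo 'vm.swappiness = 1' | sudo tee -a /etc/sysctl.conf
```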

> Sorry for the long email :) and have a great day.
> Regards,
> *Fikri Akbar*
> Technology
> *PT Mediatrac Sistem Komunikasi*
> Grha Tirtadi 2nd Floor   |   Jl. Senopati 71-73   |   Jakarta 12110   |
> Indonesia   |   *M**ap* 6°13'57.37"S 106°48'42.29"E
> *P* +62 21 520 2568   |   *F* +62 21 520 4180   |   *M*  +62 812 1243 4786
>     |   *<>*
