Hello Edward,
I suspect what you're seeing happens because the maximum size is enforced
on the whole client-server request packet, not on any one part of the
payload, such as the data buffer of the znode. Other parts of the payload,
such as the path, count toward the limit too.
If you're in a bind, a potential workaround is to tune jute.maxbuffer as
described in the administrator guide. However, this comes with the usual
disclaimer that ZooKeeper is better suited to storing small data, so you
might want to consider whether your application could be changed to do
something different.
http://zookeeper.apache.org/doc/r3.4.6/zookeeperAdmin.html
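For reference, jute.maxbuffer is read as a Java system property, and the
admin guide warns it must be set consistently on the server and on every
client. A rough sketch of how that might look (the 4 MB value and the
application class name are just placeholders, not recommendations):

```shell
# Example only: raise the limit to 4 MB (4194304 bytes). Set the same value
# on the server JVM and on every client JVM, or behavior will be inconsistent.

# Server side: add the system property to the JVM flags before starting.
export JVMFLAGS="-Djute.maxbuffer=4194304"
bin/zkServer.sh restart

# Client side: pass the same property when launching your application
# (com.example.MyApp is a hypothetical class, for illustration only).
java -Djute.maxbuffer=4194304 -cp myapp.jar com.example.MyApp
```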
The behavior of jute.maxbuffer is a common source of confusion. As you
pointed out, the Javadoc makes it sound like the limit is enforced solely
on the znode data contents. ZOOKEEPER-1295 is an open issue that tracks
improving this documentation.
https://issues.apache.org/jira/browse/ZOOKEEPER-1295
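To make the arithmetic concrete: if the whole serialized request must fit
under the limit, then whatever the data array doesn't use is all that's
left for the path and any framing overhead. This sketch only does that
subtraction; it doesn't model the actual wire format, and the "margin"
notion is my own illustration, not a ZooKeeper API.

```java
// Illustration only: the jute.maxbuffer check applies to the serialized
// request, so the data array and the path share the same byte budget.
public class MaxBufferSketch {
    static final int JUTE_MAX_BUFFER = 1048576; // default limit, 1 MB

    // Hypothetical helper: bytes left over for path/headers after the data.
    static int margin(byte[] data) {
        return JUTE_MAX_BUFFER - data.length;
    }

    public static void main(String[] args) {
        byte[] exactlyOneMb = new byte[1048576];
        byte[] slightlySmaller = new byte[1048576 - 100];

        // A full 1 MB array leaves 0 bytes for the rest of the request,
        // so even a short path like "/test_max" (9 bytes) pushes it over.
        System.out.println(margin(exactlyOneMb));     // 0
        System.out.println(margin(slightlySmaller));  // 100
    }
}
```

This matches your observation: the 1048576-byte array fails while
1048576-100 succeeds, because the smaller array leaves room for the path
and request framing.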
I hope this helps.
--Chris Nauroth
On 10/15/15, 11:12 AM, "Edward Capriolo"
<edward.capriolo@huffingtonpost.com> wrote:
>We are running zookeeper-3.4.5
>
>byte[] bytes = new byte[1048576];
> zookeeper.create("/test_max", bytes);
>
>-> connection loss exception
>
>byte[] bytes = new byte[1048576-100];
> zookeeper.create("/test_max", bytes);
>-> works
>According to the documentation
>http://zookeeper.apache.org/doc/r3.4.5/api/org/apache/zookeeper/ZooKeeper.html
>The maximum allowable size of the data array is 1 MB (1,048,576 bytes).
>Arrays larger than this will cause a KeeperException to be thrown.