incubator-jena-dev mailing list archives

From "Stephen Allen (JIRA)" <>
Subject [jira] [Updated] (JENA-99) Spill to disk data bags
Date Mon, 15 Aug 2011 19:35:27 GMT


Stephen Allen updated JENA-99:

    Attachment: JENA-99-r1157891.patch

I've attached my implementation of the three types of DataBags.


1) I implemented JENA-44 and JENA-45 using the DataBags, and will post patches that depend on
this code in those JIRAs.

2) I modified the sort algorithm from JENA-44 to create SortedDataBag.  I removed the multi-threaded
aspect to simplify and ensure correct behavior.

3) I was not able to get around to implementing ThresholdPolicyMemory, so the bags use ThresholdPolicyCount
for now.

Some parts I could use help on:

a) Setting the following parameters via an ARQ configuration (a rough sketch of what I have in mind follows this list):
   - The spill count threshold (set to 50,000 now)
   - The location of the temporary spill directory (it uses Java's temp directory now)

b) More unit tests

c) Convert the other query operators that store lots of bindings to use DataBags

d) Code review!
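
For (a), here is roughly the shape I have in mind, building on ARQ's existing Symbol/Context
machinery.  The symbol URIs, the SpillSettings class, and the fallback logic below are
placeholders of mine, not part of the attached patch:

import com.hp.hpl.jena.query.ARQ;
import com.hp.hpl.jena.sparql.util.Context;
import com.hp.hpl.jena.sparql.util.Symbol;

public class SpillSettings {
    // Hypothetical symbols; callers would set these on the global or per-query Context.
    public static final Symbol spillThreshold =
        Symbol.create("http://jena.hpl.hp.com/ARQ#spillThreshold");
    public static final Symbol spillDirectory =
        Symbol.create("http://jena.hpl.hp.com/ARQ#spillDirectory");

    /** Spill count threshold, falling back to the current hard-coded default of 50,000. */
    public static int threshold(Context context) {
        Object v = context.get(spillThreshold);
        if (v instanceof Number) return ((Number) v).intValue();
        if (v instanceof String) return Integer.parseInt((String) v);
        return 50000;
    }

    /** Spill directory, falling back to Java's temporary directory. */
    public static String directory(Context context) {
        Object v = context.get(spillDirectory);
        return (v != null) ? v.toString() : System.getProperty("java.io.tmpdir");
    }

    public static void main(String[] args) {
        // Example: configure globally before running queries.
        ARQ.getContext().set(spillThreshold, 10000);
        ARQ.getContext().set(spillDirectory, "/var/tmp/arq-spill");
        System.out.println(threshold(ARQ.getContext()) + " " + directory(ARQ.getContext()));
    }
}

A per-query execution Context could then override the global settings in the same way other
ARQ options are handled.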

> Spill to disk data bags
> -----------------------
>                 Key: JENA-99
>                 URL:
>             Project: Jena
>          Issue Type: New Feature
>          Components: ARQ
>            Reporter: Stephen Allen
>         Attachments: JENA-99-r1157891.patch
> For certain query operations, ARQ needs to store a large number of tuples temporarily.
Currently these are stored in Java Collections; however, for large result sets the system
can exhaust the available memory.  There is a need for a set of generic data structures that
can hold these tuples and spill to disk if they get too large.
> ==
> The design is inspired by Apache Pig's DataBag [1]:
> A DataBag is a collection of tuples. A DataBag may or may not fit into memory. It proactively
spills to disk when its size exceeds the threshold. When it spills, it takes whatever it has
in memory, opens a spill file, and writes the contents out. This may happen multiple times.
The bag tracks all of the files it's spilled to. The spill behavior is controlled by a ThresholdPolicy
object.  The most basic policy spills based on the number of tuples added.  A more advanced
policy estimates the size of the tuples added to the DataBag and spills when that estimate passes
a byte threshold.
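
To make the policy idea concrete, here is a minimal sketch of what a ThresholdPolicy can look
like; it is illustrative only, and the interface in the attached patch may differ:

/** Observes items as they are added and decides when the in-memory portion should spill. */
public interface ThresholdPolicy<T> {
    /** Notify the policy that an item has been added to the in-memory collection. */
    void increment(T item);

    /** @return true if the bag should now spill its in-memory contents to a file. */
    boolean isThresholdExceeded();

    /** Reset the policy after a spill has completed. */
    void reset();
}

/** Count-based policy (conceptually what ThresholdPolicyCount does): spill every N items. */
class CountPolicy<T> implements ThresholdPolicy<T> {
    private final long threshold;
    private long count = 0;

    CountPolicy(long threshold) { this.threshold = threshold; }

    @Override public void increment(T item) { count++; }
    @Override public boolean isThresholdExceeded() { return count >= threshold; }
    @Override public void reset() { count = 0; }
}

A ThresholdPolicyMemory would presumably implement the same interface but track an estimate of
the serialized size instead of a count.
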
> A DataBag provides an Iterator interface that allows callers to read through the contents.
The iterators are aware of the data spilling; they have to be able to handle reading from
the spill files.
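
One common way to build such an iterator (again a sketch of mine, not the code in the patch) is
to concatenate an iterator per spill file with an iterator over whatever is still in memory:

import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

/** Chains a list of iterators (one per spill file, plus the in-memory tail) into one. */
class ConcatIterator<T> implements Iterator<T> {
    private final Iterator<Iterator<T>> sources;
    private Iterator<T> current = Collections.<T>emptyList().iterator();

    ConcatIterator(List<Iterator<T>> iterators) {
        this.sources = iterators.iterator();
    }

    public boolean hasNext() {
        // Skip forward past exhausted sources (e.g. an empty spill file).
        while (!current.hasNext() && sources.hasNext()) {
            current = sources.next();
        }
        return current.hasNext();
    }

    public T next() {
        if (!hasNext()) throw new NoSuchElementException();
        return current.next();
    }

    public void remove() {
        throw new UnsupportedOperationException();
    }
}

The bag itself would supply the per-file iterators by reopening each spill file through the
serialization hook described below.
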
> The DataBag interface assumes that all data is written before any is read. That is, a
DataBag cannot be used as a queue. If data is written after data is read, the results are
undefined.
> DataBags come in several types: default, sorted, and distinct. The type must be chosen
up front; there is no way to convert a bag on the fly. Default data bags do not guarantee
any particular order of retrieval for the tuples and may contain duplicate tuples. Sorted
data bags guarantee that tuples will be retrieved in order, where "in order" is defined either
by the default comparator for the tuple or by a comparator provided by the caller when the
bag was created. Sorted bags may contain duplicates. Distinct bags do not guarantee any particular
order of retrieval, but do guarantee that they will not contain duplicate tuples.
> The DataBags are generic containers and may store any item that can be serialized and
deserialized.  Each bag accepts a SerializationFactory that handles this task.
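
Putting the last two paragraphs together, here is a self-contained sketch of the overall shape
of the API; the interface and method names are illustrative and need not match the attached patch:

import java.io.InputStream;
import java.io.OutputStream;
import java.util.Comparator;
import java.util.Iterator;

/** Knows how to write items to a spill file and read them back; the bag stays generic. */
interface SerializationFactory<T> {
    void serialize(T item, OutputStream out);        // used while spilling
    Iterator<T> deserialize(InputStream in);         // used when an iterator reaches a spill file
}

/** Write everything first, then iterate; a DataBag is not a queue. */
interface DataBag<T> extends Iterable<T> {
    void add(T item);
    long size();
    void close();                                    // deletes any spill files
}

/** The bag type is fixed at creation time; there is no converting a bag on the fly. */
interface DataBagFactory {
    <T> DataBag<T> createDefaultBag(SerializationFactory<T> ser);                    // unordered, keeps duplicates
    <T> DataBag<T> createSortedBag(Comparator<T> cmp, SerializationFactory<T> ser);  // retrieved in cmp order
    <T> DataBag<T> createDistinctBag(SerializationFactory<T> ser);                   // unordered, drops duplicates
}

The factory split keeps the bags independent of the item type: for ARQ bindings, the
SerializationFactory would be the only piece that needs to know how a binding is written to disk.
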
> [1]

