marmotta-commits mailing list archives

From wik...@apache.org
Subject [35/52] [partial] code contribution, initial import of relevant modules of LMF-3.0.0-SNAPSHOT based on revision 4bf944319368 of the default branch at https://code.google.com/p/lmf/
Date Tue, 19 Feb 2013 12:52:00 GMT
http://git-wip-us.apache.org/repos/asf/incubator-marmotta/blob/c32963d5/ldcache/ldcache-backend-ehcache/src/main/resources/ehcache-ldcache.xml
----------------------------------------------------------------------
diff --git a/ldcache/ldcache-backend-ehcache/src/main/resources/ehcache-ldcache.xml b/ldcache/ldcache-backend-ehcache/src/main/resources/ehcache-ldcache.xml
new file mode 100644
index 0000000..8a2a883
--- /dev/null
+++ b/ldcache/ldcache-backend-ehcache/src/main/resources/ehcache-ldcache.xml
@@ -0,0 +1,740 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+
+    Copyright (C) 2013 Salzburg Research.
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+         http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+
+-->
+<!--
+CacheManager Configuration
+==========================
+An ehcache.xml corresponds to a single CacheManager.
+
+See instructions below or the ehcache schema (ehcache.xsd) on how to configure.
+
+System property tokens can be specified in this file; they are replaced when the configuration
+is loaded. For example, multicastGroupPort=${multicastGroupPort} is replaced with the value of the
+system property, taken either from an environment variable or from a command line
+switch such as -DmulticastGroupPort=4446. Another example, useful for Terracotta
+server based deployments, is <terracottaConfig url="${serverAndPort}"/> together with a command line
+switch such as -DserverAndPort=server36:9510
+
+The attributes of <ehcache> are:
+* name - an optional name for the CacheManager, primarily used
+for documentation or to distinguish Terracotta clustered cache state.  With Terracotta
+clustered caches, a combination of CacheManager name and cache name uniquely identifies a
+particular cache store in the Terracotta clustered memory.
+* updateCheck - an optional boolean flag specifying whether this CacheManager should check
+for new versions of Ehcache over the Internet.  If not specified, updateCheck="true".
+* dynamicConfig - an optional setting that can be used to disable dynamic configuration of caches
+associated with this CacheManager.  By default this is set to true - i.e. dynamic configuration
+is enabled.  Dynamically configurable caches can have their TTI, TTL and maximum disk and
+in-memory capacity changed at runtime through the cache's configuration object.
+* monitoring - an optional setting that determines whether the CacheManager should
+automatically register the SampledCacheMBean with the system MBean server.
+
+Currently, this monitoring is only useful when using Terracotta clustering and using the
+Terracotta Developer Console. With the "autodetect" value, the presence of Terracotta clustering
+will be detected and monitoring, via the Developer Console, will be enabled. Other allowed values
+are "on" and "off".  The default is "autodetect". This setting does not perform any function when
+used with JMX monitors.
+
+* maxBytesLocalHeap - optional setting that constrains the memory usage of the caches managed by the CacheManager
+to at most the specified number of bytes of the local VM's heap.
+* maxBytesLocalOffHeap - optional setting that constrains the off-heap usage of the caches managed by the CacheManager
+to at most the specified number of bytes of the local VM's off-heap memory.
+* maxBytesLocalDisk - optional setting that constrains the disk usage of the caches managed by the CacheManager
+to at most the specified number of bytes of the local disk.
+
+These settings let you define "resource pools" that caches will share. For instance, setting maxBytesLocalHeap to 100M will result in
+all caches sharing 100 megabytes of RAM. The CacheManager will balance these 100 MB across all caches based on their respective usage
+patterns. You can allocate a precise amount of bytes to a particular cache by setting the appropriate maxBytes* attribute for that cache.
+That amount will be subtracted from the CacheManager pools, so that if one cache specifies a 30M requirement, the other caches will share
+the remaining 70M.
+
+Also, specifying maxBytesLocalOffHeap at the CacheManager level will cause overflowToOffHeap to default to true. If you don't want
+a specific cache to overflow to off-heap storage, you'll have to set overflowToOffHeap="false" explicitly.
+
+Here is an example of CacheManager level resource tuning, which will use up to 400M of heap and 2G of offHeap:
+
+<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:noNamespaceSchemaLocation="ehcache.xsd"
+         updateCheck="true" monitoring="autodetect"
+         dynamicConfig="true" maxBytesLocalHeap="400M" maxBytesLocalOffHeap="2G">
+
+-->
+<ehcache name="LDCache"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:noNamespaceSchemaLocation="http://ehcache.org/ehcache.xsd"
+         updateCheck="true" monitoring="autodetect"
+         dynamicConfig="true">
+
+
+
+    <!--
+    DiskStore configuration
+    =======================
+
+    The diskStore element is optional. To turn off disk store path creation, comment out the diskStore
+    element below.
+
+    Configure it if you have disk persistence enabled for any cache or if you use
+    unclustered indexed search.
+
+    If it is not configured, and a cache is created which requires a disk store, a warning will be
+    issued and java.io.tmpdir will automatically be used.
+
+    diskStore has only one attribute - "path". It is the path to the directory where
+    any required disk files will be created.
+
+    If the path is one of the following Java system properties, it is replaced by its value in the
+    running VM. For backward compatibility these should be specified without being enclosed in the ${token}
+    replacement syntax.
+
+    The following properties are translated:
+    * user.home - User's home directory
+    * user.dir - User's current working directory
+    * java.io.tmpdir - Default temp file path
+    * ehcache.disk.store.dir - A system property you would normally specify on the command line
+      e.g. java -Dehcache.disk.store.dir=/u01/myapp/diskdir ...
+
+    Subdirectories can be specified below the property e.g. java.io.tmpdir/one
+
+    -->
+    <diskStore path="java.io.tmpdir"/>
+
+
+    <!--
+    Cache configuration
+    ===================
+
+    The following attributes are required.
+
+    name:
+    Sets the name of the cache. This is used to identify the cache. It must be unique.
+
+    maxEntriesLocalHeap:
+    Sets the maximum number of objects that will be created in memory.  0 = no limit.
+    In practice no limit means Integer.MAX_VALUE (2147483647) unless the cache is distributed
+    with a Terracotta server, in which case it is limited by resources.
+
+    maxEntriesLocalDisk:
+    Sets the maximum number of objects that will be maintained in the DiskStore.
+    The default value is zero, meaning unlimited.
+
+    eternal:
+    Sets whether elements are eternal. If eternal,  timeouts are ignored and the
+    element is never expired.
+
+    The following attributes and elements are optional.
+
+    overflowToOffHeap:
+    (boolean) This feature is available only in enterprise versions of Ehcache.
+    When set to true, enables the cache to utilize off-heap memory
+    storage to improve performance. Off-heap memory is not subject to Java
+    GC. The default value is false.
+
+    maxBytesLocalHeap:
+    Defines how many bytes the cache may use from the VM's heap. If a CacheManager
+    maxBytesLocalHeap has been defined, this Cache's specified amount will be
+    subtracted from the CacheManager. Other caches will share the remainder.
+    This attribute's values are given as <number>k|K|m|M|g|G for
+    kilobytes (k|K), megabytes (m|M), or gigabytes (g|G).
+    For example, maxBytesLocalHeap="2g" allots 2 gigabytes of heap memory.
+    If you specify a maxBytesLocalHeap, you can't use the maxEntriesLocalHeap attribute.
+    maxEntriesLocalHeap can't be used if a CacheManager maxBytesLocalHeap is set.
+
+    Elements put into the cache will be measured in size using net.sf.ehcache.pool.sizeof.SizeOf.
+    If you wish to ignore some part of the object graph, see net.sf.ehcache.pool.sizeof.annotations.IgnoreSizeOf.
+
+    maxBytesLocalOffHeap:
+    This feature is available only in enterprise versions of Ehcache.
+    Sets the amount of off-heap memory this cache can use, and will reserve.
+
+    This setting will set overflowToOffHeap to true. Set explicitly to false to disable overflow behavior.
+
+    Note that it is recommended to set maxEntriesLocalHeap to at least 100 elements
+    when using an off-heap store, otherwise performance will be seriously degraded,
+    and a warning will be logged.
+
+    The minimum amount that can be allocated is 128MB. There is no maximum.
+
+    maxBytesLocalDisk:
+    As for maxBytesLocalHeap, but specifies the limit of disk storage this cache will ever use.
+
+    timeToIdleSeconds:
+    Sets the time to idle for an element before it expires,
+    i.e. the maximum amount of time between accesses before an element expires.
+    It is only used if the element is not eternal.
+    Optional attribute. A value of 0 means that an Element can idle indefinitely.
+    The default value is 0.
+
+    timeToLiveSeconds:
+    Sets the time to live for an element before it expires,
+    i.e. the maximum time between creation time and when an element expires.
+    It is only used if the element is not eternal.
+    Optional attribute. A value of 0 means that an Element can live indefinitely.
+    The default value is 0.
+
+    diskExpiryThreadIntervalSeconds:
+    The number of seconds between runs of the disk expiry thread. The default value
+    is 120 seconds.
+
+    diskSpoolBufferSizeMB:
+    This is the size to allocate to the DiskStore for a spool buffer. Writes are made
+    to this area and then asynchronously written to disk. The default size is 30MB.
+    Each spool buffer is used only by its cache. If you get OutOfMemory errors, consider
+    lowering this value. To improve DiskStore performance, consider increasing it. Trace level
+    logging in the DiskStore will show if puts are backing up.
+
+    clearOnFlush:
+    Whether the MemoryStore should be cleared when flush() is called on the cache.
+    By default this is true, i.e. the MemoryStore is cleared.
+
+    statistics:
+    Whether to collect statistics. Note that this should be turned on if you are using
+    the Ehcache Monitor. By default statistics collection is turned off to favour raw performance.
+    To enable it, set statistics="true".
+
+    memoryStoreEvictionPolicy:
+    The eviction policy enforced upon reaching the maxEntriesLocalHeap limit. The default
+    policy is Least Recently Used (specified as LRU). The other available policies are
+    First In First Out (specified as FIFO) and Least Frequently Used
+    (specified as LFU).
+
+    copyOnRead:
+    Whether an Element is copied when being read from a cache.
+    By default this is false.
+
+    copyOnWrite:
+    Whether an Element is copied when being added to the cache.
+    By default this is false.
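+
+    As an illustration of several of the attributes above (the cache name and values here are
+    examples only, not part of this configuration), a cache could be declared as:
+
+    <cache name="exampleCache"
+           maxEntriesLocalHeap="10000"
+           eternal="false"
+           timeToIdleSeconds="300"
+           timeToLiveSeconds="3600"
+           memoryStoreEvictionPolicy="LFU"
+           statistics="true"/>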
+
+    Cache persistence is configured through the persistence sub-element.  The attributes of the
+    persistence element are:
+
+    strategy:
+    Configures the type of persistence provided by the configured cache.  This must be one of the
+    following values:
+
+    * localRestartable - Enables the RestartStore and copies all cache entries (on-heap and/or off-heap)
+    to disk. This option provides fast restartability with fault tolerant cache persistence on disk.
+    It is available for Enterprise Ehcache users only.
+
+    * localTempSwap - Swaps cache entries (on-heap and/or off-heap) to disk when the cache is full.
+    "localTempSwap" is not persistent.
+
+    * none - Does not persist cache entries.
+
+    * distributed - Defers to the <terracotta> configuration for persistence settings. This option
+    is not applicable for standalone.
+
+    synchronousWrites:
+    When set to true, write operations on the cache do not return until after the operation's data has been
+    successfully flushed to the disk storage.  This option is only valid when used with the "localRestartable"
+    strategy, and defaults to false.
+
+    The following example configuration shows a cache configured with the localTempSwap strategy.
+
+    <cache name="persistentCache" maxEntriesLocalHeap="1000">
+        <persistence strategy="localTempSwap"/>
+    </cache>
+
+    Cache elements can also contain sub elements which take the same format of a factory class
+    and properties. Defined sub-elements are:
+
+    * cacheEventListenerFactory - Enables registration of listeners for cache events, such as
+      put, remove, update, and expire.
+
+    * bootstrapCacheLoaderFactory - Specifies a BootstrapCacheLoader, which is called by a
+      cache on initialisation to prepopulate itself.
+
+    * cacheExtensionFactory - Specifies a CacheExtension, a generic mechanism to tie a class
+      which holds a reference to a cache to the cache lifecycle.
+
+    * cacheExceptionHandlerFactory - Specifies a CacheExceptionHandler, which is called when
+      cache exceptions occur.
+
+    * cacheLoaderFactory - Specifies a CacheLoader, which can be used both asynchronously and
+      synchronously to load objects into a cache. More than one cacheLoaderFactory element
+      can be added, in which case the loaders form a chain which is executed in order. If a
+      loader returns null, the next in the chain is called.
+
+    * copyStrategy - Specifies a fully qualified class which implements
+      net.sf.ehcache.store.compound.CopyStrategy. This strategy will be used for copyOnRead
+      and copyOnWrite in place of the default which is serialization.
+
+    Example of cache level resource tuning:
+    <cache name="memBound" maxBytesLocalHeap="100m" maxBytesLocalOffHeap="4g" maxBytesLocalDisk="200g" />
+
+
+    Cache Event Listeners
+    +++++++++++++++++++++
+
+    All cacheEventListenerFactory elements can take an optional listenFor attribute that describes
+    which events will be delivered in a clustered environment.  The listenFor attribute has the
+    following allowed values:
+
+    * all - the default is to deliver all local and remote events
+    * local - deliver only events originating in the current node
+    * remote - deliver only events originating in other nodes
+
+    Example of setting up a logging listener for local cache events:
+
+    <cacheEventListenerFactory class="my.company.log.CacheLogger"
+        listenFor="local" />
+
+
+    Search
+    ++++++
+
+    A <cache> can be made searchable by adding a <searchable/> sub-element. By default the keys
+    and value objects of elements put into the cache will be attributes against which
+    queries can be expressed.
+
+    <cache>
+        <searchable/>
+    </cache>
+
+
+    An "attribute" of the cache elements can also be defined to be searchable. In the example below
+    an attribute with the name "age" will be available for use in queries. The value for the "age"
+    attribute will be computed by calling the method "getAge()" on the value object of each element
+    in the cache. See net.sf.ehcache.search.attribute.ReflectionAttributeExtractor for the format of
+    attribute expressions. Attribute values must also conform to the set of types documented in the
+    net.sf.ehcache.search.attribute.AttributeExtractor interface.
+
+    <cache>
+        <searchable>
+            <searchAttribute name="age" expression="value.getAge()"/>
+        </searchable>
+    </cache>
+
+
+    Attributes may also be defined using a JavaBean style. With the following attribute declaration,
+    a public method getAge() will be expected to be found on either the key or the value of cache elements.
+
+    <cache>
+        <searchable>
+            <searchAttribute name="age"/>
+        </searchable>
+    </cache>
+
+    In more complex situations you can create your own attribute extractor by implementing the
+    AttributeExtractor interface. Providing your extractor class is shown in the following example:
+
+    <cache>
+        <searchable>
+            <searchAttribute name="age" class="com.example.MyAttributeExtractor"/>
+        </searchable>
+    </cache>
+
+    Use properties to pass state to your attribute extractor if needed. Your implementation must provide
+    a public constructor that takes a single java.util.Properties instance.
+
+    <cache>
+        <searchable>
+            <searchAttribute name="age" class="com.example.MyAttributeExtractor" properties="foo=1,bar=2"/>
+        </searchable>
+    </cache>
+
+
+    RMI Cache Replication
+    +++++++++++++++++++++
+
+    Each cache that will be distributed needs to set a cache event listener which replicates
+    messages to the other CacheManager peers. For the built-in RMI implementation this is done
+    by adding a cacheEventListenerFactory element of type RMICacheReplicatorFactory to each
+    distributed cache's configuration as per the following example:
+
+    <cacheEventListenerFactory class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
+         properties="replicateAsynchronously=true,
+         replicatePuts=true,
+         replicatePutsViaCopy=false,
+         replicateUpdates=true,
+         replicateUpdatesViaCopy=true,
+         replicateRemovals=true,
+         asynchronousReplicationIntervalMillis=<number of milliseconds>,
+         asynchronousReplicationMaximumBatchSize=<number of operations>"
+         propertySeparator="," />
+
+    The RMICacheReplicatorFactory recognises the following properties:
+
+    * replicatePuts=true|false - whether new elements placed in a cache are
+      replicated to others. Defaults to true.
+
+    * replicatePutsViaCopy=true|false - whether the new elements are
+      copied to other caches (true), or whether a remove message is sent. Defaults to true.
+
+    * replicateUpdates=true|false - whether new elements which override an
+      element already existing with the same key are replicated. Defaults to true.
+
+    * replicateRemovals=true|false - whether element removals are replicated. Defaults to true.
+
+    * replicateAsynchronously=true | false - whether replications are
+      asynchronous (true) or synchronous (false). Defaults to true.
+
+    * replicateUpdatesViaCopy=true | false - whether the new elements are
+      copied to other caches (true), or whether a remove message is sent. Defaults to true.
+
+    * asynchronousReplicationIntervalMillis=<number of milliseconds> - The asynchronous
+      replicator runs at a set interval of milliseconds. The default is 1000. The minimum
+      is 10. This property is only applicable if replicateAsynchronously=true
+
+    * asynchronousReplicationMaximumBatchSize=<number of operations> - The maximum
+      number of operations that will be batched within a single RMI message.  The default
+      is 1000. This property is only applicable if replicateAsynchronously=true
+
+    JGroups Replication
+    +++++++++++++++++++
+
+    For JGroups-based replication this is done with:
+    <cacheEventListenerFactory class="net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory"
+                            properties="replicateAsynchronously=true, replicatePuts=true,
+               replicateUpdates=true, replicateUpdatesViaCopy=false,
+               replicateRemovals=true,asynchronousReplicationIntervalMillis=1000"/>
+    This listener supports the same properties as the RMICacheReplicatorFactory.
+
+
+    JMS Replication
+    +++++++++++++++
+
+    For JMS-based replication this is done with:
+    <cacheEventListenerFactory
+          class="net.sf.ehcache.distribution.jms.JMSCacheReplicatorFactory"
+          properties="replicateAsynchronously=true,
+                       replicatePuts=true,
+                       replicateUpdates=true,
+                       replicateUpdatesViaCopy=true,
+                       replicateRemovals=true,
+                       asynchronousReplicationIntervalMillis=1000"
+           propertySeparator=","/>
+
+    This listener supports the same properties as the RMICacheReplicatorFactory.
+
+    Cluster Bootstrapping
+    +++++++++++++++++++++
+
+    Bootstrapping a cluster may use a different mechanism from replication, e.g. you can mix
+    JMS replication with bootstrap via RMI - just make sure you have the cacheManagerPeerProviderFactory
+    and cacheManagerPeerListenerFactory configured.
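+
+    For example (the multicast address, ports and timeouts are illustrative values), RMI peer
+    discovery and listening could be configured at the CacheManager level with:
+
+    <cacheManagerPeerProviderFactory
+        class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
+        properties="peerDiscovery=automatic, multicastGroupAddress=230.0.0.1,
+                    multicastGroupPort=4446, timeToLive=1"/>
+
+    <cacheManagerPeerListenerFactory
+        class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
+        properties="port=40001, socketTimeoutMillis=2000"/>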
+
+    There are two bootstrapping mechanisms: RMI and JGroups.
+
+    RMI Bootstrap
+
+    The RMIBootstrapCacheLoader bootstraps caches in clusters where RMICacheReplicators are
+    used. It is configured as per the following example:
+
+    <bootstrapCacheLoaderFactory
+        class="net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory"
+        properties="bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000"
+        propertySeparator="," />
+
+    The RMIBootstrapCacheLoaderFactory recognises the following optional properties:
+
+    * bootstrapAsynchronously=true|false - whether the bootstrap happens in the background
+      after the cache has started. If false, bootstrapping must complete before the cache is
+      made available. The default value is true.
+
+    * maximumChunkSizeBytes=<integer> - Caches can potentially be very large, larger than the
+      memory limits of the VM. This property allows the bootstrapper to fetch elements in
+      chunks. The default chunk size is 5000000 (5MB).
+
+    JGroups Bootstrap
+
+    Here is an example of bootstrap configuration using JGroups bootstrap:
+
+    <bootstrapCacheLoaderFactory class="net.sf.ehcache.distribution.jgroups.JGroupsBootstrapCacheLoaderFactory"
+                                    properties="bootstrapAsynchronously=true"/>
+
+    The configuration properties are the same as for RMI above. Note that JGroups bootstrap only supports
+    asynchronous bootstrap mode.
+
+
+    Cache Exception Handling
+    ++++++++++++++++++++++++
+
+    By default, most cache operations will propagate a runtime CacheException on failure. An
+    interceptor, using a dynamic proxy, may be configured so that a CacheExceptionHandler
+    intercepts Exceptions. Errors are not intercepted.
+
+    It is configured as per the following example:
+
+      <cacheExceptionHandlerFactory class="com.example.ExampleExceptionHandlerFactory"
+                                      properties="logLevel=FINE"/>
+
+    Caches with ExceptionHandling configured are not of type Cache, but are of type Ehcache only,
+    and are not available using CacheManager.getCache(), but using CacheManager.getEhcache().
+
+
+    Cache Loader
+    ++++++++++++
+
+    A default CacheLoader may be set which loads objects into the cache through asynchronous and
+    synchronous methods on Cache. This is different to the bootstrap cache loader, which is used
+    only in distributed caching.
+
+    It is configured as per the following example:
+
+        <cacheLoaderFactory class="com.example.ExampleCacheLoaderFactory"
+                                      properties="type=int,startCounter=10"/>
+
+    Element value comparator
+    ++++++++++++++++++++++++
+
+    These two cache atomic methods:
+      removeElement(Element e)
+      replace(Element old, Element element)
+
+    rely on comparison of the cached elements' values. The default implementation relies on Object.equals()
+    but that can be changed in case you want to use a different way to compute the equality of two elements.
+
+    This is configured as per the following example:
+
+    <elementValueComparator class="com.company.xyz.MyElementComparator"/>
+
+    The MyElementComparator class must implement the net.sf.ehcache.store.ElementValueComparator
+    interface. The default implementation is net.sf.ehcache.store.DefaultElementValueComparator.
+
+
+    SizeOf Policy
+    +++++++++++++
+
+    Control how deep the SizeOf engine can go when sizing on-heap elements.
+
+    This is configured as per the following example:
+
+    <sizeOfPolicy maxDepth="100" maxDepthExceededBehavior="abort"/>
+
+    maxDepth controls how many linked objects can be visited before the SizeOf engine takes any action.
+    maxDepthExceededBehavior specifies what happens when the max depth is exceeded while sizing an object graph.
+     "continue" makes the SizeOf engine log a warning and continue the sizing. This is the default.
+     "abort"    makes the SizeOf engine abort the sizing, log a warning and mark the cache as not correctly tracking
+                memory usage. This makes Ehcache.hasAbortedSizeOf() return true when this happens.
+
+    The SizeOf policy can be configured at the cache manager level (directly under <ehcache>) and at
+    the cache level (under <cache> or <defaultCache>). The cache policy always overrides the cache manager
+    one if both are set. This element has no effect on distributed caches.
+
+    Transactions
+    ++++++++++++
+
+    To enable transactions on a cache, set the transactionalMode attribute:
+
+    transactionalMode="xa" - high performance JTA/XA implementation
+    transactionalMode="xa_strict" - canonically correct JTA/XA implementation
+    transactionalMode="local" - high performance local transactions involving caches only
+    transactionalMode="off" - the default, no transactions
+
+    If set, all cache operations will need to be done through transactions.
+
+    To prevent users from keeping references to stored elements and modifying them outside of any transaction's control,
+    transactions also require the cache to be configured with copyOnRead and copyOnWrite enabled.
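+
+    For example (the cache name and sizing are illustrative, not part of this configuration),
+    a locally transactional cache could be declared as:
+
+    <cache name="txCache"
+           maxEntriesLocalHeap="1000"
+           transactionalMode="local"
+           copyOnRead="true"
+           copyOnWrite="true"/>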
+
+    CacheWriter
+    ++++++++++++
+
+    A CacheWriter can be set to write to an underlying resource. Only one CacheWriter can be
+    configured per cache.
+
+    The following is an example of how to configure CacheWriter for write-through:
+
+        <cacheWriter writeMode="write-through" notifyListenersOnException="true">
+            <cacheWriterFactory class="net.sf.ehcache.writer.TestCacheWriterFactory"
+                                properties="type=int,startCounter=10"/>
+        </cacheWriter>
+
+    The following is an example of how to configure CacheWriter for write-behind:
+
+        <cacheWriter writeMode="write-behind" minWriteDelay="1" maxWriteDelay="5"
+                     rateLimitPerSecond="5" writeCoalescing="true" writeBatching="true" writeBatchSize="1"
+                     retryAttempts="2" retryAttemptDelaySeconds="1">
+            <cacheWriterFactory class="net.sf.ehcache.writer.TestCacheWriterFactory"
+                                properties="type=int,startCounter=10"/>
+        </cacheWriter>
+
+    The cacheWriter element has the following attributes:
+    * writeMode: the write mode, write-through or write-behind
+
+    These attributes only apply to write-through mode:
+    * notifyListenersOnException: Sets whether to notify listeners when an exception occurs on a writer operation.
+
+    These attributes only apply to write-behind mode:
+    * minWriteDelay: Set the minimum number of seconds to wait before writing behind. If set to a value greater than 0,
+      it permits operations to build up in the queue. This is different from the maximum write delay in that by waiting
+      a minimum amount of time, work is always being built up. If the minimum write delay is set to zero and the
+      CacheWriter performs its work very quickly, the overhead of processing the write behind queue items becomes very
+      noticeable in a cluster since all the operations might be done for individual items instead of for a collection
+      of them.
+    * maxWriteDelay: Set the maximum number of seconds to wait before writing behind. If set to a value greater than 0,
+      it permits operations to build up in the queue to enable effective coalescing and batching optimisations.
+    * writeBatching: Sets whether to batch write operations. If set to true, writeAll and deleteAll will be called on
+      the CacheWriter rather than write and delete being called for each key. Resources such as databases can perform
+      more efficiently if updates are batched, thus reducing load.
+    * writeBatchSize: Sets the number of operations to include in each batch when writeBatching is enabled. If there are
+      fewer entries in the write-behind queue than the batch size, the queue length is used as the batch size.
+    * rateLimitPerSecond: Sets the maximum number of write operations to allow per second when writeBatching is enabled.
+    * writeCoalescing: Sets whether to use write coalescing. If set to true and multiple operations on the same key are
+      present in the write-behind queue, only the latest write is done, as the others are redundant.
+    * retryAttempts: Sets the number of times the operation is retried in the CacheWriter; this happens after the
+      original operation fails.
+    * retryAttemptDelaySeconds: Sets the number of seconds to wait before retrying a failed operation.
+
+    Cache Extension
+    +++++++++++++++
+
+    CacheExtensions are a general purpose mechanism to allow generic extensions to a Cache.
+    CacheExtensions are tied into the Cache lifecycle.
+
+    CacheExtensions are created using the CacheExtensionFactory, which has a
+    <code>createCacheExtension()</code> method which takes as parameters a
+    Cache and properties. It can thus call back into any public method on Cache, including, of
+    course, the load methods.
+
+    Extensions are added as per the following example:
+
+         <cacheExtensionFactory class="com.example.FileWatchingCacheRefresherExtensionFactory"
+                             properties="refreshIntervalMillis=18000, loaderTimeout=3000,
+                                         flushPeriod=whatever, someOtherProperty=someValue ..."/>
+
+    Cache Decorator Factory
+    +++++++++++++++++++++++
+
+    Cache decorators can be configured directly in ehcache.xml. The decorators will be created and added to the CacheManager.
+    The factory element accepts the name of a concrete class that extends net.sf.ehcache.constructs.CacheDecoratorFactory.
+    The properties will be parsed according to the delimiter (default is comma ',') and passed to the concrete factory's
+    <code>createDecoratedEhcache(Ehcache cache, Properties properties)</code> method along with the reference to the owning cache.
+
+    It is configured as per the following example:
+
+        <cacheDecoratorFactory
+      class="com.company.DecoratedCacheFactory"
+      properties="property1=true ..." />
+
+    Distributed Caching with Terracotta
+    +++++++++++++++++++++++++++++++++++
+
+    Distributed Caches connect to a Terracotta Server Array. They are configured with the <terracotta> sub-element.
+
+    The <terracotta> sub-element has the following attributes:
+
+    * clustered=true|false - indicates whether this cache should be clustered (distributed) with Terracotta. By
+      default, if the <terracotta> element is included, clustered=true.
+
+    * valueMode=serialization|identity - the default is serialization
+
+      Indicates whether cache Elements are distributed with serialized copies or whether a single copy
+      in identity mode is distributed.
+
+      The implications of Identity mode should be clearly understood with reference to the Terracotta
+      documentation before use.
+
+    * copyOnRead=true|false - indicates whether cache values are deserialized on every read or if the
+      materialized cache value can be re-used between get() calls. This setting is useful if a cache
+      is being shared by callers with disparate classloaders or to prevent local drift if keys/values
+      are mutated locally without being put back in the cache.
+
+      The default is false.
+
+      Note: This setting is only relevant for caches with valueMode=serialization
+
+    * consistency=strong|eventual - Indicates whether this cache should have strong consistency or eventual
+      consistency. The default is eventual. See the documentation for the meaning of these terms.
+
+    * synchronousWrites=true|false
+
+      Synchronous writes (synchronousWrites="true")  maximize data safety by blocking the client thread until
+      the write has been written to the Terracotta Server Array.
+
+      This option is only available with consistency=strong. The default is false.
+
+    * concurrency - the number of segments that will be used by the map underneath the Terracotta Store.
+      It is optional and has a default value of 0, which means that defaults based on the internal
+      Map used underneath the store will be applied.
+
+      This value cannot be changed programmatically once a cache is initialized.
+
+    The <terracotta> sub-element also has a <nonstop> sub-element to allow configuration of cache behaviour if a distributed
+    cache operation cannot be completed within a set time or in the event of a clusterOffline message. If this element does not appear, nonstop behavior is off.
+
+    <nonstop> has the following attributes:
+
+    *  enabled="true" - defaults to true.
+
+    *  timeoutMillis - An SLA setting: if a cache operation takes longer than the allowed number of milliseconds, it will time out.
+
+    *  immediateTimeout="true|false" - Whether to time out immediately on receipt of a ClusterOffline event indicating
+       that communications with the Terracotta Server Array were interrupted.
+
+    <nonstop> has one sub-element, <timeoutBehavior> which has the following attribute:
+
+    *  type="noop|exception|localReads" - What to do when a timeout has occurred. Exception is the default.
+
+    Simplest example to indicate clustering:
+        <terracotta/>
+
+    To indicate the cache should not be clustered (or remove the <terracotta> element altogether):
+        <terracotta clustered="false"/>
+
+    To indicate the cache should be clustered using identity mode:
+        <terracotta clustered="true" valueMode="identity"/>
+
+    To indicate the cache should be clustered using "eventual" consistency mode for better performance :
+        <terracotta clustered="true" consistency="eventual"/>
+
+    To indicate the cache should be clustered using synchronous-write locking level:
+        <terracotta clustered="true" synchronousWrites="true"/>
+    -->
+
+    <!--
+    Default Cache configuration. These settings will be applied to caches
+    created programmatically using CacheManager.add(String cacheName).
+    This element is optional; calling CacheManager.add(String cacheName) when
+    it is not present will throw a CacheException.
+
+    The defaultCache has an implicit name "default" which is a reserved cache name.
+    -->
+    <defaultCache
+            maxEntriesLocalHeap="10000"
+            eternal="false"
+            timeToIdleSeconds="120"
+            timeToLiveSeconds="120"
+            diskSpoolBufferSizeMB="30"
+            maxEntriesLocalDisk="10000000"
+            diskExpiryThreadIntervalSeconds="120"
+            memoryStoreEvictionPolicy="LRU"
+            statistics="false">
+        <persistence strategy="localTempSwap"/>
+    </defaultCache>
+
+    <!--
+    Sample caches. The following are example caches; remove or adapt them before use.
+    -->
+
+     <cache name="ldcache"
+           maxEntriesLocalHeap="10000"
+           maxEntriesLocalDisk="1000"
+           eternal="false"
+           diskSpoolBufferSizeMB="20"
+           timeToIdleSeconds="300"
+           timeToLiveSeconds="600"
+           memoryStoreEvictionPolicy="LFU"
+           transactionalMode="off">
+        <persistence strategy="localTempSwap"/>
+    </cache>
+
+
+
+</ehcache>
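[Editor's note] The `timeToIdleSeconds`/`timeToLiveSeconds` attributes of the `ldcache` cache above follow Ehcache's standard expiry semantics: an element expires when it has been idle longer than TTI or has lived longer than TTL, with 0 disabling the respective check. A minimal, self-contained sketch of that rule (hypothetical helper, not part of the imported module):

```java
// Sketch of Ehcache-style expiry semantics (timeToIdleSeconds vs. timeToLiveSeconds).
// Hypothetical illustration only; the real logic lives inside Ehcache.
class ExpiryCheck {

    /**
     * An element is expired when it has been idle longer than ttiSeconds
     * (time since last access) OR has lived longer than ttlSeconds
     * (time since creation). A value of 0 disables the respective check.
     */
    static boolean isExpired(long nowMillis, long createdMillis,
                             long lastAccessedMillis,
                             long ttiSeconds, long ttlSeconds) {
        if (ttiSeconds > 0 && nowMillis - lastAccessedMillis > ttiSeconds * 1000L) {
            return true; // idle too long
        }
        if (ttlSeconds > 0 && nowMillis - createdMillis > ttlSeconds * 1000L) {
            return true; // lived too long
        }
        return false;
    }
}
```

With the `ldcache` settings (TTI 300s, TTL 600s), an entry last read 301 seconds ago is expired even if it was created recently.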

http://git-wip-us.apache.org/repos/asf/incubator-marmotta/blob/c32963d5/ldcache/ldcache-backend-kiwi/.classpath
----------------------------------------------------------------------
diff --git a/ldcache/ldcache-backend-kiwi/.classpath b/ldcache/ldcache-backend-kiwi/.classpath
new file mode 100644
index 0000000..90df7da
--- /dev/null
+++ b/ldcache/ldcache-backend-kiwi/.classpath
@@ -0,0 +1,6 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<classpath>
+	<classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER/org.eclipse.jdt.internal.debug.ui.launcher.StandardVMType/JavaSE-1.6"/>
+	<classpathentry kind="con" path="org.eclipse.m2e.MAVEN2_CLASSPATH_CONTAINER"/>
+	<classpathentry kind="output" path="target/classes"/>
+</classpath>

http://git-wip-us.apache.org/repos/asf/incubator-marmotta/blob/c32963d5/ldcache/ldcache-backend-kiwi/.project
----------------------------------------------------------------------
diff --git a/ldcache/ldcache-backend-kiwi/.project b/ldcache/ldcache-backend-kiwi/.project
new file mode 100644
index 0000000..818b3f0
--- /dev/null
+++ b/ldcache/ldcache-backend-kiwi/.project
@@ -0,0 +1,23 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<projectDescription>
+	<name>ldcache-backend-kiwi</name>
+	<comment></comment>
+	<projects>
+	</projects>
+	<buildSpec>
+		<buildCommand>
+			<name>org.eclipse.jdt.core.javabuilder</name>
+			<arguments>
+			</arguments>
+		</buildCommand>
+		<buildCommand>
+			<name>org.eclipse.m2e.core.maven2Builder</name>
+			<arguments>
+			</arguments>
+		</buildCommand>
+	</buildSpec>
+	<natures>
+		<nature>org.eclipse.jdt.core.javanature</nature>
+		<nature>org.eclipse.m2e.core.maven2Nature</nature>
+	</natures>
+</projectDescription>

http://git-wip-us.apache.org/repos/asf/incubator-marmotta/blob/c32963d5/ldcache/ldcache-backend-kiwi/.settings/org.eclipse.jdt.core.prefs
----------------------------------------------------------------------
diff --git a/ldcache/ldcache-backend-kiwi/.settings/org.eclipse.jdt.core.prefs b/ldcache/ldcache-backend-kiwi/.settings/org.eclipse.jdt.core.prefs
new file mode 100644
index 0000000..60105c1
--- /dev/null
+++ b/ldcache/ldcache-backend-kiwi/.settings/org.eclipse.jdt.core.prefs
@@ -0,0 +1,5 @@
+eclipse.preferences.version=1
+org.eclipse.jdt.core.compiler.codegen.targetPlatform=1.6
+org.eclipse.jdt.core.compiler.compliance=1.6
+org.eclipse.jdt.core.compiler.problem.forbiddenReference=warning
+org.eclipse.jdt.core.compiler.source=1.6

http://git-wip-us.apache.org/repos/asf/incubator-marmotta/blob/c32963d5/ldcache/ldcache-backend-kiwi/.settings/org.eclipse.m2e.core.prefs
----------------------------------------------------------------------
diff --git a/ldcache/ldcache-backend-kiwi/.settings/org.eclipse.m2e.core.prefs b/ldcache/ldcache-backend-kiwi/.settings/org.eclipse.m2e.core.prefs
new file mode 100644
index 0000000..f897a7f
--- /dev/null
+++ b/ldcache/ldcache-backend-kiwi/.settings/org.eclipse.m2e.core.prefs
@@ -0,0 +1,4 @@
+activeProfiles=
+eclipse.preferences.version=1
+resolveWorkspaceProjects=true
+version=1

http://git-wip-us.apache.org/repos/asf/incubator-marmotta/blob/c32963d5/ldcache/ldcache-backend-kiwi/pom.xml
----------------------------------------------------------------------
diff --git a/ldcache/ldcache-backend-kiwi/pom.xml b/ldcache/ldcache-backend-kiwi/pom.xml
new file mode 100644
index 0000000..7127188
--- /dev/null
+++ b/ldcache/ldcache-backend-kiwi/pom.xml
@@ -0,0 +1,164 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  ~ Copyright (c) 2013 The Apache Software Foundation
+  ~  
+  ~  Licensed under the Apache License, Version 2.0 (the "License");
+  ~  you may not use this file except in compliance with the License.
+  ~  You may obtain a copy of the License at
+  ~  
+  ~      http://www.apache.org/licenses/LICENSE-2.0
+  ~  
+  ~  Unless required by applicable law or agreed to in writing, software
+  ~  distributed under the License is distributed on an "AS IS" BASIS,
+  ~  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  ~  See the License for the specific language governing permissions and
+  ~  limitations under the License.
+  -->
+
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+
+    <parent>
+        <groupId>at.newmedialab.lmf</groupId>
+        <artifactId>ldcache-parent</artifactId>
+        <version>3.0.0-SNAPSHOT</version>
+        <relativePath>../</relativePath>
+    </parent>
+
+    <artifactId>ldcache-backend-kiwi</artifactId>
+    <name>LDCache Backend: KiWi</name>
+
+    <description>
+        Linked Data Caching Backend based on the JDBC database used by the KiWi triple store. Caches resources and
+        caching information in the database and triples in the central triple store (using a dedicated context graph).
+    </description>
+
+
+    <build>
+        <plugins>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-surefire-plugin</artifactId>
+                <configuration>
+                    <systemPropertyVariables>
+                        <h2.url>jdbc:h2:mem:test;MVCC=true;DB_CLOSE_ON_EXIT=TRUE</h2.url>
+                        <h2.user>sa</h2.user>
+                        <h2.pass />
+
+                        <!-- enable or pass on command line for testing local PostgreSQL -->
+                        <!--
+                        <postgresql.url>jdbc:postgresql://localhost:5433/kiwitest?prepareThreshold=3</postgresql.url>
+                        <postgresql.user>lmf</postgresql.user>
+                        <postgresql.pass>lmf</postgresql.pass>
+                        -->
+
+                        <!-- enable or pass on command line for testing local MySQL -->
+                        <!--
+                        <mysql.url>jdbc:mysql://localhost:3306/kiwitest</mysql.url>
+                        <mysql.user>lmf</mysql.user>
+                        <mysql.pass>lmf</mysql.pass>
+                        -->
+
+                    </systemPropertyVariables>
+                </configuration>
+            </plugin>
+        </plugins>
+    </build>
+
+
+
+    <dependencies>
+        <dependency>
+            <groupId>at.newmedialab.lmf</groupId>
+            <artifactId>ldcache-api</artifactId>
+        </dependency>
+
+
+        <!-- Use KiWi Triple Store for caching -->
+        <dependency>
+            <groupId>at.newmedialab.lmf</groupId>
+            <artifactId>kiwi-triplestore</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>at.newmedialab.lmf</groupId>
+            <artifactId>kiwi-contextaware</artifactId>
+        </dependency>
+
+        <!-- Logging -->
+        <dependency>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-api</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.slf4j</groupId>
+            <artifactId>log4j-over-slf4j</artifactId>
+        </dependency>
+
+        <!-- Sesame dependencies -->
+        <dependency>
+            <groupId>org.openrdf.sesame</groupId>
+            <artifactId>sesame-model</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.openrdf.sesame</groupId>
+            <artifactId>sesame-repository-api</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.openrdf.sesame</groupId>
+            <artifactId>sesame-repository-sail</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.openrdf.sesame</groupId>
+            <artifactId>sesame-sail-api</artifactId>
+        </dependency>
+
+
+        <!-- Testing -->
+        <dependency>
+            <artifactId>junit</artifactId>
+            <groupId>junit</groupId>
+            <scope>test</scope>
+        </dependency>
+        <dependency> <!-- see http://www.dbunit.org/howto.html -->
+            <artifactId>dbunit</artifactId>
+            <groupId>org.dbunit</groupId>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <artifactId>hamcrest-core</artifactId>
+            <groupId>org.hamcrest</groupId>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <artifactId>hamcrest-library</artifactId>
+            <groupId>org.hamcrest</groupId>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>ch.qos.logback</groupId>
+            <artifactId>logback-core</artifactId>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>ch.qos.logback</groupId>
+            <artifactId>logback-classic</artifactId>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>com.h2database</groupId>
+            <artifactId>h2</artifactId>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>postgresql</groupId>
+            <artifactId>postgresql</artifactId>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>mysql</groupId>
+            <artifactId>mysql-connector-java</artifactId>
+            <scope>test</scope>
+        </dependency>
+
+    </dependencies>
+</project>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-marmotta/blob/c32963d5/ldcache/ldcache-backend-kiwi/src/main/java/org/apache/marmotta/ldcache/backend/kiwi/LDCachingKiWiBackend.java
----------------------------------------------------------------------
diff --git a/ldcache/ldcache-backend-kiwi/src/main/java/org/apache/marmotta/ldcache/backend/kiwi/LDCachingKiWiBackend.java b/ldcache/ldcache-backend-kiwi/src/main/java/org/apache/marmotta/ldcache/backend/kiwi/LDCachingKiWiBackend.java
new file mode 100644
index 0000000..19b71ae
--- /dev/null
+++ b/ldcache/ldcache-backend-kiwi/src/main/java/org/apache/marmotta/ldcache/backend/kiwi/LDCachingKiWiBackend.java
@@ -0,0 +1,217 @@
+/**
+ * Copyright (C) 2013 Salzburg Research.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.marmotta.ldcache.backend.kiwi;
+
+import info.aduna.iteration.CloseableIteration;
+import info.aduna.iteration.ExceptionConvertingIteration;
+import org.apache.marmotta.kiwi.sail.KiWiStore;
+import org.apache.marmotta.ldcache.api.LDCachingBackend;
+import org.apache.marmotta.ldcache.api.LDCachingConnection;
+import org.apache.marmotta.ldcache.backend.kiwi.persistence.LDCachingKiWiPersistence;
+import org.apache.marmotta.ldcache.backend.kiwi.repository.LDCachingSailRepositoryConnection;
+import org.apache.marmotta.ldcache.backend.kiwi.sail.LDCachingKiWiSail;
+import org.apache.marmotta.ldcache.backend.kiwi.sail.LDCachingKiWiSailConnection;
+import org.apache.marmotta.ldcache.model.CacheEntry;
+import org.openrdf.repository.RepositoryException;
+import org.openrdf.repository.sail.SailRepository;
+import org.openrdf.sail.SailException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.SQLException;
+
+/**
+ * Linked Data caching backend that stores cached triples and caching metadata in the KiWi triple store.
+ * <p/>
+ * Author: Sebastian Schaffert (sschaffert@apache.org)
+ */
+public class LDCachingKiWiBackend implements LDCachingBackend {
+
+    private static Logger log = LoggerFactory.getLogger(LDCachingKiWiBackend.class);
+
+
+
+    /**
+     * URI used as cache context in the central triple store
+     */
+    private String cacheContext;
+
+
+    /**
+     * Direct access to the caching SAIL with its caching maintenance functionality.
+     */
+    private LDCachingKiWiSail sail;
+
+
+    private LDCachingKiWiPersistence persistence;
+
+    /**
+     * Repository API access to the cache data
+     */
+    private SailRepository repository;
+
+    /**
+     * Create a new LDCache KiWi backend using the given store and context for caching triples and storing cache
+     * metadata via JDBC in the database.
+     *
+     * @param store        the KiWi store used for storing cached triples and cache metadata
+     * @param cacheContext the URI used as cache context in the central triple store
+     */
+    public LDCachingKiWiBackend(KiWiStore store, String cacheContext) {
+        this.cacheContext = cacheContext;
+        this.sail         = new LDCachingKiWiSail(store);
+        this.repository   = new SailRepository(sail);
+        this.persistence  = new LDCachingKiWiPersistence(store.getPersistence());
+    }
+
+    /**
+     * Return a repository connection that can be used for caching. The LDCache will first remove all statements for
+     * the newly cached resources and then add retrieved statements as-is to this connection and properly commit and
+     * close it after use.
+     * <p/>
+     * Note that in case the statements should be rewritten this method must take care of providing the proper
+     * connection, e.g. by using a ContextAwareRepositoryConnection to add a context to all statements when adding them.
+     *
+     *
+     * @param resource the resource that will be cached
+     * @return a repository connection that can be used for storing retrieved triples for caching
+     */
+    @Override
+    public LDCachingConnection getCacheConnection(String resource) throws RepositoryException {
+        try {
+            LDCachingKiWiSailConnection sailConnection = sail.getConnection();
+
+            return new LDCachingSailRepositoryConnection(repository,sailConnection,cacheContext);
+        } catch (SailException e) {
+            throw new RepositoryException(e);
+        }
+    }
+
+    /**
+     * Return an iterator over all expired cache entries (can e.g. be used for refreshing).
+     *
+     * @return an iteration over all expired cache entries
+     */
+    @Override
+    public CloseableIteration<CacheEntry, RepositoryException> listExpiredEntries()  throws RepositoryException {
+        try {
+            final LDCachingKiWiSailConnection sailConnection = sail.getConnection();
+            sailConnection.begin();
+
+            return new ExceptionConvertingIteration<CacheEntry, RepositoryException>(sailConnection.listExpired()) {
+                @Override
+                protected RepositoryException convert(Exception e) {
+                    return new RepositoryException(e);
+                }
+
+                /**
+                 * Closes this Iteration as well as the wrapped Iteration if it happens to be
+                 * a {@link info.aduna.iteration.CloseableIteration}.
+                 */
+                @Override
+                protected void handleClose() throws RepositoryException {
+                    super.handleClose();
+                    try {
+                        sailConnection.commit();
+                        sailConnection.close();
+                    } catch (SailException ex) {
+                        throw new RepositoryException(ex);
+                    }
+                }
+            };
+        } catch (SailException e) {
+            throw new RepositoryException(e);
+        }
+    }
+
+    /**
+     * Return an iterator over all cache entries (can e.g. be used for refreshing or expiring).
+     *
+     * @return an iteration over all cache entries
+     */
+    @Override
+    public CloseableIteration<CacheEntry, RepositoryException> listCacheEntries()  throws RepositoryException {
+        try {
+            final LDCachingKiWiSailConnection sailConnection = sail.getConnection();
+            sailConnection.begin();
+
+            return new ExceptionConvertingIteration<CacheEntry, RepositoryException>(sailConnection.listAll()) {
+                @Override
+                protected RepositoryException convert(Exception e) {
+                    return new RepositoryException(e);
+                }
+
+                /**
+                 * Closes this Iteration as well as the wrapped Iteration if it happens to be
+                 * a {@link info.aduna.iteration.CloseableIteration}.
+                 */
+                @Override
+                protected void handleClose() throws RepositoryException {
+                    super.handleClose();
+                    try {
+                        sailConnection.commit();
+                        sailConnection.close();
+                    } catch (SailException ex) {
+                        throw new RepositoryException(ex);
+                    }
+                }
+            };
+        } catch (SailException e) {
+            throw new RepositoryException(e);
+        }
+    }
+
+
+    public LDCachingKiWiPersistence getPersistence() {
+        return persistence;
+    }
+
+    /**
+     * Carry out any initialization tasks that might be necessary
+     */
+    @Override
+    public void initialize() {
+        try {
+            repository.initialize();
+        } catch (RepositoryException e) {
+            log.error("error initializing secondary repository",e);
+        }
+
+        try {
+            persistence.initDatabase();
+        } catch (SQLException e) {
+            log.error("error initializing LDCache database tables",e);
+        }
+
+        // register cache context in database
+        repository.getValueFactory().createURI(cacheContext);
+
+    }
+
+    /**
+     * Shutdown the backend and free all runtime resources.
+     */
+    @Override
+    public void shutdown() {
+        try {
+            repository.shutDown();
+        } catch (RepositoryException e) {
+            log.error("error shutting down secondary repository",e);
+        }
+    }
+
+
+}
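[Editor's note] Both listing methods above wrap the SAIL's iteration in an `ExceptionConvertingIteration` whose `handleClose` commits and closes the connection. The wrapper pattern can be sketched without the Aduna classes (simplified, hypothetical names; the real class also handles checked exceptions from the wrapped iteration):

```java
import java.util.Iterator;

// Simplified sketch of the exception-converting iteration pattern: a wrapper
// translates exceptions from the underlying iteration into the API's exception
// type, and close() is the hook for releasing resources (commit + close above).
abstract class ConvertingIteration<E, X extends RuntimeException> implements AutoCloseable {
    private final Iterator<E> wrapped;

    ConvertingIteration(Iterator<E> wrapped) {
        this.wrapped = wrapped;
    }

    /** Convert an exception thrown by the wrapped iteration into type X. */
    protected abstract X convert(Exception e);

    public boolean hasNext() {
        try { return wrapped.hasNext(); } catch (RuntimeException e) { throw convert(e); }
    }

    public E next() {
        try { return wrapped.next(); } catch (RuntimeException e) { throw convert(e); }
    }

    /** Subclasses override to release resources, e.g. commit and close a connection. */
    @Override
    public void close() { }
}
```

The key design point mirrored from the code above: resource cleanup is bound to the iteration's close, so callers that exhaust or abandon the iteration still release the SAIL connection.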

http://git-wip-us.apache.org/repos/asf/incubator-marmotta/blob/c32963d5/ldcache/ldcache-backend-kiwi/src/main/java/org/apache/marmotta/ldcache/backend/kiwi/model/KiWiCacheEntry.java
----------------------------------------------------------------------
diff --git a/ldcache/ldcache-backend-kiwi/src/main/java/org/apache/marmotta/ldcache/backend/kiwi/model/KiWiCacheEntry.java b/ldcache/ldcache-backend-kiwi/src/main/java/org/apache/marmotta/ldcache/backend/kiwi/model/KiWiCacheEntry.java
new file mode 100644
index 0000000..519d721
--- /dev/null
+++ b/ldcache/ldcache-backend-kiwi/src/main/java/org/apache/marmotta/ldcache/backend/kiwi/model/KiWiCacheEntry.java
@@ -0,0 +1,39 @@
+/**
+ * Copyright (C) 2013 Salzburg Research.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.marmotta.ldcache.backend.kiwi.model;
+
+import org.apache.marmotta.ldcache.model.CacheEntry;
+
+/**
+ * A cache entry for the KiWi backend that additionally carries the database ID of the entry.
+ * <p/>
+ * Author: Sebastian Schaffert (sschaffert@apache.org)
+ */
+public class KiWiCacheEntry extends CacheEntry {
+
+    Long id;
+
+    public KiWiCacheEntry() {
+    }
+
+    public Long getId() {
+        return id;
+    }
+
+    public void setId(Long id) {
+        this.id = id;
+    }
+}

http://git-wip-us.apache.org/repos/asf/incubator-marmotta/blob/c32963d5/ldcache/ldcache-backend-kiwi/src/main/java/org/apache/marmotta/ldcache/backend/kiwi/persistence/LDCachingKiWiPersistence.java
----------------------------------------------------------------------
diff --git a/ldcache/ldcache-backend-kiwi/src/main/java/org/apache/marmotta/ldcache/backend/kiwi/persistence/LDCachingKiWiPersistence.java b/ldcache/ldcache-backend-kiwi/src/main/java/org/apache/marmotta/ldcache/backend/kiwi/persistence/LDCachingKiWiPersistence.java
new file mode 100644
index 0000000..2dcd991
--- /dev/null
+++ b/ldcache/ldcache-backend-kiwi/src/main/java/org/apache/marmotta/ldcache/backend/kiwi/persistence/LDCachingKiWiPersistence.java
@@ -0,0 +1,76 @@
+/**
+ * Copyright (C) 2013 Salzburg Research.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.marmotta.ldcache.backend.kiwi.persistence;
+
+import org.apache.marmotta.kiwi.persistence.KiWiDialect;
+import org.apache.marmotta.kiwi.persistence.KiWiPersistence;
+
+import java.sql.SQLException;
+
+/**
+ * A KiWi persistence wrapper for storing caching information in the database used by the KiWi triple store
+ * wrapped by the persistence.
+ * <p/>
+ * Author: Sebastian Schaffert (sschaffert@apache.org)
+ */
+public class LDCachingKiWiPersistence {
+
+    /**
+     * Get the parent persistence service to access the database
+     */
+    private KiWiPersistence persistence;
+
+
+    public LDCachingKiWiPersistence(KiWiPersistence persistence) {
+        this.persistence = persistence;
+
+        persistence.addNodeTableDependency("ldcache_entries","resource_id");
+    }
+
+    /**
+     * Initialise the database, creating or upgrading tables if they do not exist or are of the wrong version.
+     * This method must only be called after the initDatabase of the wrapped KiWiPersistence has been evaluated.
+     */
+    public void initDatabase() throws SQLException {
+        persistence.initDatabase("ldcache", new String[] {"ldcache_entries"});
+    }
+
+    /**
+     * Drop the ldcache tables; this method must be called before the dropDatabase method of the underlying
+     * KiWiPersistence is called.
+     *
+     * @throws SQLException
+     */
+    public void dropDatabase() throws SQLException {
+        persistence.dropDatabase("ldcache");
+    }
+
+    /**
+     * Return a connection from the connection pool which already has the auto-commit disabled.
+     *
+     * @return a fresh JDBC connection from the connection pool
+     * @throws java.sql.SQLException in case a new connection could not be established
+     */
+    public LDCachingKiWiPersistenceConnection getConnection() throws SQLException {
+        return new LDCachingKiWiPersistenceConnection(persistence.getConnection());
+    }
+
+
+    public KiWiDialect getDialect() {
+        return persistence.getDialect();
+    }
+
+}
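[Editor's note] The Javadoc above states an ordering contract: `initDatabase` runs after the wrapped KiWiPersistence is initialized, and `dropDatabase` runs before the wrapped one is dropped. That is the usual LIFO lifecycle for dependent schemas, sketched here with hypothetical names:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the lifecycle-ordering contract for dependent persistence modules:
// modules are initialized after their parent and torn down before it (LIFO).
// Hypothetical illustration; the real classes delegate to KiWiPersistence.
class PersistenceStack {
    private final Deque<String> initialized = new ArrayDeque<>();

    /** Initialize a module; dependents must be pushed after their parent. */
    void init(String module) {
        initialized.push(module);
    }

    /** Tear down the most recently initialized module first. */
    String dropNext() {
        return initialized.pop();
    }
}
```

Applied to the classes above: `kiwi` tables are created first and `ldcache` tables second, while dropping proceeds in the reverse order so the `ldcache_entries` foreign-key-style dependency on the node table never dangles.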

http://git-wip-us.apache.org/repos/asf/incubator-marmotta/blob/c32963d5/ldcache/ldcache-backend-kiwi/src/main/java/org/apache/marmotta/ldcache/backend/kiwi/persistence/LDCachingKiWiPersistenceConnection.java
----------------------------------------------------------------------
diff --git a/ldcache/ldcache-backend-kiwi/src/main/java/org/apache/marmotta/ldcache/backend/kiwi/persistence/LDCachingKiWiPersistenceConnection.java b/ldcache/ldcache-backend-kiwi/src/main/java/org/apache/marmotta/ldcache/backend/kiwi/persistence/LDCachingKiWiPersistenceConnection.java
new file mode 100644
index 0000000..4a243de
--- /dev/null
+++ b/ldcache/ldcache-backend-kiwi/src/main/java/org/apache/marmotta/ldcache/backend/kiwi/persistence/LDCachingKiWiPersistenceConnection.java
@@ -0,0 +1,352 @@
+/**
+ * Copyright (C) 2013 Salzburg Research.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.marmotta.ldcache.backend.kiwi.persistence;
+
+import info.aduna.iteration.CloseableIteration;
+import net.sf.ehcache.Cache;
+import net.sf.ehcache.Element;
+import org.apache.marmotta.kiwi.model.rdf.KiWiNode;
+import org.apache.marmotta.kiwi.model.rdf.KiWiResource;
+import org.apache.marmotta.kiwi.persistence.KiWiConnection;
+import org.apache.marmotta.kiwi.persistence.util.ResultSetIteration;
+import org.apache.marmotta.kiwi.persistence.util.ResultTransformerFunction;
+import org.apache.marmotta.ldcache.backend.kiwi.model.KiWiCacheEntry;
+import org.apache.marmotta.ldcache.model.CacheEntry;
+import org.openrdf.model.URI;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Timestamp;
+import java.util.Date;
+import java.util.Set;
+
+/**
+ * A connection for accessing LDCache entries in the database, caching entries in memory by resource URI and by ID.
+ * <p/>
+ * Author: Sebastian Schaffert (sschaffert@apache.org)
+ */
+public class LDCachingKiWiPersistenceConnection  {
+
+    private static Logger log = LoggerFactory.getLogger(LDCachingKiWiPersistenceConnection.class);
+
+
+    private KiWiConnection connection;
+
+    /**
+     * Cache entries by resource
+     */
+    private Cache entryResourceCache;
+
+
+    /**
+     * Cache entries by ID
+     */
+    private Cache entryIdCache;
+
+
+    public LDCachingKiWiPersistenceConnection(KiWiConnection connection) throws SQLException {
+        this.connection    = connection;
+
+        entryResourceCache = connection.getCacheManager().getCacheByName("ldcache-entry-uri");
+        entryIdCache       = connection.getCacheManager().getCacheByName("ldcache-entry-id");
+    }
+
+    public KiWiCacheEntry constructCacheEntry(ResultSet row) throws SQLException {
+        Long id = row.getLong("id");
+
+        Element cached = entryIdCache.get(id);
+
+        // lookup element in cache first, so we can avoid reconstructing it if it is already there
+        if(cached != null) {
+            return (KiWiCacheEntry)cached.getObjectValue();
+        }
+
+        KiWiCacheEntry entry = new KiWiCacheEntry();
+        entry.setId(id);
+        entry.setLastRetrieved(new Date(row.getTimestamp("retrieved_at").getTime()));
+        entry.setExpiryDate(new Date(row.getTimestamp("expires_at").getTime()));
+        entry.setUpdateCount(row.getInt("update_count"));
+        entry.setResource((URI) connection.loadNodeById(row.getLong("resource_id")));
+
+        entryIdCache.put(new Element(id,entry));
+        entryResourceCache.put(new Element(entry.getResource().stringValue(),entry));
+
+        return entry;
+    }
+
+    /**
+     * Load the cache entry for the given URI from the database.
+     *
+     * @param uri the URI of the cached resource for which to return the cache entry
+     * @return an instance of KiWiCacheEntry representing the caching metadata for the given resource, or null in case there
+     *         is no entry for this resource
+     * @throws SQLException
+     */
+    public KiWiCacheEntry getCacheEntry(String uri) throws SQLException {
+
+        Element cached = entryResourceCache.get(uri);
+
+        // lookup element in cache first, so we can avoid reconstructing it if it is already there
+        if(cached != null) {
+            return (KiWiCacheEntry)cached.getObjectValue();
+        }
+
+        PreparedStatement query = connection.getPreparedStatement("load.entry_by_uri");
+        query.setString(1, uri);
+        query.setMaxRows(1);
+
+        // run the database query and if it yields a result, construct a new cache entry; constructCacheEntry will
+        // take care of caching the constructed entry for future calls
+        ResultSet result = query.executeQuery();
+        try {
+            if(result.next()) {
+                return constructCacheEntry(result);
+            } else {
+                return null;
+            }
+        } finally {
+            result.close();
+        }
+    }
+
+    /**
+     * Store the cache entry passed as argument in the database. In case the passed argument is not an instance of
+     * KiWiCacheEntry, it will first be converted into a KiWiCacheEntry by copying the fields. In this case, the
+     * stored object will not be the same instance as the object passed as argument.
+     *
+     * @param entry the cache entry to store
+     * @throws SQLException
+     */
+    public void storeCacheEntry(CacheEntry entry) throws SQLException {
+        KiWiCacheEntry kEntry;
+        if(entry instanceof KiWiCacheEntry) {
+            kEntry = (KiWiCacheEntry) entry;
+        } else {
+            kEntry = new KiWiCacheEntry();
+            kEntry.setExpiryDate(entry.getExpiryDate());
+            kEntry.setLastRetrieved(entry.getLastRetrieved());
+            kEntry.setUpdateCount(entry.getUpdateCount());
+            kEntry.setResource(entry.getResource());
+        }
+
+        if(! (entry.getResource() instanceof KiWiResource) || ((KiWiResource) entry.getResource()).getId() == null) {
+            throw new IllegalStateException("the resource contained in the cache entry is not a persisted KiWiResource!");
+        }
+
+        kEntry.setId(connection.getNextSequence("seq.ldcache"));
+
+        PreparedStatement insertEntry = connection.getPreparedStatement("store.entry");
+        insertEntry.setLong(1, kEntry.getId());
+        insertEntry.setTimestamp(2, new Timestamp(kEntry.getLastRetrieved().getTime()));
+        insertEntry.setTimestamp(3,new Timestamp(kEntry.getExpiryDate().getTime()));
+        insertEntry.setLong(4,((KiWiNode)kEntry.getResource()).getId());
+        insertEntry.setInt(5, kEntry.getUpdateCount());
+        insertEntry.executeUpdate();
+
+        log.debug("persisted ld-cache entry with id {}", kEntry.getId());
+        
+        entryIdCache.put(new Element(kEntry.getId(),kEntry));
+        entryResourceCache.put(new Element(kEntry.getResource().stringValue(),kEntry));
+
+    }
+
+    /**
+     * Remove the given cache entry from the database. The cache entry passed as argument must be a persistent instance
+     * of KiWiCacheEntry.
+     * @param entry
+     * @throws SQLException
+     */
+    public void removeCacheEntry(CacheEntry entry) throws SQLException {
+        if(! (entry instanceof KiWiCacheEntry) || ((KiWiCacheEntry) entry).getId() == null) {
+            throw new IllegalStateException("the passed cache entry is not managed by this connection");
+        }
+
+        PreparedStatement deleteEntry = connection.getPreparedStatement("delete.entry");
+        deleteEntry.setLong(1,((KiWiCacheEntry) entry).getId());
+        deleteEntry.executeUpdate();
+
+        entryIdCache.remove(((KiWiCacheEntry) entry).getId());
+        entryResourceCache.remove(entry.getResource().stringValue());
+    }
+
+    /**
+     * Remove the cache entry for the given URI from the database and invalidate any cached copies of the entry.
+     * @param uri URI of the entry to delete
+     * @throws SQLException
+     */
+    public void removeCacheEntry(String uri) throws SQLException {
+
+        PreparedStatement deleteEntry = connection.getPreparedStatement("delete.entry_by_uri");
+        deleteEntry.setString(1,uri);
+        deleteEntry.executeUpdate();
+
+        Element cached = entryResourceCache.get(uri);
+
+        if(cached != null) {
+            entryResourceCache.remove(uri);
+            entryIdCache.remove(((KiWiCacheEntry) cached.getObjectValue()).getId());
+        }
+    }
+
+
+    /**
+     * List all cache entries with an expiry date older than the current time.
+     *
+     * @return a closeable iteration with KiWiCacheEntries; needs to be released by the caller
+     * @throws SQLException
+     */
+    public CloseableIteration<KiWiCacheEntry,SQLException> listExpired() throws SQLException {
+        PreparedStatement queryExpired = connection.getPreparedStatement("query.entries_expired");
+        final ResultSet result = queryExpired.executeQuery();
+
+        return new ResultSetIteration<KiWiCacheEntry>(result, new ResultTransformerFunction<KiWiCacheEntry>() {
+            @Override
+            public KiWiCacheEntry apply(ResultSet input) throws SQLException {
+                return constructCacheEntry(input);
+            }
+        });
+    }
+
+    /**
+     * List all cache entries in the database, regardless of expiry date.
+     *
+     * @return a closeable iteration with KiWiCacheEntries; needs to be released by the caller
+     * @throws SQLException
+     */
+    public CloseableIteration<KiWiCacheEntry,SQLException> listAll() throws SQLException {
+        PreparedStatement queryAll = connection.getPreparedStatement("query.entries_all");
+        final ResultSet result = queryAll.executeQuery();
+
+        return new ResultSetIteration<KiWiCacheEntry>(result, new ResultTransformerFunction<KiWiCacheEntry>() {
+            @Override
+            public KiWiCacheEntry apply(ResultSet input) throws SQLException {
+                return constructCacheEntry(input);
+            }
+        });
+    }
+
+    /**
+     * Makes all changes made since the previous
+     * commit/rollback permanent and releases any database locks
+     * currently held by this <code>Connection</code> object.
+     * This method should be
+     * used only when auto-commit mode has been disabled.
+     *
+     * @exception java.sql.SQLException if a database access error occurs,
+     * this method is called while participating in a distributed transaction,
+     * if this method is called on a closed connection or this
+     *            <code>Connection</code> object is in auto-commit mode
+     */
+    public void commit() throws SQLException {
+        connection.commit();
+    }
+
+    /**
+     * Releases this <code>Connection</code> object's database and JDBC resources
+     * immediately instead of waiting for them to be automatically released.
+     * <P>
+     * Calling the method <code>close</code> on a <code>Connection</code>
+     * object that is already closed is a no-op.
+     * <P>
+     * It is <b>strongly recommended</b> that an application explicitly
+     * commits or rolls back an active transaction prior to calling the
+     * <code>close</code> method.  If the <code>close</code> method is called
+     * and there is an active transaction, the results are implementation-defined.
+     * <P>
+     *
+     * @exception java.sql.SQLException SQLException if a database access error occurs
+     */
+    public void close() throws SQLException {
+        connection.close();
+    }
+
+    /**
+     * Retrieves whether this <code>Connection</code> object has been
+     * closed.  A connection is closed if the method <code>close</code>
+     * has been called on it or if certain fatal errors have occurred.
+     * This method is guaranteed to return <code>true</code> only when
+     * it is called after the method <code>Connection.close</code> has
+     * been called.
+     * <P>
+     * This method generally cannot be called to determine whether a
+     * connection to a database is valid or invalid.  A typical client
+     * can determine that a connection is invalid by catching any
+     * exceptions that might be thrown when an operation is attempted.
+     *
+     * @return <code>true</code> if this <code>Connection</code> object
+     *         is closed; <code>false</code> if it is still open
+     * @exception java.sql.SQLException if a database access error occurs
+     */
+    public boolean isClosed() throws SQLException {
+        return connection.isClosed();
+    }
+
+    /**
+     * Undoes all changes made in the current transaction
+     * and releases any database locks currently held
+     * by this <code>Connection</code> object. This method should be
+     * used only when auto-commit mode has been disabled.
+     *
+     * @exception java.sql.SQLException if a database access error occurs,
+     * this method is called while participating in a distributed transaction,
+     * this method is called on a closed connection or this
+     *            <code>Connection</code> object is in auto-commit mode
+     */
+    public void rollback() throws SQLException {
+        connection.rollback();
+    }
+
+    /**
+     * Store a new node in the database. The method will retrieve a new database id for the node and update the
+     * passed object. Afterwards, the node data will be inserted into the database using appropriate INSERT
+     * statements. The caller must make sure the connection is committed and closed properly.
+     * <p/>
+     * If the node already has an ID, the method will do nothing (assuming that it is already persistent).
+     *
+     * @param node
+     * @throws java.sql.SQLException
+     */
+    public void storeNode(KiWiNode node) throws SQLException {
+        connection.storeNode(node);
+    }
+
+    /**
+     * Return the names of the database tables contained in the database. This query is used for checking whether
+     * the database needs to be created when initialising the system.
+     *
+     * @return a set with the names of all tables contained in the database
+     * @throws java.sql.SQLException
+     */
+    public Set<String> getDatabaseTables() throws SQLException {
+        return connection.getDatabaseTables();
+    }
+
+    /**
+     * Return the KiWi version of the database this connection is operating on. This query is necessary for
+     * checking the proper state of a database when initialising the system.
+     *
+     * @return the schema version of the KiWi database this connection operates on
+     */
+    public int getDatabaseVersion() throws SQLException {
+        return connection.getDatabaseVersion();
+    }
+}
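
The persistence connection above follows a cache-aside pattern with two indexes: an entry is looked up first in Ehcache (by id or by resource URI), and only reconstructed from the database on a miss, after which both caches are populated; removal likewise has to invalidate both indexes. A minimal standalone sketch of that dual-index pattern, using plain HashMaps in place of Ehcache and of the JDBC backend (all names here are hypothetical, not Marmotta API):

```java
import java.util.HashMap;
import java.util.Map;

public class DualIndexCacheSketch {

    static class Entry {
        final long id;
        final String resourceUri;
        Entry(long id, String resourceUri) { this.id = id; this.resourceUri = resourceUri; }
    }

    // stand-ins for the Ehcache instances "ldcache-entry-id" and "ldcache-entry-uri"
    private final Map<Long, Entry> byId = new HashMap<>();
    private final Map<String, Entry> byUri = new HashMap<>();

    // stand-in for the ldcache_entries table (uri -> id)
    private final Map<String, Long> database = new HashMap<>();

    int databaseHits = 0;

    DualIndexCacheSketch() {
        database.put("http://example.org/resource/1", 1L);
    }

    /** cache-aside lookup, mirroring getCacheEntry/constructCacheEntry */
    Entry getEntry(String uri) {
        Entry cached = byUri.get(uri);
        if (cached != null) {
            return cached;            // cache hit: skip the database entirely
        }
        Long id = database.get(uri);  // stand-in for "load.entry_by_uri"
        if (id == null) {
            return null;
        }
        databaseHits++;
        Entry entry = new Entry(id, uri);
        byId.put(entry.id, entry);    // populate BOTH indexes, as constructCacheEntry does
        byUri.put(entry.resourceUri, entry);
        return entry;
    }

    /** removal must invalidate both indexes, as removeCacheEntry does */
    void removeEntry(String uri) {
        database.remove(uri);
        Entry cached = byUri.remove(uri);
        if (cached != null) {
            byId.remove(cached.id);
        }
    }
}
```

The second lookup for the same URI never touches the backing map, which is exactly the cost the two Ehcache instances save on repeated entry reconstruction.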

http://git-wip-us.apache.org/repos/asf/incubator-marmotta/blob/c32963d5/ldcache/ldcache-backend-kiwi/src/main/java/org/apache/marmotta/ldcache/backend/kiwi/repository/LDCachingSailRepositoryConnection.java
----------------------------------------------------------------------
diff --git a/ldcache/ldcache-backend-kiwi/src/main/java/org/apache/marmotta/ldcache/backend/kiwi/repository/LDCachingSailRepositoryConnection.java b/ldcache/ldcache-backend-kiwi/src/main/java/org/apache/marmotta/ldcache/backend/kiwi/repository/LDCachingSailRepositoryConnection.java
new file mode 100644
index 0000000..96c6c64
--- /dev/null
+++ b/ldcache/ldcache-backend-kiwi/src/main/java/org/apache/marmotta/ldcache/backend/kiwi/repository/LDCachingSailRepositoryConnection.java
@@ -0,0 +1,89 @@
+/**
+ * Copyright (C) 2013 Salzburg Research.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.marmotta.ldcache.backend.kiwi.repository;
+
+import org.apache.marmotta.kiwi.contextaware.ContextAwareSailConnection;
+import org.apache.marmotta.ldcache.api.LDCachingConnection;
+import org.apache.marmotta.ldcache.backend.kiwi.sail.LDCachingKiWiSailConnection;
+import org.apache.marmotta.ldcache.model.CacheEntry;
+import org.apache.marmotta.ldcache.sail.LDCachingSailConnection;
+import org.openrdf.model.URI;
+import org.openrdf.repository.RepositoryException;
+import org.openrdf.repository.sail.SailRepository;
+import org.openrdf.repository.sail.SailRepositoryConnection;
+
+/**
+ * This is an extension wrapper around sail repository connections that allows delegating the additional cache entry
+ * methods to the underlying SAIL repository. Otherwise it behaves like any SailRepositoryConnection.
+ * <p/>
+ * Author: Sebastian Schaffert (sschaffert@apache.org)
+ */
+public class LDCachingSailRepositoryConnection extends SailRepositoryConnection implements LDCachingConnection {
+
+    private LDCachingSailConnection cacheConnection;
+
+    public LDCachingSailRepositoryConnection(SailRepository repository, LDCachingKiWiSailConnection sailConnection, String cacheContext) {
+        super(repository, new ContextAwareSailConnection(sailConnection, sailConnection.getValueFactory().createURI(cacheContext)));
+        cacheConnection = sailConnection;
+    }
+
+    /**
+     * Store a cache entry for the passed resource in the backend. Depending on the backend, this can be a
+     * persistent storage or an in-memory storage.
+     *
+     * @param resource
+     * @param entry
+     */
+    @Override
+    public void addCacheEntry(URI resource, CacheEntry entry) throws RepositoryException {
+        try {
+            cacheConnection.addCacheEntry(resource,entry);
+        } catch (org.openrdf.sail.SailException e) {
+            throw new RepositoryException(e);
+        }
+    }
+
+    /**
+     * Get the cache entry for the passed resource, if any. Returns null in case there is no cache entry.
+     *
+     *
+     * @param resource the resource to look for
+     * @return the cache entry for the resource, or null if the resource has never been cached or is expired
+     */
+    @Override
+    public CacheEntry getCacheEntry(URI resource) throws RepositoryException {
+        try {
+            return cacheConnection.getCacheEntry(resource);
+        } catch (org.openrdf.sail.SailException e) {
+            throw new RepositoryException(e);
+        }
+    }
+
+    /**
+     * Remove the currently stored cache entry for the passed resource from the backend.
+     *
+     * @param resource
+     */
+    @Override
+    public void removeCacheEntry(URI resource) throws RepositoryException {
+        try {
+            cacheConnection.removeCacheEntry(resource);
+        } catch (org.openrdf.sail.SailException e) {
+            throw new RepositoryException(e);
+        }
+    }
+
+}
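
Beyond plain delegation, the only work LDCachingSailRepositoryConnection does is exception translation: every SailException from the underlying SAIL is caught and re-thrown as a RepositoryException with the original preserved as the cause. The pattern in isolation, with hypothetical stand-in exception and connection types rather than the OpenRDF ones:

```java
public class ExceptionTranslationSketch {

    // stand-ins for SailException / RepositoryException
    static class LowLevelException extends Exception {
        LowLevelException(String m) { super(m); }
    }
    static class HighLevelException extends Exception {
        HighLevelException(Throwable cause) { super(cause); }
    }

    // stand-in for the wrapped SAIL connection
    interface LowLevelConnection {
        void removeCacheEntry(String resource) throws LowLevelException;
    }

    // the wrapper delegates and re-wraps the lower-level exception
    static class Wrapper {
        private final LowLevelConnection delegate;
        Wrapper(LowLevelConnection delegate) { this.delegate = delegate; }

        void removeCacheEntry(String resource) throws HighLevelException {
            try {
                delegate.removeCacheEntry(resource);
            } catch (LowLevelException e) {
                throw new HighLevelException(e); // keep the original as cause
            }
        }
    }
}
```

Callers on the repository level only ever see the high-level exception type, but the full stack trace of the SAIL-level failure survives via the cause chain.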

http://git-wip-us.apache.org/repos/asf/incubator-marmotta/blob/c32963d5/ldcache/ldcache-backend-kiwi/src/main/java/org/apache/marmotta/ldcache/backend/kiwi/sail/LDCachingKiWiSail.java
----------------------------------------------------------------------
diff --git a/ldcache/ldcache-backend-kiwi/src/main/java/org/apache/marmotta/ldcache/backend/kiwi/sail/LDCachingKiWiSail.java b/ldcache/ldcache-backend-kiwi/src/main/java/org/apache/marmotta/ldcache/backend/kiwi/sail/LDCachingKiWiSail.java
new file mode 100644
index 0000000..eaab2af
--- /dev/null
+++ b/ldcache/ldcache-backend-kiwi/src/main/java/org/apache/marmotta/ldcache/backend/kiwi/sail/LDCachingKiWiSail.java
@@ -0,0 +1,58 @@
+/**
+ * Copyright (C) 2013 Salzburg Research.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.marmotta.ldcache.backend.kiwi.sail;
+
+import org.apache.marmotta.kiwi.sail.KiWiSailConnection;
+import org.apache.marmotta.kiwi.sail.KiWiStore;
+import org.openrdf.sail.SailException;
+import org.openrdf.sail.helpers.SailWrapper;
+
+/**
+ * A secondary sail wrapping an already-initialized KiWiStore; its connections provide access to the LDCache entry
+ * persistence in addition to the normal triple store functionality.
+ * <p/>
+ * Author: Sebastian Schaffert (sschaffert@apache.org)
+ */
+public class LDCachingKiWiSail extends SailWrapper {
+
+    private KiWiStore store;
+
+    /**
+     * Creates a new SailWrapper that wraps the supplied Sail.
+     */
+    public LDCachingKiWiSail(KiWiStore baseSail) {
+        super(baseSail);
+
+        this.store = baseSail;
+    }
+
+    @Override
+    public LDCachingKiWiSailConnection getConnection() throws SailException {
+        return new LDCachingKiWiSailConnection((KiWiSailConnection) store.getConnection());
+    }
+
+    @Override
+    public void initialize() throws SailException {
+        // do not initialize the wrapped store here; it is managed elsewhere, so only verify that it is ready
+        if(!store.isInitialized()) {
+            throw new SailException("the LDCachingKiWiSail is a secondary sail and requires an already initialized store!");
+        }
+    }
+
+    @Override
+    public void shutDown() throws SailException {
+        // ignore, because we assume that the wrapped store will be shutdown by another sail
+    }
+}
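
LDCachingKiWiSail inverts the usual SailWrapper lifecycle: initialize() does not initialize the base store but fails fast if the store was not initialized elsewhere, and shutDown() is a no-op because shutdown belongs to whoever owns the store. A standalone sketch of that fail-fast guard (class names hypothetical, exception type simplified to IllegalStateException):

```java
public class SecondarySailSketch {

    // stand-in for the wrapped KiWiStore
    static class Store {
        private boolean initialized;
        void initialize() { initialized = true; }
        boolean isInitialized() { return initialized; }
    }

    // stand-in for the secondary sail
    static class SecondaryWrapper {
        private final Store store;
        SecondaryWrapper(Store store) { this.store = store; }

        /** fail fast instead of initializing: the base store is owned elsewhere */
        void initialize() {
            if (!store.isInitialized()) {
                throw new IllegalStateException(
                        "secondary wrapper requires an already initialized store");
            }
        }

        /** no-op: shutdown is the responsibility of whoever initialized the store */
        void shutDown() { }
    }
}
```

This keeps a single owner for the store's lifecycle even when several sails share it, at the cost of requiring a fixed initialization order.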

http://git-wip-us.apache.org/repos/asf/incubator-marmotta/blob/c32963d5/ldcache/ldcache-backend-kiwi/src/main/java/org/apache/marmotta/ldcache/backend/kiwi/sail/LDCachingKiWiSailConnection.java
----------------------------------------------------------------------
diff --git a/ldcache/ldcache-backend-kiwi/src/main/java/org/apache/marmotta/ldcache/backend/kiwi/sail/LDCachingKiWiSailConnection.java b/ldcache/ldcache-backend-kiwi/src/main/java/org/apache/marmotta/ldcache/backend/kiwi/sail/LDCachingKiWiSailConnection.java
new file mode 100644
index 0000000..f5a38c2
--- /dev/null
+++ b/ldcache/ldcache-backend-kiwi/src/main/java/org/apache/marmotta/ldcache/backend/kiwi/sail/LDCachingKiWiSailConnection.java
@@ -0,0 +1,132 @@
+/**
+ * Copyright (C) 2013 Salzburg Research.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.marmotta.ldcache.backend.kiwi.sail;
+
+import info.aduna.iteration.CloseableIteration;
+import org.apache.marmotta.kiwi.sail.KiWiSailConnection;
+import org.apache.marmotta.kiwi.sail.KiWiValueFactory;
+import org.apache.marmotta.ldcache.backend.kiwi.model.KiWiCacheEntry;
+import org.apache.marmotta.ldcache.backend.kiwi.persistence.LDCachingKiWiPersistenceConnection;
+import org.apache.marmotta.ldcache.model.CacheEntry;
+import org.apache.marmotta.ldcache.sail.LDCachingSailConnection;
+import org.openrdf.model.URI;
+import org.openrdf.sail.SailException;
+import org.openrdf.sail.helpers.SailConnectionWrapper;
+
+import java.sql.SQLException;
+
+/**
+ * A sail connection wrapping a KiWiSailConnection; it implements LDCachingSailConnection by delegating the cache
+ * entry operations to an LDCachingKiWiPersistenceConnection working on the same database connection.
+ * <p/>
+ * Author: Sebastian Schaffert (sschaffert@apache.org)
+ */
+public class LDCachingKiWiSailConnection extends SailConnectionWrapper implements LDCachingSailConnection {
+
+    private LDCachingKiWiPersistenceConnection persistence;
+
+    private KiWiSailConnection wrapped;
+
+    public LDCachingKiWiSailConnection(KiWiSailConnection wrappedCon) throws SailException {
+        super(wrappedCon);
+
+        this.wrapped = wrappedCon;
+        try {
+            this.persistence = new LDCachingKiWiPersistenceConnection(wrappedCon.getDatabaseConnection());
+        } catch (SQLException e) {
+            throw new SailException(e);
+        }
+    }
+
+    public KiWiValueFactory getValueFactory() {
+        return wrapped.getValueFactory();
+    }
+
+    /**
+     * Store a cache entry for the passed resource in the backend. Depending on the backend, this can be a
+     * persistent storage or an in-memory storage.
+     *
+     * @param resource
+     * @param entry
+     */
+    @Override
+    public void addCacheEntry(URI resource, CacheEntry entry) throws SailException {
+        try {
+            persistence.storeCacheEntry(entry);
+        } catch (SQLException e) {
+            throw new SailException(e);
+        }
+    }
+
+    /**
+     * Get the cache entry for the passed resource, if any. Returns null in case there is no cache entry.
+     *
+     *
+     * @param resource the resource to look for
+     * @return the cache entry for the resource, or null if the resource has never been cached or is expired
+     */
+    @Override
+    public CacheEntry getCacheEntry(URI resource) throws SailException {
+        try {
+            return persistence.getCacheEntry(resource.stringValue());
+        } catch (SQLException e) {
+            throw new SailException(e);
+        }
+    }
+
+    /**
+     * Remove the currently stored cache entry for the passed resource from the backend.
+     *
+     * @param resource
+     */
+    @Override
+    public void removeCacheEntry(URI resource) throws SailException {
+        try {
+            persistence.removeCacheEntry(resource.stringValue());
+        } catch (SQLException e) {
+            throw new SailException(e);
+        }
+    }
+
+    /**
+     * List all cache entries with an expiry date older than the current time.
+     *
+     * @return a closeable iteration with KiWiCacheEntries; needs to be released by the caller
+     * @throws SailException
+     */
+    public CloseableIteration<KiWiCacheEntry,SQLException> listExpired() throws SailException {
+        try {
+            return persistence.listExpired();
+        } catch (SQLException e) {
+            throw new SailException(e);
+        }
+    }
+
+
+    /**
+     * List all cache entries in the database, regardless of expiry date.
+     *
+     * @return a closeable iteration with KiWiCacheEntries; needs to be released by the caller
+     * @throws SailException
+     */
+    public CloseableIteration<KiWiCacheEntry,SQLException> listAll() throws SailException {
+        try {
+            return persistence.listAll();
+        } catch (SQLException e) {
+            throw new SailException(e);
+        }
+    }
+
+}

http://git-wip-us.apache.org/repos/asf/incubator-marmotta/blob/c32963d5/ldcache/ldcache-backend-kiwi/src/main/resources/org/apache/marmotta/kiwi/persistence/h2/create_ldcache_tables.sql
----------------------------------------------------------------------
diff --git a/ldcache/ldcache-backend-kiwi/src/main/resources/org/apache/marmotta/kiwi/persistence/h2/create_ldcache_tables.sql b/ldcache/ldcache-backend-kiwi/src/main/resources/org/apache/marmotta/kiwi/persistence/h2/create_ldcache_tables.sql
new file mode 100644
index 0000000..77f745f
--- /dev/null
+++ b/ldcache/ldcache-backend-kiwi/src/main/resources/org/apache/marmotta/kiwi/persistence/h2/create_ldcache_tables.sql
@@ -0,0 +1,15 @@
+CREATE SEQUENCE seq_ldcache;
+
+CREATE TABLE ldcache_entries (
+  id           bigint     NOT NULL,
+  retrieved_at timestamp  NOT NULL,
+  expires_at   timestamp  NOT NULL,
+  resource_id  bigint     NOT NULL REFERENCES nodes(id),
+  update_count int        NOT NULL DEFAULT 0,
+  PRIMARY KEY(id)
+);
+
+
+CREATE INDEX idx_ldcache_expires ON ldcache_entries(expires_at);
+CREATE INDEX idx_ldcache_resource ON ldcache_entries(resource_id);
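The H2 schema above stores one row per cached resource, with `expires_at` indexed so that the query behind listExpired() stays cheap. Its expiry semantics, selecting every entry whose expires_at lies before the current time, can be sketched without a database (the Row fields are hypothetical mirrors of the table columns):

```java
import java.util.ArrayList;
import java.util.List;

public class ExpirySketch {

    // mirrors the columns of ldcache_entries relevant to expiry
    static class Row {
        final long id;
        final long expiresAt; // epoch millis, column expires_at
        Row(long id, long expiresAt) { this.id = id; this.expiresAt = expiresAt; }
    }

    /** equivalent of selecting all rows with expires_at before "now" */
    static List<Row> listExpired(List<Row> table, long now) {
        List<Row> expired = new ArrayList<>();
        for (Row r : table) {
            if (r.expiresAt < now) {
                expired.add(r);
            }
        }
        return expired;
    }
}
```

In the real schema the `idx_ldcache_expires` index turns this scan into a range lookup, which matters once the cache holds many entries.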
+

http://git-wip-us.apache.org/repos/asf/incubator-marmotta/blob/c32963d5/ldcache/ldcache-backend-kiwi/src/main/resources/org/apache/marmotta/kiwi/persistence/h2/drop_ldcache_tables.sql
----------------------------------------------------------------------
diff --git a/ldcache/ldcache-backend-kiwi/src/main/resources/org/apache/marmotta/kiwi/persistence/h2/drop_ldcache_tables.sql b/ldcache/ldcache-backend-kiwi/src/main/resources/org/apache/marmotta/kiwi/persistence/h2/drop_ldcache_tables.sql
new file mode 100644
index 0000000..45568fe
--- /dev/null
+++ b/ldcache/ldcache-backend-kiwi/src/main/resources/org/apache/marmotta/kiwi/persistence/h2/drop_ldcache_tables.sql
@@ -0,0 +1,6 @@
+DROP INDEX idx_ldcache_expires;
+DROP INDEX idx_ldcache_resource;
+
+DROP TABLE IF EXISTS ldcache_entries;
+
+DROP SEQUENCE IF EXISTS seq_ldcache;

