jackrabbit-oak-commits mailing list archives

From ang...@apache.org
Subject svn commit: r1576011 - in /jackrabbit/oak/trunk/oak-run: ./ src/main/java/org/apache/jackrabbit/oak/benchmark/ src/main/java/org/apache/jackrabbit/oak/fixture/ src/main/java/org/apache/jackrabbit/oak/run/ src/test/java/org/apache/jackrabbit/oak/run/
Date Mon, 10 Mar 2014 17:37:41 GMT
Author: angela
Date: Mon Mar 10 17:37:41 2014
New Revision: 1576011

URL: http://svn.apache.org/r1576011
Log:
OAK-1426 : Cleanup options in oak-run

Modified:
    jackrabbit/oak/trunk/oak-run/README.md
    jackrabbit/oak/trunk/oak-run/src/main/java/org/apache/jackrabbit/oak/benchmark/BenchmarkRunner.java
    jackrabbit/oak/trunk/oak-run/src/main/java/org/apache/jackrabbit/oak/fixture/OakRepositoryFixture.java
    jackrabbit/oak/trunk/oak-run/src/main/java/org/apache/jackrabbit/oak/run/Main.java
    jackrabbit/oak/trunk/oak-run/src/test/java/org/apache/jackrabbit/oak/run/BasicServerTest.java

Modified: jackrabbit/oak/trunk/oak-run/README.md
URL: http://svn.apache.org/viewvc/jackrabbit/oak/trunk/oak-run/README.md?rev=1576011&r1=1576010&r2=1576011&view=diff
==============================================================================
--- jackrabbit/oak/trunk/oak-run/README.md (original)
+++ jackrabbit/oak/trunk/oak-run/README.md Mon Mar 10 17:37:41 2014
@@ -2,14 +2,44 @@ Oak Runnable Jar
 ================
 
 This jar contains everything you need for a simple Oak installation.
-The following three runmodes are available:
+The following runmodes are currently available:
 
-    * Oak server
-    * MicroKernel server
-    * benchmark
+    * backup    : Back up an existing Oak repository.
+    * benchmark : Run benchmark tests against different Oak repository fixtures.
+    * debug     : Print status information about an Oak repository.
+    * upgrade   : Upgrade a Jackrabbit 2.x repository to Oak.
+    * server    : Run the Oak server.
 
 See the subsections below for more details on how to use these modes.
 
+Backup
+------
+
+The 'backup' mode creates a backup of an existing Oak repository. To start this
+mode, use:
+
+    $ java -jar oak-run-*.jar backup /path/to/repository /path/to/backup
+
+
+Debug
+-----
+
+The 'debug' mode prints status information about the specified store. Currently
+this is only supported for the segment store (aka TarMK). To start this mode,
+use:
+
+    $ java -jar oak-run-*.jar debug /path/to/oakrepository [id...]
+
+
+Upgrade
+-------
+
+The 'upgrade' mode upgrades an existing Jackrabbit 2.x installation to Oak.
+To start the upgrade, use:
+
+    $ java -jar oak-run-*.jar upgrade /path/to/jr2repository /path/to/oakrepository
+
+
 Oak server mode
 ---------------
 
@@ -27,23 +57,6 @@ and mapped to URLs under http://localhos
 See the documentation in the `oak-http` component for details about the
 available functionality.
 
-MicroKernel server mode
------------------------
-
-The MicroKernel server mode starts a MicroKernel instance and makes it
-available over HTTP mapping defined in the `oak-mk-remote` component.
-To start this mode, use:
-
-    $ java -jar oak-run-*.jar mk /path/to/mk [port] [bindaddr]
-
-The given path specific the directory that contains the MicroKernel backend.
-The optional `port` and `bindaddr` arguments can be used to control the
-address of the HTTP mapping.
-
-The resulting web interface at http://localhost:8080/ (with default
-`bindaddr` and `port` values) maps simple HTTP forms to the respective
-MicroKernel methods. See the javadocs of the MicroKernel interface for
-more details.
 
 Benchmark mode
 --------------
@@ -109,16 +122,17 @@ Finally the benchmark runner supports th
 | Fixture     | Description                                           |
 |-------------|-------------------------------------------------------|
 | Jackrabbit  | Jackrabbit with the default embedded Derby  bundle PM |
-| Oak-Memory  | Oak with the default MK using in-memory storage       |
-| Oak-Default | Oak with the default MK using embedded H2 database    |
-| Oak-Mongo   | Oak with the new MongoMK                              |
-| Oak-Tar     | Oak with the TarMK                                    |
+| Oak-Memory  | Oak with in-memory storage                            |
+| Oak-Mongo   | Oak with the Mongo backend                            |
+| Oak-Tar     | Oak with the Tar backend (aka Segment NodeStore)      |
+| Oak-H2      | Oak with the MK using embedded H2 database            |
+
 
 Once started, the benchmark runner will execute each listed test case
 against all the listed repository fixtures. After starting up the
 repository and preparing the test environment, the test case is first
 executed a few times to warm up caches before measurements are
-started. Then the test case is run repeatedly for one minute 
+started. Then the test case is run repeatedly for one minute
 and the number of milliseconds used by each execution
 is recorded. Once done, the following statistics are computed and
 reported:

Modified: jackrabbit/oak/trunk/oak-run/src/main/java/org/apache/jackrabbit/oak/benchmark/BenchmarkRunner.java
URL: http://svn.apache.org/viewvc/jackrabbit/oak/trunk/oak-run/src/main/java/org/apache/jackrabbit/oak/benchmark/BenchmarkRunner.java?rev=1576011&r1=1576010&r2=1576011&view=diff
==============================================================================
--- jackrabbit/oak/trunk/oak-run/src/main/java/org/apache/jackrabbit/oak/benchmark/BenchmarkRunner.java (original)
+++ jackrabbit/oak/trunk/oak-run/src/main/java/org/apache/jackrabbit/oak/benchmark/BenchmarkRunner.java Mon Mar 10 17:37:41 2014
@@ -76,11 +76,11 @@ public class BenchmarkRunner {
         OptionSet options = parser.parse(args);
         int cacheSize = cache.value(options);
         RepositoryFixture[] allFixtures = new RepositoryFixture[] {
-                new JackrabbitRepositoryFixture(
-                        base.value(options), cacheSize),
+                new JackrabbitRepositoryFixture(base.value(options), cacheSize),
                 OakRepositoryFixture.getMemory(cacheSize * MB),
-                OakRepositoryFixture.getDefault(
-                        base.value(options), cacheSize * MB),
+                OakRepositoryFixture.getMemoryNS(cacheSize * MB),
+                OakRepositoryFixture.getMemoryMK(cacheSize * MB),
+                OakRepositoryFixture.getH2MK(base.value(options), cacheSize * MB),
                 OakRepositoryFixture.getMongo(
                         host.value(options), port.value(options),
                         dbName.value(options), dropDBAfterTest.value(options),

Modified: jackrabbit/oak/trunk/oak-run/src/main/java/org/apache/jackrabbit/oak/fixture/OakRepositoryFixture.java
URL: http://svn.apache.org/viewvc/jackrabbit/oak/trunk/oak-run/src/main/java/org/apache/jackrabbit/oak/fixture/OakRepositoryFixture.java?rev=1576011&r1=1576010&r2=1576011&view=diff
==============================================================================
--- jackrabbit/oak/trunk/oak-run/src/main/java/org/apache/jackrabbit/oak/fixture/OakRepositoryFixture.java (original)
+++ jackrabbit/oak/trunk/oak-run/src/main/java/org/apache/jackrabbit/oak/fixture/OakRepositoryFixture.java Mon Mar 10 17:37:41 2014
@@ -23,6 +23,7 @@ import javax.jcr.Repository;
 import org.apache.commons.io.FileUtils;
 import org.apache.jackrabbit.api.JackrabbitRepository;
 import org.apache.jackrabbit.mk.api.MicroKernel;
+import org.apache.jackrabbit.oak.plugins.memory.MemoryNodeStore;
 import org.apache.jackrabbit.oak.spi.blob.BlobStore;
 import org.apache.jackrabbit.mk.core.MicroKernelImpl;
 import org.apache.jackrabbit.oak.Oak;
@@ -41,13 +42,30 @@ import org.apache.jackrabbit.oak.plugins
 public abstract class OakRepositoryFixture implements RepositoryFixture {
 
     public static RepositoryFixture getMemory(final long cacheSize) {
-        return new OakRepositoryFixture("Oak-Memory") {
+        return getMemory("Oak-Memory", false, cacheSize);
+    }
+
+    public static RepositoryFixture getMemoryNS(final long cacheSize) {
+        return getMemory("Oak-MemoryNS", false, cacheSize);
+    }
+
+    public static RepositoryFixture getMemoryMK(final long cacheSize) {
+        return getMemory("Oak-MemoryMK", true, cacheSize);
+    }
+
+    private static RepositoryFixture getMemory(String name, final boolean useMK, final long cacheSize) {
+        return new OakRepositoryFixture(name) {
             @Override
             protected Repository[] internalSetUpCluster(int n) throws Exception {
                 Repository[] cluster = new Repository[n];
-                MicroKernel kernel = new MicroKernelImpl();
                 for (int i = 0; i < cluster.length; i++) {
-                    Oak oak = new Oak(new KernelNodeStore(kernel, cacheSize));
+                    Oak oak;
+                    if (useMK) {
+                        MicroKernel kernel = new MicroKernelImpl();
+                        oak = new Oak(new KernelNodeStore(kernel, cacheSize));
+                    } else {
+                        oak = new Oak(new MemoryNodeStore());
+                    }
                     cluster[i] = new Jcr(oak).createRepository();
                 }
                 return cluster;
@@ -55,9 +73,9 @@ public abstract class OakRepositoryFixtu
         };
     }
 
-    public static RepositoryFixture getDefault(
+    public static RepositoryFixture getH2MK(
             final File base, final long cacheSize) {
-        return new OakRepositoryFixture("Oak-Default") {
+        return new OakRepositoryFixture("Oak-H2") {
             private MicroKernelImpl[] kernels;
             @Override
             protected Repository[] internalSetUpCluster(int n) throws Exception {

Modified: jackrabbit/oak/trunk/oak-run/src/main/java/org/apache/jackrabbit/oak/run/Main.java
URL: http://svn.apache.org/viewvc/jackrabbit/oak/trunk/oak-run/src/main/java/org/apache/jackrabbit/oak/run/Main.java?rev=1576011&r1=1576010&r2=1576011&view=diff
==============================================================================
--- jackrabbit/oak/trunk/oak-run/src/main/java/org/apache/jackrabbit/oak/run/Main.java (original)
+++ jackrabbit/oak/trunk/oak-run/src/main/java/org/apache/jackrabbit/oak/run/Main.java Mon Mar 10 17:37:41 2014
@@ -16,23 +16,23 @@
  */
 package org.apache.jackrabbit.oak.run;
 
-import static com.google.common.collect.Sets.newHashSet;
-
 import java.io.File;
+import java.io.IOException;
 import java.io.InputStream;
 import java.util.Collections;
+import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
 import java.util.Properties;
 import java.util.Queue;
 import java.util.Set;
 import java.util.UUID;
-
 import javax.jcr.Repository;
 
+import com.google.common.collect.Maps;
+import com.google.common.collect.Queues;
 import org.apache.jackrabbit.core.RepositoryContext;
 import org.apache.jackrabbit.core.config.RepositoryConfig;
-import org.apache.jackrabbit.mk.api.MicroKernel;
 import org.apache.jackrabbit.mk.core.MicroKernelImpl;
 import org.apache.jackrabbit.oak.Oak;
 import org.apache.jackrabbit.oak.api.ContentRepository;
@@ -41,12 +41,14 @@ import org.apache.jackrabbit.oak.http.Oa
 import org.apache.jackrabbit.oak.jcr.Jcr;
 import org.apache.jackrabbit.oak.kernel.KernelNodeStore;
 import org.apache.jackrabbit.oak.plugins.backup.FileStoreBackup;
+import org.apache.jackrabbit.oak.plugins.memory.MemoryNodeStore;
 import org.apache.jackrabbit.oak.plugins.segment.Segment;
 import org.apache.jackrabbit.oak.plugins.segment.SegmentIdFactory;
 import org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore;
 import org.apache.jackrabbit.oak.plugins.segment.file.FileStore;
 import org.apache.jackrabbit.oak.spi.state.NodeStore;
 import org.apache.jackrabbit.oak.upgrade.RepositoryUpgrade;
+import org.apache.jackrabbit.server.remoting.davex.JcrRemotingServlet;
 import org.apache.jackrabbit.webdav.jcr.JCRWebdavServerServlet;
 import org.apache.jackrabbit.webdav.server.AbstractWebdavServlet;
 import org.apache.jackrabbit.webdav.simple.SimpleWebdavServlet;
@@ -54,8 +56,7 @@ import org.eclipse.jetty.server.Server;
 import org.eclipse.jetty.servlet.ServletContextHandler;
 import org.eclipse.jetty.servlet.ServletHolder;
 
-import com.google.common.collect.Maps;
-import com.google.common.collect.Queues;
+import static com.google.common.collect.Sets.newHashSet;
 
 public class Main {
 
@@ -68,130 +69,33 @@ public class Main {
     public static void main(String[] args) throws Exception {
         printProductInfo();
 
-        String command = "server";
+        Mode mode = Mode.SERVER;
         if (args.length > 0) {
-            command = args[0];
+            mode = Mode.valueOf(args[0].toUpperCase());
             String[] tail = new String[args.length - 1];
             System.arraycopy(args, 1, tail, 0, tail.length);
             args = tail;
         }
-        if ("mk".equals(command)) {
-            MicroKernelServer.main(args);
-        } else if ("benchmark".equals(command)){
-            BenchmarkRunner.main(args);
-        } else if ("server".equals(command)){
-            new HttpServer(URI, args);
-        } else if ("upgrade".equals(command)) {
-            if (args.length == 2) {
-                upgrade(args[0], args[1]);
-            } else {
-                System.err.println("usage: upgrade <olddir> <newdir>");
-                System.exit(1);
-            }
-        } else if ("backup".equals(command)) {
-            if (args.length == 2) {
-                FileStore store = new FileStore(new File(args[0]), 256, false);
-                FileStoreBackup.backup(
-                        new SegmentNodeStore(store), new File(args[1]));
-                store.close();
-            } else {
-                System.err.println("usage: backup <repository> <backup>");
+        switch (mode) {
+            case BACKUP:
+                backup(args);
+                break;
+            case BENCHMARK:
+                BenchmarkRunner.main(args);
+                break;
+            case DEBUG:
+                debug(args);
+                break;
+            case SERVER:
+                server(URI, args);
+                break;
+            case UPGRADE:
+                upgrade(args);
+                break;
+            default:
+                System.err.println("Unknown command: " + mode);
                 System.exit(1);
-            }
-        } else if ("tarmk".equals(command)) {
-            if (args.length == 0) {
-                System.err.println("usage: tarmk <path> [id...]");
-                System.exit(1);
-            } else {
-                System.out.println("TarMK " + args[0]);
-                File file = new File(args[0]);
-                FileStore store = new FileStore(file, 256, false);
-                try {
-                    if (args.length == 1) {
-                        Map<UUID, List<UUID>> idmap = Maps.newHashMap();
-
-                        int dataCount = 0;
-                        long dataSize = 0;
-                        int bulkCount = 0;
-                        long bulkSize = 0;
-                        for (UUID uuid : store.getSegmentIds()) {
-                            if (SegmentIdFactory.isDataSegmentId(uuid)) {
-                                Segment segment = store.readSegment(uuid);
-                                dataCount++;
-                                dataSize += segment.size();
-                                idmap.put(uuid, segment.getReferencedIds());
-                            } else if (SegmentIdFactory.isBulkSegmentId(uuid)) {
-                                bulkCount++;
-                                bulkSize += store.readSegment(uuid).size();
-                                idmap.put(uuid, Collections.<UUID>emptyList());
-                            }
-                        }
-                        System.out.println("Total size:");
-                        System.out.format(
-                                "%6dMB in %6d data segments%n",
-                                dataSize / (1024 * 1024), dataCount);
-                        System.out.format(
-                                "%6dMB in %6d bulk segments%n",
-                                bulkSize / (1024 * 1024), bulkCount);
-
-                        Set<UUID> garbage = newHashSet(idmap.keySet());
-                        Queue<UUID> queue = Queues.newArrayDeque();
-                        queue.add(store.getHead().getRecordId().getSegmentId());
-                        while (!queue.isEmpty()) {
-                            UUID id = queue.remove();
-                            if (garbage.remove(id)) {
-                                queue.addAll(idmap.get(id));
-                            }
-                        }
-                        dataCount = 0;
-                        dataSize = 0;
-                        bulkCount = 0;
-                        bulkSize = 0;
-                        for (UUID uuid : garbage) {
-                            if (SegmentIdFactory.isDataSegmentId(uuid)) {
-                                dataCount++;
-                                dataSize += store.readSegment(uuid).size();
-                            } else if (SegmentIdFactory.isBulkSegmentId(uuid)) {
-                                bulkCount++;
-                                bulkSize += store.readSegment(uuid).size();
-                            }
-                        }
-                        System.out.println("Available for garbage collection:");
-                        System.out.format(
-                                "%6dMB in %6d data segments%n",
-                                dataSize / (1024 * 1024), dataCount);
-                        System.out.format(
-                                "%6dMB in %6d bulk segments%n",
-                                bulkSize / (1024 * 1024), bulkCount);
-                    } else {
-                        for (int i = 1; i < args.length; i++) {
-                            UUID uuid = UUID.fromString(args[i]);
-                            System.out.println(store.readSegment(uuid));
-                        }
-                    }
-                } finally {
-                    store.close();
-                }
-            }
-        } else {
-            System.err.println("Unknown command: " + command);
-            System.exit(1);
-        }
-    }
 
-    private static void upgrade(String olddir, String newdir) throws Exception {
-        RepositoryContext source = RepositoryContext.create(
-                RepositoryConfig.create(new File(olddir)));
-        try {
-            FileStore store = new FileStore(new File(newdir), 256, true);
-            try {
-                NodeStore target = new SegmentNodeStore(store);
-                new RepositoryUpgrade(source, target).copy();
-            } finally {
-                store.close();
-            }
-        } finally {
-            source.getRepository().shutdown();
         }
     }
 
@@ -223,15 +127,153 @@ public class Main {
         System.out.println(product);
     }
 
+    private static void backup(String[] args) throws IOException {
+        if (args.length == 2) {
+            // TODO: enable backup for other node store implementations
+            FileStore store = new FileStore(new File(args[0]), 256, false);
+            FileStoreBackup.backup(new SegmentNodeStore(store), new File(args[1]));
+            store.close();
+        } else {
+            System.err.println("usage: backup <repository> <backup>");
+            System.exit(1);
+        }
+    }
+
+    private static void debug(String[] args) throws IOException {
+        if (args.length == 0) {
+            System.err.println("usage: debug <path> [id...]");
+            System.exit(1);
+        } else {
+            // TODO: enable debug information for other node store implementations
+            System.out.println("Debug " + args[0]);
+            File file = new File(args[0]);
+            FileStore store = new FileStore(file, 256, false);
+            try {
+                if (args.length == 1) {
+                    Map<UUID, List<UUID>> idmap = Maps.newHashMap();
+
+                    int dataCount = 0;
+                    long dataSize = 0;
+                    int bulkCount = 0;
+                    long bulkSize = 0;
+                    for (UUID uuid : store.getSegmentIds()) {
+                        if (SegmentIdFactory.isDataSegmentId(uuid)) {
+                            Segment segment = store.readSegment(uuid);
+                            dataCount++;
+                            dataSize += segment.size();
+                            idmap.put(uuid, segment.getReferencedIds());
+                        } else if (SegmentIdFactory.isBulkSegmentId(uuid)) {
+                            bulkCount++;
+                            bulkSize += store.readSegment(uuid).size();
+                            idmap.put(uuid, Collections.<UUID>emptyList());
+                        }
+                    }
+                    System.out.println("Total size:");
+                    System.out.format(
+                            "%6dMB in %6d data segments%n",
+                            dataSize / (1024 * 1024), dataCount);
+                    System.out.format(
+                            "%6dMB in %6d bulk segments%n",
+                            bulkSize / (1024 * 1024), bulkCount);
+
+                    Set<UUID> garbage = newHashSet(idmap.keySet());
+                    Queue<UUID> queue = Queues.newArrayDeque();
+                    queue.add(store.getHead().getRecordId().getSegmentId());
+                    while (!queue.isEmpty()) {
+                        UUID id = queue.remove();
+                        if (garbage.remove(id)) {
+                            queue.addAll(idmap.get(id));
+                        }
+                    }
+                    dataCount = 0;
+                    dataSize = 0;
+                    bulkCount = 0;
+                    bulkSize = 0;
+                    for (UUID uuid : garbage) {
+                        if (SegmentIdFactory.isDataSegmentId(uuid)) {
+                            dataCount++;
+                            dataSize += store.readSegment(uuid).size();
+                        } else if (SegmentIdFactory.isBulkSegmentId(uuid)) {
+                            bulkCount++;
+                            bulkSize += store.readSegment(uuid).size();
+                        }
+                    }
+                    System.out.println("Available for garbage collection:");
+                    System.out.format(
+                            "%6dMB in %6d data segments%n",
+                            dataSize / (1024 * 1024), dataCount);
+                    System.out.format(
+                            "%6dMB in %6d bulk segments%n",
+                            bulkSize / (1024 * 1024), bulkCount);
+                } else {
+                    for (int i = 1; i < args.length; i++) {
+                        UUID uuid = UUID.fromString(args[i]);
+                        System.out.println(store.readSegment(uuid));
+                    }
+                }
+            } finally {
+                store.close();
+            }
+        }
+    }
+
+    private static void upgrade(String[] args) throws Exception {
+        if (args.length == 2) {
+            RepositoryContext source = RepositoryContext.create(RepositoryConfig.create(new File(args[0])));
+            try {
+                FileStore store = new FileStore(new File(args[1]), 256, true);
+                try {
+                    NodeStore target = new SegmentNodeStore(store);
+                    new RepositoryUpgrade(source, target).copy();
+                } finally {
+                    store.close();
+                }
+            } finally {
+                source.getRepository().shutdown();
+            }
+        } else {
+            System.err.println("usage: upgrade <olddir> <newdir>");
+            System.exit(1);
+        }
+    }
+
+    private static void server(String uri, String[] args) throws Exception {
+        // TODO add support for different repo implementations (see fixtures for benchmarks)
+        Map<NodeStore, String> storeMap;
+        if (args.length == 0) {
+            System.out.println("Starting an in-memory repository");
+            System.out.println(uri + " -> [memory]");
+            NodeStore store = new MemoryNodeStore();
+            storeMap = Collections.singletonMap(store, "");
+        } else if (args.length == 1) {
+            System.out.println("Starting a standalone repository");
+            System.out.println(uri + " -> " + args[0]);
+            NodeStore store = new KernelNodeStore(new MicroKernelImpl(args[0]));
+            storeMap = Collections.singletonMap(store, "");
+        } else {
+            System.out.println("Starting a clustered repository");
+            storeMap = new HashMap<NodeStore, String>(args.length);
+            for (int i = 0; i < args.length; i++) {
+                // FIXME: Use a clustered MicroKernel implementation
+                System.out.println(uri + "/node" + i + "/ -> " + args[i]);
+                KernelNodeStore store = new KernelNodeStore(new MicroKernelImpl(args[i]));
+                storeMap.put(store, "/node" + i);
+            }
+        }
+        new HttpServer(uri, storeMap);
+    }
+
     public static class HttpServer {
 
         private final ServletContextHandler context;
 
         private final Server server;
 
-        private final MicroKernel[] kernels;
+        public HttpServer(String uri) throws Exception {
+            this(uri, Collections.singletonMap(new MemoryNodeStore(), ""));
+        }
 
-        public HttpServer(String uri, String[] args) throws Exception {
+        public HttpServer(String uri, Map<? extends NodeStore, String> storeMap) throws Exception {
             int port = java.net.URI.create(uri).getPort();
             if (port == -1) {
                 // use default
@@ -241,25 +283,8 @@ public class Main {
             context = new ServletContextHandler();
             context.setContextPath("/");
 
-            if (args.length == 0) {
-                System.out.println("Starting an in-memory repository");
-                System.out.println(uri + " -> [memory]");
-                kernels = new MicroKernel[] { new MicroKernelImpl() };
-                addServlets(new KernelNodeStore(kernels[0]), "");
-            } else if (args.length == 1) {
-                System.out.println("Starting a standalone repository");
-                System.out.println(uri + " -> " + args[0]);
-                kernels = new MicroKernel[] { new MicroKernelImpl(args[0]) };
-                addServlets(new KernelNodeStore(kernels[0]), "");
-            } else {
-                System.out.println("Starting a clustered repository");
-                kernels = new MicroKernel[args.length];
-                for (int i = 0; i < args.length; i++) {
-                    // FIXME: Use a clustered MicroKernel implementation
-                    System.out.println(uri + "/node" + i + "/ -> " + args[i]);
-                    kernels[i] = new MicroKernelImpl(args[i]);
-                    addServlets(new KernelNodeStore(kernels[i]), "/node" + i);
-                }
+            for (Map.Entry<? extends NodeStore, String> entry : storeMap.entrySet()) {
+                addServlets(entry.getKey(), entry.getValue());
             }
 
             server = new Server(port);
@@ -279,45 +304,54 @@ public class Main {
             Oak oak = new Oak(store);
             Jcr jcr = new Jcr(oak);
 
+            // 1 - OakServer
             ContentRepository repository = oak.createContentRepository();
-
-            ServletHolder holder =
-                    new ServletHolder(new OakServlet(repository));
+            ServletHolder holder = new ServletHolder(new OakServlet(repository));
             context.addServlet(holder, path + "/*");
 
+            // 2 - Webdav Server on JCR repository
             final Repository jcrRepository = jcr.createRepository();
-
-            ServletHolder webdav =
-                    new ServletHolder(new SimpleWebdavServlet() {
-                        @Override
-                        public Repository getRepository() {
-                            return jcrRepository;
-                        }
-                    });
-            webdav.setInitParameter(
-                    SimpleWebdavServlet.INIT_PARAM_RESOURCE_PATH_PREFIX,
-                    path + "/webdav");
-            webdav.setInitParameter(
-                    AbstractWebdavServlet.INIT_PARAM_AUTHENTICATE_HEADER,
-                    "Basic realm=\"Oak\"");
+            ServletHolder webdav = new ServletHolder(new SimpleWebdavServlet() {
+                @Override
+                public Repository getRepository() {
+                    return jcrRepository;
+                }
+            });
+            webdav.setInitParameter(SimpleWebdavServlet.INIT_PARAM_RESOURCE_PATH_PREFIX, path + "/webdav");
+            webdav.setInitParameter(AbstractWebdavServlet.INIT_PARAM_AUTHENTICATE_HEADER, "Basic realm=\"Oak\"");
             context.addServlet(webdav, path + "/webdav/*");
 
-            ServletHolder davex =
-                    new ServletHolder(new JCRWebdavServerServlet() {
-                        @Override
-                        protected Repository getRepository() {
-                            return jcrRepository;
-                        }
-                    });
-            davex.setInitParameter(
-                    JCRWebdavServerServlet.INIT_PARAM_RESOURCE_PATH_PREFIX,
-                    path + "/davex");
-            webdav.setInitParameter(
-                    AbstractWebdavServlet.INIT_PARAM_AUTHENTICATE_HEADER,
-                    "Basic realm=\"Oak\"");
-            context.addServlet(davex, path + "/davex/*");
+            // 3 - JCR Remoting Server
+            ServletHolder jcrremote = new ServletHolder(new JcrRemotingServlet() {
+                @Override
+                protected Repository getRepository() {
+                    return jcrRepository;
+                }
+            });
+            jcrremote.setInitParameter(JCRWebdavServerServlet.INIT_PARAM_RESOURCE_PATH_PREFIX, path + "/jcrremote");
+            jcrremote.setInitParameter(AbstractWebdavServlet.INIT_PARAM_AUTHENTICATE_HEADER, "Basic realm=\"Oak\"");
+            context.addServlet(jcrremote, path + "/jcrremote/*");
         }
 
     }
 
+    public enum Mode {
+
+        BACKUP("backup"),
+        BENCHMARK("benchmark"),
+        DEBUG("debug"),
+        SERVER("server"),
+        UPGRADE("upgrade");
+
+        private final String name;
+
+        private Mode(String name) {
+            this.name = name;
+        }
+
+        @Override
+        public String toString() {
+            return name;
+        }
+    }
 }
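The string-based command dispatch in Main is replaced above by an enum. The pattern can be sketched in isolation; the class and method names below are illustrative only, not part of the commit:

```java
// Minimal sketch of the new command dispatch: the first CLI argument
// is mapped to a Mode constant via valueOf, defaulting to SERVER.
public class ModeDemo {

    enum Mode {
        BACKUP("backup"), BENCHMARK("benchmark"), DEBUG("debug"),
        SERVER("server"), UPGRADE("upgrade");

        private final String name;

        Mode(String name) { this.name = name; }

        @Override
        public String toString() { return name; }
    }

    // Mirrors Main.main: no argument means "server" mode.
    static Mode parse(String[] args) {
        if (args.length == 0) {
            return Mode.SERVER;
        }
        return Mode.valueOf(args[0].toUpperCase());
    }

    public static void main(String[] args) {
        System.out.println(parse(new String[] {"debug", "/path/to/repo"}));  // prints: debug
        System.out.println(parse(new String[0]));                            // prints: server
    }
}
```

Note that `Mode.valueOf` throws an `IllegalArgumentException` for an unrecognized command, so the `default` branch of the new switch is effectively unreachable.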

Modified: jackrabbit/oak/trunk/oak-run/src/test/java/org/apache/jackrabbit/oak/run/BasicServerTest.java
URL: http://svn.apache.org/viewvc/jackrabbit/oak/trunk/oak-run/src/test/java/org/apache/jackrabbit/oak/run/BasicServerTest.java?rev=1576011&r1=1576010&r2=1576011&view=diff
==============================================================================
--- jackrabbit/oak/trunk/oak-run/src/test/java/org/apache/jackrabbit/oak/run/BasicServerTest.java (original)
+++ jackrabbit/oak/trunk/oak-run/src/test/java/org/apache/jackrabbit/oak/run/BasicServerTest.java Mon Mar 10 17:37:41 2014
@@ -44,7 +44,7 @@ public class BasicServerTest {
 
     @Before
     public void startServer() throws Exception {
-        server = new Main.HttpServer(SERVER_URL, new String[0]);
+        server = new Main.HttpServer(SERVER_URL);
     }
 
     @After
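The reachability pass inside the new `debug` command (marking every segment not reachable from the head segment as a garbage-collection candidate) can be sketched independently of the segment store. The graph and ids below are hypothetical stand-ins for segment UUIDs:

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.Set;

public class GarbageDemo {

    // Breadth-first walk from the head id over the id -> referenced-ids
    // map; whatever is never visited is garbage, as in Main.debug().
    static Set<String> findGarbage(Map<String, List<String>> idmap, String head) {
        Set<String> garbage = new HashSet<>(idmap.keySet());
        Queue<String> queue = new ArrayDeque<>();
        queue.add(head);
        while (!queue.isEmpty()) {
            String id = queue.remove();
            if (garbage.remove(id)) {
                queue.addAll(idmap.getOrDefault(id, List.of()));
            }
        }
        return garbage;
    }

    public static void main(String[] args) {
        // Hypothetical segment graph: "orphan" is unreachable from "head".
        Map<String, List<String>> idmap = new HashMap<>();
        idmap.put("head", List.of("a", "b"));
        idmap.put("a", List.of());
        idmap.put("b", List.of("a"));
        idmap.put("orphan", List.of());
        System.out.println(findGarbage(idmap, "head"));  // prints: [orphan]
    }
}
```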


