incubator-blur-dev mailing list archives

From Colton McInroy <col...@dosarrest.com>
Subject Re: Creating/Managing tables saying table already exists
Date Sun, 29 Sep 2013 06:29:37 GMT
Ok, well I think I may have found it, but I am getting an exception...

             String controllers = "127.0.0.1:40010";
             client = BlurClient.getClient(controllers);
             try {
System.out.println("Tables\n-----------------------------------");
                 List<String> tables = client.tableList();
                 for (String table : tables) {
                     System.out.println(table+"\n");
                 }
                 if (!tables.contains("test")) {
                     System.out.println("missing test table");
                     TableDescriptor tabledesc = new TableDescriptor();
                     tabledesc.blockCaching = true;
                     tabledesc.name = "test";
                     tabledesc.shardCount = 11;
                     tabledesc.tableUri = "file:///tmp/testtable";
                     client.createTable(tabledesc);
System.out.println("Tables\n-----------------------------------");
                     tables = client.tableList();
                     for (String table : tables) {
                         System.out.println(table+"\n");
                     }
                 }
             } catch (BlurException e) {
                 // TODO Auto-generated catch block
                 e.printStackTrace();
             } catch (TException e) {
                 // TODO Auto-generated catch block
                 e.printStackTrace();
             }
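
(For completeness, the snippet above assumes roughly the following imports and client field. The package names are inferred from the stack trace below, so treat this as a sketch rather than something verified against the 0.2.0-incubating jars.)

    import java.util.List;

    import org.apache.blur.thirdparty.thrift_0_9_0.TException;
    import org.apache.blur.thrift.BlurClient;
    import org.apache.blur.thrift.generated.Blur;
    import org.apache.blur.thrift.generated.BlurException;
    import org.apache.blur.thrift.generated.TableDescriptor;

    // "client" is the Thrift proxy handed back by BlurClient.getClient(...)
    private static Blur.Iface client;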

     This is the exception I receive though...

BlurException(message:java.io.IOException: Table [test] already exists., stackTraceStr:java.lang.RuntimeException: java.io.IOException: Table [test] already exists.
        at org.apache.blur.manager.clusterstatus.ZookeeperClusterStatus.createTable(ZookeeperClusterStatus.java:744)
        at org.apache.blur.thrift.TableAdmin.createTable(TableAdmin.java:101)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.blur.utils.BlurUtil$1.invoke(BlurUtil.java:183)
        at com.sun.proxy.$Proxy0.createTable(Unknown Source)
        at org.apache.blur.thrift.generated.Blur$Processor$createTable.getResult(Blur.java:2402)
        at org.apache.blur.thrift.generated.Blur$Processor$createTable.getResult(Blur.java:2386)
        at org.apache.blur.thirdparty.thrift_0_9_0.ProcessFunction.process(ProcessFunction.java:54)
        at org.apache.blur.thirdparty.thrift_0_9_0.TBaseProcessor.process(TBaseProcessor.java:57)
        at org.apache.blur.thrift.server.AbstractNonblockingServer$FrameBuffer.invoke(AbstractNonblockingServer.java:515)
        at org.apache.blur.thrift.server.Invocation.run(Invocation.java:34)
        at org.apache.blur.concurrent.ThreadWatcher$ThreadWatcherExecutorService$1.run(ThreadWatcher.java:127)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:724)
Caused by: java.io.IOException: Table [test] already exists.
        at org.apache.blur.manager.clusterstatus.ZookeeperClusterStatus.createTable(ZookeeperClusterStatus.java:722)
        ... 17 more
, errorType:UNKNOWN)
        at org.apache.blur.thrift.generated.Blur$createTable_result$createTable_resultStandardScheme.read(Blur.java:3818)
        at org.apache.blur.thrift.generated.Blur$createTable_result$createTable_resultStandardScheme.read(Blur.java:3804)
        at org.apache.blur.thrift.generated.Blur$createTable_result.read(Blur.java:3754)
        at org.apache.blur.thirdparty.thrift_0_9_0.TServiceClient.receiveBase(TServiceClient.java:78)
        at org.apache.blur.thrift.generated.Blur$Client.recv_createTable(Blur.java:458)
        at org.apache.blur.thrift.generated.Blur$Client.createTable(Blur.java:445)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.blur.thrift.BlurClient$BlurClientInvocationHandler$1.call(BlurClient.java:59)
        at org.apache.blur.thrift.BlurClient$BlurClientInvocationHandler$1.call(BlurClient.java:55)
        at org.apache.blur.thrift.AbstractCommand.call(AbstractCommand.java:62)
        at org.apache.blur.thrift.BlurClientManager.execute(BlurClientManager.java:167)
        at org.apache.blur.thrift.BlurClient$BlurClientInvocationHandler.invoke(BlurClient.java:55)
        at com.sun.proxy.$Proxy0.createTable(Unknown Source)
        at net.dosarrest.disparser.Main.main(Main.java:142)

     This looks the same as the exception I was getting earlier when the 
/etc/hosts file didn't have an entry for the local hostname, but creating a 
table works fine from the blur shell, so I'm thinking perhaps I am 
missing something in the table create process here. That said, when I 
open the blur shell, I see that the table exists and the files in 
/tmp/testtable have been created.
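
Since the table does show up in the shell and the shard directories get created, my current workaround idea is to treat the create as idempotent on the client side, i.e. re-check tableList() and swallow the "already exists" BlurException instead of treating it as fatal. A minimal sketch of that (the helper name and the message check are mine, not part of the Blur API):

    static void createTableIfMissing(Blur.Iface client, TableDescriptor desc)
            throws BlurException, TException {
        try {
            if (client.tableList().contains(desc.name)) {
                return; // table is already registered, nothing to do
            }
            client.createTable(desc);
        } catch (BlurException e) {
            // If a concurrent create (or an internal retry) won the race,
            // ignore the duplicate-create error and carry on.
            String msg = e.getMessage();
            if (msg != null && msg.contains("already exists")) {
                return;
            }
            throw e;
        }
    }

For the name-resolution side of it, the logs below show the host registering as blur/127.0.0.1, so an /etc/hosts line like "127.0.0.1   localhost blur" is the sort of entry that was missing before.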

*Controller Log*
------------------------------------------------------------------------

INFO  20130928_23:17:19:681_PDT [main] 
thrift.ThriftBlurControllerServer: Setting up Controller Server
INFO  20130928_23:17:19:745_PDT [main] thrift.ThriftServer: ulimit: core 
file size          (blocks, -c) 0
INFO  20130928_23:17:19:745_PDT [main] thrift.ThriftServer: ulimit: data 
seg size           (kbytes, -d) unlimited
INFO  20130928_23:17:19:745_PDT [main] thrift.ThriftServer: ulimit: 
scheduling priority             (-e) 0
INFO  20130928_23:17:19:745_PDT [main] thrift.ThriftServer: ulimit: file 
size               (blocks, -f) unlimited
INFO  20130928_23:17:19:745_PDT [main] thrift.ThriftServer: ulimit: 
pending signals                 (-i) 31635
INFO  20130928_23:17:19:745_PDT [main] thrift.ThriftServer: ulimit: max 
locked memory       (kbytes, -l) 64
INFO  20130928_23:17:19:745_PDT [main] thrift.ThriftServer: ulimit: max 
memory size         (kbytes, -m) unlimited
INFO  20130928_23:17:19:746_PDT [main] thrift.ThriftServer: ulimit: open 
files                      (-n) 4096
INFO  20130928_23:17:19:746_PDT [main] thrift.ThriftServer: ulimit: pipe 
size            (512 bytes, -p) 8
INFO  20130928_23:17:19:746_PDT [main] thrift.ThriftServer: ulimit: 
POSIX message queues     (bytes, -q) 819200
INFO  20130928_23:17:19:746_PDT [main] thrift.ThriftServer: ulimit: 
real-time priority              (-r) 0
INFO  20130928_23:17:19:746_PDT [main] thrift.ThriftServer: ulimit: 
stack size              (kbytes, -s) 8192
INFO  20130928_23:17:19:746_PDT [main] thrift.ThriftServer: ulimit: cpu 
time               (seconds, -t) unlimited
INFO  20130928_23:17:19:746_PDT [main] thrift.ThriftServer: ulimit: max 
user processes              (-u) 31635
INFO  20130928_23:17:19:746_PDT [main] thrift.ThriftServer: ulimit: 
virtual memory          (kbytes, -v) unlimited
INFO  20130928_23:17:19:746_PDT [main] thrift.ThriftServer: ulimit: file 
locks                      (-x) unlimited
INFO  20130928_23:17:19:989_PDT [main] 
thrift.ThriftBlurControllerServer: Shard Server using index [0] bind 
address [0.0.0.0:40010]
INFO  20130928_23:17:20:067_PDT [main] zookeeper.ZooKeeper: Client 
environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
INFO  20130928_23:17:20:067_PDT [main] zookeeper.ZooKeeper: Client 
environment:host.name=blur
INFO  20130928_23:17:20:067_PDT [main] zookeeper.ZooKeeper: Client 
environment:java.version=1.7.0_40
INFO  20130928_23:17:20:067_PDT [main] zookeeper.ZooKeeper: Client 
environment:java.vendor=Oracle Corporation
INFO  20130928_23:17:20:068_PDT [main] zookeeper.ZooKeeper: Client 
environment:java.home=/opt/icedtea-bin-7.2.4.1/jre
INFO  20130928_23:17:20:068_PDT [main] zookeeper.ZooKeeper: Client 
environment:java.class.path=/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../conf:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-core-0.2.0-incubating.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-gui-0.2.0-incubating.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-mapred-0.2.0-incubating.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-query-0.2.0-incubating.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-shell-0.2.0-incubating.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-store-0.2.0-incubating.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-thrift-0.2.0-incubating.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-util-0.2.0-incubating.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/commons-cli-1.2.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/commons-logging-1.1.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/concurrentlinkedhashmap-lru-1.3.2.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/guava-14.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/httpclient-4.1.3.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/httpcore-4.1.4.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/jackson-annotations-2.1.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/jackson-core-2.1.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/jackson-databind-2.1.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/jline-2.10.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/json-20090211.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/log4j-1.2.15.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-analyzers-common-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-codecs-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-core-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-highlighter-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-memory-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-queries-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-queryparser-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-sandbox-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-spatial-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/metrics-core-2.2.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/metrics-ganglia-2.2.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/metrics-graphite-2.2.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/metrics-servlet-2.2.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/slf4j-api-1.6.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/slf4j-log4j12-1.6.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/spatial4j-0.3.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/zookeeper-3.4.5.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-gui-0.2.0-incubating.war:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//conf:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//hadoop-core-1.2.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/activation-1.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.
2.1//lib/ant-1.6.5.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/asm-3.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-beanutils-1.7.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-beanutils-core-1.8.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-codec-1.4.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-collections-3.2.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-configuration-1.6.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-digester-1.8.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-el-1.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-httpclient-3.0.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-io-2.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-lang-2.4.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-logging-1.1.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-math-2.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-net-1.4.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/hsqldb-1.8.0.10.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jackson-core-asl-1.7.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jackson-jaxrs-1.7.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jackson-mapper-asl-1.8.8.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jackson-xc-1.7.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jakarta-regexp-1.4.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jasper-compiler-5.5.12.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jasper-runtime-5.5.12.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jaxb-api-2.2.2.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jaxb-impl-2.2.3-1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jersey-core-1.8.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jersey-json-1.8.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jersey-server-1.8.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jets3t-0.6.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jettison-1.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jetty-6.1.26.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jetty-util-6.1.26.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jsp-2.1-6.1.14.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jsp-api-2.1-6.1.14.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/netty-3.2.2.Final.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/oro-2.0.8.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/servlet-api-2.5-20081211.jar:/home/hadoop/ap
ache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/stax-api-1.0-2.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/stax-api-1.0.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/xmlenc-0.52.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jsp-*/*.jar
INFO  20130928_23:17:20:068_PDT [main] zookeeper.ZooKeeper: Client 
environment:java.library.path=
INFO  20130928_23:17:20:073_PDT [main] zookeeper.ZooKeeper: Client 
environment:java.io.tmpdir=/tmp
INFO  20130928_23:17:20:073_PDT [main] zookeeper.ZooKeeper: Client 
environment:java.compiler=<NA>
INFO  20130928_23:17:20:073_PDT [main] zookeeper.ZooKeeper: Client 
environment:os.name=Linux
INFO  20130928_23:17:20:073_PDT [main] zookeeper.ZooKeeper: Client 
environment:os.arch=amd64
INFO  20130928_23:17:20:073_PDT [main] zookeeper.ZooKeeper: Client 
environment:os.version=3.10.7-gentoo
INFO  20130928_23:17:20:074_PDT [main] zookeeper.ZooKeeper: Client 
environment:user.name=hadoop
INFO  20130928_23:17:20:074_PDT [main] zookeeper.ZooKeeper: Client 
environment:user.home=/home/hadoop
INFO  20130928_23:17:20:074_PDT [main] zookeeper.ZooKeeper: Client 
environment:user.dir=/home/hadoop
INFO  20130928_23:17:20:075_PDT [main] zookeeper.ZooKeeper: Initiating 
client connection, connectString=127.0.0.1 sessionTimeout=90000 
watcher=org.apache.blur.zookeeper.ZkUtils$ConnectionWatcher@52fe10f1
INFO  20130928_23:17:20:129_PDT [main-SendThread(blur:2181)] 
zookeeper.ClientCnxn: Opening socket connection to server 
blur/127.0.0.1:2181. Will not attempt to authenticate using SASL 
(unknown error)
INFO  20130928_23:17:20:169_PDT [main-SendThread(blur:2181)] 
zookeeper.ClientCnxn: Socket connection established to 
blur/127.0.0.1:2181, initiating session
INFO  20130928_23:17:20:242_PDT [main-SendThread(blur:2181)] 
zookeeper.ClientCnxn: Session establishment complete on server 
blur/127.0.0.1:2181, sessionid = 0x141685d2dec0000, negotiated timeout = 
90000
INFO  20130928_23:17:20:256_PDT [main-EventThread] zookeeper.ZkUtils: 
ZooKeeper [127.0.0.1] timeout [90,000] changed to [SyncConnected] state
WARN  20130928_23:17:21:473_PDT [main] zookeeper.WatchNodeExistance: 
Closing [7bc256fa-fc9e-4150-9adb-037a753c142e]
WARN  20130928_23:17:21:480_PDT [main] zookeeper.WatchNodeExistance: 
Closing [573a5271-f5bc-405d-9457-996730e9982a]
INFO  20130928_23:17:21:588_PDT [Watch Children 
[/blur/clusters/default/online-nodes][f688b550-e331-4051-ae94-7a61d7f297ed]] 
thrift.BlurControllerServer: Layout change.
INFO  20130928_23:17:21:589_PDT [Watch Children 
[/blur/clusters/default/tables][257d40b6-3cee-43c7-a1b7-358909bba809]] 
thrift.BlurControllerServer: Layout change.
INFO  20130928_23:17:21:949_PDT [main] mortbay.log: Logging to 
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via 
org.mortbay.log.Slf4jLog
INFO  20130928_23:17:21:949_PDT [main] gui.HttpJettyServer: System 
props:{java.runtime.name=OpenJDK Runtime Environment, 
blur.name=controller-server-blur-0, blur.base.controller.port=40010, 
sun.boot.library.path=/opt/icedtea-bin-7.2.4.1/jre/lib/amd64, 
java.vm.version=24.0-b50, baseGuiShardPort=40090, java.vm.vendor=Oracle 
Corporation, java.vendor.url=http://java.oracle.com/, path.separator=:, 
baseGuiControllerPort=40080, java.vm.name=OpenJDK 64-Bit Server VM, 
file.encoding.pkg=sun.io, user.country=US, 
sun.java.launcher=SUN_STANDARD, sun.os.patch.level=unknown, 
java.vm.specification.name=Java Virtual Machine Specification, 
user.dir=/home/hadoop, java.runtime.version=1.7.0_40-b31, 
java.awt.graphicsenv=sun.awt.X11GraphicsEnvironment, 
java.endorsed.dirs=/opt/icedtea-bin-7.2.4.1/jre/lib/endorsed, 
os.arch=amd64, java.io.tmpdir=/tmp, line.separator=
, java.vm.specification.vendor=Oracle Corporation, 
blur.base.shard.port=40020, os.name=Linux, blur.gui.mode=controller, 
sun.jnu.encoding=ANSI_X3.4-1968, java.library.path=, 
blur.logs.dir=/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../logs, 
java.specification.name=Java Platform API Specification, 
java.class.version=51.0, java.net.preferIPv4Stack=true, 
sun.management.compiler=HotSpot 64-Bit Tiered Compilers, 
os.version=3.10.7-gentoo, user.home=/home/hadoop, 
user.timezone=Canada/Pacific, 
java.awt.printerjob=sun.print.PSPrinterJob, 
file.encoding=ANSI_X3.4-1968, java.specification.version=1.7, 
java.class.path=/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../conf:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-core-0.2.0-incubating.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-gui-0.2.0-incubating.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-mapred-0.2.0-incubating.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-query-0.2.0-incubating.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-shell-0.2.0-incubating.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-store-0.2.0-incubating.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-thrift-0.2.0-incubating.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-util-0.2.0-incubating.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/commons-cli-1.2.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/commons-logging-1.1.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/concurrentlinkedhashmap-lru-1.3.2.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/guava-14.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/httpclient-4.1.3.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/httpcore-4.1.4.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/jackson-annotations-2.1.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/jackson-core-2.1.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/jackson-databind-2.1.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/jline-2.10.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/json-20090211.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/log4j-1.2.15.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-analyzers-common-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-codecs-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-core-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-highlighter-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-memory-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-queries-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-queryparser-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-sandbox-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-spatial-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/metrics-core-2.2.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/metrics-ganglia-2.2.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/metrics-graphite-2.2.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/metrics-servlet-2.2.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/slf4j-api-1.6.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/slf4j-log4j12-1.6.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/spatial4j-0.3.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/zookeeper-3.4.5.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-gui-0.2.0-incubating.war:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//conf:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//hadoop-core-1.2.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/activation-1.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/ant
-1.6.5.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/asm-3.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-beanutils-1.7.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-beanutils-core-1.8.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-codec-1.4.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-collections-3.2.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-configuration-1.6.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-digester-1.8.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-el-1.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-httpclient-3.0.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-io-2.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-lang-2.4.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-logging-1.1.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-math-2.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-net-1.4.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/hsqldb-1.8.0.10.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jackson-core-asl-1.7.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jackson-jaxrs-1.7.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jackson-mapper-asl-1.8.8.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jackson-xc-1.7.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jakarta-regexp-1.4.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jasper-compiler-5.5.12.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jasper-runtime-5.5.12.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jaxb-api-2.2.2.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jaxb-impl-2.2.3-1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jersey-core-1.8.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jersey-json-1.8.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jersey-server-1.8.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jets3t-0.6.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jettison-1.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jetty-6.1.26.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jetty-util-6.1.26.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jsp-2.1-6.1.14.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jsp-api-2.1-6.1.14.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/netty-3.2.2.Final.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/oro-2.0.8.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/servlet-api-2.5-20081211.jar:/home/hadoop/apache-blur-0.
2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/stax-api-1.0-2.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/stax-api-1.0.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/xmlenc-0.52.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jsp-*/*.jar, 
user.name=hadoop, java.vm.specification.version=1.7, 
sun.java.command=org.apache.blur.thrift.ThriftBlurControllerServer -s 0, 
java.home=/opt/icedtea-bin-7.2.4.1/jre, sun.arch.data.model=64, 
user.language=en, java.specification.vendor=Oracle Corporation, 
blur.gui.servicing.port=40010, awt.toolkit=sun.awt.X11.XToolkit, 
java.vm.info=mixed mode, java.version=1.7.0_40, 
java.ext.dirs=/opt/icedtea-bin-7.2.4.1/jre/lib/ext:/usr/java/packages/lib/ext, 
blur-controller-0=, 
sun.boot.class.path=/opt/icedtea-bin-7.2.4.1/jre/lib/resources.jar:/opt/icedtea-bin-7.2.4.1/jre/lib/rt.jar:/opt/icedtea-bin-7.2.4.1/jre/lib/sunrsasign.jar:/opt/icedtea-bin-7.2.4.1/jre/lib/jsse.jar:/opt/icedtea-bin-7.2.4.1/jre/lib/jce.jar:/opt/icedtea-bin-7.2.4.1/jre/lib/charsets.jar:/opt/icedtea-bin-7.2.4.1/jre/lib/netx.jar:/opt/icedtea-bin-7.2.4.1/jre/lib/plugin.jar:/opt/icedtea-bin-7.2.4.1/jre/lib/rhino.jar:/opt/icedtea-bin-7.2.4.1/jre/lib/jfr.jar:/opt/icedtea-bin-7.2.4.1/jre/classes, 
java.awt.headless=true, java.vendor=Oracle Corporation, 
file.separator=/, 
java.vendor.url.bug=http://bugreport.sun.com/bugreport/, 
sun.io.unicode.encoding=UnicodeLittle, sun.cpu.endian=little, 
blur.log.file=blur-hadoop-controller-server-blur-0, sun.cpu.isalist=}
INFO  20130928_23:17:22:511_PDT [main] gui.HttpJettyServer: WEB GUI 
coming up for resource: controller
INFO  20130928_23:17:22:511_PDT [main] gui.HttpJettyServer: WEB GUI 
thinks its at: 
/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-gui-0.2.0-incubating.war
INFO  20130928_23:17:22:511_PDT [main] gui.HttpJettyServer: 
/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../logs/blur-hadoop-controller-server-blur-0
INFO  20130928_23:17:22:511_PDT [main] mortbay.log: jetty-6.1.26
INFO  20130928_23:17:22:641_PDT [main] mortbay.log: Extract 
/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-gui-0.2.0-incubating.war 
to /tmp/Jetty_0_0_0_0_40080_blur.gui.0.2.0.incubating.war____.mr122m/webapp
INFO  20130928_23:17:24:038_PDT [main] mortbay.log: Started 
SocketConnector@0.0.0.0:40080
INFO  20130928_23:17:24:039_PDT [main] gui.HttpJettyServer: WEB GUI up 
on port: 40080
INFO  20130928_23:17:24:055_PDT [main] thrift.ThriftServer: Starting 
server [blur:40010]
INFO  20130928_23:17:36:783_PDT [thrift-processors1] utils.BlurUtil: 
Path [file:/tmp/testtable] does not exist, creating.
INFO  20130928_23:17:36:793_PDT [thrift-processors1] utils.BlurUtil: 
Path [file:/tmp/testtable/shard-00000000] does not exist, creating.
INFO  20130928_23:17:36:793_PDT [thrift-processors1] utils.BlurUtil: 
Path [file:/tmp/testtable/shard-00000001] does not exist, creating.
INFO  20130928_23:17:36:793_PDT [thrift-processors1] utils.BlurUtil: 
Path [file:/tmp/testtable/shard-00000002] does not exist, creating.
INFO  20130928_23:17:36:794_PDT [thrift-processors1] utils.BlurUtil: 
Path [file:/tmp/testtable/shard-00000003] does not exist, creating.
INFO  20130928_23:17:36:794_PDT [thrift-processors1] utils.BlurUtil: 
Path [file:/tmp/testtable/shard-00000004] does not exist, creating.
INFO  20130928_23:17:36:794_PDT [thrift-processors1] utils.BlurUtil: 
Path [file:/tmp/testtable/shard-00000005] does not exist, creating.
INFO  20130928_23:17:36:795_PDT [thrift-processors1] utils.BlurUtil: 
Path [file:/tmp/testtable/shard-00000006] does not exist, creating.
INFO  20130928_23:17:36:795_PDT [thrift-processors1] utils.BlurUtil: 
Path [file:/tmp/testtable/shard-00000007] does not exist, creating.
INFO  20130928_23:17:36:795_PDT [thrift-processors1] utils.BlurUtil: 
Path [file:/tmp/testtable/shard-00000008] does not exist, creating.
INFO  20130928_23:17:36:795_PDT [thrift-processors1] utils.BlurUtil: 
Path [file:/tmp/testtable/shard-00000009] does not exist, creating.
INFO  20130928_23:17:36:796_PDT [thrift-processors1] utils.BlurUtil: 
Path [file:/tmp/testtable/shard-00000010] does not exist, creating.
INFO  20130928_23:17:36:805_PDT [Watch Children 
[/blur/clusters/default/tables][257d40b6-3cee-43c7-a1b7-358909bba809]] 
thrift.BlurControllerServer: Layout change.
INFO  20130928_23:17:36:842_PDT [thrift-processors1] thrift.TableAdmin: 
Waiting for shards to engage on table [test]
INFO  20130928_23:17:36:854_PDT [thrift-processors1] thrift.TableAdmin: 
Waiting for shards to engage on table [test]
WARN  20130928_23:17:39:530_PDT [Watch Existance 
[/blur/clusters/default/tables][d2d409cb-4cd2-4c40-8ef3-0602402f5bf4]] 
zookeeper.WatchChildren: Closing [6a2ccdab-e3b7-4f71-a71d-98116e04eee1]
WARN  20130928_23:17:39:554_PDT [Watch Existance 
[/blur/clusters/default/tables][940d781d-10e4-45d8-a169-5e26887f90c7]] 
zookeeper.WatchChildren: Closing [2f299030-8940-48b3-a7e8-71e97f2b13f5]
INFO  20130928_23:17:39:862_PDT [thrift-processors1] thrift.TableAdmin: 
Opening - Shards Open [0], Shards Opening [0] of table [test]
INFO  20130928_23:17:41:466_PDT [Thread-Watcher] 
concurrent.ThreadWatcher: Thread [Thread[thrift-processors1,5,main]] has 
been executing for [5,241 ms]
INFO  20130928_23:17:42:863_PDT [thrift-processors1] thrift.TableAdmin: 
Opening - Shards Open [0], Shards Opening [0] of table [test]
INFO  20130928_23:17:45:865_PDT [thrift-processors1] thrift.TableAdmin: 
Opening - Shards Open [0], Shards Opening [0] of table [test]
INFO  20130928_23:17:46:467_PDT [Thread-Watcher] 
concurrent.ThreadWatcher: Thread [Thread[thrift-processors1,5,main]] has 
been executing for [10,242 ms]
INFO  20130928_23:17:48:866_PDT [thrift-processors1] thrift.TableAdmin: 
Opening - Shards Open [0], Shards Opening [0] of table [test]
INFO  20130928_23:17:51:468_PDT [Thread-Watcher] 
concurrent.ThreadWatcher: Thread [Thread[thrift-processors1,5,main]] has 
been executing for [15,242 ms]
INFO  20130928_23:17:51:867_PDT [thrift-processors1] thrift.TableAdmin: 
Opening - Shards Open [0], Shards Opening [0] of table [test]
INFO  20130928_23:17:54:868_PDT [thrift-processors1] thrift.TableAdmin: 
Opening - Shards Open [0], Shards Opening [0] of table [test]
INFO  20130928_23:17:56:468_PDT [Thread-Watcher] 
concurrent.ThreadWatcher: Thread [Thread[thrift-processors1,5,main]] has 
been executing for [20,243 ms]
INFO  20130928_23:17:57:869_PDT [thrift-processors1] thrift.TableAdmin: 
Opening - Shards Open [0], Shards Opening [0] of table [test]
INFO  20130928_23:18:00:870_PDT [thrift-processors1] thrift.TableAdmin: 
Opening - Shards Open [0], Shards Opening [0] of table [test]
INFO  20130928_23:18:01:469_PDT [Thread-Watcher] 
concurrent.ThreadWatcher: Thread [Thread[thrift-processors1,5,main]] has 
been executing for [25,244 ms]
INFO  20130928_23:18:03:871_PDT [thrift-processors1] thrift.TableAdmin: 
Opening - Shards Open [0], Shards Opening [0] of table [test]
INFO  20130928_23:18:06:470_PDT [Thread-Watcher] 
concurrent.ThreadWatcher: Thread [Thread[thrift-processors1,5,main]] has 
been executing for [30,245 ms]
INFO  20130928_23:18:06:872_PDT [thrift-processors1] thrift.TableAdmin: 
Opening - Shards Open [0], Shards Opening [0] of table [test]
INFO  20130928_23:18:09:874_PDT [thrift-processors1] thrift.TableAdmin: 
Opening - Shards Open [0], Shards Opening [0] of table [test]
INFO  20130928_23:18:11:472_PDT [Thread-Watcher] 
concurrent.ThreadWatcher: Thread [Thread[thrift-processors1,5,main]] has 
been executing for [35,246 ms]
INFO  20130928_23:18:12:875_PDT [thrift-processors1] thrift.TableAdmin: 
Opening - Shards Open [0], Shards Opening [0] of table [test]
INFO  20130928_23:18:15:876_PDT [thrift-processors1] thrift.TableAdmin: 
Opening - Shards Open [0], Shards Opening [0] of table [test]
INFO  20130928_23:18:16:472_PDT [Thread-Watcher] 
concurrent.ThreadWatcher: Thread [Thread[thrift-processors1,5,main]] has 
been executing for [40,247 ms]
INFO  20130928_23:18:18:877_PDT [thrift-processors1] thrift.TableAdmin: 
Opening - Shards Open [0], Shards Opening [0] of table [test]
INFO  20130928_23:18:21:473_PDT [Thread-Watcher] 
concurrent.ThreadWatcher: Thread [Thread[thrift-processors1,5,main]] has 
been executing for [45,248 ms]
INFO  20130928_23:18:21:878_PDT [thrift-processors1] thrift.TableAdmin: 
Opening - Shards Open [0], Shards Opening [0] of table [test]
INFO  20130928_23:18:24:881_PDT [thrift-processors1] thrift.TableAdmin: 
Opening - Shards Open [0], Shards Opening [0] of table [test]
INFO  20130928_23:18:26:474_PDT [Thread-Watcher] 
concurrent.ThreadWatcher: Thread [Thread[thrift-processors1,5,main]] has 
been executing for [50,249 ms]
INFO  20130928_23:18:27:882_PDT [thrift-processors1] thrift.TableAdmin: 
Opening - Shards Open [0], Shards Opening [0] of table [test]
INFO  20130928_23:18:30:883_PDT [thrift-processors1] thrift.TableAdmin: 
Opening - Shards Open [0], Shards Opening [0] of table [test]
INFO  20130928_23:18:31:475_PDT [Thread-Watcher] 
concurrent.ThreadWatcher: Thread [Thread[thrift-processors1,5,main]] has 
been executing for [55,250 ms]
INFO  20130928_23:18:33:884_PDT [thrift-processors1] thrift.TableAdmin: 
Opening - Shards Open [0], Shards Opening [0] of table [test]
INFO  20130928_23:18:36:477_PDT [Thread-Watcher] 
concurrent.ThreadWatcher: Thread [Thread[thrift-processors1,5,main]] has 
been executing for [60,251 ms]
INFO  20130928_23:18:36:886_PDT [thrift-processors1] thrift.TableAdmin: 
Opening - Shards Open [0], Shards Opening [0] of table [test]
ERROR 20130928_23:18:37:554_PDT [thrift-processors2] thrift.TableAdmin: 
Unknown error during create of [table=test, 
tableDescriptor=TableDescriptor(enabled:true, shardCount:11, 
tableUri:file:///tmp/testtable, cluster:default, name:test, 
similarityClass:org.apache.blur.lucene.search.FairSimilarity, 
blockCaching:true, blockCachingFileTypes:null, readOnly:false, 
preCacheCols:null, tableProperties:null, strictTypes:false, 
defaultMissingFieldType:text, defaultMissingFieldLessIndexing:true, 
defaultMissingFieldProps:null)]
java.lang.RuntimeException: java.io.IOException: Table [test] already 
exists.
         at 
org.apache.blur.manager.clusterstatus.ZookeeperClusterStatus.createTable(ZookeeperClusterStatus.java:744)
         at 
org.apache.blur.thrift.TableAdmin.createTable(TableAdmin.java:101)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
         at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
         at java.lang.reflect.Method.invoke(Method.java:606)
         at org.apache.blur.utils.BlurUtil$1.invoke(BlurUtil.java:183)
         at com.sun.proxy.$Proxy0.createTable(Unknown Source)
         at 
org.apache.blur.thrift.generated.Blur$Processor$createTable.getResult(Blur.java:2402)
         at 
org.apache.blur.thrift.generated.Blur$Processor$createTable.getResult(Blur.java:2386)
         at 
org.apache.blur.thirdparty.thrift_0_9_0.ProcessFunction.process(ProcessFunction.java:54)
         at 
org.apache.blur.thirdparty.thrift_0_9_0.TBaseProcessor.process(TBaseProcessor.java:57)
         at 
org.apache.blur.thrift.server.AbstractNonblockingServer$FrameBuffer.invoke(AbstractNonblockingServer.java:515)
         at org.apache.blur.thrift.server.Invocation.run(Invocation.java:34)
         at 
org.apache.blur.concurrent.ThreadWatcher$ThreadWatcherExecutorService$1.run(ThreadWatcher.java:127)
         at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
         at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
         at java.lang.Thread.run(Thread.java:724)
Caused by: java.io.IOException: Table [test] already exists.
         at 
org.apache.blur.manager.clusterstatus.ZookeeperClusterStatus.createTable(ZookeeperClusterStatus.java:722)
         ... 17 more
INFO  20130928_23:18:39:887_PDT [thrift-processors1] thrift.TableAdmin: 
Opening - Shards Open [0], Shards Opening [0] of table [test]
INFO  20130928_23:18:41:477_PDT [Thread-Watcher] 
concurrent.ThreadWatcher: Thread [Thread[thrift-processors1,5,main]] has 
been executing for [65,252 ms]
INFO  20130928_23:18:42:888_PDT [thrift-processors1] thrift.TableAdmin: 
Opening - Shards Open [0], Shards Opening [0] of table [test]
INFO  20130928_23:18:45:889_PDT [thrift-processors1] thrift.TableAdmin: 
Opening - Shards Open [0], Shards Opening [0] of table [test]
INFO  20130928_23:18:46:478_PDT [Thread-Watcher] 
concurrent.ThreadWatcher: Thread [Thread[thrift-processors1,5,main]] has 
been executing for [70,253 ms]
INFO  20130928_23:18:48:890_PDT [thrift-processors1] thrift.TableAdmin: 
Opening - Shards Open [0], Shards Opening [0] of table [test]
INFO  20130928_23:18:50:008_PDT [Watch Children 
[/blur/clusters/default/online-nodes][f688b550-e331-4051-ae94-7a61d7f297ed]] 
thrift.BlurControllerServer: Layout change.
INFO  20130928_23:18:51:479_PDT [Thread-Watcher] 
concurrent.ThreadWatcher: Thread [Thread[thrift-processors1,5,main]] has 
been executing for [75,254 ms]
ERROR 20130928_23:18:51:898_PDT [controller-thread-pool0] 
thrift.BlurClientManager: Retrying call 
[org.apache.blur.thrift.BlurControllerServer$13@68aee2a2] retry [0] out 
of [3] message [Connection refused]
INFO  20130928_23:18:51:898_PDT [controller-thread-pool0] 
thrift.BlurClientManager: Backing off call for [500 ms]
INFO  20130928_23:18:52:399_PDT [controller-thread-pool0] 
thrift.BlurClientManager: Marking bad connection 
[org.apache.blur.thrift.Connection@56febeda]
ERROR 20130928_23:18:52:399_PDT [controller-thread-pool0] 
thrift.BlurClientManager: All connections are bad [1].
ERROR 20130928_23:18:53:400_PDT [controller-thread-pool0] 
thrift.BlurClientManager: All connections are bad [2].
ERROR 20130928_23:18:54:401_PDT [controller-thread-pool0] 
thrift.BlurClientManager: All connections are bad [3].
ERROR 20130928_23:18:54:402_PDT [thrift-processors1] 
thrift.BlurControllerServer: Unknown error while trying to get shard 
server layout [test]
BlurException(message:Call execution exception [[test]], 
stackTraceStr:org.apache.blur.thrift.BadConnectionException: Could not 
connect to controller/shard server. All connections are bad.
         at 
org.apache.blur.thrift.BlurClientManager.execute(BlurClientManager.java:207)
         at 
org.apache.blur.thrift.BlurClientManager.execute(BlurClientManager.java:262)
         at 
org.apache.blur.thrift.BlurControllerServer$BlurClientRemote.execute(BlurControllerServer.java:107)
         at 
org.apache.blur.thrift.BlurControllerServer$21.call(BlurControllerServer.java:703)
         at 
org.apache.blur.thrift.BlurControllerServer$21.call(BlurControllerServer.java:700)
         at org.apache.blur.utils.ForkJoin$2.call(ForkJoin.java:63)
         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
         at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
         at 
org.apache.blur.concurrent.ThreadWatcher$ThreadWatcherExecutorService$1.run(ThreadWatcher.java:127)
         at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
         at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
         at java.lang.Thread.run(Thread.java:724)
, errorType:UNKNOWN)
         at 
org.apache.blur.utils.BlurExecutorCompletionService.getResultThrowException(BlurExecutorCompletionService.java:135)
         at 
org.apache.blur.thrift.BlurControllerServer$14.merge(BlurControllerServer.java:570)
         at 
org.apache.blur.thrift.BlurControllerServer$14.merge(BlurControllerServer.java:562)
         at org.apache.blur.utils.ForkJoin$3.merge(ForkJoin.java:72)
         at 
org.apache.blur.thrift.BlurControllerServer.scatterGather(BlurControllerServer.java:699)
         at 
org.apache.blur.thrift.BlurControllerServer.shardServerLayoutState(BlurControllerServer.java:552)
         at 
org.apache.blur.thrift.TableAdmin.waitForTheTableToEngage(TableAdmin.java:177)
         at 
org.apache.blur.thrift.TableAdmin.enableTable(TableAdmin.java:139)
         at 
org.apache.blur.thrift.TableAdmin.createTable(TableAdmin.java:108)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
         at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
         at java.lang.reflect.Method.invoke(Method.java:606)
         at org.apache.blur.utils.BlurUtil$1.invoke(BlurUtil.java:183)
         at com.sun.proxy.$Proxy0.createTable(Unknown Source)
         at 
org.apache.blur.thrift.generated.Blur$Processor$createTable.getResult(Blur.java:2402)
         at 
org.apache.blur.thrift.generated.Blur$Processor$createTable.getResult(Blur.java:2386)
         at 
org.apache.blur.thirdparty.thrift_0_9_0.ProcessFunction.process(ProcessFunction.java:54)
         at 
org.apache.blur.thirdparty.thrift_0_9_0.TBaseProcessor.process(TBaseProcessor.java:57)
         at 
org.apache.blur.thrift.server.AbstractNonblockingServer$FrameBuffer.invoke(AbstractNonblockingServer.java:515)
         at org.apache.blur.thrift.server.Invocation.run(Invocation.java:34)
         at 
org.apache.blur.concurrent.ThreadWatcher$ThreadWatcherExecutorService$1.run(ThreadWatcher.java:127)
         at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
         at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
         at java.lang.Thread.run(Thread.java:724)
INFO  20130928_23:18:54:403_PDT [thrift-processors1] thrift.TableAdmin: 
Stilling waiting
BlurException(message:Unknown error while trying to get shard server 
layout [test], stackTraceStr:BlurException(message:Call execution 
exception [[test]], 
stackTraceStr:org.apache.blur.thrift.BadConnectionException: Could not 
connect to controller/shard server. All connections are bad.
         at 
org.apache.blur.thrift.BlurClientManager.execute(BlurClientManager.java:207)
         at 
org.apache.blur.thrift.BlurClientManager.execute(BlurClientManager.java:262)
         at 
org.apache.blur.thrift.BlurControllerServer$BlurClientRemote.execute(BlurControllerServer.java:107)
         at 
org.apache.blur.thrift.BlurControllerServer$21.call(BlurControllerServer.java:703)
         at 
org.apache.blur.thrift.BlurControllerServer$21.call(BlurControllerServer.java:700)
         at org.apache.blur.utils.ForkJoin$2.call(ForkJoin.java:63)
         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
         at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
         at 
org.apache.blur.concurrent.ThreadWatcher$ThreadWatcherExecutorService$1.run(ThreadWatcher.java:127)
         at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
         at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
         at java.lang.Thread.run(Thread.java:724)
, errorType:UNKNOWN)
         at 
org.apache.blur.utils.BlurExecutorCompletionService.getResultThrowException(BlurExecutorCompletionService.java:135)
         at 
org.apache.blur.thrift.BlurControllerServer$14.merge(BlurControllerServer.java:570)
         at 
org.apache.blur.thrift.BlurControllerServer$14.merge(BlurControllerServer.java:562)
         at org.apache.blur.utils.ForkJoin$3.merge(ForkJoin.java:72)
         at 
org.apache.blur.thrift.BlurControllerServer.scatterGather(BlurControllerServer.java:699)
         at 
org.apache.blur.thrift.BlurControllerServer.shardServerLayoutState(BlurControllerServer.java:552)
         at 
org.apache.blur.thrift.TableAdmin.waitForTheTableToEngage(TableAdmin.java:177)
         at 
org.apache.blur.thrift.TableAdmin.enableTable(TableAdmin.java:139)
         at 
org.apache.blur.thrift.TableAdmin.createTable(TableAdmin.java:108)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
         at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
         at java.lang.reflect.Method.invoke(Method.java:606)
         at org.apache.blur.utils.BlurUtil$1.invoke(BlurUtil.java:183)
         at com.sun.proxy.$Proxy0.createTable(Unknown Source)
         at 
org.apache.blur.thrift.generated.Blur$Processor$createTable.getResult(Blur.java:2402)
         at 
org.apache.blur.thrift.generated.Blur$Processor$createTable.getResult(Blur.java:2386)
         at 
org.apache.blur.thirdparty.thrift_0_9_0.ProcessFunction.process(ProcessFunction.java:54)
         at 
org.apache.blur.thirdparty.thrift_0_9_0.TBaseProcessor.process(TBaseProcessor.java:57)
         at 
org.apache.blur.thrift.server.AbstractNonblockingServer$FrameBuffer.invoke(AbstractNonblockingServer.java:515)
         at org.apache.blur.thrift.server.Invocation.run(Invocation.java:34)
         at 
org.apache.blur.concurrent.ThreadWatcher$ThreadWatcherExecutorService$1.run(ThreadWatcher.java:127)
         at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
         at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
         at java.lang.Thread.run(Thread.java:724)
, errorType:UNKNOWN)
         at 
org.apache.blur.thrift.BlurControllerServer.shardServerLayoutState(BlurControllerServer.java:585)
         at 
org.apache.blur.thrift.TableAdmin.waitForTheTableToEngage(TableAdmin.java:177)
         at 
org.apache.blur.thrift.TableAdmin.enableTable(TableAdmin.java:139)
         at 
org.apache.blur.thrift.TableAdmin.createTable(TableAdmin.java:108)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
         at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
         at java.lang.reflect.Method.invoke(Method.java:606)
         at org.apache.blur.utils.BlurUtil$1.invoke(BlurUtil.java:183)
         at com.sun.proxy.$Proxy0.createTable(Unknown Source)
         at 
org.apache.blur.thrift.generated.Blur$Processor$createTable.getResult(Blur.java:2402)
         at 
org.apache.blur.thrift.generated.Blur$Processor$createTable.getResult(Blur.java:2386)
         at 
org.apache.blur.thirdparty.thrift_0_9_0.ProcessFunction.process(ProcessFunction.java:54)
         at 
org.apache.blur.thirdparty.thrift_0_9_0.TBaseProcessor.process(TBaseProcessor.java:57)
         at 
org.apache.blur.thrift.server.AbstractNonblockingServer$FrameBuffer.invoke(AbstractNonblockingServer.java:515)
         at org.apache.blur.thrift.server.Invocation.run(Invocation.java:34)
         at 
org.apache.blur.concurrent.ThreadWatcher$ThreadWatcherExecutorService$1.run(ThreadWatcher.java:127)
         at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
         at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
         at java.lang.Thread.run(Thread.java:724)
INFO  20130928_23:18:56:480_PDT [Thread-Watcher] 
concurrent.ThreadWatcher: Thread [Thread[thrift-processors1,5,main]] has 
been executing for [80,255 ms]
INFO  20130928_23:18:57:430_PDT [thrift-processors1] thrift.TableAdmin: 
Opening - Shards Open [0], Shards Opening [0] of table [test]
INFO  20130928_23:19:00:433_PDT [thrift-processors1] thrift.TableAdmin: 
Opening - Shards Open [0], Shards Opening [0] of table [test]
INFO  20130928_23:19:01:481_PDT [Thread-Watcher] 
concurrent.ThreadWatcher: Thread [Thread[thrift-processors1,5,main]] has 
been executing for [85,255 ms]
INFO  20130928_23:19:03:435_PDT [thrift-processors1] thrift.TableAdmin: 
Opening - Shards Open [0], Shards Opening [0] of table [test]
INFO  20130928_23:19:06:464_PDT [thrift-processors1] thrift.TableAdmin: 
Opening - Shards Open [0], Shards Opening [8] of table [test]
INFO  20130928_23:19:06:480_PDT [Thread-Watcher] 
concurrent.ThreadWatcher: Thread [Thread[thrift-processors1,5,main]] has 
been executing for [90,255 ms]
INFO  20130928_23:19:08:400_PDT [Thread-32] 
indexserver.BlurServerShutDown: Closing zookeeper.
INFO  20130928_23:19:08:406_PDT [main-EventThread] zookeeper.ClientCnxn: 
EventThread shut down
INFO  20130928_23:19:08:406_PDT [Thread-32] zookeeper.ZooKeeper: 
Session: 0x141685d2dec0000 closed

*Shard Log*
------------------------------------------------------------------------

INFO  20130928_23:17:19:227_PDT [main] thrift.ThriftBlurShardServer: 
Setting up Shard Server
INFO  20130928_23:17:19:255_PDT [main] thrift.ThriftServer: ulimit: core 
file size          (blocks, -c) 0
INFO  20130928_23:17:19:255_PDT [main] thrift.ThriftServer: ulimit: data 
seg size           (kbytes, -d) unlimited
INFO  20130928_23:17:19:255_PDT [main] thrift.ThriftServer: ulimit: 
scheduling priority             (-e) 0
INFO  20130928_23:17:19:255_PDT [main] thrift.ThriftServer: ulimit: file 
size               (blocks, -f) unlimited
INFO  20130928_23:17:19:256_PDT [main] thrift.ThriftServer: ulimit: 
pending signals                 (-i) 31635
INFO  20130928_23:17:19:256_PDT [main] thrift.ThriftServer: ulimit: max 
locked memory       (kbytes, -l) 64
INFO  20130928_23:17:19:256_PDT [main] thrift.ThriftServer: ulimit: max 
memory size         (kbytes, -m) unlimited
INFO  20130928_23:17:19:256_PDT [main] thrift.ThriftServer: ulimit: open 
files                      (-n) 4096
INFO  20130928_23:17:19:256_PDT [main] thrift.ThriftServer: ulimit: pipe 
size            (512 bytes, -p) 8
INFO  20130928_23:17:19:256_PDT [main] thrift.ThriftServer: ulimit: 
POSIX message queues     (bytes, -q) 819200
INFO  20130928_23:17:19:256_PDT [main] thrift.ThriftServer: ulimit: 
real-time priority              (-r) 0
INFO  20130928_23:17:19:256_PDT [main] thrift.ThriftServer: ulimit: 
stack size              (kbytes, -s) 8192
INFO  20130928_23:17:19:256_PDT [main] thrift.ThriftServer: ulimit: cpu 
time               (seconds, -t) unlimited
INFO  20130928_23:17:19:256_PDT [main] thrift.ThriftServer: ulimit: max 
user processes              (-u) 31635
INFO  20130928_23:17:19:256_PDT [main] thrift.ThriftServer: ulimit: 
virtual memory          (kbytes, -v) unlimited
INFO  20130928_23:17:19:256_PDT [main] thrift.ThriftServer: ulimit: file 
locks                      (-x) unlimited
WARN  20130928_23:17:19:464_PDT [main] utils.GCWatcher: GCWatcher was 
NOT setup.
INFO  20130928_23:17:19:561_PDT [main] mortbay.log: Logging to 
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via 
org.mortbay.log.Slf4jLog
INFO  20130928_23:17:19:562_PDT [main] gui.HttpJettyServer: System 
props:{java.runtime.name=OpenJDK Runtime Environment, 
blur.name=shard-server-blur-0, blur.base.controller.port=40010, 
sun.boot.library.path=/opt/icedtea-bin-7.2.4.1/jre/lib/amd64, 
java.vm.version=24.0-b50, baseGuiShardPort=40090, java.vm.vendor=Oracle 
Corporation, java.vendor.url=http://java.oracle.com/, path.separator=:, 
baseGuiControllerPort=40080, java.vm.name=OpenJDK 64-Bit Server VM, 
file.encoding.pkg=sun.io, user.country=US, 
sun.java.launcher=SUN_STANDARD, sun.os.patch.level=unknown, 
java.vm.specification.name=Java Virtual Machine Specification, 
user.dir=/home/hadoop, java.runtime.version=1.7.0_40-b31, 
java.awt.graphicsenv=sun.awt.X11GraphicsEnvironment, 
java.endorsed.dirs=/opt/icedtea-bin-7.2.4.1/jre/lib/endorsed, 
os.arch=amd64, java.io.tmpdir=/tmp, line.separator=
, java.vm.specification.vendor=Oracle Corporation, 
blur.base.shard.port=40020, os.name=Linux, blur.gui.mode=shard, 
sun.jnu.encoding=ANSI_X3.4-1968, java.library.path=, 
blur.logs.dir=/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../logs, 
java.specification.name=Java Platform API Specification, 
java.class.version=51.0, java.net.preferIPv4Stack=true, 
sun.management.compiler=HotSpot 64-Bit Tiered Compilers, 
os.version=3.10.7-gentoo, user.home=/home/hadoop, 
user.timezone=Canada/Pacific, 
java.awt.printerjob=sun.print.PSPrinterJob, 
file.encoding=ANSI_X3.4-1968, java.specification.version=1.7, 
java.class.path=/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../conf:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-core-0.2.0-incubating.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-gui-0.2.0-incubating.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-mapred-0.2.0-incubating.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-query-0.2.0-incubating.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-shell-0.2.0-incubating.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-store-0.2.0-incubating.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-thrift-0.2.0-incubating.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-util-0.2.0-incubating.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/commons-cli-1.2.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/commons-logging-1.1.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/concurrentlinkedhashmap-lru-1.3.2.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/guava-14.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/httpclient-4.1.3.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/httpcore-4.1.4.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/jackson-annotations-2.1.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/jackson-core-2.1.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/jackson-databind-2.1.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/jline-2.10.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/json-20090211.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/log4j-1.2.15.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-analyzers-common-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-codecs-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-core-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-highlighter-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-memory-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-queries-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-queryparser-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-sandbox-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-spatial-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/metrics-core-2.2.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/metrics-ganglia-2.2.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/metrics-graphite-2.2.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/metrics-servlet-2.2.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/slf4j-api-1.6.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/slf4j-log4j12-1.6.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/spatial4j-0.3.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/zookeeper-3.4.5.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-gui-0.2.0-incubating.war:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//conf:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//hadoop-core-1.2.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/activation-1.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/ant
-1.6.5.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/asm-3.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-beanutils-1.7.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-beanutils-core-1.8.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-codec-1.4.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-collections-3.2.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-configuration-1.6.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-digester-1.8.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-el-1.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-httpclient-3.0.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-io-2.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-lang-2.4.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-logging-1.1.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-math-2.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-net-1.4.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/hsqldb-1.8.0.10.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jackson-core-asl-1.7.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jackson-jaxrs-1.7.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jackson-mapper-asl-1.8.8.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jackson-xc-1.7.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jakarta-regexp-1.4.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jasper-compiler-5.5.12.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jasper-runtime-5.5.12.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jaxb-api-2.2.2.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jaxb-impl-2.2.3-1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jersey-core-1.8.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jersey-json-1.8.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jersey-server-1.8.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jets3t-0.6.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jettison-1.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jetty-6.1.26.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jetty-util-6.1.26.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jsp-2.1-6.1.14.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jsp-api-2.1-6.1.14.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/netty-3.2.2.Final.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/oro-2.0.8.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/servlet-api-2.5-20081211.jar:/home/hadoop/apache-blur-0.
2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/stax-api-1.0-2.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/stax-api-1.0.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/xmlenc-0.52.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jsp-*/*.jar, 
user.name=hadoop, java.vm.specification.version=1.7, 
sun.java.command=org.apache.blur.thrift.ThriftBlurShardServer -s 0, 
java.home=/opt/icedtea-bin-7.2.4.1/jre, sun.arch.data.model=64, 
user.language=en, java.specification.vendor=Oracle Corporation, 
blur.gui.servicing.port=40020, awt.toolkit=sun.awt.X11.XToolkit, 
java.vm.info=mixed mode, blur-shard-0=, java.version=1.7.0_40, 
java.ext.dirs=/opt/icedtea-bin-7.2.4.1/jre/lib/ext:/usr/java/packages/lib/ext, 
sun.boot.class.path=/opt/icedtea-bin-7.2.4.1/jre/lib/resources.jar:/opt/icedtea-bin-7.2.4.1/jre/lib/rt.jar:/opt/icedtea-bin-7.2.4.1/jre/lib/sunrsasign.jar:/opt/icedtea-bin-7.2.4.1/jre/lib/jsse.jar:/opt/icedtea-bin-7.2.4.1/jre/lib/jce.jar:/opt/icedtea-bin-7.2.4.1/jre/lib/charsets.jar:/opt/icedtea-bin-7.2.4.1/jre/lib/netx.jar:/opt/icedtea-bin-7.2.4.1/jre/lib/plugin.jar:/opt/icedtea-bin-7.2.4.1/jre/lib/rhino.jar:/opt/icedtea-bin-7.2.4.1/jre/lib/jfr.jar:/opt/icedtea-bin-7.2.4.1/jre/classes, 
java.awt.headless=true, java.vendor=Oracle Corporation, 
file.separator=/, 
java.vendor.url.bug=http://bugreport.sun.com/bugreport/, 
sun.io.unicode.encoding=UnicodeLittle, sun.cpu.endian=little, 
blur.log.file=blur-hadoop-shard-server-blur-0, sun.cpu.isalist=}
INFO  20130928_23:17:20:195_PDT [main] gui.HttpJettyServer: WEB GUI 
coming up for resource: shard
INFO  20130928_23:17:20:195_PDT [main] gui.HttpJettyServer: WEB GUI 
thinks its at: 
/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-gui-0.2.0-incubating.war
INFO  20130928_23:17:20:195_PDT [main] gui.HttpJettyServer: 
/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../logs/blur-hadoop-shard-server-blur-0
INFO  20130928_23:17:20:195_PDT [main] mortbay.log: jetty-6.1.26
INFO  20130928_23:17:20:281_PDT [main] mortbay.log: Extract 
/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-gui-0.2.0-incubating.war 
to /tmp/Jetty_0_0_0_0_40090_blur.gui.0.2.0.incubating.war____5s9zib/webapp
INFO  20130928_23:17:22:051_PDT [main] mortbay.log: Started 
SocketConnector@0.0.0.0:40090
INFO  20130928_23:17:22:051_PDT [main] gui.HttpJettyServer: WEB GUI up 
on port: 40090
INFO  20130928_23:17:22:067_PDT [main] buffer.BufferStore: Initializing 
the 1024 buffers with [8,192] buffers.
INFO  20130928_23:17:22:134_PDT [main] buffer.BufferStore: Initializing 
the 8192 buffers with [8,192] buffers.
INFO  20130928_23:17:22:366_PDT [main] thrift.ThriftBlurShardServer: 
Number of slabs of block cache [2] with direct memory allocation set to 
[true]
INFO  20130928_23:17:22:366_PDT [main] thrift.ThriftBlurShardServer: 
Block cache target memory usage, slab size of [134,217,728] will 
allocate [2] slabs and use ~[268,435,456] bytes
INFO  20130928_23:17:22:403_PDT [main] thrift.ThriftBlurShardServer: 
Shard Server using index [0] bind address [0.0.0.0:40020]
INFO  20130928_23:17:22:437_PDT [main] zookeeper.ZooKeeper: Client 
environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
INFO  20130928_23:17:22:437_PDT [main] zookeeper.ZooKeeper: Client 
environment:host.name=blur
INFO  20130928_23:17:22:437_PDT [main] zookeeper.ZooKeeper: Client 
environment:java.version=1.7.0_40
INFO  20130928_23:17:22:437_PDT [main] zookeeper.ZooKeeper: Client 
environment:java.vendor=Oracle Corporation
INFO  20130928_23:17:22:437_PDT [main] zookeeper.ZooKeeper: Client 
environment:java.home=/opt/icedtea-bin-7.2.4.1/jre
INFO  20130928_23:17:22:437_PDT [main] zookeeper.ZooKeeper: Client 
environment:java.class.path=/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../conf:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-core-0.2.0-incubating.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-gui-0.2.0-incubating.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-mapred-0.2.0-incubating.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-query-0.2.0-incubating.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-shell-0.2.0-incubating.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-store-0.2.0-incubating.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-thrift-0.2.0-incubating.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-util-0.2.0-incubating.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/commons-cli-1.2.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/commons-logging-1.1.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/concurrentlinkedhashmap-lru-1.3.2.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/guava-14.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/httpclient-4.1.3.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/httpcore-4.1.4.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/jackson-annotations-2.1.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/jackson-core-2.1.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/jackson-databind-2.1.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/jline-2.10.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/json-20090211.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/log4j-1.2.15.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-analyzers-common-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-codecs-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-core-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-highlighter-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-memory-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-queries-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-queryparser-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-sandbox-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/lucene-spatial-4.3.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/metrics-core-2.2.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/metrics-ganglia-2.2.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/metrics-graphite-2.2.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/metrics-servlet-2.2.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/slf4j-api-1.6.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/slf4j-log4j12-1.6.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/spatial4j-0.3.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/zookeeper-3.4.5.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/blur-gui-0.2.0-incubating.war:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//conf:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//hadoop-core-1.2.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/activation-1.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.
2.1//lib/ant-1.6.5.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/asm-3.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-beanutils-1.7.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-beanutils-core-1.8.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-codec-1.4.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-collections-3.2.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-configuration-1.6.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-digester-1.8.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-el-1.0.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-httpclient-3.0.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-io-2.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-lang-2.4.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-logging-1.1.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-math-2.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/commons-net-1.4.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/hsqldb-1.8.0.10.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jackson-core-asl-1.7.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jackson-jaxrs-1.7.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jackson-mapper-asl-1.8.8.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jackson-xc-1.7.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jakarta-regexp-1.4.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jasper-compiler-5.5.12.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jasper-runtime-5.5.12.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jaxb-api-2.2.2.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jaxb-impl-2.2.3-1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jersey-core-1.8.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jersey-json-1.8.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jersey-server-1.8.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jets3t-0.6.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jettison-1.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jetty-6.1.26.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jetty-util-6.1.26.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jsp-2.1-6.1.14.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jsp-api-2.1-6.1.14.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/netty-3.2.2.Final.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/oro-2.0.8.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/servlet-api-2.5-20081211.jar:/home/hadoop/ap
ache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/stax-api-1.0-2.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/stax-api-1.0.1.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/xmlenc-0.52.jar:/home/hadoop/apache-blur-0.2.0-incubating-bin/bin/../lib/hadoop-1.2.1//lib/jsp-*/*.jar
INFO  20130928_23:17:22:437_PDT [main] zookeeper.ZooKeeper: Client 
environment:java.library.path=
INFO  20130928_23:17:22:437_PDT [main] zookeeper.ZooKeeper: Client 
environment:java.io.tmpdir=/tmp
INFO  20130928_23:17:22:437_PDT [main] zookeeper.ZooKeeper: Client 
environment:java.compiler=<NA>
INFO  20130928_23:17:22:437_PDT [main] zookeeper.ZooKeeper: Client 
environment:os.name=Linux
INFO  20130928_23:17:22:437_PDT [main] zookeeper.ZooKeeper: Client 
environment:os.arch=amd64
INFO  20130928_23:17:22:437_PDT [main] zookeeper.ZooKeeper: Client 
environment:os.version=3.10.7-gentoo
INFO  20130928_23:17:22:437_PDT [main] zookeeper.ZooKeeper: Client 
environment:user.name=hadoop
INFO  20130928_23:17:22:437_PDT [main] zookeeper.ZooKeeper: Client 
environment:user.home=/home/hadoop
INFO  20130928_23:17:22:437_PDT [main] zookeeper.ZooKeeper: Client 
environment:user.dir=/home/hadoop
INFO  20130928_23:17:22:438_PDT [main] zookeeper.ZooKeeper: Initiating 
client connection, connectString=127.0.0.1 sessionTimeout=90000 
watcher=org.apache.blur.zookeeper.ZkUtils$ConnectionWatcher@6069094
INFO  20130928_23:17:22:504_PDT [main-SendThread(blur:2181)] 
zookeeper.ClientCnxn: Opening socket connection to server 
blur/127.0.0.1:2181. Will not attempt to authenticate using SASL 
(unknown error)
INFO  20130928_23:17:22:538_PDT [main-SendThread(blur:2181)] 
zookeeper.ClientCnxn: Socket connection established to 
blur/127.0.0.1:2181, initiating session
INFO  20130928_23:17:22:548_PDT [main-SendThread(blur:2181)] 
zookeeper.ClientCnxn: Session establishment complete on server 
blur/127.0.0.1:2181, sessionid = 0x141685d2dec0001, negotiated timeout = 
90000
INFO  20130928_23:17:22:553_PDT [main-EventThread] zookeeper.ZkUtils: 
ZooKeeper [127.0.0.1] timeout [90,000] changed to [SyncConnected] state
INFO  20130928_23:17:23:582_PDT [main] writer.BlurIndexRefresher: Init 
Complete
INFO  20130928_23:17:23:662_PDT [main] writer.BlurIndexRefresher: Init 
Complete
INFO  20130928_23:17:23:663_PDT [main] writer.BlurIndexCloser: Init Complete
INFO  20130928_23:18:50:027_PDT [main] indexserver.SafeMode: Waiting for 
cluster to settle, current size [1] total time waited so far [1 ms] 
waiting another [5000 ms].
INFO  20130928_23:18:55:028_PDT [main] indexserver.SafeMode: Clustered 
has settled.
INFO  20130928_23:18:55:042_PDT [Watch Children 
[/blur/clusters/default/online-nodes][20d3a656-411c-42aa-a876-56c242da620b]] 
indexserver.DistributedIndexServer: Online shard servers changed, 
clearing layout managers and cache.
INFO  20130928_23:18:55:042_PDT [Watch Children 
[/blur/clusters/default/online-nodes][20d3a656-411c-42aa-a876-56c242da620b]] 
indexserver.DistributedIndexServer: Node came online [blur:40020]
INFO  20130928_23:18:55:083_PDT [main] manager.IndexManager: Init Complete
INFO  20130928_23:18:55:158_PDT [main] thrift.ThriftServer: Starting 
server [blur:40020]
INFO  20130928_23:19:05:692_PDT [Table-Warmer] 
indexserver.DistributedIndexServer: Opening missing shard 
[shard-00000000] from table [test]
INFO  20130928_23:19:05:696_PDT [Table-Warmer] 
indexserver.DistributedIndexServer: Opening missing shard 
[shard-00000001] from table [test]
INFO  20130928_23:19:05:697_PDT [Table-Warmer] 
indexserver.DistributedIndexServer: Opening missing shard 
[shard-00000002] from table [test]
INFO  20130928_23:19:05:697_PDT [Table-Warmer] 
indexserver.DistributedIndexServer: Opening missing shard 
[shard-00000003] from table [test]
INFO  20130928_23:19:05:697_PDT [Table-Warmer] 
indexserver.DistributedIndexServer: Opening missing shard 
[shard-00000004] from table [test]
INFO  20130928_23:19:05:697_PDT [Table-Warmer] 
indexserver.DistributedIndexServer: Opening missing shard 
[shard-00000005] from table [test]
INFO  20130928_23:19:05:697_PDT [Table-Warmer] 
indexserver.DistributedIndexServer: Opening missing shard 
[shard-00000006] from table [test]
INFO  20130928_23:19:05:698_PDT [Table-Warmer] 
indexserver.DistributedIndexServer: Opening missing shard 
[shard-00000007] from table [test]
INFO  20130928_23:19:05:698_PDT [Table-Warmer] 
indexserver.DistributedIndexServer: Opening missing shard 
[shard-00000008] from table [test]
INFO  20130928_23:19:05:698_PDT [Table-Warmer] 
indexserver.DistributedIndexServer: Opening missing shard 
[shard-00000009] from table [test]
INFO  20130928_23:19:05:698_PDT [Table-Warmer] 
indexserver.DistributedIndexServer: Opening missing shard 
[shard-00000010] from table [test]
INFO  20130928_23:19:05:702_PDT [shard-opener1] 
indexserver.DistributedIndexServer: Opening shard [shard-00000001] for 
table [test]
INFO  20130928_23:19:05:702_PDT [shard-opener2] 
indexserver.DistributedIndexServer: Opening shard [shard-00000002] for 
table [test]
INFO  20130928_23:19:05:702_PDT [shard-opener3] 
indexserver.DistributedIndexServer: Opening shard [shard-00000003] for 
table [test]
INFO  20130928_23:19:05:702_PDT [shard-opener4] 
indexserver.DistributedIndexServer: Opening shard [shard-00000004] for 
table [test]
INFO  20130928_23:19:05:702_PDT [shard-opener5] 
indexserver.DistributedIndexServer: Opening shard [shard-00000005] for 
table [test]
INFO  20130928_23:19:05:701_PDT [shard-opener0] 
indexserver.DistributedIndexServer: Opening shard [shard-00000000] for 
table [test]
INFO  20130928_23:19:05:702_PDT [shard-opener6] 
indexserver.DistributedIndexServer: Opening shard [shard-00000006] for 
table [test]
INFO  20130928_23:19:05:705_PDT [shard-opener7] 
indexserver.DistributedIndexServer: Opening shard [shard-00000007] for 
table [test]
INFO  20130928_23:19:05:723_PDT [shard-opener2] server.TableContext: 
Creating table context for table [test]
INFO  20130928_23:19:05:724_PDT [shard-opener4] server.TableContext: 
Creating table context for table [test]
INFO  20130928_23:19:05:724_PDT [shard-opener5] server.TableContext: 
Creating table context for table [test]
INFO  20130928_23:19:05:724_PDT [shard-opener0] server.TableContext: 
Creating table context for table [test]
INFO  20130928_23:19:05:724_PDT [shard-opener1] server.TableContext: 
Creating table context for table [test]
INFO  20130928_23:19:05:724_PDT [shard-opener6] server.TableContext: 
Creating table context for table [test]
INFO  20130928_23:19:05:724_PDT [shard-opener7] server.TableContext: 
Creating table context for table [test]
INFO  20130928_23:19:05:764_PDT [shard-opener3] server.TableContext: 
Creating table context for table [test]
INFO  20130928_23:19:06:537_PDT [shard-opener1] writer.BlurNRTIndex: 
Timer Name [nrtSetup] took [36 ms]
INFO  20130928_23:19:06:537_PDT [shard-opener1] writer.BlurNRTIndex: 
Timer Name [writerOpen] took [145 ms]
INFO  20130928_23:19:06:538_PDT [shard-opener1] 
indexserver.DistributedIndexServer: Opening shard [shard-00000008] for 
table [test]
INFO  20130928_23:19:06:544_PDT [warmup0] 
indexserver.DefaultBlurIndexWarmup: Running warmup for reader 
[ExitableReader(StandardDirectoryReader(:nrt))]
INFO  20130928_23:19:06:539_PDT [shard-opener7] writer.BlurNRTIndex: 
Timer Name [nrtSetup] took [41 ms]
INFO  20130928_23:19:06:545_PDT [shard-opener7] writer.BlurNRTIndex: 
Timer Name [writerOpen] took [142 ms]
INFO  20130928_23:19:06:540_PDT [shard-opener2] writer.BlurNRTIndex: 
Timer Name [nrtSetup] took [39 ms]
INFO  20130928_23:19:06:546_PDT [shard-opener2] writer.BlurNRTIndex: 
Timer Name [writerOpen] took [147 ms]
INFO  20130928_23:19:06:540_PDT [shard-opener0] writer.BlurNRTIndex: 
Timer Name [nrtSetup] took [40 ms]
INFO  20130928_23:19:06:546_PDT [shard-opener0] writer.BlurNRTIndex: 
Timer Name [writerOpen] took [129 ms]
INFO  20130928_23:19:06:540_PDT [shard-opener6] writer.BlurNRTIndex: 
Timer Name [nrtSetup] took [40 ms]
INFO  20130928_23:19:06:550_PDT [shard-opener6] writer.BlurNRTIndex: 
Timer Name [writerOpen] took [145 ms]
INFO  20130928_23:19:06:555_PDT [shard-opener5] writer.BlurNRTIndex: 
Timer Name [nrtSetup] took [54 ms]
INFO  20130928_23:19:06:555_PDT [shard-opener5] writer.BlurNRTIndex: 
Timer Name [writerOpen] took [144 ms]
INFO  20130928_23:19:06:556_PDT [shard-opener5] 
indexserver.DistributedIndexServer: Opening shard [shard-00000009] for 
table [test]
INFO  20130928_23:19:06:565_PDT [shard-opener1] writer.BlurNRTIndex: 
Timer Name [nrtSetup] took [10 ms]
INFO  20130928_23:19:06:565_PDT [shard-opener1] writer.BlurNRTIndex: 
Timer Name [writerOpen] took [6 ms]
INFO  20130928_23:19:06:566_PDT [shard-opener1] 
indexserver.DistributedIndexServer: Opening shard [shard-00000010] for 
table [test]
INFO  20130928_23:19:06:572_PDT [shard-opener4] writer.BlurNRTIndex: 
Timer Name [nrtSetup] took [72 ms]
INFO  20130928_23:19:06:573_PDT [shard-opener4] writer.BlurNRTIndex: 
Timer Name [writerOpen] took [157 ms]
INFO  20130928_23:19:06:573_PDT [shard-opener3] writer.BlurNRTIndex: 
Timer Name [nrtSetup] took [73 ms]
INFO  20130928_23:19:06:574_PDT [shard-opener3] writer.BlurNRTIndex: 
Timer Name [writerOpen] took [146 ms]
INFO  20130928_23:19:06:571_PDT [warmup1] 
indexserver.DefaultBlurIndexWarmup: Running warmup for reader 
[ExitableReader(StandardDirectoryReader(:nrt))]
INFO  20130928_23:19:06:584_PDT [warmup1] 
indexserver.DefaultBlurIndexWarmup: Running warmup for reader 
[ExitableReader(StandardDirectoryReader(:nrt))]
INFO  20130928_23:19:06:584_PDT [warmup3] 
indexserver.DefaultBlurIndexWarmup: Running warmup for reader 
[ExitableReader(StandardDirectoryReader(:nrt))]
INFO  20130928_23:19:06:584_PDT [warmup4] 
indexserver.DefaultBlurIndexWarmup: Running warmup for reader 
[ExitableReader(StandardDirectoryReader(:nrt))]
INFO  20130928_23:19:06:584_PDT [warmup5] 
indexserver.DefaultBlurIndexWarmup: Running warmup for reader 
[ExitableReader(StandardDirectoryReader(:nrt))]
INFO  20130928_23:19:06:584_PDT [warmup6] 
indexserver.DefaultBlurIndexWarmup: Running warmup for reader 
[ExitableReader(StandardDirectoryReader(:nrt))]
INFO  20130928_23:19:06:585_PDT [warmup7] 
indexserver.DefaultBlurIndexWarmup: Running warmup for reader 
[ExitableReader(StandardDirectoryReader(:nrt))]
INFO  20130928_23:19:06:588_PDT [shard-opener5] writer.BlurNRTIndex: 
Timer Name [nrtSetup] took [26 ms]
INFO  20130928_23:19:06:588_PDT [shard-opener5] writer.BlurNRTIndex: 
Timer Name [writerOpen] took [1 ms]
INFO  20130928_23:19:06:589_PDT [warmup0] 
indexserver.DefaultBlurIndexWarmup: Running warmup for reader 
[ExitableReader(StandardDirectoryReader(:nrt))]
INFO  20130928_23:19:06:591_PDT [warmup2] 
indexserver.DefaultBlurIndexWarmup: Running warmup for reader 
[ExitableReader(StandardDirectoryReader(:nrt))]
INFO  20130928_23:19:06:594_PDT [shard-opener1] writer.BlurNRTIndex: 
Timer Name [nrtSetup] took [10 ms]
INFO  20130928_23:19:06:594_PDT [shard-opener1] writer.BlurNRTIndex: 
Timer Name [writerOpen] took [1 ms]
INFO  20130928_23:19:06:596_PDT [warmup1] 
indexserver.DefaultBlurIndexWarmup: Running warmup for reader 
[ExitableReader(StandardDirectoryReader(:nrt))]
ERROR 20130928_23:19:06:839_PDT [Table-Warmer] 
indexserver.DistributedIndexServer: Table [testtable] is missing, 
defined location [file:///tmp/testtable]
ERROR 20130928_23:19:06:840_PDT [Table-Warmer] 
indexserver.DistributedIndexServer: Unknown error
java.lang.RuntimeException: Table [testtable] is missing, defined 
location [file:///tmp/testtable]
         at 
org.apache.blur.manager.indexserver.DistributedIndexServer.getShardList(DistributedIndexServer.java:612)
         at 
org.apache.blur.manager.indexserver.DistributedIndexServer.setupLayoutManager(DistributedIndexServer.java:570)
         at 
org.apache.blur.manager.indexserver.DistributedIndexServer.getShardsToServe(DistributedIndexServer.java:554)
         at 
org.apache.blur.manager.indexserver.DistributedIndexServer.getIndexes(DistributedIndexServer.java:412)
         at 
org.apache.blur.manager.indexserver.DistributedIndexServer$2.warmup(DistributedIndexServer.java:237)
         at 
org.apache.blur.manager.indexserver.DistributedIndexServer$2.run(DistributedIndexServer.java:218)
         at java.util.TimerThread.mainLoop(Timer.java:555)
         at java.util.TimerThread.run(Timer.java:505)
INFO  20130928_23:19:08:561_PDT [Thread-36] 
indexserver.BlurServerShutDown: Closing zookeeper.
INFO  20130928_23:19:08:568_PDT [main-EventThread] zookeeper.ClientCnxn: 
EventThread shut down
INFO  20130928_23:19:08:568_PDT [Thread-36] zookeeper.ZooKeeper: 
Session: 0x141685d2dec0001 closed

Thanks,
Colton McInroy

  * Director of Security Engineering

	
Phone
(Toll Free) 	
_US_ 	(888)-818-1344 Press 2
_UK_ 	0-800-635-0551 Press 2

My Extension 	101
24/7 Support 	support@dosarrest.com <mailto:support@dosarrest.com>
Email 	colton@dosarrest.com <mailto:colton@dosarrest.com>
Website 	http://www.dosarrest.com

On 9/28/2013 10:48 PM, Colton McInroy wrote:
> Well, I guess the first thing to do is check the list of current tables 
> to see if a table with the desired name already exists; if it does not, 
> create a new table, otherwise use the existing table to add records to.
>
> Thanks,
> Colton McInroy
>
>  * Director of Security Engineering
>
>
> Phone
> (Toll Free)
> _US_     (888)-818-1344 Press 2
> _UK_     0-800-635-0551 Press 2
>
> My Extension     101
> 24/7 Support     support@dosarrest.com <mailto:support@dosarrest.com>
> Email     colton@dosarrest.com <mailto:colton@dosarrest.com>
> Website     http://www.dosarrest.com
>
> On 9/28/2013 10:39 PM, Colton McInroy wrote:
>> How do you create a new table via java code? I see that the shell has 
>> a create command but I do not see anywhere in the docs how to do it 
>> via the thrift api call.
>>
>> Thanks,
>> Colton McInroy
>>
>>  * Director of Security Engineering
>>
>>
>> Phone
>> (Toll Free)
>> _US_     (888)-818-1344 Press 2
>> _UK_     0-800-635-0551 Press 2
>>
>> My Extension     101
>> 24/7 Support     support@dosarrest.com <mailto:support@dosarrest.com>
>> Email     colton@dosarrest.com <mailto:colton@dosarrest.com>
>> Website     http://www.dosarrest.com
>>
>> On 9/28/2013 6:47 AM, Aaron McCurry wrote:
>>> On Sat, Sep 28, 2013 at 9:23 AM, Colton McInroy 
>>> <colton@dosarrest.com>wrote:
>>>
>>>> So, basically you're suggesting I use this undocumented Bulk MapReduce
>>>> method to add all of the data live as it comes in? Do you have an 
>>>> example
>>>> or any information on how I would accomplish this? What I could do 
>>>> is have
>>>> a flush period, where as the logs come in and get parsed, I build 
>>>> them up
>>>> to like 10000 entries or a timed interval, then bulk load them into 
>>>> blur.
>>>
>>> I will add some documentation on how to use it and probably an 
>>> example, but
>>> I would try using the async client (maybe start with the regular 
>>> client)
>>> first to see if it can keep up.  Just as an FYI I found a bug in 0.2.0
>>> mutateBatch that causes a deadlock.  I will resolve later today, but 
>>> if you
>>> try it out before 0.2.1 is released (a couple of weeks) you will likely
>>> need to patch the code.  Here's the issue:
>>>
>>> https://issues.apache.org/jira/browse/BLUR-245
>>>
>>> Aaron
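
As a rough illustration of the buffered ingest being discussed (build up
entries, then push them in one batch through the regular client), here is a
minimal sketch. The table name "test", the "log"/"raw" family and column, and
the generated-class names (RowMutation, RecordMutation, Record, Column and
their enums) are my recollection of the 0.2.0 thrift API, so treat them as
assumptions and verify against the blur-thrift jar; note also that the
BLUR-245 deadlock mentioned above affects mutateBatch in 0.2.0.

    import java.util.ArrayList;
    import java.util.List;

    import org.apache.blur.thrift.BlurClient;
    import org.apache.blur.thrift.generated.Blur.Iface;
    import org.apache.blur.thrift.generated.Column;
    import org.apache.blur.thrift.generated.Record;
    import org.apache.blur.thrift.generated.RecordMutation;
    import org.apache.blur.thrift.generated.RecordMutationType;
    import org.apache.blur.thrift.generated.RowMutation;
    import org.apache.blur.thrift.generated.RowMutationType;

    public class SyslogBatcher {
      private static final int FLUSH_SIZE = 10000; // flush every 10000 entries (or on a timer)
      private final Iface client = BlurClient.getClient("127.0.0.1:40010");
      private final List<RowMutation> buffer = new ArrayList<RowMutation>();

      public synchronized void add(String rowId, String recordId, String rawLine) throws Exception {
        Record record = new Record();
        record.setRecordId(recordId);
        record.setFamily("log");                     // hypothetical column family
        record.addToColumns(new Column("raw", rawLine));

        RecordMutation recordMutation = new RecordMutation();
        recordMutation.setRecordMutationType(RecordMutationType.REPLACE_ENTIRE_RECORD);
        recordMutation.setRecord(record);

        RowMutation rowMutation = new RowMutation();
        rowMutation.setTable("test");
        rowMutation.setRowId(rowId);
        rowMutation.setRowMutationType(RowMutationType.REPLACE_ROW);
        rowMutation.addToRecordMutations(recordMutation);

        buffer.add(rowMutation);
        if (buffer.size() >= FLUSH_SIZE) {
          flush();
        }
      }

      public synchronized void flush() throws Exception {
        if (!buffer.isEmpty()) {
          client.mutateBatch(new ArrayList<RowMutation>(buffer));
          buffer.clear();
        }
      }
    }
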
>>>
>>>
>>>
>>>
>>>> Thanks,
>>>> Colton McInroy
>>>>
>>>>   * Director of Security Engineering
>>>>
>>>>
>>>> Phone
>>>> (Toll Free)
>>>> _US_    (888)-818-1344 Press 2
>>>> _UK_    0-800-635-0551 Press 2
>>>>
>>>> My Extension    101
>>>> 24/7 Support    support@dosarrest.com <mailto:support@dosarrest.com>
>>>> Email   colton@dosarrest.com <mailto:colton@dosarrest.com>
>>>> Website         http://www.dosarrest.com
>>>>
>>>> On 9/28/2013 6:06 AM, Aaron McCurry wrote:
>>>>
>>>>> So there is a method that is not documented that the Bulk 
>>>>> MapReduce uses
>>>>> that could fill the gaps between MR and NRT updates. Let's say
>>>>> that there is a table with 100 shards.  In a given table on hdfs
>>>>> the path would look like
>>>>> "/blur/tables/table12345/shard-000010/<the main index goes here>".
>>>>>
>>>>> Now the way MapReduce works is that it creates a sub directory in 
>>>>> the main
>>>>> index:
>>>>> "/blur/tables/table12345/shard-000010/some_index_name.tmp/<new data here>"
>>>>>
>>>>> Once the index is ready to be committed the writer is closed for
>>>>> the new index and the subdir is renamed to:
>>>>> "/blur/tables/table12345/shard-000010/some_index_name.commit/<new data here>"
>>>>>
>>>>> The act of having an index in the shard directory that ends with 
>>>>> ".commit"
>>>>> makes the shard pick up the index and do an index merge through the
>>>>> writer.addDirectory(..) call.  It checks for this every 10 seconds.
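
To make that hand-off concrete, here is a minimal sketch of the final rename
step described above, using the plain Hadoop FileSystem API. The table, shard,
and index names are the illustrative ones from the quoted explanation, not
real paths, and the new Lucene index is assumed to have already been written
and closed under the ".tmp" directory; the only Blur-specific behavior relied
on is the ".commit" polling described above.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CommitNewIndex {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // the bulk job writes and closes the new index under the ".tmp" directory first
        Path tmp = new Path("/blur/tables/table12345/shard-000010/some_index_name.tmp");
        Path commit = new Path("/blur/tables/table12345/shard-000010/some_index_name.commit");
        // renaming to ".commit" is what the shard server polls for (roughly every 10 seconds)
        if (!fs.rename(tmp, commit)) {
          throw new IllegalStateException("Rename failed: " + tmp + " -> " + commit);
        }
      }
    }
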
>>>>>
>>>>> While this is not really easy to integrate yet, I think that if I
>>>>> build an Apache Flume integration it will likely make use of this
>>>>> feature, or at least have an option to use it.
>>>>>
>>>>>
>>>>> As far as searching multiple tables, this has been asked for 
>>>>> before so I
>>>>> think that is something that we should add.  It actually shouldn't 
>>>>> be that
>>>>> difficult.
>>>>>
>>>>> Aaron
>>>>>
>>>>>
>>>>> On Sat, Sep 28, 2013 at 5:59 AM, Colton McInroy <colton@dosarrest.com
>>>>>> wrote:
>>>>>   I actually didn't have any kind of loading interval, I loaded 
>>>>> the new
>>>>>> event log entries into the index in real time. My code runs as a 
>>>>>> daemon
>>>>>> accepting syslog entries which indexes them live as they come in 
>>>>>> with a
>>>>>> flush call every 10000 entries or 1 minute, whichever comes first.
>>>>>>
>>>>>> And I don't want to have any limitation on lookback time. I want 
>>>>>> to be
>>>>>> able to look at the history of any site going back years if need be.
>>>>>>
>>>>>> Sucks there is no multi table reader, that limits what I can do 
>>>>>> by a bit.
>>>>>>
>>>>>>
>>>>>> Thanks,
>>>>>> Colton McInroy
>>>>>>
>>>>>>    * Director of Security Engineering
>>>>>>
>>>>>>
>>>>>> Phone
>>>>>> (Toll Free)
>>>>>> _US_    (888)-818-1344 Press 2
>>>>>> _UK_    0-800-635-0551 Press 2
>>>>>>
>>>>>> My Extension    101
>>>>>> 24/7 Support    support@dosarrest.com <mailto:support@dosarrest.com>
>>>>>> Email   colton@dosarrest.com <mailto:colton@dosarrest.com>
>>>>>> Website         http://www.dosarrest.com
>>>>>>
>>>>>> On 9/28/2013 12:48 AM, Garrett Barton wrote:
>>>>>>
>>>>>>   Mapreduce is a bulk entrypoint to loading blur. Much in the 
>>>>>> same way I
>>>>>>> bet
>>>>>>> you have some fancy code to grab up a bunch of log files, over 
>>>>>>> some kind
>>>>>>> of
>>>>>>> interval and load them into your index,  MR replaces that 
>>>>>>> process with
>>>>>>> an
>>>>>>> auto scaling (via hardware additions only) high bandwidth load 
>>>>>>> that you
>>>>>>> could fire off at any interval you want. The MR bulk load writes 
>>>>>>> a new
>>>>>>> index and merges that into the index already running when it 
>>>>>>> completes.
>>>>>>> The
>>>>>>> catch is that it is NOT as efficient as your implementation is 
>>>>>>> in terms
>>>>>>> of
>>>>>>> latency into the index. So where your current impl will load a small
>>>>>>> site's couple of MB really fast, MR might take 30 seconds to a minute
>>>>>>> to bring that online. Having said that, blur has a realtime api for
>>>>>>> inserting that has low latency, but you trade in your high bandwidth
>>>>>>> for it. Might be something you could detect on your front door and
>>>>>>> decide which way the data comes in.
>>>>>>>
>>>>>>> When I was in your shoes, highly optimizing your indexes based 
>>>>>>> on size
>>>>>>> and
>>>>>>> load for a single badass machine and doing manual partitioning 
>>>>>>> tricks to
>>>>>>> keep things snappy was key.  The neat thing about blur is some 
>>>>>>> of that
>>>>>>> you
>>>>>>> don't do anymore.  I would call it an early optimization at this 
>>>>>>> point
>>>>>>> to
>>>>>>> do anything shorter than say a day or whatever your max lookback 
>>>>>>> time
>>>>>>> is.
>>>>>>> (Oh btw you can't search across tables in blur, forgot to 
>>>>>>> mention that.)
>>>>>>>
>>>>>>> Instead of the lots of tables route I suggest trying one large 
>>>>>>> one and
>>>>>>> seeing where that goes. Utilize blur's cache initializing 
>>>>>>> capabilities
>>>>>>> and
>>>>>>> load in your site and time columns to keep your logical 
>>>>>>> partitioning
>>>>>>> columns in the block cache and thus very fast. I bet you will 
>>>>>>> see good
>>>>>>> performance with this approach. Certainly better than es. Not as 
>>>>>>> fast as
>>>>>>> raw lucene, but there is always a price to pay for distributing 
>>>>>>> and so
>>>>>>> far
>>>>>>> blur has the lowest overhead I've seen.
>>>>>>>
>>>>>>> Hope that helps some.
>>>>>>> On Sep 27, 2013 11:31 PM, "Colton McInroy" <colton@dosarrest.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>    Coments inline...
>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Colton McInroy
>>>>>>>>
>>>>>>>>     * Director of Security Engineering
>>>>>>>>
>>>>>>>>
>>>>>>>> Phone
>>>>>>>> (Toll Free)
>>>>>>>> _US_    (888)-818-1344 Press 2
>>>>>>>> _UK_    0-800-635-0551 Press 2
>>>>>>>>
>>>>>>>> My Extension    101
>>>>>>>> 24/7 Support    support@dosarrest.com 
>>>>>>>> <mailto:support@dosarrest.com>
>>>>>>>> Email   colton@dosarrest.com <mailto:colton@dosarrest.com>
>>>>>>>> Website         http://www.dosarrest.com
>>>>>>>>
>>>>>>>> On 9/27/2013 5:02 AM, Aaron McCurry wrote:
>>>>>>>>
>>>>>>>>    I have commented inline below:
>>>>>>>>
>>>>>>>>> On Thu, Sep 26, 2013 at 11:00 AM, Colton McInroy <
>>>>>>>>> colton@dosarrest.com
>>>>>>>>>
>>>>>>>>>   wrote:
>>>>>>>>>>         I do have a few question if you don't mind... I am still
>>>>>>>>>> trying
>>>>>>>>>> to
>>>>>>>>>> wrap my head around how this works. In my current 
>>>>>>>>>> implementation for
>>>>>>>>>> a
>>>>>>>>>> logging system I create new indexes for each hour because I 
>>>>>>>>>> have a
>>>>>>>>>> massive
>>>>>>>>>> amount of data coming in. I take in live log data from syslog 
>>>>>>>>>> and
>>>>>>>>>> parse/store it in hourly lucene indexes along with a facet 
>>>>>>>>>> index. I
>>>>>>>>>> want
>>>>>>>>>> to
>>>>>>>>>> turn this into a distributed redundant system and blur 
>>>>>>>>>> appears to be
>>>>>>>>>> the
>>>>>>>>>> way to go. I tried elasticsearch but it is just too slow 
>>>>>>>>>> compared to
>>>>>>>>>> my
>>>>>>>>>> current implementation. Given I take in gigs of raw log data an
>>>>>>>>>> hour, I
>>>>>>>>>> need something that is robust and able to keep up with in 
>>>>>>>>>> flow of
>>>>>>>>>> data.
>>>>>>>>>>
>>>>>>>>>>     Due to the current implementation of building up an index 
>>>>>>>>>> for an
>>>>>>>>>> hour
>>>>>>>>>>
>>>>>>>>>>   and
>>>>>>>>> then making it available.  I would use MapReduce for this:
>>>>>>>>>
>>>>>>>>> http://incubator.apache.org/blur/docs/0.2.0/using-blur.html#map-reduce
>>>>>>>>>
>>>>>>>>> That way all the shards in a table get a little more data each 
>>>>>>>>> hour
>>>>>>>>> and
>>>>>>>>> it's very low impact on the running cluster.
>>>>>>>>>
>>>>>>>>>    Not sure I understand this. I would like data to be 
>>>>>>>>> accessible live
>>>>>>>>> as
>>>>>>>>>
>>>>>>>> it
>>>>>>>> comes in, not wait an hour before I can query against it.
>>>>>>>> I am also not sure where map-reduce comes in here. I thought 
>>>>>>>> mapreduce
>>>>>>>> is
>>>>>>>> something that blur used internally.
>>>>>>>>
>>>>>>>>          When taking in lots of data constantly, how is it 
>>>>>>>> recommended
>>>>>>>>
>>>>>>>>> that it
>>>>>>>>>
>>>>>>>>>   be stored? I mentioned above that I create a new index for 
>>>>>>>>> each hour
>>>>>>>>>> to
>>>>>>>>>> keep data separated and quicker to search. If I want to look 
>>>>>>>>>> up a
>>>>>>>>>> specific
>>>>>>>>>> time frame, I only have to load the directories timestamped 
>>>>>>>>>> with the
>>>>>>>>>> hours
>>>>>>>>>> I want to look at. So instead of having to look at a huge 
>>>>>>>>>> index of
>>>>>>>>>> like a
>>>>>>>>>> years worth of data, i'm looking at a much smaller data set 
>>>>>>>>>> which
>>>>>>>>>> results
>>>>>>>>>> in faster query response times. Should a new table be created 
>>>>>>>>>> for
>>>>>>>>>> each
>>>>>>>>>> hour
>>>>>>>>>> of data? When I typed in the create command into the shell, 
>>>>>>>>>> it takes
>>>>>>>>>> about
>>>>>>>>>> 6 seconds to create a table. If I have to create a table for 
>>>>>>>>>> each
>>>>>>>>>> application each hour, this could create a lot of lag. 
>>>>>>>>>> Perhaps this
>>>>>>>>>> is
>>>>>>>>>> just
>>>>>>>>>> in my test environment though. Any thoughts on this? I also 
>>>>>>>>>> didn't
>>>>>>>>>> see
>>>>>>>>>> any
>>>>>>>>>> examples of how to create tables via code.
>>>>>>>>>>
>>>>>>>>>>     First off Blur is designed to store very large amounts of 
>>>>>>>>>> data.
>>>>>>>>>>   And
>>>>>>>>>>
>>>>>>>>>>   while
>>>>>>>>> it can do NRT updates like Solr and ES, its main focus is on bulk
>>>>>>>>> ingestion
>>>>>>>>> through MapReduce.  Given that, the real limiting factor is 
>>>>>>>>> how much
>>>>>>>>> hardware you have.  Let's play out a scenario.  If you are 
>>>>>>>>> adding 10GB
>>>>>>>>> of
>>>>>>>>> data an hour, I would think that a good rough ballpark guess is that you
>>>>>>>>> will need 10-15% of inbound data size as memory to make the 
>>>>>>>>> search
>>>>>>>>> perform
>>>>>>>>> well.  However as the index sizes increase this % may decrease 
>>>>>>>>> over
>>>>>>>>> time.
>>>>>>>>>      Blur has an off-heap lru cache to make accessing hdfs 
>>>>>>>>> faster,
>>>>>>>>> however if
>>>>>>>>> you don't have enough memory the searches (and the cluster for 
>>>>>>>>> that
>>>>>>>>> matter)
>>>>>>>>> won't fail, they will simply become slower.
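
A quick worked example of that ballpark, assuming the 10-15% guideline is
applied to the total index size kept online and that index size is roughly
the size of the raw inbound data:

    10 GB/hour x 24 hours        ~ 240 GB of data per day
    10-15% of 240 GB             ~ 24-36 GB of memory to serve one day well
    30 days online (~7.2 TB)     ~ 0.7-1.1 TB of memory across the cluster
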
>>>>>>>>>
>>>>>>>>> So it's really a question of how much hardware you have.  I would
>>>>>>>>> keep filling a table as long as it performs well given the cluster
>>>>>>>>> you have.  You might have to break it into pieces, but I think that
>>>>>>>>> hourly is too small.  Daily, Weekly, Monthly, etc.
>>>>>>>>>
>>>>>>>> In my current system (which uses just lucene) that I designed, we
>>>>>>>> take in mainly web logs and separate them into indexes. Each web
>>>>>>>> server gets its own
>>>>>>>> index for each hour. Then when I need to query the data, I use 
>>>>>>>> a multi
>>>>>>>> index reader to access the timeframe I need allowing me to keep 
>>>>>>>> the
>>>>>>>> size
>>>>>>>> of
>>>>>>>> index down to roughly what I need to search. If data was stored 
>>>>>>>> over a
>>>>>>>> month, and I want to query data that happened in just a single 
>>>>>>>> hour,
>>>>>>>> or a
>>>>>>>> few minutes, it makes sense to me to keep things optimized. 
>>>>>>>> Also, if I
>>>>>>>> wanted to compare one web server to another, I would just use 
>>>>>>>> the multi
>>>>>>>> index reader to load both indexes. This is all handled by a single
>>>>>>>> server
>>>>>>>> though, so it is limited by the hardware of the single server. If
>>>>>>>> something
>>>>>>>> fails, it's a big problem. When trying to query large data 
>>>>>>>> sets, it's
>>>>>>>> again, only a single server, so it takes longer than I would 
>>>>>>>> like if
>>>>>>>> the
>>>>>>>> index it's reading is large.
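
For context, the per-server, per-hour layout described above corresponds
roughly to the following sketch in plain Lucene 4.3 (the version on the Blur
0.2.0 classpath earlier in this thread). The directory layout and names are
hypothetical examples, not paths from the thread.

    import java.io.File;

    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.index.MultiReader;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.store.FSDirectory;

    public class HourlyIndexSearch {
      // e.g. open("/logs/web01", "2013_09_28_22", "2013_09_28_23")
      public static IndexSearcher open(String serverDir, String... hours) throws Exception {
        IndexReader[] readers = new IndexReader[hours.length];
        for (int i = 0; i < hours.length; i++) {
          // one Lucene index directory per hour for this web server
          readers[i] = DirectoryReader.open(FSDirectory.open(new File(serverDir, hours[i])));
        }
        // MultiReader presents only the selected hourly indexes as one logical index
        return new IndexSearcher(new MultiReader(readers));
      }
    }
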
>>>>>>>> I am not entirely sure how to go about doing this in blur. I'm
>>>>>>>> imagining
>>>>>>>> that each "table" is an index. So I would have a table format 
>>>>>>>> like...
>>>>>>>> YYYY_MM_DD_HH_IP. If I do this though, is there a way to query 
>>>>>>>> multiple
>>>>>>>> tables... like a multi table reader or something? Or am I 
>>>>>>>> limited to
>>>>>>>> looking at a single table at a time?
>>>>>>>> For some web servers that have little traffic, an hour of data 
>>>>>>>> may only
>>>>>>>> have a few mb of data in it while other may have like a 5-10gb 
>>>>>>>> index.
>>>>>>>> If
>>>>>>>> I
>>>>>>>> combined the index from a large site with the small sites, this 
>>>>>>>> should
>>>>>>>> make
>>>>>>>> everything slower for the queries against the small sites index
>>>>>>>> correct?
>>>>>>>> Or
>>>>>>>> would it all be the same due to how blur separates indexes into 
>>>>>>>> shards?
>>>>>>>> Would it perhaps be better to have an index for each web 
>>>>>>>> server, and
>>>>>>>> configure small sites to have less shards while larger sites 
>>>>>>>> have more
>>>>>>>> shards?
>>>>>>>> We just got a new really large powerful server to be our log 
>>>>>>>> server,
>>>>>>>> but
>>>>>>>> as I realize that it's a single point of failure, I want to 
>>>>>>>> change our
>>>>>>>> configuration to use a clustered/distributed configuration. So 
>>>>>>>> we would
>>>>>>>> start with probably a minimal configuration, and start adding more
>>>>>>>> shard
>>>>>>>> servers when ever we can afford it or need it.
>>>>>>>>
>>>>>>>>          Do shards contain the index data while the location 
>>>>>>>> (hdfs)
>>>>>>>>
>>>>>>>>> contains
>>>>>>>>>
>>>>>>>>>   the documents (what lucene referred to them as)? I read that 
>>>>>>>>> the
>>>>>>>>>> shard
>>>>>>>>>> contains the index while the fs contains the data... I just 
>>>>>>>>>> wasn't
>>>>>>>>>> quite
>>>>>>>>>> sure what the data was, because when I work with lucene, the 
>>>>>>>>>> index
>>>>>>>>>> directory contains the data as a document.
>>>>>>>>>>
>>>>>>>>>>    The shard is stored in HDFS, and it is a Lucene index.  We 
>>>>>>>>>> store
>>>>>>>>>> the
>>>>>>>>>>
>>>>>>>>> data
>>>>>>>>> inside the Lucene index, so it's basically Lucene all the way 
>>>>>>>>> down to
>>>>>>>>> HDFS.
>>>>>>>>>
>>>>>>>> Ok, so basically a controller is a service which connects to all
>>>>>>>> (or some?) shards and sends them a distributed query, which tells
>>>>>>>> each shard to run a query against a certain data set; each shard
>>>>>>>> then gets that data set either from memory or from the hadoop
>>>>>>>> cluster, processes it, and returns the result to the controller,
>>>>>>>> which condenses the results from all the queried shards into a
>>>>>>>> final result, right?
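
Purely as a conceptual illustration of that scatter/gather flow (this is not
Blur's actual controller code), the pattern looks roughly like this:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class ScatterGather {
      public static List<String> query(List<String> shardAddresses, final String query)
          throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(shardAddresses.size());
        List<Future<String>> futures = new ArrayList<Future<String>>();
        for (final String shard : shardAddresses) {
          // scatter: each shard runs the query against its own slice of the index
          futures.add(pool.submit(new Callable<String>() {
            public String call() {
              return "partial result from " + shard + " for [" + query + "]";
            }
          }));
        }
        // gather: the controller condenses the partial results into one answer
        List<String> merged = new ArrayList<String>();
        for (Future<String> future : futures) {
          merged.add(future.get());
        }
        pool.shutdown();
        return merged;
      }
    }
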
>>>>>>>>
>>>>>>>>    Hope this helps.  Let us know if you have more questions.
>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>> Aaron
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>    Thanks,
>>>>>>>>>
>>>>>>>>>> Colton McInroy
>>>>>>>>>>
>>>>>>>>>>      * Director of Security Engineering
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Phone
>>>>>>>>>> (Toll Free)
>>>>>>>>>> _US_    (888)-818-1344 Press 2
>>>>>>>>>> _UK_    0-800-635-0551 Press 2
>>>>>>>>>>
>>>>>>>>>> My Extension    101
>>>>>>>>>> 24/7 Support    support@dosarrest.com 
>>>>>>>>>> <mailto:support@dosarrest.com>
>>>>>>>>>> Email   colton@dosarrest.com <mailto:colton@dosarrest.com>
>>>>>>>>>> Website         http://www.dosarrest.com
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>
>>
>
>

