eagle-user mailing list archives

From MyGmail <amriv...@gmail.com>
Subject Re: Instructions on installing Eagle 0.5 on HDP 2.5
Date Sun, 19 Mar 2017 18:44:13 GMT
Ok, thank you!

Best regards,
  Andrey

> On 19 Mar 2017, at 20:04, Jayesh Senjaliya <jaysen@apache.org> wrote:
> 
> Ok. Let me check on this tonight (after 10 hours from now)
> 
> On Sun, Mar 19, 2017 at 2:13 AM <amrivkin@gmail.com> wrote:
> Hi Jayesh,
> 
>  
> 
> I’ve cloned eagle from master again and I still see backtype.storm:
> 
>  
> 
>  
> 
> [root@demo6 opt]# git clone https://github.com/apache/eagle.git
> 
> Cloning into 'eagle'...
> 
> remote: Counting objects: 40879, done.
> 
> remote: Compressing objects: 100% (122/122), done.
> 
> remote: Total 40879 (delta 21), reused 1 (delta 1), pack-reused 40724
> 
> Receiving objects: 100% (40879/40879), 25.42 MiB | 6.06 MiB/s, done.
> 
> Resolving deltas: 100% (15983/15983), done.
> 
> [root@demo6 opt]# cd eagle/
> 
> [root@demo6 eagle]# mvn clean package -DskipTests
> 
> [INFO] Scanning for projects...
> 
> ...............
> 
> [INFO] BUILD SUCCESS
> 
> [INFO] ------------------------------------------------------------------------
> 
> [INFO] Total time: 16:22 min
> 
> [INFO] Finished at: 2017-03-19T11:58:31+03:00
> 
> [INFO] Final Memory: 182M/1790M
> 
> [INFO] ------------------------------------------------------------------------
> 
> [root@demo6 eagle]#
> 
> [root@demo6 eagle]# cp eagle-assembly/target/eagle-0.5.0-SNAPSHOT-bin.tar.gz /usr/hdp/current/
> 
> [root@demo6 eagle]# cd /usr/hdp/current/
> 
> [root@demo6 current]# tar -zxvf eagle-0.5.0-SNAPSHOT-bin.tar.gz
> 
> [root@demo6 current]# cd eagle-0.5.0-SNAPSHOT
> 
> [root@demo6 eagle-0.5.0-SNAPSHOT]# ls
> 
> bin  conf  doc  lib
> 
> [root@demo6 eagle-0.5.0-SNAPSHOT]# vi conf/eagle.conf
> 
> [root@demo6 eagle-0.5.0-SNAPSHOT]# ps -ef | grep eagle
> 
> root     13251 10881  0 12:04 pts/0    00:00:00 grep --color=auto eagle
> 
> [root@demo6 eagle-0.5.0-SNAPSHOT]# bin/eagle-env.sh
> 
> [root@demo6 eagle-0.5.0-SNAPSHOT]# bin/eagle-server.sh start
> 
> Starting eagle service ...
> 
> java -Dconfig.resource=eagle.conf -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9999
-Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.local.only=false -Djava.rmi.server.hostname=demo6.pro.ru -Dcom.sun.management.jmxremote.rmi.port=9999
-server -Xms1024m -Xmx1024m -XX:MaxPermSize=1024m -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
-Xloggc:bin/../log/eagle-server-gc.log -cp bin/../conf:bin/../lib/dropwizard-assets-0.7.1.jar:bin/../lib/dropwizard-auth-0.7.1.jar:bin/../lib/dropwizard-configuration-0.7.1.jar:bin/../lib/dropwizard-core-0.7.1.jar:bin/../lib/dropwizard-guice-0.7.0.2.jar:bin/../lib/dropwizard-jackson-0.7.1.jar:bin/../lib/dropwizard-jersey-0.7.1.jar:bin/../lib/dropwizard-jetty-0.7.1.jar:bin/../lib/dropwizard-lifecycle-0.7.1.jar:bin/../lib/dropwizard-logging-0.7.1.jar:bin/../lib/dropwizard-metrics-0.7.1.jar:bin/../lib/dropwizard-servlets-0.7.1.jar:bin/../lib/dropwizard-util-0.7.1.jar:bin/../lib/dropwizard-validation-0.7.1.jar:bin/../lib/eagle-storage-hbase-0.5.0-SNAPSHOT.jar:bin/../lib/eagle-storage-jdbc-0.5.0-SNAPSHOT.jar:bin/../lib/eagle-topology-0.5.0-SNAPSHOT-assembly.jar:bin/../lib/jersey-client-1.19.1.jar:bin/../lib/jersey-core-1.19.1.jar:bin/../lib/jersey-guice-1.18.1.jar:bin/../lib/jersey-json-1.19.1.jar:bin/../lib/jersey-multipart-1.19.1.jar:bin/../lib/jersey-server-1.19.1.jar:bin/../lib/jersey-servlet-1.19.1.jar:bin/../lib/scripts:bin/../lib/slf4j-api-1.7.5.jar:bin/../lib/storm-core-0.9.3.jar
org.apache.eagle.server.ServerMain server bin/../conf/server.yml
> 
> [root@demo6 eagle-0.5.0-SNAPSHOT]# bin/eagle-server.sh status
> 
> Checking eagle service status ...
> 
> Eagle service is running with PID 13282
> 
> [root@demo6 eagle-0.5.0-SNAPSHOT]# bin/eagle-server.sh stop
> 
> Stopping eagle service ...
> 
> Stopping is completed
> 
> [root@demo6 eagle-0.5.0-SNAPSHOT]# ps -ef | grep eagle
> 
> root     13282     1 31 12:04 pts/0    00:02:01 java -Dconfig.resource=eagle.conf -Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.local.only=false -Djava.rmi.server.hostname=demo6.pro.ru
-Dcom.sun.management.jmxremote.rmi.port=9999 -server -Xms1024m -Xmx1024m -XX:MaxPermSize=1024m
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:bin/../log/eagle-server-gc.log
-cp bin/../conf:bin/../lib/dropwizard-assets-0.7.1.jar:bin/../lib/dropwizard-auth-0.7.1.jar:bin/../lib/dropwizard-configuration-0.7.1.jar:bin/../lib/dropwizard-core-0.7.1.jar:bin/../lib/dropwizard-guice-0.7.0.2.jar:bin/../lib/dropwizard-jackson-0.7.1.jar:bin/../lib/dropwizard-jersey-0.7.1.jar:bin/../lib/dropwizard-jetty-0.7.1.jar:bin/../lib/dropwizard-lifecycle-0.7.1.jar:bin/../lib/dropwizard-logging-0.7.1.jar:bin/../lib/dropwizard-metrics-0.7.1.jar:bin/../lib/dropwizard-servlets-0.7.1.jar:bin/../lib/dropwizard-util-0.7.1.jar:bin/../lib/dropwizard-validation-0.7.1.jar:bin/../lib/eagle-storage-hbase-0.5.0-SNAPSHOT.jar:bin/../lib/eagle-storage-jdbc-0.5.0-SNAPSHOT.jar:bin/../lib/eagle-topology-0.5.0-SNAPSHOT-assembly.jar:bin/../lib/jersey-client-1.19.1.jar:bin/../lib/jersey-core-1.19.1.jar:bin/../lib/jersey-guice-1.18.1.jar:bin/../lib/jersey-json-1.19.1.jar:bin/../lib/jersey-multipart-1.19.1.jar:bin/../lib/jersey-server-1.19.1.jar:bin/../lib/jersey-servlet-1.19.1.jar:bin/../lib/scripts:bin/../lib/slf4j-api-1.7.5.jar:bin/../lib/storm-core-0.9.3.jar
org.apache.eagle.server.ServerMain server bin/../conf/server.yml
> 
> root     16151 10881  0 12:11 pts/0    00:00:00 grep --color=auto eagle
> 
>  
> 
>  
> 
> Cluster mode - nothing.
> 
> Local mode - ERROR [2017-03-19 09:07:51,510] backtype.storm.util: Async loop died!
> 
> ! java.lang.NullPointerException: null
> 
> ! at org.apache.kafka.common.utils.Utils.formatAddress(Utils.java:312) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
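> 
> Earlier runs showed the spout reading Partition{host=null:-1, partition=0} from ZooKeeper, so one sanity check is whether the broker is actually registered in the ZooKeeper the spout connects to (a sketch; demo4:2181 stands in for my Kafka ZooKeeper quorum):
> 
>     /usr/hdp/current/kafka-broker/bin/zookeeper-shell.sh demo4:2181 ls /brokers/ids
>     /usr/hdp/current/kafka-broker/bin/zookeeper-shell.sh demo4:2181 get /brokers/ids/0
> 
> If the spout uses zkConnection localhost:2181 (as in the log below) while the brokers register elsewhere, the broker host can come back null, which would match this NPE in formatAddress.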
> 
>  
> 
> Maybe I’m doing something wrong?
> 
>  
> 
> Regards,
> 
> Andrey
> 
>  
> 
> From: Jayesh Senjaliya
> Sent: March 19, 2017 1:16
> To: user@eagle.apache.org
> Cc: Jayesh Senjaliya
> 
> 
> Subject: Re: Instructions on installing Eagle 0.5 on HDP 2.5
> 
>  
> 
> I still see backtype.storm packages in your stack trace.
> 
>  
> 
> You need to clean and package (or install) the whole of Eagle; you'll also have to stop and delete the application, then re-deploy it.
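> 
> Something like this, end to end (a sketch; adjust paths to where you cloned and unpacked eagle):
> 
>     cd /opt/eagle && git pull origin master
>     mvn clean package -DskipTests
>     /usr/hdp/current/eagle-0.5.0-SNAPSHOT/bin/eagle-server.sh stop
>     cp eagle-assembly/target/eagle-0.5.0-SNAPSHOT-bin.tar.gz /usr/hdp/current/
>     cd /usr/hdp/current/ && tar -zxvf eagle-0.5.0-SNAPSHOT-bin.tar.gz
>     eagle-0.5.0-SNAPSHOT/bin/eagle-server.sh start
> 
> Then stop and delete the application in the UI and re-deploy it so the new jars get picked up.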
> 
>  
> 
> - Jayesh
> 
>  
> 
> On Sat, Mar 18, 2017 at 2:57 PM, <amrivkin@gmail.com> wrote:
> 
> Hi Jayesh,
> 
>  
> 
> I’ve pulled and rebuilt Eagle 0.5.
> 
> The config problem is gone, but I still can't start any application.
> 
>  
> 
> The Kafka Topic field for the Auditlog Event Sink is pre-filled with hdfs_audit_event_${site} – the last part is not resolving to my site_id.
> I’ve created two Kafka topics, hdfs_audit_log_{SITE_ID} and hdfs_audit_log_enriched_{SITE_ID}, and not hdfs_audit_event_{site_id} – is this ok?
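> 
> For reference, this is roughly how I created and listed them (a sketch; demo4:2181 stands in for my ZooKeeper quorum, and SITE_ID is literal here):
> 
>     /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --zookeeper demo4:2181 --create \
>         --topic hdfs_audit_log_SITE_ID --partitions 1 --replication-factor 1
>     /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --zookeeper demo4:2181 --list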
> Launching HDFS Audit Log Monitor in cluster mode gives no success and no errors. Here is some of the log:
>  
> 
> INFO  [2017-03-18 21:29:17,042] org.apache.eagle.app.environment.impl.StormExecutionRuntime: Starting HDFS_AUDIT_LOG_MONITOR_APP_DEMO(org.apache.eagle.security.auditlog.HdfsAuditLogApplication), mode: CLUSTER
> 
> INFO  [2017-03-18 21:29:17,050] org.apache.eagle.app.environment.impl.StormExecutionRuntime: Overriding application.storm.nimbusHost = demo5
> 
> INFO  [2017-03-18 21:29:17,050] org.apache.eagle.app.environment.impl.StormExecutionRuntime: Overriding application.storm.nimbusThriftPort = 6627
> 
> INFO  [2017-03-18 21:29:17,050] org.apache.eagle.app.environment.impl.StormExecutionRuntime: Submitting as cluster mode ...
> 
> INFO  [2017-03-18 21:29:21,415] org.apache.eagle.storage.jdbc.entity.impl.JdbcEntityReaderImpl: Read 0 records in 3 ms (sql: SELECT alert_detail_alert_detail.* FROM alert_detail_alert_detail WHERE (alert_detail_alert_detail.timestamp>=? AND alert_detail_alert_detail.timestamp<?) LIMIT 10000 Replacements: [1489829363000,1489872563000])
> 
> INFO  [2017-03-18 21:29:28,231] org.apache.eagle.app.service.impl.ApplicationStatusUpdateServiceImpl: Updating application status
> 
>  
> 
> Nothing in the Storm UI or the logs.
> Launched in Local mode:
>  
> 
> INFO  [2017-03-18 21:31:35,906] backtype.storm.daemon.executor: Prepared bolt kafkaSink:(7)
> 
> INFO  [2017-03-18 21:31:35,912] storm.kafka.PartitionManager: Read partition information from: /consumers/hdfs_audit_log_demo/eagleConsumer/partition_0  --> null
> 
> INFO  [2017-03-18 21:31:35,917] backtype.storm.daemon.executor: Prepared bolt kafkaSink:(6)
> 
> ERROR [2017-03-18 21:31:35,983] backtype.storm.util: Async loop died!
> 
> ! java.lang.NullPointerException: null
> 
> ! at org.apache.kafka.common.utils.Utils.formatAddress(Utils.java:312) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at kafka.consumer.SimpleConsumer$$anonfun$disconnect$1.apply(SimpleConsumer.scala:49)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at kafka.consumer.SimpleConsumer$$anonfun$disconnect$1.apply(SimpleConsumer.scala:49)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at kafka.utils.Logging$class.debug(Logging.scala:52) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at kafka.consumer.SimpleConsumer.debug(SimpleConsumer.scala:30) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at kafka.consumer.SimpleConsumer.disconnect(SimpleConsumer.scala:49) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:82) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:127) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at kafka.javaapi.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:79)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:74) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:64) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at storm.kafka.PartitionManager.<init>(PartitionManager.java:89) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:98) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! ... 6 common frames omitted
> 
> ! Causing: java.lang.RuntimeException: java.lang.NullPointerException
> 
> ! at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:103) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at storm.kafka.ZkCoordinator.getMyManagedPartitions(ZkCoordinator.java:69) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at storm.kafka.KafkaSpout.nextTuple(KafkaSpout.java:135) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at backtype.storm.daemon.executor$fn__3373$fn__3388$fn__3417.invoke(executor.clj:565)
~[storm-core-0.9.3.jar:0.9.3]
> 
> ! at backtype.storm.util$async_loop$fn__464.invoke(util.clj:463) ~[storm-core-0.9.3.jar:0.9.3]
> 
> ! at clojure.lang.AFn.run(AFn.java:24) [eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
> 
> ERROR [2017-03-18 21:31:35,984] backtype.storm.daemon.executor:
> 
> ! java.lang.NullPointerException: null
> 
> ! at org.apache.kafka.common.utils.Utils.formatAddress(Utils.java:312) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at kafka.consumer.SimpleConsumer$$anonfun$disconnect$1.apply(SimpleConsumer.scala:49)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at kafka.consumer.SimpleConsumer$$anonfun$disconnect$1.apply(SimpleConsumer.scala:49)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at kafka.utils.Logging$class.debug(Logging.scala:52) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at kafka.consumer.SimpleConsumer.debug(SimpleConsumer.scala:30) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at kafka.consumer.SimpleConsumer.disconnect(SimpleConsumer.scala:49) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:82) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:127) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at kafka.javaapi.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:79)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:74) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:64) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at storm.kafka.PartitionManager.<init>(PartitionManager.java:89) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:98) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! ... 6 common frames omitted
> 
> ! Causing: java.lang.RuntimeException: java.lang.NullPointerException
> 
> ! at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:103) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
>  
> 
>  
> 
> Also, I first launched with HBase as deep storage and got this:
> 
>  
> 
>  
> 
> INFO  [2017-03-18 21:16:31,744] org.apache.zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x35ad9214d4700c0, negotiated timeout = 40000
> 
> WARN  [2017-03-18 21:16:31,942] org.apache.hadoop.hbase.util.DynamicClassLoader: Failed to identify the fs of dir /tmp/hbase-root/hbase/lib, ignored
> 
> ! java.io.IOException: No FileSystem for scheme: file
> 
> ! at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2607) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2614) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2653) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2635) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:169) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:354) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.hbase.util.DynamicClassLoader.<init>(DynamicClassLoader.java:104)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.hbase.protobuf.ProtobufUtil.<clinit>(ProtobufUtil.java:223)
[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.hbase.ClusterId.parseFrom(ClusterId.java:64) [eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:75)
[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:106)
[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:890)
[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:667)
[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) [na:1.8.0_101]
> 
> ! at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
[na:1.8.0_101]
> 
> ! at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
[na:1.8.0_101]
> 
> ! at java.lang.reflect.Constructor.newInstance(Constructor.java:423) [na:1.8.0_101]
> 
> ! at org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:426)
[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:405)
[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.hbase.client.ConnectionManager.getConnectionInternal(ConnectionManager.java:283)
[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:191) [eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.eagle.storage.hbase.HBaseEntitySchemaManager.init(HBaseEntitySchemaManager.java:66)
[eagle-storage-hbase-0.5.0-SNAPSHOT.jar:0.5.0-SNAPSHOT]
> 
> ! at org.apache.eagle.storage.hbase.HBaseStorage.init(HBaseStorage.java:54) [eagle-storage-hbase-0.5.0-SNAPSHOT.jar:0.5.0-SNAPSHOT]
> 
> ! at org.apache.eagle.storage.DataStorageManager.newDataStorage(DataStorageManager.java:53)
[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.eagle.storage.DataStorageManager.getDataStorageByEagleConfig(DataStorageManager.java:81)
[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.eagle.storage.DataStorageManager.getDataStorageByEagleConfig(DataStorageManager.java:91)
[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.eagle.service.generic.GenericEntityServiceResource.search(GenericEntityServiceResource.java:438)
[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_101]
> 
>  
> 
> And this on stop:
> 
>  
> 
> WARN  [2017-03-18 21:20:27,274] org.apache.hadoop.hbase.client.ClientScanner: scanner failed to close. Exception follows: java.io.InterruptedIOException
> 
> WARN  [2017-03-18 21:20:27,275] org.apache.hadoop.hbase.client.ClientScanner: scanner failed to close. Exception follows: java.io.InterruptedIOException
> 
> ERROR [2017-03-18 21:20:27,278] org.apache.eagle.log.entity.GenericEntityScanStreamReader: Fail reading log
> 
> ! java.nio.channels.ClosedByInterruptException: null
> 
> ! at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
~[na:1.8.0_101]
> 
> ! at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:659) ~[na:1.8.0_101]
> 
> ! at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:612)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:920)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! ... 85 common frames omitted
> 
> ! Causing: java.io.InterruptedIOException: Origin: ClosedByInterruptException
> 
> ! at org.apache.hadoop.hbase.util.ExceptionUtil.asInterrupt(ExceptionUtil.java:62) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:974)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.hbase.ipc.RpcClient$Connection.writeRequest(RpcClient.java:1094)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.hbase.ipc.RpcClient$Connection.tracedWriteRequest(RpcClient.java:1061)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1516) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1724) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1777)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:30373)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1604)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1584)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1346)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1167)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:294)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:130)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:55)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:201)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:288) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:268)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:140)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:135)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:802) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.hadoop.hbase.client.HTablePool$PooledHTable.getScanner(HTablePool.java:416)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.eagle.log.entity.HBaseLogReader2.onOpen(HBaseLogReader2.java:56) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.eagle.log.entity.AbstractHBaseLogReader.open(AbstractHBaseLogReader.java:170)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.eagle.log.entity.GenericEntityScanStreamReader.readAsStream(GenericEntityScanStreamReader.java:96)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.eagle.log.entity.GenericEntityStreamReader.readAsStream(GenericEntityStreamReader.java:82)
[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.eagle.query.GenericEntityQuery.result(GenericEntityQuery.java:66) [eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.eagle.storage.hbase.HBaseStorage.query(HBaseStorage.java:171) [eagle-storage-hbase-0.5.0-SNAPSHOT.jar:0.5.0-SNAPSHOT]
> 
> ! at org.apache.eagle.storage.operation.QueryStatement.execute(QueryStatement.java:47)
[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at org.apache.eagle.service.generic.GenericEntityServiceResource.search(GenericEntityServiceResource.java:444)
[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_101]
> 
>  
> 
>  
> 
> Regards,
> 
> Andrey
> 
>  
> 
> From: Jayesh Senjaliya
> Sent: March 18, 2017 20:49
> To: Jayesh Senjaliya; user@eagle.apache.org
> 
> 
> Subject: Re: Instructions on installing Eagle 0.5 on HDP 2.5
> 
>  
> 
> Do a git pull from master; the fix was just merged (EAGLE-934).
> 
>  
> 
>  
> 
>  
> 
>  
> 
> On Sat, Mar 18, 2017 at 2:56 AM <amrivkin@gmail.com> wrote:
> 
> Here is an error in server.yml.
> 
> Do I need to change something there?
> 
>  
> 
> 
> WARN  [2017-03-18 12:53:48,828] org.apache.eagle.app.module.ApplicationExtensionLoader: Registering modules from HBaseMetricWebApplicationProvider[name=HBase Metric Monitoring Web , type=HBASE_METRIC_WEB_AP$
> 
> WARN  [2017-03-18 12:53:48,834] org.apache.eagle.server.module.GuiceBundleLoader: Loaded 2 modules (scope: metadataStore)
> 
> WARN  [2017-03-18 12:53:48,835] org.apache.eagle.server.module.GuiceBundleLoader: Loaded 0 modules (scope: global)
> 
> bin/../conf/server.yml has an error:
> 
>   * Unrecognized field at: auth.authorization
> 
>     Did you mean?:
> 
>       - metrics
> 
>       - cachePolicy
> 
>       - caching
> 
>       - logging
> 
>       - enabled
> 
>         [4 more]
> 
>  
> 
>  
> 
> Regards,
> 
> Andrey
> 
> From: Jayesh Senjaliya
> Sent: March 18, 2017 12:32
> To: amrivkin@gmail.com
> Cc: user@eagle.apache.org
> 
> 
> Subject: Re: Instructions on installing Eagle 0.5 on HDP 2.5
> 
>  
> 
> storage {
> 
>   # storage type: ["hbase","jdbc"]
> 
>   # default is "hbase"
> 
>   type = "jdbc"
> 
>     jdbc{
> 
>         adapter="mysql"
> 
>         username="root"
> 
>         password=basic123
> 
>         database=eagle2
> 
>         connectionUrl="jdbc:mysql://localhost:3306/eagle2"
> 
>         connectionProps="encoding=UTF-8"
> 
>         driverClass="com.mysql.jdbc.Driver"
> 
>         connectionMax=8
> 
>     }
> 
> }
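> 
> The eagle2 database has to exist before eagle starts (eagle should create its tables, but not the database itself); a sketch, assuming a local MySQL:
> 
>     mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS eagle2 DEFAULT CHARACTER SET utf8;"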
> 
>  
> 
>  
> 
> On Sat, Mar 18, 2017 at 2:30 AM, <amrivkin@gmail.com> wrote:
> 
> Ok, thank you! Will try it now.
> 
>  
> 
> Could you please provide a template for JDBC as deep storage?
> 
>  
> 
> Regards,
> 
> Andrey
> 
>  
> 
> From: Jayesh Senjaliya
> Sent: March 18, 2017 12:27
> To: user@eagle.apache.org
> Subject: Re: Instructions on installing Eagle 0.5 on HDP 2.5
> 
>  
> 
> I have rebased it now, and the build has passed.
> 
>  
> 
>  
> 
> About the metadata:
> 
>  
> 
> Eagle stores all metadata (everything you can create in the UI) in MySQL, but for all the derived or final metrics it uses HBase (as deep storage), because that makes queries much faster.
> 
>  
> 
> If you don't have large-scale metrics, you can use MySQL for both.
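> 
> i.e. in conf/eagle.conf set storage.type = "jdbc" and point the metadata store at MySQL too; the metadata block looks along these lines (a sketch from memory; please mirror the key names and store class your shipped eagle.conf uses):
> 
>     metadata {
>       store = org.apache.eagle.metadata.store.jdbc.JDBCMetadataStore   # verify this class name in your build
>       jdbc {
>         username = "root"
>         password = "basic123"
>         database = "eagle2"
>         connection = "jdbc:mysql://localhost:3306/eagle2"
>         driverClassName = "com.mysql.jdbc.Driver"
>       }
>     }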
> 
>  
> 
> - Jayesh
> 
>  
> 
>  
> 
> On Sat, Mar 18, 2017 at 2:14 AM, Markovich <amrivkin@gmail.com> wrote:
> 
> Wow, thank you!
> 
>  
> 
> Please let me know when you finish; I will rebuild Eagle and try again.
> 
>  
> 
> Also, I'm curious about the metadata. Why does Eagle use both MySQL and HBase? And where should my site_id configuration be stored?
> 
>  
> 
> Regards,
> 
> Andrey
> 
>  
> 
> 2017-03-18 11:39 GMT+03:00 Jayesh Senjaliya <jaysen@apache.org>:
> 
> Ah, ok. You can use this patch: https://github.com/apache/eagle/pull/812
> 
>  
> 
> Wait a bit; I'm doing a rebase with master now.
> 
>  
> 
> Good luck!
> 
> Jayesh
> 
>  
> 
>  
> 
> On Sat, Mar 18, 2017 at 1:24 AM, MyGmail <amrivkin@gmail.com> wrote:
> 
> Hi Jayesh,
> 
>  
> 
> I'm sorry, but I can't change the Storm version. HDP 2.5 includes 1.0.1.
> 
>  
> 
> Any workaround?
> 
>  
> 
> Regards,
> 
> Andrey
> 
> 
> On 18 Mar 2017, at 5:53, Jayesh Senjaliya <jaysen@apache.org> wrote:
> 
> Hi Markovich,
> 
>  
> 
> Eagle 0.5 is well supported with Storm 0.9.3; can you please try using that?
> 
>  
> 
> Or do you have to use it with Storm 1.x?
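> 
> If the cluster's Storm has to stay on 1.0.1, running a standalone 0.9.3 alongside it just for eagle is one option (a sketch):
> 
>     wget https://archive.apache.org/dist/storm/apache-storm-0.9.3/apache-storm-0.9.3.tar.gz
>     tar -zxvf apache-storm-0.9.3.tar.gz -C /opt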
> 
>  
> 
> - Jayesh
> 
>  
> 
>  
> 
> On Fri, Mar 17, 2017 at 6:50 AM, Markovich <amrivkin@gmail.com> wrote:
> 
> Hello eagle users and dev,
> 
>  
> 
> I'm stuck installing Eagle on an HDP 2.5 cluster with JDK 1.8.0_101.
> 
>  
> 
> Here are my service versions; my cluster is secured using Kerberos + Ranger.
> 
>  
> 
> HDFS 2.7.3
> 
> Hive 1.2.1000
> 
> Storm 1.0.1
> 
> Kafka 0.10.0
> 
> Kerberos 1.10.3-10
> 
>  
> 
> Here is what I've done already:
> 
>  
> 
> 1) Downloaded the latest Eagle from GitHub (version 0.5 snapshot).
> 
> 2) Built it using mvn clean package -DskipTests
> 
> [INFO] BUILD SUCCESS
> 
> [INFO] ------------------------------------------------------------------------
> 
> [INFO] Total time: 16:11 min
> 
> [INFO] Finished at: 2017-03-17T16:00:45+03:00
> 
> [INFO] Final Memory: 183M/1755M
> 
> [INFO] ------------------------------------------------------------------------
> 
>  
> 
> 3) Moved the tarball to /usr/hdp/current/ and extracted it to eagle
> 
> 4) Changed conf/eagle.conf for my cluster:
> 
> zkQuorum
> 
> zookeeperZnodeParent = "/hbase-secure"
> 
> metadata -> jdbc -> user, pass and host
> 
> nimbusHost
> 
> 5) Launched bin/eagle-env.sh and bin/eagle-server.sh start
> 
>  
> 
> Eagle started on port 9090.
> 
> 6) In the Web UI, entered the Site ID
> 
> 7) Selected Install on HDFS Audit Log Monitor and changed the General settings (Kafka hosts) and the Advanced fs.defaultFS. Execution Mode - Cluster
> 
> 8) Created hdfs_audit_log_{SITE_ID}, hdfs_audit_log_enriched_{SITE_ID} 
> 9) Launched Logstash to write to hdfs_audit_log_{SITE_ID}
> 10) Checked that logs are pushed into Kafka
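> 
> (Checked roughly like this; the --security-protocol flag is from the HDP build of Kafka, and demo4:2181 stands in for the ZooKeeper quorum:)
> 
>     /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --zookeeper demo4:2181 \
>         --topic hdfs_audit_log_SITE_ID --security-protocol SASL_PLAINTEXT --from-beginning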
> 
> 11) Application is initialized
> 
> 12) Tried to start this application:
> 
>  
> 
> INFO  [2017-03-17 13:28:45,926] org.apache.eagle.dataproc.impl.storm.kafka.KafkaSpoutProvider: Use topic : hdfs_audit_log, zkConnection : localhost:2181 , fetchSize : 1048576
> 
> WARN  [2017-03-17 13:28:46,216] org.apache.eagle.app.messaging.KafkaStreamProvider: Using default shared sink topic dataSinkConfig.topic: hdfs_audit_event
> 
> INFO  [2017-03-17 13:28:46,331] org.apache.eagle.app.environment.impl.StormExecutionRuntime: Starting HDFS_AUDIT_LOG_MONITOR_APP_MYSITE (org.apache.eagle.security.auditlog.HdfsAuditLogApplication), mode: CLUSTER
> 
> INFO  [2017-03-17 13:28:46,332] org.apache.eagle.app.environment.impl.StormExecutionRuntime: Overriding application.storm.nimbusHost = demo5
> 
> INFO  [2017-03-17 13:28:46,332] org.apache.eagle.app.environment.impl.StormExecutionRuntime: Overriding application.storm.nimbusThriftPort = 6627
> 
> INFO  [2017-03-17 13:28:46,332] org.apache.eagle.app.environment.impl.StormExecutionRuntime: Submitting as cluster mode ...
> 
>  
> 
> Nothing in the Storm UI. Nothing in the logs.
> 
> Tried to change the KafkaSpoutProvider to the normal one with SITE_ID -> nothing changed.
> 
>  
> 
> I think I need to enter some Kerberos-related configs for Storm...
> 
>  
> 
> 13) Ok, changed Execution Mode to Local.
> 
> 14) Started. Status in the UI changed to Starting. Lots of activity in the logs, but also errors:
> 
>  
> 
> WARN  [2017-03-17 13:33:40,952] storm.kafka.KafkaUtils: there are more tasks than partitions (tasks: 2; partitions: 1), some tasks will be idle
> 
> INFO  [2017-03-17 13:33:40,952] storm.kafka.KafkaUtils: Task [1/2] assigned [Partition{host=null:-1, partition=0}]
> 
> INFO  [2017-03-17 13:33:40,952] storm.kafka.ZkCoordinator: Task [1/2] Deleted partition managers: []
> 
> INFO  [2017-03-17 13:33:40,952] storm.kafka.ZkCoordinator: Task [1/2] New partition managers: [Partition{host=null:-1, partition=0}]
> 
> INFO  [2017-03-17 13:33:40,990] storm.kafka.PartitionManager: Read partition information from: /consumers/hdfs_audit_log_tuskpro/eagleConsumer/partition_0  --> null
> 
> ERROR [2017-03-17 13:33:41,047] backtype.storm.util: Async loop died!
> 
> ! java.lang.NullPointerException: null
> 
> ! at org.apache.kafka.common.utils.Utils.formatAddress(Utils.java:312) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at kafka.consumer.SimpleConsumer$$anonfun$disconnect$1.apply(SimpleConsumer.scala:49)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at kafka.consumer.SimpleConsumer$$anonfun$disconnect$1.apply(SimpleConsumer.scala:49)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at kafka.utils.Logging$class.debug(Logging.scala:52) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at kafka.consumer.SimpleConsumer.debug(SimpleConsumer.scala:30) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at kafka.consumer.SimpleConsumer.disconnect(SimpleConsumer.scala:49) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:82) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:127) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at kafka.javaapi.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:79)
~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:74) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:64) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at storm.kafka.PartitionManager.<init>(PartitionManager.java:89) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:98) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! ... 6 common frames omitted
> 
> ! Causing: java.lang.RuntimeException: java.lang.NullPointerException
> 
> ! at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:103) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at storm.kafka.ZkCoordinator.getMyManagedPartitions(ZkCoordinator.java:69) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at storm.kafka.KafkaSpout.nextTuple(KafkaSpout.java:135) ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at backtype.storm.daemon.executor$fn__3373$fn__3388$fn__3417.invoke(executor.clj:565)
~[storm-core-0.9.3.jar:0.9.3]
> 
> ! at backtype.storm.util$async_loop$fn__464.invoke(util.clj:463) ~[storm-core-0.9.3.jar:0.9.3]
> 
> ! at clojure.lang.AFn.run(AFn.java:24) [eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
> 
>  
> 
> ....
> 
>  
> 
> ERROR [2017-03-17 13:33:41,079] backtype.storm.util: Halting process: ("Worker died")
> 
> ! java.lang.RuntimeException: ("Worker died")
> 
> ! at backtype.storm.util$exit_process_BANG_.doInvoke(util.clj:325) [storm-core-0.9.3.jar:0.9.3]
> 
> ! at clojure.lang.RestFn.invoke(RestFn.java:423) [eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at backtype.storm.daemon.worker$fn__3808$fn__3809.invoke(worker.clj:452) [storm-core-0.9.3.jar:0.9.3]
> 
> ! at backtype.storm.daemon.executor$mk_executor_data$fn__3274$fn__3275.invoke(executor.clj:240)
[storm-core-0.9.3.jar:0.9.3]
> 
> ! at backtype.storm.util$async_loop$fn__464.invoke(util.clj:473) [storm-core-0.9.3.jar:0.9.3]
> 
> ! at clojure.lang.AFn.run(AFn.java:24) [eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
> 
> ! at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
> 
>  
> 
> ERROR [2017-03-17 13:34:10,013] org.apache.eagle.security.enrich.DataEnrichJob: Fail to load sensitivity data
> 
> ! java.net.ConnectException: Connection refused
> 
> ! at java.net.PlainSocketImpl.socketConnect(Native Method) ~[na:1.8.0_101]
> 
> ! at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) ~[na:1.8.0_101]
> 
> ! at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
~[na:1.8.0_101]
> 
> ! at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) ~[na:1.8.0_101]
> 
> ! at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[na:1.8.0_101]
> 
> ! at java.net.Socket.connect(Socket.java:589) ~[na:1.8.0_101]
> 
> ! at sun.net.NetworkClient.doConnect(NetworkClient.java:175) ~[na:1.8.0_101]
> 
> ! at sun.net.www.http.HttpClient.openServer(HttpClient.java:432) ~[na:1.8.0_101]
> 
> ! at sun.net.www.http.HttpClient.openServer(HttpClient.java:527) ~[na:1.8.0_101]
> 
> ! at sun.net.www.http.HttpClient.<init>(HttpClient.java:211) ~[na:1.8.0_101]
> 
> ! at sun.net.www.http.HttpClient.New(HttpClient.java:308) ~[na:1.8.0_101]
> 
> ! at sun.net.www.http.HttpClient.New(HttpClient.java:326) ~[na:1.8.0_101]
> 
> ! at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1169)
~[na:1.8.0_101]
> 
> ! at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1105)
~[na:1.8.0_101]
> 
> ! at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:999)
~[na:1.8.0_101]
> 
> ! at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:933)
~[na:1.8.0_101]
> 
> ! at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1513)
~[na:1.8.0_101]
> 
> ! at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1441)
~[na:1.8.0_101]
> 
> ! at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480) ~[na:1.8.0_101]
> 
> ! at com.sun.jersey.client.urlconnection.URLConnectionClientHandler._invoke(URLConnectionClientHandler.java:253)
~[jersey-client-1.19.1.jar:1.19.1]
> 
> ! at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:153)
~[jersey-client-1.19.1.jar:1.19.1]
> 
> ! ... 12 common frames omitted
> 
> ! Causing: com.sun.jersey.api.client.ClientHandlerException: java.net.ConnectException: Connection refused
> 
> ! at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:155)
~[jersey-client-1.19.1.jar:1.19.1]
> 
> ! at com.sun.jersey.api.client.filter.GZIPContentEncodingFilter.handle(GZIPContentEncodingFilter.java:123)
~[jersey-client-1.19.1.jar:1.19.1]
> 
> ! at com.sun.jersey.api.client.Client.handle(Client.java:652) ~[jersey-client-1.19.1.jar:1.19.1]
> 
>  
> 
> The UI wasn't accessible, but the server was still running.
> 
> It was failing again and again.
> 
>  
> 
> 13) Restarted server
> 
> 14) Nothing was saved. It's a fresh install again; the UI asks for site_id.
> 
> 15) Checked HBase and MySQL Server - everything is empty.
> 
>  
> 
>  
> 
> So can someone please help me to get started with Eagle on my cluster?
> 
>  
> 
>  
> 
> Also here is Logstash info:
> 
> logstash-5.2.2
> 
>  
> 
>  
> 
> output {
>   if [type] == "hdp-nn-audit" {
>     kafka {
>       codec => plain { format => "%{message}" }
>       bootstrap_servers => "demo4:6667"
>       topic_id => "hdfs_audit_log_tuskpro"
>       security_protocol => "SASL_PLAINTEXT"
>       sasl_kerberos_service_name => "kafka"
>       jaas_path => "/usr/hdp/current/kafka-broker/config/kafka_client_jaas.conf"
>       kerberos_config => "/etc/krb5.conf"
>       client_id => "hdp-nn-audit"
>       message_key => "%{user}"
>     }
>   }
> }
> 
>  
> 
>  
> 
> Also, I don't know why, but Logstash is not using kafka_client_jaas.
> 
> There is info about sasl_kerberos_service_name, but without this property Logstash does not work.
> 
>  
> 
> 
> KafkaClient {
>    com.sun.security.auth.module.Krb5LoginModule required
>    useTicketCache=true
>    renewTicket=true
>    serviceName="kafka";
> };
> 
> Client {
>    com.sun.security.auth.module.Krb5LoginModule required
>    useTicketCache=true
>    renewTicket=true
>    serviceName="zookeeper";
> };
> 
>  
> 
> Regards,
> Andrey
