eagle-user mailing list archives

From Jayesh Senjaliya <jay...@apache.org>
Subject Re: Instructions on installing Eagle 0.5 on HDP 2.5
Date Sat, 18 Mar 2017 17:49:06 GMT
Do a git pull from master; the fix (EAGLE-934) has just been merged.
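
Roughly, assuming you built from a git clone of apache/eagle:

  git checkout master
  git pull                        # picks up the EAGLE-934 fix
  mvn clean package -DskipTests   # rebuild, as before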




On Sat, Mar 18, 2017 at 2:56 AM <amrivkin@gmail.com> wrote:

> Here is an error in server.yml.
>
> Do I need to change something there?
>
>
>
> $
>
> WARN  [2017-03-18 12:53:48,828]
> org.apache.eagle.app.module.ApplicationExtensionLoader: Registering modules
> from HBaseMetricWebApplicationProvider[name=HBase Metric Monitoring Web ,
> type=HBASE_METRIC_WEB_AP$
>
> WARN  [2017-03-18 12:53:48,834]
> org.apache.eagle.server.module.GuiceBundleLoader: Loaded 2 modules (scope:
> metadataStore)
>
> WARN  [2017-03-18 12:53:48,835]
> org.apache.eagle.server.module.GuiceBundleLoader: Loaded 0 modules (scope:
> global)
>
> bin/../conf/server.yml has an error:
>
>   * Unrecognized field at: auth.authorization
>     Did you mean?:
>       - metrics
>       - cachePolicy
>       - caching
>       - logging
>       - enabled
>         [4 more]
>
>
>
>
>
> Regards,
>
> Andrey
>
> *From: *Jayesh Senjaliya <jaysen@apache.org>
> *Sent: *18 March 2017, 12:32
> *To: *amrivkin@gmail.com
> *Cc: *user@eagle.apache.org
>
>
> *Subject: *Re: Instructions on installing Eagle 0.5 on HDP 2.5
>
>
>
> storage {
>   # storage type: ["hbase","jdbc"]
>   # default is "hbase"
>   type = "jdbc"
>   jdbc {
>     adapter="mysql"
>     username="root"
>     password=basic123
>     database=eagle2
>     connectionUrl="jdbc:mysql://localhost:3306/eagle2"
>     connectionProps="encoding=UTF-8"
>     driverClass="com.mysql.jdbc.Driver"
>     connectionMax=8
>   }
> }
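>
> (The database itself must exist before the server starts; with the values
> above, roughly:)
>
>   mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS eagle2 DEFAULT CHARACTER SET utf8;"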
>
>
>
>
>
> On Sat, Mar 18, 2017 at 2:30 AM, <amrivkin@gmail.com> wrote:
>
> Ok, thank you! Will try it now.
>
>
>
> Could you please provide a template for JDBC as deep storage?
>
>
>
> Regards,
>
> Andrey
>
>
>
> *From: *Jayesh Senjaliya <jaysen@apache.org>
> *Sent: *18 March 2017, 12:27
> *To: *user@eagle.apache.org
> *Subject: *Re: Instructions on installing Eagle 0.5 on HDP 2.5
>
>
>
> I have rebased it now, and the build has passed.
>
>
>
>
>
> About the metadata:
>
>
>
> Eagle stores all metadata (everything you can create in the UI) in MySQL,
> but for the derived or final metrics it uses HBase (as deep storage),
> because that makes queries much faster.
>
>
>
> If you don't have big-scale metrics, you can use MySQL for both.
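>
> Concretely, that means pointing the deep-storage section at JDBC too. A
> minimal sketch (the MySQL connection settings are elided here):
>
>   storage {
>     type = "jdbc"   # default is "hbase"
>     jdbc { ... }    # adapter, username, connectionUrl, etc.
>   }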
>
>
>
> - Jayesh
>
>
>
>
>
> On Sat, Mar 18, 2017 at 2:14 AM, Markovich <amrivkin@gmail.com> wrote:
>
> Wow, thank you!
>
>
>
> Please let me know when you finish; I will rebuild Eagle and try again.
>
>
>
> Also, I'm curious about the metadata. Why does Eagle use both MySQL and
> HBase? And where should my site_id configuration be stored?
>
>
>
> Regards,
>
> Andrey
>
>
>
> 2017-03-18 11:39 GMT+03:00 Jayesh Senjaliya <jaysen@apache.org>:
>
> Ah, OK. You can use this patch: https://github.com/apache/eagle/pull/812
>
>
>
> Wait a bit; I'm doing a rebase with master now.
>
>
>
> Good luck!
>
> Jayesh
>
>
>
>
>
> On Sat, Mar 18, 2017 at 1:24 AM, MyGmail <amrivkin@gmail.com> wrote:
>
> Hi Jayesh,
>
>
>
> I'm sorry, but I can't change the Storm version. HDP 2.5 includes 1.0.1.
>
>
>
> Any workaround?
>
>
>
> Regards,
>
> Andrey
>
>
> On 18 March 2017, at 5:53, Jayesh Senjaliya <jaysen@apache.org> wrote:
>
> Hi Markovich,
>
>
>
> Eagle 0.5 is well supported with Storm 0.9.3; could you please try using
> that?
>
>
>
> Or do you have to use it with Storm 1.x?
>
>
>
> - Jayesh
>
>
>
>
>
> On Fri, Mar 17, 2017 at 6:50 AM, Markovich <amrivkin@gmail.com> wrote:
>
> Hello Eagle users and devs,
>
>
>
> I'm stuck installing Eagle on an HDP 2.5 cluster with JDK 1.8.0_101.
>
>
>
> Here are my service versions; the cluster is secured using Kerberos +
> Ranger.
>
>
>
> HDFS 2.7.3
>
> Hive 1.2.1000
>
> Storm 1.0.1
>
> Kafka 0.10.0
>
> Kerberos 1.10.3-10
>
>
>
> Here is what I've done already:
>
>
>
> 1) Downloaded the latest Eagle from GitHub (version 0.5-SNAPSHOT).
>
> 2) Built it using mvn clean package -DskipTests
>
> [INFO] BUILD SUCCESS
>
> [INFO]
> ------------------------------------------------------------------------
>
> [INFO] Total time: 16:11 min
>
> [INFO] Finished at: 2017-03-17T16:00:45+03:00
>
> [INFO] Final Memory: 183M/1755M
>
> [INFO]
> ------------------------------------------------------------------------
>
>
>
> 3) Moved the tarball to /usr/hdp/current/ and extracted it to eagle
>
> 4) Changed conf/eagle.conf for my cluster (a rough sketch follows the list):
>
> zkQuorum
>
> zookeeperZnodeParent = "/hbase-secure"
>
> metadata -> jdbc -> user, pass and host
>
> nimbusHost
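>
> (The fragments with my values; the exact nesting in eagle.conf may differ:)
>
>   zkQuorum = "demo4:2181"                  # example ZooKeeper quorum
>   zookeeperZnodeParent = "/hbase-secure"
>   nimbusHost = "demo5"
>   # plus metadata -> jdbc: user, password and host for the MySQL metadata store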
>
> 5) Launched bin/eagle-env.sh and bin/eagle-server start
>
>
>
> Eagle started on port 9090.
>
> 6) In the Web UI, entered the Site ID
>
> 7) Selected Install on HDFS Audit Log Monitor and changed the General
> settings (Kafka hosts) and the Advanced fs.defaultFS. Execution Mode: Cluster
>
> 8) Created the Kafka topics hdfs_audit_log_{SITE_ID} and
> hdfs_audit_log_enriched_{SITE_ID} (creation command sketched below)
> 9) Launched Logstash to write to hdfs_audit_log_{SITE_ID}
> 10) Checked that logs are pushed into Kafka
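>
> (Topic creation, per topic, was roughly the following; hosts here are
> examples from my setup:)
>
>   /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create \
>     --zookeeper demo4:2181 --replication-factor 1 --partitions 1 \
>     --topic hdfs_audit_log_tuskpro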
>
> 11) Application is initialized
>
> 12) Tried to start the application:
>
>
>
> INFO  [2017-03-17 13:28:45,926]
> org.apache.eagle.dataproc.impl.storm.kafka.KafkaSpoutProvider: Use topic :
> hdfs_audit_log, zkConnection : localhost:2181 , fetchSize : 1048576
>
> WARN  [2017-03-17 13:28:46,216]
> org.apache.eagle.app.messaging.KafkaStreamProvider: Using default shared
> sink topic dataSinkConfig.topic: hdfs_audit_event
>
> INFO  [2017-03-17 13:28:46,331]
> org.apache.eagle.app.environment.impl.StormExecutionRuntime: Starting
> HDFS_AUDIT_LOG_MONITOR_APP_MYSITE
>
>  (org.apache.eagle.security.auditlog.HdfsAuditLogApplication), mode:
> CLUSTER
>
> INFO  [2017-03-17 13:28:46,332]
> org.apache.eagle.app.environment.impl.StormExecutionRuntime: Overriding
> application.storm.nimbusHost = demo5
>
> INFO  [2017-03-17 13:28:46,332]
> org.apache.eagle.app.environment.impl.StormExecutionRuntime: Overriding
> application.storm.nimbusThriftPort = 6627
>
> INFO  [2017-03-17 13:28:46,332]
> org.apache.eagle.app.environment.impl.StormExecutionRuntime: Submitting as
> cluster mode ...
>
>
>
> Nothing in the Storm UI. Nothing in the logs.
>
> Tried to change the KafkaSpoutProvider topic to the normal one with SITE_ID
> -> nothing changed.
>
>
>
> I think I need to enter some Kerberos-related configs for Storm...
>
>
>
> 13) OK, changed Execution Mode to Local.
>
> 14) Started it. The status in the UI changed to "starting". The logs show a
> lot of activity, but also errors:
>
>
>
> WARN  [2017-03-17 13:33:40,952] storm.kafka.KafkaUtils: there are more
> tasks than partitions (tasks: 2; partitions: 1), some tasks will be idle
>
> INFO  [2017-03-17 13:33:40,952] storm.kafka.KafkaUtils: Task [1/2]
> assigned [Partition{host=null:-1, partition=0}]
>
> INFO  [2017-03-17 13:33:40,952] storm.kafka.ZkCoordinator: Task [1/2]
> Deleted partition managers: []
>
> INFO  [2017-03-17 13:33:40,952] storm.kafka.ZkCoordinator: Task [1/2] New
> partition managers: [Partition{host=null:-1, partition=0}]
>
> INFO  [2017-03-17 13:33:40,990] storm.kafka.PartitionManager: Read
> partition information from:
> /consumers/hdfs_audit_log_tuskpro/eagleConsumer/partition_0  --> null
>
> ERROR [2017-03-17 13:33:41,047] backtype.storm.util: Async loop died!
>
> ! java.lang.NullPointerException: null
>
> ! at org.apache.kafka.common.utils.Utils.formatAddress(Utils.java:312)
> ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
>
> ! at
> kafka.consumer.SimpleConsumer$$anonfun$disconnect$1.apply(SimpleConsumer.scala:49)
> ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
>
> ! at
> kafka.consumer.SimpleConsumer$$anonfun$disconnect$1.apply(SimpleConsumer.scala:49)
> ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
>
> ! at kafka.utils.Logging$class.debug(Logging.scala:52)
> ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
>
> ! at kafka.consumer.SimpleConsumer.debug(SimpleConsumer.scala:30)
> ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
>
> ! at kafka.consumer.SimpleConsumer.disconnect(SimpleConsumer.scala:49)
> ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
>
> ! at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:82)
> ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
>
> ! at
> kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
> ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
>
> ! at
> kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:127)
> ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
>
> ! at
> kafka.javaapi.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:79)
> ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
>
> ! at storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:74)
> ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
>
> ! at storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:64)
> ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
>
> ! at storm.kafka.PartitionManager.<init>(PartitionManager.java:89)
> ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
>
> ! at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:98)
> ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
>
> ! ... 6 common frames omitted
>
> ! Causing: java.lang.RuntimeException: java.lang.NullPointerException
>
> ! at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:103)
> ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
>
> ! at
> storm.kafka.ZkCoordinator.getMyManagedPartitions(ZkCoordinator.java:69)
> ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
>
> ! at storm.kafka.KafkaSpout.nextTuple(KafkaSpout.java:135)
> ~[eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
>
> ! at backtype.storm.daemon.executor$fn__3373$fn__3388$fn__3417.invoke(executor.clj:565)
> ~[storm-core-0.9.3.jar:0.9.3]
>
> ! at backtype.storm.util$async_loop$fn__464.invoke(util.clj:463)
> ~[storm-core-0.9.3.jar:0.9.3]
>
> ! at clojure.lang.AFn.run(AFn.java:24)
> [eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
>
> ! at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
>
>
>
> ....
>
>
>
> ERROR [2017-03-17 13:33:41,079] backtype.storm.util: Halting process:
> ("Worker died")
>
> ! java.lang.RuntimeException: ("Worker died")
>
> ! at backtype.storm.util$exit_process_BANG_.doInvoke(util.clj:325)
> [storm-core-0.9.3.jar:0.9.3]
>
> ! at clojure.lang.RestFn.invoke(RestFn.java:423)
> [eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
>
> ! at backtype.storm.daemon.worker$fn__3808$fn__3809.invoke(worker.clj:452)
> [storm-core-0.9.3.jar:0.9.3]
>
> ! at
> backtype.storm.daemon.executor$mk_executor_data$fn__3274$fn__3275.invoke(executor.clj:240)
> [storm-core-0.9.3.jar:0.9.3]
>
> ! at backtype.storm.util$async_loop$fn__464.invoke(util.clj:473)
> [storm-core-0.9.3.jar:0.9.3]
>
> ! at clojure.lang.AFn.run(AFn.java:24)
> [eagle-topology-0.5.0-SNAPSHOT-assembly.jar:na]
>
> ! at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
>
>
>
> ERROR [2017-03-17 13:34:10,013]
> org.apache.eagle.security.enrich.DataEnrichJob: Fail to load sensitivity
> data
>
> ! java.net.ConnectException: Connection refused
>
> ! at java.net.PlainSocketImpl.socketConnect(Native Method) ~[na:1.8.0_101]
>
> ! at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
> ~[na:1.8.0_101]
>
> ! at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
> ~[na:1.8.0_101]
>
> ! at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
> ~[na:1.8.0_101]
>
> ! at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> ~[na:1.8.0_101]
>
> ! at java.net.Socket.connect(Socket.java:589) ~[na:1.8.0_101]
>
> ! at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
> ~[na:1.8.0_101]
>
> ! at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
> ~[na:1.8.0_101]
>
> ! at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
> ~[na:1.8.0_101]
>
> ! at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
> ~[na:1.8.0_101]
>
> ! at sun.net.www.http.HttpClient.New(HttpClient.java:308) ~[na:1.8.0_101]
>
> ! at sun.net.www.http.HttpClient.New(HttpClient.java:326) ~[na:1.8.0_101]
>
> ! at
> sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1169)
> ~[na:1.8.0_101]
>
> ! at
> sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1105)
> ~[na:1.8.0_101]
>
> ! at
> sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:999)
> ~[na:1.8.0_101]
>
> ! at
> sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:933)
> ~[na:1.8.0_101]
>
> ! at
> sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1513)
> ~[na:1.8.0_101]
>
> ! at
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1441)
> ~[na:1.8.0_101]
>
> ! at
> java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
> ~[na:1.8.0_101]
>
> ! at
> com.sun.jersey.client.urlconnection.URLConnectionClientHandler._invoke(URLConnectionClientHandler.java:253)
> ~[jersey-client-1.19.1.jar:1.19.1]
>
> ! at
> com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:153)
> ~[jersey-client-1.19.1.jar:1.19.1]
>
> ! ... 12 common frames omitted
>
> ! Causing: com.sun.jersey.api.client.ClientHandlerException:
> java.net.ConnectException: Connection refused
>
> ! at
> com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:155)
> ~[jersey-client-1.19.1.jar:1.19.1]
>
> ! at
> com.sun.jersey.api.client.filter.GZIPContentEncodingFilter.handle(GZIPContentEncodingFilter.java:123)
> ~[jersey-client-1.19.1.jar:1.19.1]
>
> ! at com.sun.jersey.api.client.Client.handle(Client.java:652)
> ~[jersey-client-1.19.1.jar:1.19.1]
>
>
>
> The UI wasn't accessible, but the server was still running.
>
> It kept failing again and again.
>
>
>
> 15) Restarted the server
>
> 16) Nothing was saved; it's a fresh install again. The UI asks for the site_id.
>
> 17) Checked HBase and the MySQL server - everything is empty.
>
>
>
>
>
> So can someone please help me to get started with Eagle on my cluster?
>
>
>
>
>
> Also here is Logstash info:
>
> logstash-5.2.2
>
>
>
>
>
>   output {
>     if [type] == "hdp-nn-audit" {
>       kafka {
>         codec => plain { format => "%{message}" }
>         bootstrap_servers => "demo4:6667"
>         topic_id => "hdfs_audit_log_tuskpro"
>         security_protocol => "SASL_PLAINTEXT"
>         sasl_kerberos_service_name => "kafka"
>         jaas_path => "/usr/hdp/current/kafka-broker/config/kafka_client_jaas.conf"
>         kerberos_config => "/etc/krb5.conf"
>         client_id => "hdp-nn-audit"
>         message_key => "%{user}"
>       }
>     }
>   }
>
>
>
>
>
> Also, I don't know why, but Logstash is not using kafka_client_jaas.
>
> There is info about sasl_kerberos_service_name, but without this property
> Logstash does not work.
>
>
>
> KafkaClient {
>    com.sun.security.auth.module.Krb5LoginModule required
>    useTicketCache=true
>    renewTicket=true
>    serviceName="kafka";
> };
>
> Client {
>    com.sun.security.auth.module.Krb5LoginModule required
>    useTicketCache=true
>    renewTicket=true
>    serviceName="zookeeper";
> };
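>
> (Note: useTicketCache=true only works when the user running Logstash holds
> a valid Kerberos ticket. If that's the issue, the usual alternative is a
> keytab-based entry; the keytab path and principal below are hypothetical:)
>
>   KafkaClient {
>      com.sun.security.auth.module.Krb5LoginModule required
>      useKeyTab=true
>      storeKey=true
>      keyTab="/etc/security/keytabs/logstash.service.keytab"
>      principal="logstash@EXAMPLE.COM"
>      serviceName="kafka";
>   };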
>
>
>
>
>
>
>
> Regards,
> Andrey
