ambari-user mailing list archives

From Juan Carlos <juc...@gmail.com>
Subject Help developing a new stack
Date Fri, 12 Dec 2014 10:53:42 GMT
I'm developing a new stack in order to deploy my customized packages and
services.

I just started with HDFS, using the HDP stack as a reference. The first change
I'd like to introduce is the method used to start services. In the HDP stack I
can see how the start commands for services are constructed and
executed ({ambari-server-path}/HDP/package/scripts/hdfs_namenode.py), but
to my understanding this logic should be delegated to the hdfs namenode
service itself, so I'm going to use a simpler start command (service
hadoop-hdfs-namenode start) and push that logic into the service script
(/etc/init.d/hadoop-hdfs-namenode).
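
For illustration, roughly what I have in mind for the simplified start (just
a sketch; Execute comes from Ambari's resource_management library, and the
exact keyword arguments I use here are my assumption):

    from resource_management.core.resources.system import Execute

    def start_namenode(params):
        # Delegate the start logic to the packaged init script instead of
        # rebuilding the full start command inside the stack scripts.
        Execute("service hadoop-hdfs-namenode start",
                user=params.hdfs_user)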

I also found that with the Execute class I'd lose the service output, and I
can't use "service hadoop-hdfs-namenode status" either, so I'm thinking
about executing service operations directly with subprocess. Something like
this:

-    File(params.exclude_file_path,
-         content=Template("exclude_hosts_list.j2"),
-         owner=params.hdfs_user,
-         group=params.user_group
-    )
-
-    service(
-      action="start", name="namenode", user=params.hdfs_user,
-      create_pid_dir=True,
-      create_log_dir=True
-    )
+    import subprocess
+
+    p = subprocess.Popen(["service", "ambari-server", "status"],
+                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+    out, err = p.communicate()
+    rc = p.returncode
And finally add some checks on out, err, and rc.
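
Roughly like this (a sketch; I'm assuming Ambari's Fail exception from
resource_management.core.exceptions, and the LSB init script convention
where rc 0 means running and rc 3 means stopped):

    import subprocess

    from resource_management.core.exceptions import Fail

    def service_is_running(name):
        p = subprocess.Popen(["service", name, "status"],
                             stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = p.communicate()
        rc = p.returncode
        # LSB status codes: 0 = running, 3 = stopped; anything else is
        # treated as a real failure.
        if rc not in (0, 3):
            raise Fail("service %s status failed (rc=%d): %s" % (name, rc, err))
        return rc == 0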

My question is: am I missing some functionality, or can I go with this
approach without problems?

Regards
JC
