From: Pat Ferrel
Subject: Re: Error while pio status
Date: Wed, 7 Jun 2017 08:29:24 -0700
To: heddgy@gmail.com
Cc: actionml-user, user@predictionio.incubator.apache.org

This group is for support of ActionML projects like the Universal Recommender.

Please direct PIO questions to the Apache PIO mailing list.


On Jun 7, 2017, at 6:32 AM, heddgy@gmail.com wrote:

I'm trying to set up PIO 0.11.0 in Docker using this all-in-one guide, and I get an error (Error initializing storage client for source HDFS) while running pio status:
root@c63e6c937cba:/# pio status
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/pio/pio-0.11.0/lib/spark/pio-data-hdfs-assembly-0.11.0-incubating.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/pio/pio-0.11.0/lib/pio-assembly-0.11.0-incubating.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
[INFO] [Management$] Inspecting PredictionIO...
[INFO] [Management$] PredictionIO 0.11.0-incubating is installed at /opt/pio/pio-0.11.0
[INFO] [Management$] Inspecting Apache Spark...
[INFO] [Management$] Apache Spark is installed at /usr/local/spark
[INFO] [Management$] Apache Spark 1.6.3 detected (meets minimum requirement of 1.3.0)
[INFO] [Management$] Inspecting storage backend connections...
[INFO] [Storage$] Verifying Meta Data Backend (Source: ELASTICSEARCH)...
[INFO] [Storage$] Verifying Model Data Backend (Source: HDFS)...
[ERROR] [Storage$] Error initializing storage client for source HDFS
[ERROR] [Management$] Unable to connect to all storage backends successfully.
The following shows the error message from the storage backend.

Data source HDFS was not properly initialized. (org.apache.predictionio.data.storage.StorageClientException)

Dumping configuration of initialized storage backend sources.
Please make sure they are correct.

Source Name: ELASTICSEARCH; Type: elasticsearch; Configuration: HOME -> /usr/local/elasticsearch, HOSTS -> c63e6c937cba, PORTS -> 9300, CLUSTERNAME -> pio, TYPE -> elasticsearch
Source Name: HDFS; Type: (error); Configuration: (error)

I'm not using the aml user; I do everything as root. HBase 1.2.5 is not available anymore, so I'm using HBase 1.2.6.
In all the configs, in place of some-master I'm using c63e6c937cba. I've checked all of them multiple times and found no difference from the guide.

I noticed this in pio.log while running pio-start-all:
...
2017-06-07 12:57:17,147 INFO  org.apache.predictionio.tools.commands.Management$ [main] - Creating Event Server at 0.0.0.0:7070
2017-06-07 12:57:19,051 WARN  org.apache.hadoop.hbase.util.DynamicClassLoader [main] - Failed to identify the fs of dir hdfs://c63e6c937cba:9000/hbase/lib, ignored
java.io.IOException: No FileSystem for scheme: hdfs
...

and while pio status:
...
2017-06-07 13:01:18,874 INFO  org.apache.predictionio.data.storage.Storage$ [main] - Verifying Model Data Backend (Source: HDFS)...
2017-06-07 13:01:19,298 ERROR org.apache.predictionio.data.storage.Storage$ [main] - Error initializing storage client for source HDFS
java.io.IOException: No FileSystem for scheme: hdfs
        at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2586)
...
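(Editorial aside, a hedged sketch of the mechanism behind this exception, not from the thread itself: Hadoop maps a URI scheme to a FileSystem class either through the fs.<scheme>.impl configuration key or through the ServiceLoader file META-INF/services/org.apache.hadoop.fs.FileSystem. When several hadoop jars are merged into one assembly, that service file is commonly clobbered so it no longer lists DistributedFileSystem, and hdfs:// URIs stop resolving. The service file below is a stand-in created for illustration, not the real jar contents.)

```shell
# Stand-in for a clobbered ServiceLoader file that only registers the
# local filesystem (an assumption for illustration).
services_file=$(mktemp)
printf '%s\n' 'org.apache.hadoop.fs.LocalFileSystem' > "$services_file"

# Simulate the lookup Hadoop performs for the "hdfs" scheme.
if grep -q 'org.apache.hadoop.hdfs.DistributedFileSystem' "$services_file"; then
  echo "hdfs scheme resolvable"
else
  echo "No FileSystem for scheme: hdfs"   # prints this for the stand-in file
fi
```

Declaring fs.hdfs.impl in core-site.xml (as tried below) sidesteps the service file entirely, which is why it changes the failure mode rather than silently fixing everything.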

So I tried to solve this error by modifying core-site.xml, as explained in this SO question, but it only got me to the next stack of errors, unsolvable at the moment.
pio-start-all:
...
2017-06-07 13:12:46,937 INFO  org.apache.predictionio.tools.commands.Management$ [main] - Creating Event Server at 0.0.0.0:7070
2017-06-07 13:12:49,494 ERROR org.apache.predictionio.data.storage.hbase.StorageClient [main] - Failed to connect to HBase. Please check if HBase is running properly.
2017-06-07 13:12:49,494 ERROR org.apache.predictionio.data.storage.Storage$ [main] - Error initializing storage client for source HBASE
java.io.IOException: java.lang.reflect.InvocationTargetException
...
Caused by: java.lang.ClassNotFoundException: Class org.apache.hadoop.hdfs.DistributedFileSystem not found
...

pio status:
...
2017-06-07 13:14:36,482 INFO  org.apache.predictionio.data.storage.Storage$ [main] - Verifying Meta Data Backend (Source: ELASTICSEARCH)...
2017-06-07 13:14:38,490 INFO  org.apache.predictionio.data.storage.Storage$ [main] - Verifying Model Data Backend (Source: HDFS)...
2017-06-07 13:14:38,859 ERROR org.apache.predictionio.data.storage.Storage$ [main] - Error initializing storage client for source HDFS
java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.hdfs.DistributedFileSystem not found
...

I googled the new error on SO and tried pio status -- --jars /path/to/hadoop-hdfs-*.jar; it didn't help.
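(Editorial aside, an assumption on my part with placeholder paths and versions: this ClassNotFoundException usually means the JVM that pio launches never sees the hadoop-hdfs jar at all. One direction is to export Hadoop's own classpath before running pio, e.g. export CLASSPATH="$(/usr/local/hadoop/bin/hadoop classpath)". The shape of that check can be exercised with a sample classpath string instead of a live install.)

```shell
# Sketch: count hadoop-hdfs jars on a classpath string. The sample value is
# hypothetical (guide-style paths, placeholder version 2.7.3); on a live box
# you would feed in "$(/usr/local/hadoop/bin/hadoop classpath)" instead.
sample_cp="/usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.7.3.jar"
hdfs_jars=$(printf '%s' "$sample_cp" | tr ':' '\n' | grep -c 'hadoop-hdfs-[0-9][0-9.]*\.jar')
echo "hadoop-hdfs jars visible: $hdfs_jars"   # prints 1 for the sample
```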

Configs:
core-site.xml (final after adding fs.*.impl)
<configuration>
   <property>
      <name>fs.defaultFS</name>
      <value>hdfs://c63e6c937cba:9000/</value>
   </property>

   <property>
      <name>fs.file.impl</name>
      <value>org.apache.hadoop.fs.LocalFileSystem</value>
   </property>

   <property>
      <name>fs.hdfs.impl</name>
      <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
   </property>
</configuration>
hdfs-site.xml
<configuration>
   <property>
      <name>dfs.data.dir</name>
      <value>file:///usr/local/hadoop/dfs/name/data</value>
      <final>true</final>
   </property>

   <property>
      <name>dfs.name.dir</name>
      <value>file:///usr/local/hadoop/dfs/name</value>
      <final>true</final>
   </property>

   <property>
      <name>dfs.replication</name>
      <value>2</value>
   </property>
</configuration>
hbase-site.xml
<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://c63e6c937cba:9000/hbase</value>
    </property>

    <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>hdfs://c63e6c937cba:9000/zookeeper</value>
    </property>

    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>localhost</value>
    </property>

    <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
    </property>
</configuration>
pio-env.sh
#!/usr/bin/env bash

# Safe config that will work if you expand your cluster later
SPARK_HOME=/usr/local/spark
ES_CONF_DIR=/usr/local/elasticsearch/config
HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
HBASE_CONF_DIR=/usr/local/hbase/conf

# Filesystem paths that PredictionIO uses as block storage.
PIO_FS_BASEDIR=$HOME/.pio_store
PIO_FS_ENGINESDIR=$PIO_FS_BASEDIR/engines
PIO_FS_TMPDIR=$PIO_FS_BASEDIR/tmp

PIO_STORAGE_REPOSITORIES_METADATA_NAME=pio_meta
PIO_STORAGE_REPOSITORIES_METADATA_SOURCE=ELASTICSEARCH

PIO_STORAGE_REPOSITORIES_EVENTDATA_NAME=pio_event
PIO_STORAGE_REPOSITORIES_EVENTDATA_SOURCE=HBASE

PIO_STORAGE_REPOSITORIES_MODELDATA_NAME=pio_model
PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE=HDFS
# PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE=LOCALFS

# Elasticsearch Example
PIO_STORAGE_SOURCES_ELASTICSEARCH_TYPE=elasticsearch
PIO_STORAGE_SOURCES_ELASTICSEARCH_HOME=/usr/local/elasticsearch
# the next line should match the cluster.name in elasticsearch.yml
PIO_STORAGE_SOURCES_ELASTICSEARCH_CLUSTERNAME=pio

# For single host Elasticsearch, may add hosts and ports later
PIO_STORAGE_SOURCES_ELASTICSEARCH_HOSTS=c63e6c937cba
PIO_STORAGE_SOURCES_ELASTICSEARCH_PORTS=9300

# dummy models are stored here so use HDFS in case you later want to
# expand the Event and PredictionServers
PIO_STORAGE_SOURCES_HDFS_TYPE=hdfs
PIO_STORAGE_SOURCES_HDFS_PATH=hdfs://c63e6c937cba:9000/models

# localfs storage, because hdfs won't work
# PIO_STORAGE_SOURCES_LOCALFS_TYPE=localfs
# PIO_STORAGE_SOURCES_LOCALFS_PATH=${PIO_FS_BASEDIR}/models

# HBase Source config
PIO_STORAGE_SOURCES_HBASE_TYPE=hbase
PIO_STORAGE_SOURCES_HBASE_HOME=/usr/local/hbase
# HBase single master config
PIO_STORAGE_SOURCES_HBASE_HOSTS=c63e6c937cba
PIO_STORAGE_SOURCES_HBASE_PORTS=0
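(Editorial aside, a sketch; the comparison logic is mine, the URIs are the ones quoted in the thread: fs.defaultFS in core-site.xml, hbase.rootdir in hbase-site.xml, and PIO_STORAGE_SOURCES_HDFS_PATH in pio-env.sh must all point at the same NameNode authority, i.e. host:port, or clients will look for HDFS where nothing is listening.)

```shell
# The three hdfs:// URIs quoted in the configs above.
core_site_fs="hdfs://c63e6c937cba:9000/"
hbase_rootdir="hdfs://c63e6c937cba:9000/hbase"
pio_hdfs_path="hdfs://c63e6c937cba:9000/models"

# Keep host:port only: drop the scheme, then everything from the first "/".
authority() { printf '%s' "$1" | sed -e 's|^hdfs://||' -e 's|/.*$||'; }

a=$(authority "$core_site_fs")
b=$(authority "$hbase_rootdir")
c=$(authority "$pio_hdfs_path")

if [ "$a" = "$b" ] && [ "$a" = "$c" ]; then
  echo "OK: all HDFS URIs use authority $a"   # c63e6c937cba:9000 here
else
  echo "MISMATCH: $a / $b / $c" >&2
fi
```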

In pio-env.sh I tried LOCALFS for model data storage and it worked; at least there were no errors while running pio status.

HDFS itself works fine, I guess; there are no errors while running hdfs fsck /models etc.

Any help here?

--
You received this message because you are subscribed to the Google = Groups "actionml-user" group.
To unsubscribe from this group and stop receiving emails from it, send = an email to actionml-user+unsubscribe@googlegroups.com.
To post to this group, send email to actionml-user@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/actionml-user/3ef2281d-d93e-41df-8496-e67d69b76445%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
