ignite-user mailing list archives

From Juan Rodríguez Hortalá <juan.rodriguez.hort...@gmail.com>
Subject Re: How IGFS URIs work
Date Wed, 06 Dec 2017 18:17:36 GMT
But if I do that from a compute node that isn't running Ignite, then I fail
to connect to the IGFS cluster:

[hadoop@ip-10-0-0-242 ~]$ hadoop fs -ls igfs://igfs@/
ls: Failed to communicate with IGFS: Failed to connect to IGFS
[endpoint=igfs://igfs@, attempts=[[type=SHMEM, port=10500,
err=java.io.IOException: Failed to connect shared memory endpoint to port
(is shared memory server endpoint up and running?): 10500], [type=TCP,
host=127.0.0.1, port=10500, err=java.io.IOException: Failed to connect to
endpoint [host=127.0.0.1, port=10500]]] (ensure that IGFS is running and
have IPC endpoint enabled; ensure that ignite-shmem-1.0.0.jar is in Hadoop
classpath if you use shared memory endpoint).
[hadoop@ip-10-0-0-242 ~]$

When I specify a node that is part of the IGFS cluster this works fine, for
example `hadoop fs -ls igfs://igfs@ip-10-0-0-85.ec2.internal:10500/`. Also,
`hadoop fs -ls igfs://igfs@/` works ok when I run it from the host
`ip-10-0-0-85`. I think this makes sense, because if I don't specify a host
then localhost is used by default, as specified in
https://apacheignite-fs.readme.io/docs/file-system#section-file-system-uri.
But since the Ignite cluster is not running on the compute nodes, I cannot
use localhost as an address for them. Using igfs://igfs@/, and therefore
localhost, would work for a typical Hadoop setup where a YARN cluster is
co-located with an Ignite cluster, just like we would do with HDFS.
But in settings like EMR, it is usual to have an instance group of core
nodes running both HDFS and YARN (both storage and compute), and another
instance group of task nodes running only YARN (just compute; see
https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-instances.html
for details). Is it possible to configure an IGFS URI that accesses an IGFS
cluster that is not running on the client node, but that balances the
requests among all nodes of the IGFS cluster?
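
To make the question concrete, my understanding of the default-host rule is
sketched below (a toy illustration only: the real parsing is done by the IGFS
Hadoop connector, and the 127.0.0.1:10500 default is taken from the docs and
the error message above; hostnames are the ones from my cluster):

```shell
# Toy sketch: how I understand an IGFS URI's authority is resolved.
# An empty authority in igfs://igfs@/ falls back to localhost:10500.
resolve_igfs_host() {
  local uri="$1"
  local host="${uri#igfs://igfs@}"   # strip the scheme and user part
  host="${host%%/*}"                 # keep only the authority (host:port)
  if [ -z "$host" ]; then
    host="127.0.0.1:10500"           # default per the docs cited above
  fi
  echo "$host"
}

resolve_igfs_host "igfs://igfs@/"
# → 127.0.0.1:10500
resolve_igfs_host "igfs://igfs@ip-10-0-0-85.ec2.internal:10500/"
# → ip-10-0-0-85.ec2.internal:10500
```

So from a task node, igfs://igfs@/ always points the client at itself, which
is exactly the case that fails above.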

Also, even if I have ignite running in all nodes, for example a cluster
with 3 slave nodes n1, n2, n3 running both the IGFS daemon and the YARN
Node Manager, if the IGFS daemon in node n1 crashes I would like to be able
to connect to the IGFS cluster from any YARN container running in n1,
because the IGFS cluster would still have 2 nodes available. That would be
analogous to what would happen with HDFS, where a failure in the DataNode
process running in n1 wouldn't prevent containers running in n1 from
connecting to HDFS, because the HDFS URI in fact refers to the NameNode, not
to n1. If I use igfs://igfs@/, which is the same as igfs://igfs@localhost/,
to connect to IGFS from all YARN containers running in n1, then I understand
that if the IGFS daemon running in n1 crashes, I would lose the connection
to IGFS from all YARN containers running in n1. That is not exactly a
single point of failure, but it would have a significant impact. Can you
clarify what would happen?

To summarize:
  - is it possible to configure an IGFS URI that accesses an IGFS cluster
that is not running on the client node, but that balances the requests among
all nodes of the IGFS cluster?
  - if I connect to IGFS from a client using igfs://igfs@/ and the local
IGFS daemon dies, does the client lose its connection to the whole IGFS
cluster, even though other IGFS nodes are still alive and I have their
addresses published in a node discovery service (e.g. through
TcpDiscoveryZookeeperIpFinder)?

Thanks again,

Juan


On Wed, Dec 6, 2017 at 3:42 AM, Alexey Kukushkin <kukushkinalexey@gmail.com>
wrote:

> You do not need to specify the "host" part of the Hadoop URI at all if you
> do not want to limit processing to that specific host. Just specify
> "igfs://myIgfs@/" and Hadoop will load-balance the load for you across
> all IGFS nodes.
>
