accumulo-user mailing list archives

From "Adam J. Shook" <adamjsh...@gmail.com>
Subject Re: Question on how Accumulo binds to Hadoop
Date Wed, 31 Jan 2018 22:06:09 GMT
Yes, it does use RPC to talk to HDFS.  You will need to update the value of
instance.volumes in accumulo-site.xml to reference this address,
haz0-m:8020, instead of the default localhost:9000.
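For example, a sketch of the accumulo-site.xml change (the /accumulo path is an assumption; use whatever HDFS directory your instance actually writes under):

  <property>
    <name>instance.volumes</name>
    <value>hdfs://haz0-m:8020/accumulo</value>
  </property>

Note that instance.volumes takes full hdfs:// URIs, not just a host:port pair.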

--Adam

On Wed, Jan 31, 2018 at 4:45 PM, Geoffry Roberts <threadedblue@gmail.com>
wrote:

> I have a situation where Accumulo cannot find Hadoop.
>
> Hadoop is running and I can access hdfs from the cli.
> Zookeeper also says it is ok and I can log in using the client.
> Accumulo init is failing with a connection refused for localhost:9000.
>
> netstat shows nothing listening on 9000.
>
> Now the plot thickens...
>
> The Hadoop I am running is Google's Dataproc and the Hadoop installation
> is not my own.  I have already found a number of differences.
>
> Here's my question:  Does Accumulo use RPC to talk to Hadoop? I ask
> because of things like this:
>
> From hdfs-site.xml
>
>   <property>
>     <name>dfs.namenode.rpc-address</name>
>     <value>haz0-m:8020</value>
>     <description>
>       RPC address that handles all clients requests. If empty then we'll get
>       the value from fs.default.name. The value of this property will take
>       the form of hdfs://nn-host1:rpc-port.
>     </description>
>   </property>
>
> Or does it use something else?
>
> Thanks
> --
> There are ways and there are ways,
>
> Geoffry Roberts
>
