Hello Renjith,

Hortonworks has a self-contained sandbox that you can just download and spin up to see what it looks like.

Cheers,
Dejan

On Wed, Jun 22, 2016 at 6:33 PM Renjith <renjithgk@gmail.com> wrote:

Hello All,

Before proceeding, I seek expert advice from the group on whether we can use Docker for Mac and then get a Docker image for Hadoop. I observe that Docker is lightweight, as containers share the host system kernel and use less RAM.

Kindly advise, as I am going to remove VMware Fusion from my Mac because it occupies the majority of my Mac's memory.

Thanks,
Renjith

On 22 Jun 2016, at 08:17, Phillip Wu <phillip.wu@unsw.edu.au> wrote:

You should be able to run it on a Mac, as there is a Java runtime for the Mac. The Hadoop binaries are Java binaries.

From: Renjith Gk [mailto:renjithgk@gmail.com]
Sent: Wednesday, 22 June 2016 12:15 PM
To: Phillip Wu <phillip.wu@unsw.edu.au>
Cc: user@hadoop.apache.org
Subject: RE: Hadoop In Real Scenario

Thanks Philip.
As you mentioned, it runs on Unix/Linux. Can it run on a Unix-flavoured machine (macOS), which I am using, rather than installing VMware for Mac and then installing a Linux OS + Java SDK + Hadoop?
On 22 Jun 2016 07:15, "Phillip Wu" <phillip.wu@unsw.edu.au> wrote:

This is my understanding:
Hadoop provides a filesystem called HDFS which allows you to store and delete files, but not update them in place.
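For instance, a minimal session with the `hdfs dfs` command-line client might look like this (a sketch, assuming a running HDFS; the paths are illustrative):

```shell
# Copy a local file into HDFS (store)
hdfs dfs -put localfile.txt /user/renjith/localfile.txt

# List it
hdfs dfs -ls /user/renjith/

# Delete it (files are write-once; there is no in-place update)
hdfs dfs -rm /user/renjith/localfile.txt

# To "update" a file you typically overwrite it instead:
hdfs dfs -put -f localfile.txt /user/renjith/localfile.txt
```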
There are other products that can be installed on top of Hadoop, e.g. Hive, which provides SQL access to data in HDFS.
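To give a flavour of the Hive idea (a sketch only; the table and column names here are made up): once a table is defined over files in HDFS, you can query it with ordinary SQL from the Hive CLI:

```shell
# Define a Hive table over delimited files in HDFS...
hive -e "CREATE TABLE IF NOT EXISTS logs (ts STRING, msg STRING)
         ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';"

# ...then query it with plain SQL
hive -e "SELECT count(*) FROM logs;"
```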
A Hadoop cluster can be a single node or many nodes.
Each node can be a namenode or a datanode.
The namenode stores metadata about files; datanodes store the file contents.
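On a running cluster you can see this namenode/datanode split yourself (a sketch, assuming the daemons are up):

```shell
# Ask the namenode for a cluster summary: total capacity,
# plus the status and usage of each individual datanode
hdfs dfsadmin -report
```

The namenode also serves a web UI with the same information (on port 50070 by default in Hadoop 2.x).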
The code is Java, so it can run on most Unix/Linux servers.
Installation is by:
1. Download the software onto one node
2. ungzip/untar the software
3. Configure the software
4. Copy the same software and configuration to other node(s)
5. Format the namenode
6. Start the Hadoop daemons
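The steps above can be sketched as shell commands for a small cluster (the version number and paths are illustrative, not prescriptive):

```shell
# 1-2. Download the software onto one node and unpack it
wget https://archive.apache.org/dist/hadoop/common/hadoop-2.7.2/hadoop-2.7.2.tar.gz
tar -xzf hadoop-2.7.2.tar.gz
cd hadoop-2.7.2

# 3. Configure: set fs.defaultFS in etc/hadoop/core-site.xml,
#    replication etc. in etc/hadoop/hdfs-site.xml,
#    and JAVA_HOME in etc/hadoop/hadoop-env.sh

# 4. Copy the same software and configuration to the other node(s), e.g.:
#    scp -r ../hadoop-2.7.2 othernode:/opt/

# 5. Format the namenode (once, on the namenode host only)
bin/hdfs namenode -format

# 6. Start the HDFS daemons
sbin/start-dfs.sh
```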
-----Original Message-----
From: Renjith [mailto:renjithgk@gmail.com]
Sent: Tuesday, 21 June 2016 1:30 AM
To: user@hadoop.apache.org
Subject: Hadoop In Real Scenario
Dear Mates,
I am a beginner in Hadoop and very new to this.
I would like to know how Hadoop is used. Is there any UI to allocate the nodes and store files, or is it done at the backend? Should we have backend knowledge?
Can you provide a real scenario where it is used?
I came to know about Ambari; kindly provide some information on this.
Can it run on any Unix flavour? From a Hadoop administrator's role, what are the activities?
Thanks,
Renjith
---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@hadoop.apache.org
For additional commands, e-mail: user-help@hadoop.apache.org
---------------------------------------------------------------------