tajo-user mailing list archives

From Hyunsik Choi <hyun...@apache.org>
Subject Re: Cluster setup :: HDFS :: Fully Distributed Mode
Date Sat, 27 Sep 2014 00:13:55 GMT
Hi Chris,

As far as I know, unfortunately, it is probably impossible to purge
already posted mailing list threads. There is nothing I can do about it. I'm sorry.

Anyway, your problem seems to be caused by a configuration mismatch. I
found the following clue:

2014-09-27 01:39:00,378 INFO org.apache.tajo.master.querymaster.Query:
Processing q_1411774569175_0001 of type SUBQUERY_COMPLETED
2014-09-27 01:39:00,379 INFO org.apache.tajo.master.querymaster.Query:
Processing q_1411774569175_0001 of type QUERY_COMPLETED
2014-09-27 01:39:00,406 FATAL
org.apache.tajo.master.TajoAsyncDispatcher: Error in dispatcher
thread:QUERY_COMPLETED
java.lang.IllegalArgumentException: Wrong FS:
file:/tmp/tajo-hduser/warehouse/default/fkkmaze_hist, expected:
hdfs://192.168.178.101:54310
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:643)
at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:191)
at org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:102)
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1124)
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1120)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1120)
at org.apache.hadoop.fs.FileSystem.isDirectory(FileSystem.java:1411)
at org.apache.tajo.storage.StorageUtil.getMaxFileSequence(StorageUtil.java:150)
at org.apache.tajo.master.querymaster.Query$QueryCompletedTransition.commitOutputData(Query.java:468)
at org.apache.tajo.master.querymaster.Query$QueryCompletedTransition.finalizeQuery(Query.java:404)
at org.apache.tajo.master.querymaster.Query$QueryCompletedTransition.transition(Query.java:385)
at org.apache.tajo.master.querymaster.Query$QueryCompletedTransition.transition(Query.java:378)
at org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
at org.apache.tajo.master.querymaster.Query.handle(Query.java:856)
at org.apache.tajo.master.querymaster.Query.handle(Query.java:63)
at org.apache.tajo.master.TajoAsyncDispatcher.dispatch(TajoAsyncDispatcher.java:137)
at org.apache.tajo.master.TajoAsyncDispatcher$1.run(TajoAsyncDispatcher.java:79)
at java.lang.Thread.run(Thread.java:745)


The error means that your tajo-site.xml is not the same across the
TajoMaster and all workers. You mentioned that you changed tajo.rootdir
on some host. If so, you must copy that tajo-site.xml to the Tajo
distributions on all workers.

The same applies to Hadoop: if you change any Hadoop config, you must
copy the changed config files to the Hadoop distributions on all nodes.
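
As an illustration (not your actual file), the relevant tajo-site.xml
property would look something like this, with the NameNode address taken
from the error message above:

```xml
<configuration>
  <!-- Root directory for Tajo data; must be identical in the
       tajo-site.xml of the TajoMaster and of every worker. -->
  <property>
    <name>tajo.rootdir</name>
    <value>hdfs://192.168.178.101:54310/tajo</value>
  </property>
</configuration>
```

With a file: URI here on one host and an hdfs: URI on another, a query
can fail with exactly the "Wrong FS" exception in the stack trace above.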

Warm regards,
Hyunsik

On Fri, Sep 26, 2014 at 5:05 PM, Christian Schwabe
<Christian.Schwabe@gmx.com> wrote:
>
> Hello Hyunsik,
>
> could you please delete the last e-mail with the attachment from this
> mailing list? The attachment to that e-mail was not intended to be
> published here. Many thanks.
> Sorry for the unnecessary overhead.
>
>
> Best regards,
> Chris
>
>
>
> On 27.09.2014 01:50:44, Christian Schwabe wrote:
>
>
> Hello Hyunsik,
>
> I am sending you a private e-mail because of the content of the table
> columns, which should not be published.
> I have probably gotten my hopes up a bit early again, but I am close to
> solving this problem. I'm sure!
>
> That's what I've tried:
>
> default> INSERT INTO xxx SELECT * FROM xxxy;
>
> Progress: 0%, response time: 1.819 sec
> Progress: 0%, response time: 4.086 sec
> Progress: 1%, response time: 5.106 sec
> Progress: 8%, response time: 6.133 sec
> Progress: 22%, response time: 8.279 sec
> Progress: 42%, response time: 10.316 sec
> Progress: 60%, response time: 13.397 sec
> Progress: 83%, response time: 15.43 sec
> Progress: 98%, response time: 17.446 sec
> Progress: 100%, response time: 19.489 sec
> [... roughly one "Progress: 100%" line per second elided; progress
> stayed at 100% for about four minutes without the query finishing ...]
> Progress: 100%, response time: 244.686 sec
>
> Killed: 9
>
>
> Do you know what happened here?
>
>
>
>
> On 26.09.2014 21:31:39, Hyunsik Choi wrote:
>
> Hi Chris,
>
> I got the main cause :)
>
> In my view, the netstat result is abnormal, because your bind
> address is 127.0.0.1. In that case, other machines cannot reach
> your TajoMaster.
>
> Your tajo-site.xml looks valid. So, please check two things:
> - All machines use the tajo-site.xml that you attached to this mailing list.
> - Make sure you restart your cluster after changing tajo-site.xml.
>
> You can probably fix the problem easily once you have checked the above.
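
As a sketch of that first check: the master's service addresses in
tajo-site.xml should be bound to the LAN IP rather than localhost, so
that remote workers can connect. The property names below follow the
Tajo configuration documentation, but please verify them against your
Tajo version:

```xml
<configuration>
  <!-- Bind master services to the LAN address, not 127.0.0.1,
       so workers on other machines can reach them. -->
  <property>
    <name>tajo.master.umbilical-rpc.address</name>
    <value>192.168.178.101:26001</value>
  </property>
  <property>
    <name>tajo.resource-tracker.rpc.address</name>
    <value>192.168.178.101:26003</value>
  </property>
</configuration>
```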
>
> Have a nice weekend.
>
> - Hyunsik
>
>
>
> On Fri, Sep 26, 2014 at 12:12 PM, Christian Schwabe
> <Christian.Schwabe@gmx.com> wrote:
>
>
> Hello Hyunsik,
>
> The TajoMaster on the "Master" MacBook seems to run normally. I can't see
> any abnormal state.
> I have attached the log from the TajoMaster.
>
>
> christians-mbp:bin hduser$ netstat -tn | grep 26003
>
> tcp4 0 0 127.0.0.1.26003 127.0.0.1.50262
> ESTABLISHED
>
> tcp4 0 0 127.0.0.1.50262 127.0.0.1.26003
> ESTABLISHED
>
>
>
>
>
>
> On 26.09.2014 20:22:08, Hyunsik Choi wrote:
>
> Now I can picture your environment. :)
>
> Could you check the TajoMaster log? You need to check whether TajoMaster
> is running normally.
>
> Also, you can verify that TajoMaster is running with netstat, as follows:
>
> netstat -tn | grep 26003
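
A note on that command: `netstat -tn` only shows established
connections. To see which address the master is actually bound to,
inspect the listening sockets instead (on OS X, `netstat -an | grep
LISTEN`). The sketch below runs the same kind of check against a canned
sample line, since real netstat output depends on the machine:

```shell
# Canned sample of a LISTEN line; in practice use: netstat -an | grep 26003
sample="tcp4  0  0  127.0.0.1.26003  *.*  LISTEN"

# A bind address of 127.0.0.1 means only local processes can connect.
if echo "$sample" | grep -q '127\.0\.0\.1\.26003'; then
  echo "master is bound to loopback only"
fi
```

A healthy distributed setup would show the LAN address (or a wildcard
bind) in that column instead of 127.0.0.1.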
>
> Thanks,
> Hyunsik
>
>
>
> On Fri, Sep 26, 2014 at 10:26 AM, Christian Schwabe
> <Christian.Schwabe@gmx.com> wrote:
>
>
> Hello Hyunsik,
>
> no problem. Anyway, thanks for your response.
> I have two physical MacBooks here.
> Each MacBook Pro is configured with the user hduser and the same
> password. All paths are the same.
> No virtualization is in use for master or slave.
>
> IP-Address:
> -Master :: 192.168.178.101
> -Slave :: 192.168.178.39
>
> They are currently connected via WLAN, but Ethernet is also prepared for
> the final configuration.
>
> Is my configuration in tajo-site.xml required, or are these only optional
> parameters?
> The attached tajo-site.xml is the config for the slave.
>
> I don't know exactly which configuration details you mean. If any
> question is still unanswered, feel free to ask.
> I hope it is not too late for you and that you are still reading this
> e-mail today.
>
> Warm regards,
> Chris
>
>
> On 26.09.2014 19:09:19, Hyunsik Choi wrote:
>
> Hi Chris,
>
> I'm sorry for the late response. It seems to be a network problem, because
> it's a very simple cluster configuration.
>
> Q. Could you share your network environment?
>
> Q. Is master or slave running on virtual machine?
>
> Best regards,
> Hyunsik
>
>
> On Fri, Sep 26, 2014 at 9:26 AM, Christian Schwabe
> <Christian.Schwabe@gmx.com> wrote:
>
>
> Hello guys,
>
> my current problem is that the Tajo worker on the slave cannot connect to
> the master.
> I need further information about
> http://tajo.apache.org/docs/current/configuration/cluster_setup.html#settings
> What do these parameters mean?
>
> I've attached the worker log from the slave, where only one worker runs.
> I've attached the tajo-site.xml from the slave.
> Is any setting incorrect?
>
> IP-Address:
> -Master :: 192.168.178.101
> -Slave :: 192.168.178.39
>
> Hopefully you can help me.
>
> Best regards,
> Chris
>
>
> On 26.09.2014 12:56:55, Christian Schwabe wrote:
>
>
>
> Hello guys,
>
> sorry for spamming.
> I found the solution.
> The path of the Tajo home directory was not the same on both machines.
> When I now start Tajo on the master in HA mode, the worker on the slave
> machine starts, too.
> BUT in the web UI for the master I don't see the second worker as alive.
>
> Warm regards,
> Chris
>
>
>
> On 26.09.2014 12:31:58, Christian Schwabe wrote:
>
>
> Hello guys,
>
> next try to get an answer...
> Hadoop is still running successfully on both machines, master and slave.
> I start Tajo in HA mode on the master. In the web UI I see one Query Master,
> one Worker, and one Master. Is that correct so far?
> Now I start the Tajo worker on the slave machine with 'sh tajo-daemon.sh
> start worker', and the worker starts successfully. Also, conf/masters and
> conf/slaves exist on both the master machine and the slave machine.
>
> However, I see no second worker in the web UI on the master. What is still
> wrong? Thanks for any advice.
>
>
> Best regards,
> Chris
>
>
>
> On 23.09.2014 22:58:25, Christian Schwabe wrote:
>
> Hello guys,
>
> a few days later, I have already made some progress.
> HDFS already runs successfully in the background. I followed these
> instructions:
> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>
> TajoMaster already starts in HA mode, but how do I configure Tajo in detail
> to use HDFS? I read the documentation at
> http://tajo.apache.org/docs/current/configuration/cluster_setup.html, but I
> don't know how to configure Tajo to store files in HDFS.
> Also, I don't know how to set up the second machine to start only a worker
> and connect to the master.
> I have already set up the masters and workers files.
>
> MacBook1
>
> workers content:
>
> localhost
> 192.168.178.39 // second worker
>
> masters content:
>
> localhost
>
> MacBook2
>
> workers content:
>
> localhost
> 192.168.178.101
>
> masters content:
>
> 192.168.178.101
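
If I read the cluster setup docs correctly, the start scripts are run on
the master and read conf/masters and conf/workers there to launch the
daemons over ssh, so the copies on the slave should not need to list the
master. A plausible layout for this two-MacBook setup (an assumption on
my part; please verify against the documentation) would be:

```
conf/masters on 192.168.178.101:
  192.168.178.101

conf/workers on 192.168.178.101 (one worker address per line):
  192.168.178.101
  192.168.178.39
```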
>
>
>
> Actually, I am missing a complete guide in the documentation for adding a
> second worker to the master. Once I figure it out with your help, I will
> write these instructions up and make them available in the documentation. I
> promise!
>
> Hopefully you can help me.
>
>
> Best regards,
> Chris
>
>
> On 22.09.2014 at 08:46, Christian Schwabe
> <Christian.Schwabe@gmx.com> wrote:
>
>
> Hello guys,
>
> can someone help me set up a cluster with a second worker?
>
>
> Best regards,
> Chris
>
