incubator-tashi-user mailing list archives

From: Michael Stroucken <...@cmu.edu>
Subject: Re: Installing tashi
Date: Wed, 11 May 2011 19:58:23 GMT
Here's a log of installing Tashi on my machine and running a virtual
machine. "marsupilami" is the name of the machine I am using to run the
clustermanager and the scheduler, and to host VMs.

Script started on Sun 08 May 2011 01:08:41 PM EDT
marsupilami:~# mkdir /opt/tashi

Check out the code

marsupilami:~# mkdir /tmp/foo
marsupilami:~# cd /tmp/foo
marsupilami:/tmp/foo# svn co http://svn.apache.org/repos/asf/incubator/tashi
A    tashi/trunk
A    tashi/trunk/NOTICE
A    tashi/trunk/LICENSE
A    tashi/trunk/doc
A    tashi/trunk/doc/DEVELOPMENT
...
A    tashi/import/zoni-intel-r843/Makefile
Checked out revision 1100772.
marsupilami:/tmp/foo# ls
tashi
marsupilami:/tmp/foo# cd tashi
marsupilami:/tmp/foo/tashi# ls
board  branches  import  site  trunk
marsupilami:/tmp/foo/tashi# cd branches
marsupilami:/tmp/foo/tashi/branches# ls
stablefix  zoni-dev
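
As an aside, the checkout above landed on revision 1100772; to reproduce this exact tree later, the checkout can be pinned to that revision:

svn co -r 1100772 http://svn.apache.org/repos/asf/incubator/tashi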

Install the stablefix branch

marsupilami:/tmp/foo/tashi/branches# cd stablefix
marsupilami:/tmp/foo/tashi/branches/stablefix# ls
doc  etc  LICENSE  Makefile  NOTICE  README  scripts  src
marsupilami:/tmp/foo/tashi/branches/stablefix# cp -a * /opt/tashi
marsupilami:/tmp/foo/tashi/branches/stablefix# cd /opt/tashi
marsupilami:/opt/tashi# rm -rf /tmp/foo
marsupilami:/opt/tashi# ls
doc  etc  LICENSE  Makefile  NOTICE  README  scripts  src
marsupilami:/opt/tashi# make
Symlinking in clustermanager...
Symlinking in nodemanager...
Symlinking in tashi-client...
Symlinking in primitive...
Symlinking in zoni-cli...
Done
marsupilami:/opt/tashi# ls
bin  doc  etc  LICENSE	Makefile  NOTICE  README  scripts  src
marsupilami:/opt/tashi# ls bin
clustermanager.py  nmd.py  nodemanager.py  primitive.py  tashi-client.py  zoni-cli.py

Get RPyC

marsupilami:/opt/tashi# mkdir /tmp/rpyc
marsupilami:/opt/tashi# cd /tmp/rpyc/
marsupilami:/tmp/rpyc# wget http://superb-sea2.dl.sourceforge.net/project/rpyc/main/3.1.0/RPyC-3.1.0.tar.gz
--2011-05-08 13:16:31--  http://superb-sea2.dl.sourceforge.net/project/rpyc/main/3.1.0/RPyC-3.1.0.tar.gz
Resolving superb-sea2.dl.sourceforge.net... 209.160.57.180
Connecting to superb-sea2.dl.sourceforge.net|209.160.57.180|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 32573 (32K) [application/x-gzip]
Saving to: “RPyC-3.1.0.tar.gz”

2011-05-08 13:16:31 (148 KB/s) - “RPyC-3.1.0.tar.gz” saved [32573/32573]

marsupilami:/tmp/rpyc# ls -alrt
total 32
-rw-r--r-- 1 root root 32573 Mar 21 16:47 RPyC-3.1.0.tar.gz
drwxrwxrwt 6 root root    70 May  8 13:16 ..
drwxr-xr-x 2 root root    30 May  8 13:16 .
marsupilami:/tmp/rpyc# tar zxvf RPyC-3.1.0.tar.gz 
RPyC-3.1.0/
RPyC-3.1.0/LICENSE
RPyC-3.1.0/MANIFEST.in
RPyC-3.1.0/PKG-INFO
RPyC-3.1.0/README
RPyC-3.1.0/rpyc/
RPyC-3.1.0/rpyc/core/
RPyC-3.1.0/rpyc/core/async.py
...
RPyC-3.1.0/setup.cfg
RPyC-3.1.0/setup.py
marsupilami:/tmp/rpyc# cd RPyC-3.1.0
marsupilami:/tmp/rpyc/RPyC-3.1.0# ls
LICENSE  MANIFEST.in  PKG-INFO	README	rpyc  RPyC.egg-info  setup.cfg	setup.py
marsupilami:/tmp/rpyc/RPyC-3.1.0# python setup.py install
/usr/lib/python2.6/distutils/dist.py:266: UserWarning: Unknown distribution option: 'use_2to3'
  warnings.warn(msg)
/usr/lib/python2.6/distutils/dist.py:266: UserWarning: Unknown distribution option: 'zip_ok'
  warnings.warn(msg)
running install
running build
running build_py
creating build
creating build/lib.linux-x86_64-2.6
creating build/lib.linux-x86_64-2.6/rpyc
copying rpyc/version.py -> build/lib.linux-x86_64-2.6/rpyc
copying rpyc/__init__.py -> build/lib.linux-x86_64-2.6/rpyc
creating build/lib.linux-x86_64-2.6/rpyc/core
...
changing mode of /usr/local/bin/rpyc_classic.py to 777
changing mode of /usr/local/bin/rpyc_registry.py to 777
changing mode of /usr/local/bin/rpyc_vdbconf.py to 777
running install_egg_info
Writing /usr/local/lib/python2.6/dist-packages/RPyC-3.1.0.egg-info
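
A quick sanity check that RPyC landed on the Python path (an aside, not part of the original session; it just prints the module's location):

python -c 'import rpyc; print rpyc.__file__'
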
marsupilami:/tmp/rpyc/RPyC-3.1.0# cd /opt/tashi
marsupilami:/opt/tashi# ls
bin  doc  etc  LICENSE	Makefile  NOTICE  README  scripts  src
marsupilami:/opt/tashi# cd etc
marsupilami:/opt/tashi/etc# ls
NodeManager.cfg  TashiDefaults.cfg  TestConfig.cfg  ZoniDefaults.cfg
marsupilami:/opt/tashi/etc# mkdir ~/.tashi
marsupilami:/opt/tashi/etc# cp * ~/.tashi


Edit configuration files. I created a separate configuration file for the clustermanager

marsupilami:/opt/tashi/etc# cd ~/.tashi
marsupilami:~/.tashi# ls
ClusterManager.cfg  NodeManager.cfg  TashiDefaults.cfg	TestConfig.cfg	ZoniDefaults.cfg

marsupilami:/opt/tashi/bin# cat ~/.tashi/NodeManager.cfg 
# NodeManager portion
[NodeManager]
dfs = tashi.dfs.Vfs
vmm = tashi.nodemanager.vmcontrol.Qemu
#vmm = tashi.nodemanager.vmcontrol.XenPV
service = tashi.nodemanager.NodeManagerService
publisher = tashi.messaging.GangliaPublisher

[GangliaPublisher]
dmax = 60
retry = 3600

[formatters]
keys = standardFormatter

[formatter_standardFormatter]
format=%(asctime)s [%(name)s:%(levelname)s] %(message)s
datefmt=
class=logging.Formatter

[handlers]
#keys = consoleHandler,publisherHandler,fileHandler
keys = consoleHandler, fileHandler

[handler_consoleHandler]
class = StreamHandler
level = NOTSET
formatter = standardFormatter
args = (sys.stdout,)

[loggers]
keys = root

[logger_root]
level = DEBUG
#handlers = consoleHandler,publisherHandler,fileHandler,syslogHandler
handlers = fileHandler
propagate = 1

[Vfs]
prefix = /root/tashi

[Qemu]
qemuBin = /usr/bin/kvm
infoDir = /root/tashi/vmcontrol
scratchDir = /tmp
pollDelay = 1.0
migrationRetries = 10
monitorTimeout = 60.0
migrateTimeout = 300.0
maxParallelMigrations = 10
useMigrateArgument = False
statsInterval = 0.0

[XenPV]
vmNamePrefix = tashi
transientdir = /tmp
defaultVmType = pygrub
#defaultKernel = /boot/vmlinuz-xen
#defaultRamdisk = /boot/initrd-xen
defaultDiskType=vhd

[NodeManagerService]
convertExceptions = True
port = 9883
registerFrequency = 120.0
infoFile = /root/tashi/nm.dat
clusterManagerHost = marsupilami
clusterManagerPort = 9882
statsInterval = 0.0
;bind = 0.0.0.0 ; not supported (Thrift is missing support to specify what to bind to!)

[Security]
authAndEncrypt = False
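
Before relying on the [Qemu] settings above, it's worth confirming that the configured hypervisor binary and the KVM device actually exist (a hedged check, using the paths from the config above):

ls -l /usr/bin/kvm /dev/kvm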

marsupilami:/opt/tashi/bin# cat ~/.tashi/TashiDefaults.cfg 
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
# 
#   http://www.apache.org/licenses/LICENSE-2.0
# 
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.    

[Security]
authAndEncrypt = False

[AccessClusterManager]
#If username and password are left empty, user will be prompted for username and password on the command line.
username = nodemanager
password = nodemanager

[AccessNodeManager]
#If username and password are left empty, user will be prompted for username and password on the command line.
username = clustermanager
password = clustermanager

[AllowedUsers]
nodeManagerUser = nodemanager
nodeManagerPassword = nodemanager
agentUser = agent
agentPassword = agent
clusterManagerUser = clustermanager
clusterManagerPassword = clustermanager

# ClusterManager portion
[ClusterManager]
service = tashi.clustermanager.ClusterManagerService
data = tashi.clustermanager.data.GetentOverride
dfs = tashi.dfs.Vfs
publisher = tashi.messaging.GangliaPublisher
nodeManagerPort = 9883

[ClusterManagerService]
host = marsupilami
convertExceptions = True
port = 9882
expireHostTime = 30.0
allowDecayed = 30.0
allowMismatchedVersions = False
maxMemory = 8192
maxCores = 8
allowDuplicateNames = False
;bind = 0.0.0.0 ; not supported (Thrift is missing support to specify what to bind to!)

[GetentOverride]
baseData = tashi.clustermanager.data.SQL
fetchThreshold = 60.0

[LdapOverride]
baseData = tashi.clustermanager.data.SQL
fetchThreshold = 3600.0
nameKey = sAMAccountName
idKey = msSFU30UidNumber
ldapCommand = ldapsearch -x -w password -h host -b searchbase -D binddn msSFU30LoginShell=* -z 0

[FromConfig]
#hostlist = /one/host/per/line
host1 = Host(d={'id':1,'name':'blade043'})
host2 = Host(d={'id':2,'name':'blade044'})
host3 = Host(d={'id':3,'name':'blade045'})
host4 = Host(d={'id':4,'name':'blade074'})
machineType1 = MachineType(d={'id':1,'name':'1c-512m','memory':512,'cores':1})
network1 = Network(d={'id':1,'name':'global'})
network2 = Network(d={'id':2,'name':'NAT'})
user1 = User(d={'id':1,'name':'mryan3'})

[Pickled]
file = /var/tmp/cm.dat

[SQL]
uri = sqlite:///root/tashi/cm_sqlite.dat
password = changeme

# NodeManager portion
[NodeManager]
dfs = tashi.dfs.Vfs
vmm = tashi.nodemanager.vmcontrol.Qemu
#vmm = tashi.nodemanager.vmcontrol.XenPV
service = tashi.nodemanager.NodeManagerService
publisher = tashi.messaging.GangliaPublisher

[NodeManagerService]
convertExceptions = True
port = 9883
registerFrequency = 10.0
infoFile = /root/tashi/nm.dat
clusterManagerHost = marsupilami
clusterManagerPort = 9882
statsInterval = 0.0
;bind = 0.0.0.0 ; not supported (Thrift is missing support to specify what to bind to!)

[Qemu]
qemuBin = /usr/local/bin/qemu-system-x86_64
infoDir = /var/tmp/VmControlQemu/
pollDelay = 1.0
migrationRetries = 10
monitorTimeout = 60.0
migrateTimeout = 300.0
maxParallelMigrations = 10
useMigrateArgument = False
statsInterval = 0.0

[XenPV]
vmNamePrefix = tashi
transientdir = /tmp
defaultVmType = kernel
defaultKernel = /boot/vmlinuz-xen
defaultRamdisk = /boot/initrd-xen
defaultDiskType=qcow

[Vfs]
prefix = /var/tmp/

[LocalityService]
host = localityserver
port = 9884
staticLayout = /location/of/layout/file

# Client configuration
[Client]
clusterManagerHost = marsupilami
clusterManagerPort = 9882
clusterManagerTimeout = 5.0

# Agent portion
[Agent]
publisher = tashi.messaging.GangliaPublisher

[Primitive]
#hook1 = tashi.agents.DhcpDns
scheduleDelay = 2.0
densePack = False

[MauiWiki]
hook1 = tashi.agents.DhcpDns
refreshTime = 5
authuser = changeme
authkey = 1111
defaultJobTime = 8640000000

[DhcpDns]
dnsEnabled = True
dnsKeyFile = /location/of/private/key/for/dns
dnsServer = 1.2.3.4 53
dnsDomain = tashi.example.com
dnsExpire = 300
dhcpEnabled = True
dhcpServer = 1.2.3.4
dhcpKeyName = OMAPI
dhcpSecretKey = ABcdEf12GhIJKLmnOpQrsT==
ipRange1 = 172.16.128.2-172.16.255.254
reverseDns = True
clustermanagerhost=clustermanager
clustermanagerport=9886

[GangliaPublisher]
dmax = 60
retry = 3600

# Logging stuff
# Switch the "keys" and "handlers" variables below to output log data to the publisher
[loggers]
keys = root	

[handlers]
#keys = consoleHandler,publisherHandler,fileHandler
keys = consoleHandler

[formatters]
keys = standardFormatter

[logger_root]
level = DEBUG
#handlers = consoleHandler,publisherHandler,fileHandler,syslogHandler
handlers = consoleHandler
propagate = 1
	
[handler_consoleHandler]
class = StreamHandler
level = NOTSET
formatter = standardFormatter
args = (sys.stdout,)

[handler_publisherHandler]
class = tashi.messaging.MessagingLogHandler
level = NOTSET
formatter = standardFormatter
args = ()

[handler_fileHandler]
class = FileHandler
level = NOTSET
formatter = standardFormatter
args = ("/var/log/nodemanager.log",)

[handler_syslogHandler]
class = handlers.SysLogHandler
level = NOTSET
formatter = standardFormatter
args = ('/dev/log',)

[formatter_standardFormatter]
format=%(asctime)s [%(name)s:%(levelname)s] %(message)s
datefmt=
class=logging.Formatter

# Message Broker
[MessageBroker]
host = localhost
port = 1717

[AWS]
awsfile = /var/tmp/aws.dat

marsupilami:~/.tashi# cat ClusterManager.cfg 
# ClusterManager portion
[ClusterManager]
service = tashi.clustermanager.ClusterManagerService
#data = tashi.clustermanager.data.GetentOverride
data = tashi.clustermanager.data.SQL
dfs = tashi.dfs.Vfs
publisher = tashi.messaging.GangliaPublisher
nodeManagerPort = 9883

[ClusterManagerService]
host = marsupilami
convertExceptions = True
port = 9882
expireHostTime = 150.0
allowDecayed = 30.0
allowMismatchedVersions = False
maxMemory = 20480
maxCores = 8
allowDuplicateNames = False
;bind = 0.0.0.0 ; not supported (Thrift is missing support to specify what to bind to!)

[GetentOverride]
baseData = tashi.clustermanager.data.SQL
fetchThreshold = 60.0

[GangliaPublisher]
dmax = 60
retry = 3600

[formatters]
keys = standardFormatter

[formatter_standardFormatter]
format=%(asctime)s [%(name)s:%(levelname)s] %(message)s
datefmt=
class=logging.Formatter

[handlers]
#keys = consoleHandler,publisherHandler,fileHandler
keys = consoleHandler,fileHandler

[handler_consoleHandler]
class = StreamHandler
level = NOTSET
formatter = standardFormatter
args = (sys.stdout,)

# Logging stuff
# Switch the "keys" and "handlers" variables below to output log data to the publisher
[loggers]
keys = root

[handler_fileHandler]
class = FileHandler
level = NOTSET
formatter = standardFormatter
args = ("/var/log/clustermanager.log",)

[logger_root]
level = DEBUG
#handlers = consoleHandler,publisherHandler,fileHandler,syslogHandler
handlers = fileHandler
propagate = 1

[Vfs]
prefix = /root/tashi

[SQL]
uri = sqlite:///root/tashi/cm_sqlite.dat
password = changeme


[Security]
authAndEncrypt = False
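
Note how this file layers on top of TashiDefaults.cfg: the clustermanager reads the defaults first and then this file, so sections present in both (for example [ClusterManagerService] and [SQL]) take their values from here. The startup log further below confirms the load order. A quick way to see which sections overlap (just a grep sketch, not from the session):

grep -n '^\[' ~/.tashi/TashiDefaults.cfg ~/.tashi/ClusterManager.cfg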

Set up the PYTHONPATH environment variable

marsupilami:~# cd /opt/tashi/src
marsupilami:/opt/tashi/src# export PYTHONPATH=`pwd`
marsupilami:/opt/tashi/src# cd
marsupilami:~# mkdir /root/tashi
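
The export above only lasts for the current shell. If the managers will later be started from a fresh login, one option is to persist it in root's shell profile (a sketch, assuming bash):

echo 'export PYTHONPATH=/opt/tashi/src' >> /root/.bashrc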

Install python-pysqlite1.1 for the database interface

marsupilami:~# apt-get install sqlite3 python-pysqlite1.1
Suggested packages:
  python-pysqlite1.1-dbg
The following NEW packages will be installed:
  python-pysqlite1.1 sqlite3
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 47.5 kB of archives.
After this operation, 213 kB of additional disk space will be used.
Unpacking python-pysqlite1.1 (from .../python-pysqlite1.1_1.1.8a-3+b2_amd64.deb) ...
Setting up python-pysqlite1.1 (1.1.8a-3+b2) ...
Processing triggers for python-central ...

Start the Tashi clustermanager

marsupilami:~# cd /opt/tashi/bin
marsupilami:/opt/tashi/bin# ./clustermanager.py &
[1] 4407
marsupilami:/opt/tashi/bin# 
marsupilami:/opt/tashi/bin# tail /var/log/clustermanager.log 
2011-05-08 19:17:51,502 [./clustermanager.py:INFO] Using configuration file(s) ['/root/.tashi/TashiDefaults.cfg', '/root/.tashi/ClusterManager.cfg']
2011-05-08 19:17:51,503 [./clustermanager.py:INFO] Starting cluster manager
2011-05-08 19:17:51,761 [MANAGER/9882:INFO] server started on 0.0.0.0:9882
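
To double-check that the service is really listening on the configured port, something like this works (an aside, not from the original session):

netstat -lnp | grep 9882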

Configure the clustermanager database

marsupilami:/opt/tashi/bin# sqlite3 /root/tashi/cm_sqlite.dat 
SQLite version 3.7.6.2
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> .schema
CREATE TABLE hosts (id INTEGER PRIMARY KEY, name varchar(256) NOT NULL, up tinyint(1) DEFAULT 0, decayed tinyint(1) DEFAULT 0, state int(11) DEFAULT 1, memory int(11), cores int(11), version varchar(256));
CREATE TABLE instances (id int(11) NOT NULL, vmId int(11), hostId int(11), decayed tinyint(1) NOT NULL, state int(11) NOT NULL, userId int(11), name varchar(256), cores int(11) NOT NULL, memory int(11) NOT NULL, disks varchar(1024) NOT NULL, nics varchar(1024) NOT NULL, hints varchar(1024) NOT NULL);
CREATE TABLE networks (id int(11) NOT NULL, name varchar(256) NOT NULL);
CREATE TABLE users (id int(11) NOT NULL, name varchar(256) NOT NULL, passwd varchar(256));
sqlite> select * from hosts;
sqlite> insert into hosts (id, name) values (1, 'marsupilami');
sqlite> insert into users (id, name) values (0, 'root');
sqlite> insert into networks values (0, 'default');
sqlite> .quit
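
The same rows can be inspected non-interactively, which is handy for scripting (sqlite3 accepts the SQL as a command-line argument):

sqlite3 /root/tashi/cm_sqlite.dat 'select * from hosts;'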


Start the Tashi primitive scheduler

marsupilami:/opt/tashi/bin# ./primitive.py &
[2] 4435
marsupilami:/opt/tashi/bin# 
marsupilami:/opt/tashi/bin# ./tashi-client.py gethosts
 id reserved name        decayed up    state  version memory cores notes
------------------------------------------------------------------------
 1  []       marsupilami False   False Normal None    None   None  None 

Start the Tashi nodemanager

marsupilami:/opt/tashi/bin# ./nodemanager.py &
[3] 4448
marsupilami:/opt/tashi/bin# 
marsupilami:/opt/tashi/bin# ./tashi-client.py gethosts
 id reserved name        decayed up   state  version memory cores notes
-----------------------------------------------------------------------
 1  []       marsupilami False   True Normal HEAD    16079  4     None 
marsupilami:/opt/tashi/bin# brctl show
bridge name	bridge id		STP enabled	interfaces

Set up networking

Stop any existing DHCP client on the machine's network interface

Create the bridge the Tashi VMs will use

marsupilami:/opt/tashi/bin# ifconfig eth0 0.0.0.0
marsupilami:/opt/tashi/bin# brctl addbr br0
marsupilami:/opt/tashi/bin# brctl setfd br0 1
marsupilami:/opt/tashi/bin# brctl sethello br0 1
marsupilami:/opt/tashi/bin# brctl addif br0 eth0
marsupilami:/opt/tashi/bin# ifconfig eth0 up
marsupilami:/opt/tashi/bin# ifconfig br0 up
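
These bridge settings are lost on reboot. On a Debian-style system the bridge can be made persistent with an /etc/network/interfaces stanza along these lines (a sketch, assuming the bridge-utils ifupdown hooks are installed):

auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_fd 1
    bridge_hello 1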

Get an IP address for the bridge

marsupilami:/opt/tashi/bin# dhclient br0

Verify connectivity

marsupilami:/opt/tashi/bin# ping www.google.com
PING www.l.google.com (74.125.93.105) 56(84) bytes of data.
64 bytes from qw-in-f105.1e100.net (74.125.93.105): icmp_req=1 ttl=52 time=27.3 ms
64 bytes from qw-in-f105.1e100.net (74.125.93.105): icmp_req=2 ttl=52 time=27.6 ms
64 bytes from qw-in-f105.1e100.net (74.125.93.105): icmp_req=4 ttl=52 time=27.8 ms
64 bytes from qw-in-f105.1e100.net (74.125.93.105): icmp_req=5 ttl=52 time=26.6 ms
64 bytes from qw-in-f105.1e100.net (74.125.93.105): icmp_req=6 ttl=52 time=28.7 ms
64 bytes from qw-in-f105.1e100.net (74.125.93.105): icmp_req=7 ttl=52 time=27.6 ms
^C
--- www.l.google.com ping statistics ---
7 packets transmitted, 6 received, 14% packet loss, time 7059ms
rtt min/avg/max/mdev = 26.603/27.651/28.775/0.659 ms

Create a script to handle the VM network interfaces as they are created

marsupilami:/opt/tashi/bin# cat /etc/qemu-ifup.0
#!/bin/sh

/sbin/ifconfig $1 0.0.0.0 up
/sbin/brctl addif br0 $1
exit 0

marsupilami:/opt/tashi/bin# chmod 700 /etc/qemu-ifup.0
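
The ".0" suffix appears to correspond to Tashi's network id 0, so each additional network presumably gets its own helper script attaching the tap interface to that network's bridge. A hypothetical /etc/qemu-ifup.1 for a second network bridged on br1 (an assumption, not from this session) would look like:

#!/bin/sh

# hypothetical helper for Tashi network 1; br1 is assumed to exist
/sbin/ifconfig $1 0.0.0.0 up
/sbin/brctl addif br1 $1
exit 0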
marsupilami:/opt/tashi/bin# jobs
[1]   Running                 ./clustermanager.py &
[2]-  Running                 ./primitive.py &
[3]+  Running                 ./nodemanager.py &

Provide a VM disk image in /root/tashi/images
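
The [Vfs] prefix is /root/tashi, so the disk URI passed to createVm below appears to be resolved under that tree. To stage an image and confirm it is the qcow2 the client expects (a sketch; qemu-img ships with the qemu packages):

mkdir -p /root/tashi/images
# copy your prepared image into place, then verify its format:
qemu-img info /root/tashi/images/debian-squeeze-amd64.qcow2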
Create a VM

marsupilami:/opt/tashi/bin# ./tashi-client.py createVm --name squeeze --cores 1 --memory 2048 --disks debian-squeeze-amd64.qcow2 --hints nicModel=virtio
{
    hostId: None
    name: squeeze
    vmId: None
    decayed: False
    disks: [
        {'uri': 'debian-squeeze-amd64.qcow2', 'persistent': False}
    ]
    userId: 0
    groupName: None
    state: Pending
    nics: [
        {'ip': None, 'mac': '52:54:00:55:74:25', 'network': 0}
    ]
    memory: 2048
    cores: 1
    id: 3
    hints: {'nicModel': 'virtio'}
}
marsupilami:/opt/tashi/bin# 2011-05-10 21:27:02,415 [./primitive.py:INFO] Scheduling instance squeeze (2048 mem, 1 cores, 0 uid) on host marsupilami


Destroy a VM

marsupilami:/opt/tashi/bin# ./tashi-client.py destroyVm --instance squeeze

Create a new VM

marsupilami:/opt/tashi/bin# ./tashi-client.py createVm --name squeeze --cores 1 --memory 2048 --disks debian-squeeze-amd64.qcow2 --hints nicModel=virtio
{
    hostId: None
    name: squeeze
    vmId: None
    decayed: False
    disks: [
        {'uri': 'debian-squeeze-amd64.qcow2', 'persistent': False}
    ]
    userId: 0
    groupName: None
    state: Pending
    nics: [
        {'ip': None, 'mac': '52:54:00:0f:e8:4b', 'network': 0}
    ]
    memory: 2048
    cores: 1
    id: 4
    hints: {'nicModel': 'virtio'}
}
marsupilami:/opt/tashi/bin# 2011-05-10 21:31:11,021 [./primitive.py:INFO] Scheduling instance squeeze (2048 mem, 1 cores, 0 uid) on host marsupilami

Verify VM is running

marsupilami:/opt/tashi/bin# ./tashi-client.py getinstances
 id hostId name    user state   disk                       memory cores
-----------------------------------------------------------------------
 4  1      squeeze root Running debian-squeeze-amd64.qcow2 2048   1    
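
The client does not report the guest's IP address; the 192.168.1.3 used below was handed out by the LAN's DHCP server. One way to find it is to look up the VM's MAC (52:54:00:0f:e8:4b above) in the ARP table once the guest is on the network (an aside, not from the session):

arp -n | grep -i '52:54:00:0f:e8:4b'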

Connect to the new VM

marsupilami:/opt/tashi/bin# ssh root@192.168.1.3
The authenticity of host '192.168.1.3 (192.168.1.3)' can't be established.
RSA key fingerprint is af:f2:1a:3a:2b:7c:c3:3b:6a:04:4f:37:bb:75:16:58.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.3' (RSA) to the list of known hosts.
root@192.168.1.3's password: 
Linux debian 2.6.32-5-amd64 #1 SMP Wed Jan 12 03:40:32 UTC 2011 x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Thu Mar 10 20:17:41 2011
debian:~# echo My new vm!
My new vm!
debian:~# free
             total       used       free     shared    buffers     cached
Mem:       2061552     110280    1951272          0      66452      15312
-/+ buffers/cache:      28516    2033036
Swap:            0          0          0
debian:~# ifconfig -a
eth0      Link encap:Ethernet  HWaddr 52:54:00:0f:e8:4b  
          inet addr:192.168.1.3  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe0f:e84b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:142 errors:0 dropped:0 overruns:0 frame:0
          TX packets:89 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:16726 (16.3 KiB)  TX bytes:14390 (14.0 KiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:560 (560.0 B)  TX bytes:560 (560.0 B)

debian:~# cat /proc/cpuinfo 
processor	: 0
vendor_id	: GenuineIntel
cpu family	: 6
model		: 2
model name	: QEMU Virtual CPU version 0.14.0
stepping	: 3
cpu MHz		: 2673.298
cache size	: 4096 KB
fpu		: yes
fpu_exception	: yes
cpuid level	: 4
wp		: yes
flags		: fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm up rep_good pni cx16 popcnt hypervisor lahf_lm
bogomips	: 5346.59
clflush size	: 64
cache_alignment	: 64
address sizes	: 40 bits physical, 48 bits virtual
power management:

debian:~# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 VGA compatible controller: Cirrus Logic GD 5446
00:03.0 Ethernet controller: Red Hat, Inc Virtio network device
debian:~# halt

Broadcast message from root@debian (pts/0) (Tue May 10 21:33:53 2011):

The system is going down for system halt NOW!
debian:~# Connection to 192.168.1.3 closed by remote host.
Connection to 192.168.1.3 closed.
marsupilami:/opt/tashi/bin# ./tashi-client.py getinstances
 id hostId name user state disk memory cores
--------------------------------------------
marsupilami:/opt/tashi/bin# exit
exit

Script done on Tue 10 May 2011 09:34:07 PM EDT

