incubator-cloudstack-dev mailing list archives

From "Musayev, Ilya" <>
Subject Re: [VOTE] Apache Cloudstack 4.0.0-incubating Release, third round
Date Tue, 23 Oct 2012 01:54:11 GMT

Responses inline.


On Oct 22, 2012, at 8:41 PM, "Kelven Yang" <<>> wrote:


Please see my answers inline


On 10/22/12 3:42 PM, "Musayev, Ilya" <<>> wrote:

The wording of test cases 3 and 4 has been changed; see below.

On Oct 22, 2012, at 6:34 PM, "Musayev, Ilya" <<>> wrote:

The following issues have been witnessed thus far:

I'm just going to provide the output of my tests, and you can be the judge
of whether each is a bug or an error on my part - I can then submit a bug
report if it is indeed a bug.

1 - When configuring a VMware ESXi environment initially, primary storage
cannot be set as VMFS and must be configured as NFS for setup to work. I
could not get VMFS to work as primary storage without adding NFS as
primary first. (Need to confirm whether VMFS still works as primary after
adding NFS as primary; as of now I just use NFS as primary storage.)

We support VMFS as primary storage, but the configuration differs between
VMFS and NFS. For an NFS datastore, CloudStack uses the information
provided by the user to explicitly create the NFS datastore and mount it
to the hosts within the cluster. For VMFS, we require the VMFS datastore
to be configured in vCenter ahead of time: that is, the VMFS datastore has
been set up in the vCenter datacenter, and all hosts within the cluster
have the interface enabled (for example, the iSCSI software adapter). Once
the VMFS datastore has been set up correctly in vCenter, you should be
able to add it to CloudStack and use it as primary storage.
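As an illustration of the flow described above, here is a minimal sketch of issuing a createStoragePool call for a pre-created VMFS datastore against the CloudStack API. The signing scheme (sort the parameters, lowercase, HMAC-SHA1 with the secret key, base64-encode) is CloudStack's documented API convention; the vmfs:// URL layout, the IDs, and the keys below are placeholder assumptions, not values from this thread.

```python
import base64
import hashlib
import hmac
from urllib.parse import quote, urlencode


def sign_request(params, api_key, secret_key):
    """Build a signed CloudStack API query string.

    CloudStack's documented scheme: add the API key, sort the parameters
    by name, lowercase the whole string, HMAC-SHA1 it with the secret
    key, and base64-encode the digest as the signature.
    """
    params = dict(params, apiKey=api_key, response="json")
    to_sign = "&".join(
        f"{k.lower()}={quote(str(v), safe='*').lower()}"
        for k, v in sorted(params.items(), key=lambda kv: kv[0].lower())
    )
    digest = hmac.new(secret_key.encode(), to_sign.encode(), hashlib.sha1).digest()
    signature = base64.b64encode(digest).decode()
    return urlencode(dict(params, signature=signature))


# Placeholder IDs/keys; the vmfs://<datacenter>/<datastore> URL shape is
# an assumption about how a pre-created VMFS datastore is referenced.
query = sign_request(
    {
        "command": "createStoragePool",
        "zoneid": "ZONE-ID",
        "podid": "POD-ID",
        "clusterid": "CLUSTER-ID",
        "name": "VM-LUN-1",
        "url": "vmfs://DC1/VM-LUN-1",
    },
    api_key="API-KEY",
    secret_key="SECRET-KEY",
)
print(query)
```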

I'm not expecting CS to magically create LUNs. Those LUNs had been configured, presented,
and used by all hypervisors in the cluster before CS setup was attempted. My apologies if I
was not clear.

2 - If the VMFS LUN is incorrectly set while going through the initial
setup process (not using the first-time wizard, since a VMware ESXi
cluster is not an option) and the user mistakenly enters an improper LUN,
the UI will detect it as an error. But even if you correct it, there is no
way to continue the setup process, and no API calls are issued to confirm
the validity of the new LUN name. At this point, I had to start over from
scratch.

The VMFS datastore is transparent to CloudStack; we don't provide any API
for you to validate the VMFS LUN configuration. All you have to make sure
is that the VMFS datastore is properly set up in vCenter.

Yep, it's been properly set up and used for over a year. The LUN name was entered incorrectly
(i.e. VM-LUN1 vs. VM-LUN-1) during CS setup. The UI setup process complained that I needed to
address the LUN name issue when it attempted to configure the VMware cluster. Once I addressed
the LUN name in the UI setup process and pressed submit, it came back and claimed the LUN name
was incorrect - except it happened so quickly that I doubt it checked anything.

If it's still not clear, I will provide screenshots or video. This seems more like a UI/JS issue.

3. Don't know if this would be a bug or a feature enhancement. While
adding a VMware cluster, CS specifically looks for a "Management Network"
port group on vSwitch0. If the name does not match, you see an error that
the "Management Network" port group is not found. The preferred logic
would be to look up the port group of vmk0 (or whatever the management
virtual NIC is) and use that as the port group.

Looking for the "Management Network" port group on vSwitch0 is only the
default configuration; you can change this by properly configuring the
following parameters:

   vmware.private.vswitch: for ESXi hosts, specify the vSwitch you want
to use for the "management network"
   vmware.service.console: for ESX hosts, specify the "management
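A minimal sketch of applying the two global settings named above through the updateConfiguration API - the setting names come from the mail, while the example values are placeholder assumptions, not defaults:

```python
# Global settings from the mail; the values here are placeholder
# assumptions (the second setting's description is truncated upstream).
settings = {
    # ESXi hosts: vSwitch carrying the management network port group
    "vmware.private.vswitch": "vSwitch0",
    # ESX hosts: management/service-console setting
    "vmware.service.console": "vSwitch0",
}

# One updateConfiguration parameter set per setting
calls = [
    {"command": "updateConfiguration", "name": n, "value": v}
    for n, v in settings.items()
]
for c in calls:
    print(c)
```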

This is an awesome piece of information I was missing.


4. If your VMware cluster has virtual switch vSwitch0 used for outbound
network communication and another vSwitch1 for other internal private
tasks with no outbound network connectivity, then when the secondary
storage VM is deployed, it incorrectly picks vSwitch1 and attempts to use
this vSwitch for communication with the CS core. This obviously fails,
since no NICs are connected to vSwitch1.

   Besides the global configuration, we also require the "Physical network"
configuration of the traffic type under the zone to be correct. The
traffic type label is used to identify the vSwitch under VMware.
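A minimal sketch of the point above: the traffic-type label on the zone's physical network is what selects the vSwitch under VMware, and the updateTrafficType API exposes this as the vmwarenetworklabel parameter. The id and label values here are placeholders.

```python
def traffic_label_update(traffic_type_id, vswitch_label):
    """Parameter set for an updateTrafficType call (id is a placeholder)."""
    return {
        "command": "updateTrafficType",
        "id": traffic_type_id,
        "vmwarenetworklabel": vswitch_label,
    }


# Point the traffic type at the vSwitch that actually has uplink NICs
# (vSwitch0 in the scenario above), not the internal-only vSwitch1.
params = traffic_label_update("TRAFFIC-TYPE-ID", "vSwitch0")
print(params)
```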

5. This seems to have been the case for me all along: by default, the
first-time wizard does not have VMware cluster as an option. I always have
to choose the 'Skip, I've used CloudStack before' button.

6. If I attempt to do a complete cleanup without flushing the DB, I'm
unable to delete the physical network connection - even though no networks
are defined/assigned - and therefore I cannot delete the zone. I had to
drop the MySQL DB and start fresh.

7. The cloud SSH and password scripts still need a bit more enhancement;
I will submit a patch shortly. I also think we can make more improvements
to the way the code is written without changing functionality.

* None of these are show-stoppers, but they are things I noticed when I
went through the install process. In the end, I was able to get it to work.

Let me know which are bugs, and I will file a bug report with supporting
details.

I will go through the install process once more next week or later this
week to double-check and note all the issues. I will also be testing
NetScaler integration.


On Oct 22, 2012, at 12:17 PM, "Chip Childers (ASF)"
<<>> wrote:

Hi All,

I would like to call a vote for Apache CloudStack (Incubating) Release
4.0.0-incubating (third round).

We encourage the whole community to download and test these release
artifacts, so that any critical issues can be resolved before the
release is made. The more time that each individual spends reviewing
the artifacts, the higher confidence we can have in both the release
itself and our ability to pass an IPMC vote later on.  Everyone is free
to vote on this release, so please give it a shot.

Instructions for Validating and Testing the artifacts can be found

If you have any trouble setting up a test environment using the
procedure above, please ask on the cloudstack-dev@i.a.o list.  Someone
will be sure to help, and we'll improve our test procedure
documentation at the same time!

Now, on to the specifics of what we are voting on...

The following artifacts are up for the vote:

PGP release keys (signed using A99A5D58):

Branch: 4.0
Commit: 6355965dcd956811dd471a9d03c73dadcf68f480

List of changes:;a=blo

The artifacts being voted on during this round also include the
following additional fixes (most were identified as part of testing
during the last round of voting):

* Many documentation fixes (particularly the release notes and
installation guide)
* CLOUDSTACK-341: Failing to display Management Traffic Details on the
* CLOUDSTACK-349: Russian l10n not properly displaying
* Correction to the devcloud rdeploy build target, to make testing
* CLOUDSTACK-363: Upgrades from 2.2.14, 3.0.2 to the Current build
will fail
* CLOUDSTACK-118: Status of host resource stuck in "ErrorInMaintenance"
* DISCLAIMER added to the Marvin tool dir

The vote will be open for 72 hours.

For sanity in tallying the vote, can PPMC and IPMC members please be
sure to indicate "(binding)" with their vote?
[ ] +1  approve
[ ] +0  no opinion
[ ] -1  disapprove (and reason why)

