cloudstack-dev mailing list archives

From Wido den Hollander <>
Subject Re: [DISCUSS] vs. package repos
Date Tue, 11 Sep 2012 18:50:47 GMT

On 09/11/2012 05:45 PM, David Nalley wrote:
> On Tue, Sep 11, 2012 at 8:23 AM, Wido den Hollander <> wrote:
>> On 09/11/2012 12:16 PM, Suresh Sadhu wrote:
>>> HI All,
>>> Installer fails to read the cloud packages, and the MS installation on Ubuntu
>>> 12.04 was not successful (no packages were installed). Raised a blocker bug.
>>> Please find the details in the issue mentioned below:
>> I'd like to bring this up again: do we REALLY want this script?
> This really deserves its own thread, because it won't receive the
> attention it deserves in the original thread.
> I talked with infra about this a few weeks back, and while they said
> they really wanted downstreams to package, they weren't vehemently
> opposed to us creating our own repo, but we'd have to figure out how
> to make it work with the mirror system.

A Debian/Ubuntu repository is just a bunch of directories and files, 
so that could be distributed, I think?

The question is: do we want this to go on ASF infra, or use an external 
mirror for it?

> Personally - the packages as they exist are great for people doing a
> first, small scale install, but it doesn't scale. While I am not
> necessarily opposed to the installer, I also recognize the problems
> from a real world deployment perspective.

I disagree on the first point. When manually installing packages with 
dpkg you run into dependency hell: you (you = the install script) have 
to "apt-get install" several packages manually.

The problem you run into here is that you start doing redundant work. In 
the "control" file you specify which packages you depend on. If you use 
apt(itude), it resolves those dependencies for you; but when doing a 
manual install with dpkg, it will complain about every single missing 
package.

This leads to a couple of directives in the install script for 
installing packages we already specified in the control file. In the 
long run you end up with installed packages which are no longer 
required, but apt has no way of knowing they can be removed.

Packages should always enter a Debian system through apt, so that apt 
knows which package depends on which and apt(itude) can do its work.
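The bookkeeping apt keeps here can be seen with apt-mark; a harmless 
sketch (standard apt tools, guarded so it is a no-op on non-Debian hosts):

```shell
# apt records, per installed package, whether it was requested explicitly
# or pulled in only as a dependency; "apt-get autoremove" uses that record.
# A package forced in with "dpkg -i" never gets the "auto" marking, so apt
# cannot tell later that it is safe to remove.
if command -v apt-mark >/dev/null 2>&1; then
    apt-mark showauto | head -5    # packages installed only as dependencies
fi
```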

Adding a repository and installing CloudStack is just four commands; 
isn't that simple enough?

$ echo "deb $(lsb_release -s -c) 4.0" > /etc/apt/sources.list.d/cloudstack.list
$ wget -O -|apt-key add -
$ apt-get update
$ apt-get install cloud-agent

Again, my repo is just an example :)
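For illustration, here is what the first command looks like with the 
(elided) URL filled in against a placeholder mirror; 
http://repo.example.org/debian is NOT a real CloudStack location, just a 
sketch of the layout:

```shell
MIRROR="http://repo.example.org/debian"
# Fall back to 12.04's codename if lsb_release is unavailable:
CODENAME=$(lsb_release -s -c 2>/dev/null || echo precise)

# Normally written to /etc/apt/sources.list.d/cloudstack.list (as root):
echo "deb $MIRROR $CODENAME 4.0" > /tmp/cloudstack.list
cat /tmp/cloudstack.list

# Then import the signing key and install (key URL also a placeholder):
#   wget -O - http://repo.example.org/release.asc | apt-key add -
#   apt-get update && apt-get install cloud-agent
```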

> However, there is an impact, at a minimum all of our documentation
> will need rewriting, so personally, I'd prefer that for 4.0.0 - that
> we do repos if we can figure it out in time, and keep the installer as
> an option as well.

Rewriting the docs is a couple of hours of work that I'd be more than 
happy to do for 4.0 if we go for a repo.

I honestly must admit that in some recent docs I already assumed there 
would be a repo for 4.0...

It would be awesome if Jenkins could produce packages and send them to 
the mirror, but it's more than doable to build the packages locally and 
upload them; it's not like we are doing 10 releases a month.

It's just a matter of placing the packages in the "pool" directory and 
having a script re-scan the repo.
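A rough sketch of such a re-scan script, assuming apt-ftparchive (from 
apt-utils) on the mirror host; the repo root, distribution name, and 
architecture below are placeholders, not the actual ASF layout:

```shell
REPO="${REPO:-/tmp/cloudstack-repo}"
DIST="$REPO/dists/cloudstack/main/binary-amd64"

mkdir -p "$REPO/pool" "$DIST"

# Regenerate the Packages index from whatever .debs sit in pool/;
# guarded so the sketch is harmless on hosts without apt-utils.
if command -v apt-ftparchive >/dev/null 2>&1; then
    apt-ftparchive packages "$REPO/pool" > "$DIST/Packages"
    gzip -f -c "$DIST/Packages" > "$DIST/Packages.gz"
fi
```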

The question remains: Do we want this to be on ASF infra or do we host 
this externally?

