incubator-libcloud mailing list archives

From: Ryan Tucker <rtuc...@gmail.com>
Subject: Re: [libcloud] adding i386 and x86_64 architecture flags for NodeSize and NodeImage, and utilizing the driver object in NodeSize
Date: Mon, 18 Apr 2011 22:27:46 GMT
On Mon, Apr 18, 2011 at 01:54:06PM -0700, Justin Donaldson wrote:
> Node/kernel sizes should be able to be determined without a huge amount of
> effort... AFAIK most providers offer 64-bit kernels by default, and then
> offer 32-bit userland by default (brightbox, for example).  Amazon is a bit
> weird in that sense since they offer 32-bit kernels.  Anybody know of anyone
> else offering 32-bit?

Linode is 32-bit by "default," both userland and kernel.  (I put quotes
around "default" because there's no such thing as a default image, but
32-bit is the typical case.)  I believe Slicehost is similar, since we get
neither 32-bit nor new kernels in STL-A, but I don't have first-hand
knowledge of this.

> My thought would be, rather than retrieving a consistent global list of
> generic nodes, there should be some ability to query for matching nodes
> per-provider.  E.g. instead of the list_nodes() function, a function such
> as: query_sizes(min_cpu, min_disk, max_price, etc...) could be specified.
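
Something along those lines could probably sit right on top of list_sizes().
A rough sketch (the parameter names are just made up here; the comparisons
lean on the standard NodeSize attributes):

def query_sizes(driver, min_ram=0, min_disk=0, max_price=None):
    """Return the provider's NodeSize objects matching the constraints."""
    matches = []
    for size in driver.list_sizes():
        # NodeSize.ram is in MB, NodeSize.disk in GB; price may be unset
        # where the provider doesn't report one.
        if size.ram < min_ram or size.disk < min_disk:
            continue
        if max_price is not None and size.price and size.price > max_price:
            continue
        matches.append(size)
    return matches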

I've kinda implemented something like this in my own use of libcloud:

from fabric.api import env  # Fabric's shared settings dict; tasks stash the chosen plan here

def small():
    """Selects the instance with the least RAM."""
    env["size"] = _fetch_size_by_ram(ram=0)

def huuuuge():
    """Selects an instance with the most RAM.  It's huge, Rochester..."""
    env["size"] = _fetch_size_by_ram(ram=50000)

... with "medium" and "large" looking for 1024 and 2048 MB, respectively.
It iterates through the available plans and returns the closest size.  I
implemented this to make my deployment script less provider-specific (I can
just say "fab linode newark small lucid create" and not have to remember if
it's a 256 or a 512 I want), so I think extending this to other attributes
and including it in libcloud itself would be very awesome.
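
For the curious, the helper underneath is roughly this (a sketch, not the
exact code; it assumes an earlier Fabric task like "linode" has already
stashed a connected libcloud driver in env["driver"]):

def _fetch_size_by_ram(ram=0):
    """Return the provider's plan whose RAM is closest to the request."""
    sizes = env["driver"].list_sizes()
    return min(sizes, key=lambda size: abs(size.ram - ram))

So ram=0 falls through to the smallest plan and ram=50000 to the biggest,
without hard-coding any provider's plan names.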

> Thanks again to the libcloud team, I really appreciate the organizational
> efforts behind this project

I don't think I've said that yet, so if the libcloud team needs a Monday
afternoon pick-me-up: THANKS!  This deployment system has about five lines
of provider-specific code, per provider.  It works like Star Trek.  -rt

-- 
Ryan Tucker <rtucker@gmail.com>

