libcloud-dev mailing list archives

From Hutson Betts <hut...@tamu.edu>
Subject Re: [dev] LIBCLOUD-122: Unified Virtual Network Support
Date Thu, 10 Nov 2011 07:11:34 GMT
Thanks for the input! I greatly appreciate the feedback on the patch,
and your thoughts regarding virtual networking.

> 
>  - IMO, if we decide to create a new API, it should be called
> networking, not network
I used the term "network" rather than "networking" or another multi-word
phrase to follow what seemed to be the trend of using single words as
library names. However, the term "networking" is perfectly acceptable to
me. Unless there are suggestions to the contrary, I'll change the name of
the library.

> 
> - Network class currently has an 'address' attribute. This doesn't seem
> flexible enough for a standard library. It should probably have public_ips
> and private_ips attributes which are both lists.
Even this suggestion sounds too constricting. By assuming a network has
two types of IP address, public and private, we establish a fixed number
of "buckets" into which all IP addresses must fit. Separating public IPs
from private IPs also assumes that a finite set of IP addresses has
already been assigned to the virtual network, so it can't handle the
scenario of a network described only by an IP range.

Case in point: what if I have a virtual network using the IP space
192.168.0.0/24, from which compute nodes are assigned an IP address upon
attaching to the network? Would I have a list of 255 IP addresses, or
only those that have been assigned to compute nodes? How would this work
with OpenNebula, which only returns the range and not a list of assigned
addresses? Anyway, just food for thought.

However, the "Network", in my opinion, shouldn't care how the IP
addresses within the IP space are used. Rather, the network is
analogous to a physical network switch containing a DHCP server. All
nodes are connected to the switch, and receive IP addresses from the
DHCP server (Depending on the internal mechanism of the Cloud
provider). This analogy holds under the assumption that we can only poll
for the IP range of the DHCP server, but not for a list of assigned IP
addresses, or who those addresses have been assigned to.

What makes IP addresses private or public is the method by which those
IP addresses are made accessible. In the case of OpenNebula, a network is
either public or private. If public, it's public because it's bridged to
a publicly accessible network interface on a virtual machine host.
Otherwise, it's a private network between virtualized compute nodes.
However, even a private network can be made public if one of the compute
nodes attached to it allows IP forwarding and/or performs NAT to a public
network to which that node is also connected.
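
To make the discussion concrete, the shape I have in mind for the Network
type is roughly the following (a sketch; only id, name, address, size,
and the public/private distinction come from the discussion above, and
the exact attribute names are illustrative):

class Network(object):
    """A virtual network: a switch with a DHCP server, per the analogy."""

    def __init__(self, id, name, address, size, public, driver):
        self.id = str(id)        # provider-assigned unique ID
        self.name = name         # unique, human-readable name
        self.address = address   # base of the IP space, e.g. '192.168.0.0'
        self.size = size         # number of assignable addresses in that space
        self.public = public     # bridged to a publicly accessible interface?
        self.driver = driver     # the driver that returned this object

    def __repr__(self):
        return ('<Network: id=%s, name=%s, address=%s, size=%s>'
                % (self.id, self.name, self.address, self.size))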

> 
> - What is size attribute doing on a Network object?
Network Size is not analogous to Node Size. Network Size is a property
of a Network, not a member possessing its own internal properties. Let me
see if I can explain, though I doubt I'll do a great job of it. A virtual
network, within OpenNebula, is describable by its unique ID, its unique
Name, the subnet of the network (e.g. 192.168.0.0), and an indicator of
the size of that network.

When creating a routable network, you would typically describe the
network by its subnet and netmask. In OpenNebula, the netmask portion is
handled by the Size property. Size is an integer representing the number
of IP addresses that can be assigned within the subnet. If the network is
192.168.0.0/24, with a netmask of 255.255.255.0, then the
Address/IP/Network is 192.168.0.0 and the Size is 255. If the network is
10.1.0.0/16, with a netmask of 255.255.0.0, then the Address/IP/Network
is 10.1.0.0 and the Size is 65535.

If there is a term that could better describe that property, please let
me know. Even a netmask or address-class property in place of Size would
work; I would only need to implement methods to convert between the
general property and the property used by the cloud provider.
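
For example, a netmask could be converted to and from Size with a couple
of small helpers (a sketch; it assumes Size counts every address in the
host portion except the network address itself, which matches the 255 and
65535 figures above):

import socket
import struct

def netmask_to_size(netmask):
    # Unpack the dotted quad into a 32-bit integer and count host bits.
    mask, = struct.unpack('!I', socket.inet_aton(netmask))
    host_bits = 32 - bin(mask).count('1')
    return 2 ** host_bits - 1

def size_to_netmask(size):
    # Invert the conversion: 255 -> 8 host bits -> '255.255.255.0'.
    host_bits = (size + 1).bit_length() - 1
    mask = (0xFFFFFFFF << host_bits) & 0xFFFFFFFF
    return socket.inet_ntoa(struct.pack('!I', mask))

assert netmask_to_size('255.255.255.0') == 255
assert netmask_to_size('255.255.0.0') == 65535
assert size_to_netmask(65535) == '255.255.0.0'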

> 
> - get_uuid used the way it currently is used in the networking API is
> pretty much useless. In the compute API we build a Node UUID by running
> SHA1 on the node id and a driver type. We should either do something like
> this in the networking API or get rid of this method altogether.
> 
> Even better, we should just put it in a base NodeDriver class and do
> something like SHA(id + api_name + provider). IIRC all the classes should
> have id and a provider attribute.
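Putting it in a shared base class sounds reasonable to me. A minimal
sketch of what you're describing (attribute names are assumptions, not
taken from the patch):

import hashlib

class BaseDriver(object):
    # Shared by compute and networking drivers so UUIDs are built
    # uniformly. Assumes each driver exposes api_name and provider
    # attributes, as suggested above.
    def get_uuid(self, obj):
        data = '%s:%s:%s' % (obj.id, self.api_name, self.provider)
        return hashlib.sha1(data.encode('utf-8')).hexdigest()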
> 
> - list and destroy methods look OK, but we should think (and research) more
> about which arguments create method should take so it is possible to make
> it work with multiple providers.
Well, I'm glad that these two functions seem concise enough to be
effective for most general cases. So far, this set of functions, along
with create, is all that is required to manage virtual networks within
OpenNebula. I cannot say that holds true when considering Amazon Web
Services, though I would assume list and destroy would be part of any
subset of methods required by a virtual networking API.

Looking into Amazon's Virtual Private Cloud, the requirements to create
a VPC are stated as such: "You assign a single Classless Internet Domain
Routing (CIDR) IP address block when you create a VPC. Subnets within a
VPC are addressed from this range by you."

Furthermore, "Please note that while you can create multiple VPCs with
overlapping IP address ranges...". This holds true for OpenNebula as well.

However, with Amazon, you can create multiple subnets within a single
Virtual Private Cloud IP range, up to 20 in all. The existing Network
type can still handle this scenario, as long as it's extended with an
additional member: a list of subnets, perhaps themselves of type Network,
as sketched below.
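
Building on the earlier sketch, that extension might look like this (the
subnets attribute and the example values are hypothetical):

class Network(object):
    def __init__(self, id, name, address, size, public, driver,
                 subnets=None):
        self.id = str(id)
        self.name = name
        self.address = address
        self.size = size
        self.public = public
        self.driver = driver
        # Child networks carved out of this network's IP range, e.g. the
        # (up to 20) subnets of an Amazon VPC.
        self.subnets = subnets or []

# A VPC whose subnets are addressed from its 10.0.0.0/16 range:
vpc = Network('vpc-1', 'production', '10.0.0.0', 65535, False, None)
vpc.subnets.append(Network('subnet-1', 'web', '10.0.1.0', 255, False, None))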

> 
> Some other things we still need to decide on / think about:
> 
> - Virtual networking APIs are usually pretty coupled with compute ones.
> Should it be a separate API (libcloud.networking.*) or should it live in
> the compute one (libcloud.compute.networking.*)?
This is a debate that even I haven't made my mind up on. It would be
easy to just move the network-related methods into the OpenNebula compute
driver, prefix them with ex_, and call it done. However, again, I
considered this from the OCCI/OpenNebula standpoint.

As compute nodes and storage entities are considered pooled resources,
so are virtual networks. Therefore, just as you create, manage, and
destroy compute nodes and storage entities, those same actions can be
carried out on networks. From an interface perspective, it seemed best to
treat networks as another type of resource for which a dedicated library
should exist for management purposes.
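
Under the separate-library option, usage would mirror the existing
compute API. A hypothetical sketch (the libcloud.networking package is
the proposal here, not existing code):

# Existing compute API, for comparison:
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

compute = get_driver(Provider.OPENNEBULA)('user', 'secret')
nodes = compute.list_nodes()

# The proposed networking API would sit alongside it:
#
#   from libcloud.networking.types import Provider
#   from libcloud.networking.providers import get_driver
#
#   networking = get_driver(Provider.OPENNEBULA)('user', 'secret')
#   networks = networking.list_networks()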

> 
> - Virtual networking APIs usually also provide some kind of access to a
> firewall. How do we plan to integrate this into it?
I hadn't looked deeply into this matter. It hadn't seemed like
networking would be coupled with additional features such as routing or
firewalls. In the case of OpenNebula, the network is nothing more than
the aforementioned analogy of a switch with a DHCP server. If you want
public access to a private virtual network, you attach a compute node to
both a publicly accessible network and the private network, and then
configure that node with a firewall, IP forwarding, NAT, etc.

With regard to Amazon Web Services' Virtual Private Cloud, it would seem
from their description that a virtual network is very similar. It
mentions that additional features, such as public accessibility for a
private network, are possible by creating an Internet Gateway. Also, it's
possible for compute nodes connected to the private network to access the
Internet through the Elastic IP of a NAT instance. Lastly, a private
network is accessible by configuring a VPN Gateway attached to the
virtual private network.

In all these cases, the features of the Virtual Private Cloud seem like
nothing more than specialized compute nodes. The only difference is that
those special node types might only be manageable through the Virtual
Private Cloud API.

> 
> - I do think we should start simple (method for listing, creating and
> deleting), but we should also make it flexible enough so it won't limit or
> prevent us from adding more things in the future. Starting simple also
> means we should be able to support multiple providers from the beginning.

To support additional functionality of cloud providers, future methods
could include the following, sketched below:
list_available_ips()
list_assigned_ips() ...
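
A rough sketch of how those could sit alongside the three existing
operations in a base driver (method names and signatures are assumptions,
not the patch as submitted):

class NetworkDriver(object):
    def list_networks(self):
        """Return a list of Network objects."""
        raise NotImplementedError('list_networks not implemented')

    def create_network(self, name, address, size):
        """Create a network and return the new Network object."""
        raise NotImplementedError('create_network not implemented')

    def destroy_network(self, network):
        """Destroy a network; return True on success."""
        raise NotImplementedError('destroy_network not implemented')

    # Possible future additions:
    def list_available_ips(self, network):
        """Return addresses in the network's range not yet assigned."""
        raise NotImplementedError('list_available_ips not implemented')

    def list_assigned_ips(self, network):
        """Return addresses in the network's range assigned to nodes."""
        raise NotImplementedError('list_assigned_ips not implemented')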


I've already tested the network library, as contained in the patch,
against OpenNebula 2.2, and all three networking functions work
correctly.

-- 
Hutson Betts
Computer Science and Engineering
Texas A&M University


On Thu, 2011-11-10 at 01:51 +0100, Tomaž Muraus wrote:
> Here are some comments for your patch:
> 
> - IMO, if we decide to create a new API, it should be called networking,
> not network
> 
> - Network class currently has an 'address' attribute. This doesn't seem
> flexible enough for a standard library. It should probably have public_ips
> and private_ips attributes which are both lists.
> 
> - What is size attribute doing on a Network object?
> 
> - get_uuid used the way it currently is used in the networking API is
> pretty much useless. In the compute API we build a Node UUID by running
> SHA1 on the node id and a driver type. We should either do something like
> this in the networking API or get rid of this method altogether.
> 
> Even better, we should just put it in a base NodeDriver class and do
> something like SHA(id + api_name + provider). IIRC all the classes should
> have id and a provider attribute.
> 
> - list and destroy methods look OK, but we should think (and research) more
> about which arguments create method should take so it is possible to make
> it work with multiple providers.
> 
> Some other things we still need to decide on / think about:
> 
> - Virtual networking APIs are usually pretty coupled with compute ones.
> Should it be a separate API (libcloud.networking.*) or should it live in
> the compute one (libcloud.compute.networking.*)?
> 
> - Virtual networking APIs usually also provide some kind of access to a
> firewall. How do we plan to integrate this into it?
> 
> - I do think we should start simple (method for listing, creating and
> deleting), but we should also make it flexible enough so it won't limit or
> prevent us from adding more things in the future. Starting simple also
> means we should be able to support multiple providers from the beginning.
> 
> Thanks,
> Tomaz
> 
> On Wed, Nov 9, 2011 at 6:42 AM, Hutson Betts <hut101@tamu.edu> wrote:
> 
> > I was wondering if someone could take a look at the following attached
> > patch. It's a new component to the Libcloud library to support virtual
> > network drivers.
> >
> > I added a driver for OpenNebula, covering versions 1.4 through to the
> > present version of OpenNebula.
> > Furthermore, I added additional test classes for testing the new driver.
> >
> > My next step is to test in a mockup of my cloud computing production
> > environment.
> >
> > --
> > Hutson Betts
> > Computer Science and Engineering
> > Texas A&M University
> >
> >
