[Openstack] Network Service for L2/L3 Network Infrastructure blueprint

Erik Carlin erik.carlin at RACKSPACE.COM
Tue Feb 15 16:01:45 UTC 2011


My understanding is that we want a single, canonical OS network service API. That API can then be implemented by different "service engines" on the back end via a plug-in/driver model. Features that may not be core or intended for widespread adoption (e.g. something vendor specific) are added to the canonical API via extensions. You can take a look at the proposed OS compute API spec <http://wiki.openstack.org/OpenStackAPI_1-1> to see how extensions are implemented there. Also, Jorge Williams has done a good write-up of the concept here: <http://wiki.openstack.org/JorgeWilliams?action=AttachFile&do=view&target=Extensions.pdf>.
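As a rough illustration of the idea, here is a minimal Python sketch; every class, method, and extension name below is hypothetical, not taken from the compute API spec:

    # Hypothetical sketch: one canonical API, pluggable "service
    # engines" behind it, and vendor-specific features exposed as
    # namespaced extensions rather than additions to the core API.

    class CanonicalNetworkAPI(object):
        """Operations every back-end service engine must implement."""

        def create_network(self, project_id):
            raise NotImplementedError()


    class VendorXEngine(CanonicalNetworkAPI):
        """One possible plug-in/driver behind the canonical API."""

        # Advertised under a vendor namespace so clients can discover
        # the extra capability without the canonical API growing.
        extensions = {"vendorx:qos": "per-network QoS settings"}

        def create_network(self, project_id):
            return "net-0001"  # engine-specific implementation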

Erik

From: Romain Lenglet <romain at midokura.jp>
Date: Tue, 15 Feb 2011 17:03:57 +0900
To: 石井 久治 <ishii.hisaharu at lab.ntt.co.jp>
Cc: <openstack at lists.launchpad.net>
Subject: Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint


Hi Ishii-san,

On Tuesday, February 15, 2011 at 16:28, 石井 久治 wrote:

Hello Hiroshi-san

>> Do you mean that the former API is an interface that is
>> defined in OpenStack project, and the latter API is
>> a vendor specific API?
> My understanding is that yes, that's what he means.

I also think so.

In addition, I feel there is an open issue here: which network functions should be defined in the generic API, and which should be defined in a plugin-specific API.
What do you think?
I propose to apply the following criteria to determine which operations belong to the generic API:
- any operation called directly by the compute service (Nova) MUST belong to the generic API;
- any operation called by users (REST API, etc.) MAY belong to the generic API;
- any operation belonging to the generic API MUST be independent of the details of specific network service plugins (e.g. specific network models, specific supported protocols, etc.); i.e. the operation must be one that every network service plugin imaginable can support. If one can come up with a counter-example plugin that cannot implement an operation, then that operation cannot belong to the generic API.
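To make these criteria concrete, here is a minimal sketch (all names are hypothetical, not from any existing code):

    # Hypothetical illustration of the three criteria above.

    class GenericNetworkAPI(object):
        """Holds only operations every plugin imaginable can support."""

        # Called directly by Nova when it spawns a VM, so by the first
        # criterion it MUST be generic; any plugin can allocate a port.
        def create_port(self, network_id):
            raise NotImplementedError()


    class VlanPlugin(GenericNetworkAPI):

        def create_port(self, network_id):
            return "port-0001"

        # Assumes a VLAN-based network model. A flat, L3-only plugin
        # is a counter-example that could not implement this, so by
        # the third criterion it stays plugin-specific.
        def set_vlan_tag(self, port_id, tag):
            pass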

How about that?

Regards,
--
Romain Lenglet


Thanks
Hisaharu Ishii


(2011/02/15 16:18), Romain Lenglet wrote:
Hi Hiroshi,
On Tuesday, February 15, 2011 at 15:47, Hiroshi DEMPO wrote:
Hello Hisaharu-san,

I am not sure about the difference between the generic network API and the plugin X specific network service API.

Do you mean that the former API is an interface that is
defined in OpenStack project, and the latter API is
a vendor specific API?

My understanding is that yes, that's what he means.

--
Romain Lenglet


Thanks
Hiroshi

-----Original Message-----
From: openstack-bounces+dem=ah.jp.nec.com at lists.launchpad.net
[mailto:openstack-bounces+dem=ah.jp.nec.com at lists.launchpad.net] On Behalf Of 石井 久治
Sent: Thursday, February 10, 2011 8:48 PM
To: openstack at lists.launchpad.net
Subject: Re: [Openstack] Network Service for L2/L3 Network
Infrastructure blueprint

Hi, all

As we have said before, we have started designing and writing POC code for the network service.

> - I know that there were several documents on the new network
>   service issue that were locally exchanged so far.
>   Why not collect them in one place and share them publicly?

Based on these documents, I created a diagram of the implementation (attached), and I propose the following set of methods as the generic network service API.
- create_vnic(): vnic_id
  Create a VNIC and return the ID of the created VNIC.
- list_vnics(vm_id): [vnic_id]
  Return the list of IDs of the VNICs attached to the VM with ID vm_id.
- destroy_vnic(vnic_id)
  Remove a VNIC from its VM, given its ID, and destroy it.
- plug(vnic_id, port_id)
  Plug the VNIC with ID vnic_id into the port with ID port_id managed by this network service.
- unplug(vnic_id)
  Unplug the VNIC from its port, previously plugged by calling plug().
- create_network(): network_id
  Create a new logical network and return its ID.
- list_networks(project_id): [network_id]
  Return the list of IDs of the logical networks available to the project with ID project_id.
- destroy_network(network_id)
  Destroy the logical network with ID network_id.
- create_port(network_id): port_id
  Create a port in the logical network with ID network_id, and return the port's ID.
- list_ports(network_id): [port_id]
  Return the list of IDs of the ports in a network, given its ID.
- destroy_port(port_id)
  Destroy the port with ID port_id.
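For reference, the same draft written as a minimal Python interface (the method names and signatures are those listed above; the class name and the NotImplementedError pattern are just illustrative placeholders, not our actual POC code):

    class NetworkService(object):
        """Sketch of the proposed generic network service API."""

        def create_vnic(self):
            """Create a VNIC and return its ID."""
            raise NotImplementedError()

        def list_vnics(self, vm_id):
            """Return the IDs of the VNICs attached to VM vm_id."""
            raise NotImplementedError()

        def destroy_vnic(self, vnic_id):
            """Remove the VNIC from its VM and destroy it."""
            raise NotImplementedError()

        def plug(self, vnic_id, port_id):
            """Plug VNIC vnic_id into port port_id."""
            raise NotImplementedError()

        def unplug(self, vnic_id):
            """Unplug the VNIC from its current port."""
            raise NotImplementedError()

        def create_network(self):
            """Create a logical network and return its ID."""
            raise NotImplementedError()

        def list_networks(self, project_id):
            """Return the IDs of the networks available to project_id."""
            raise NotImplementedError()

        def destroy_network(self, network_id):
            """Destroy the logical network network_id."""
            raise NotImplementedError()

        def create_port(self, network_id):
            """Create a port in network network_id and return its ID."""
            raise NotImplementedError()

        def list_ports(self, network_id):
            """Return the IDs of the ports in network network_id."""
            raise NotImplementedError()

        def destroy_port(self, port_id):
            """Destroy the port port_id."""
            raise NotImplementedError()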

This design is a first draft, so we would appreciate it if you would give us some comments.

In parallel, we are writing POC code and uploading it to "lp:~ntt-pf-lab/nova/network-service".

Thanks,
Hisaharu Ishii


(2011/02/02 19:02), Koji IIDA wrote:
Hi, all


We, NTT PF Lab., also agree to discuss the network service at the Diablo DS.

However, we would really like to include the network service in the Diablo release, because our customers strongly demand this feature. And we think that it is quite important to merge the new network service to trunk soon after the Diablo DS, so that every developer can contribute their effort based on the new code.

We are planning to provide source code for the network service in a couple of weeks. We would appreciate it if you would review it and give us some feedback before the next design summit.

Ewan, thanks for making a new entry on the wiki page (*1). We will also post our comments soon.

(*1) http://wiki.openstack.org/NetworkService


Thanks,
Koji Iida


(2011/01/31 21:19), Ewan Mellor wrote:
I will collect the documents together as you suggest, and
I agree that we need to get the requirements laid out again.

Please subscribe to the blueprint on Launchpad -- that way
you will be notified of updates.

https://blueprints.launchpad.net/nova/+spec/bexar-network-service

Thanks,

Ewan.

-----Original Message-----
From: openstack-bounces+ewan.mellor=citrix.com at lists.launchpad.net
[mailto:openstack-bounces+ewan.mellor=citrix.com at lists.launchpad.net]
On Behalf Of Masanori ITOH
Sent: 31 January 2011 10:31
To: openstack at lists.launchpad.net
Subject: Re: [Openstack] Network Service for L2/L3 Network
Infrastructure blueprint

Hello,

We, NTT DATA, also agree with the majority of folks. It is realistic to shoot for the Diablo time frame to have the new network service.

Here are my suggestions:

- I know that there were several documents on the new network
  service issue that were locally exchanged so far.
  Why not collect them in one place and share them publicly?

- I know that the discussion went into implementation details a bit.
  But now, what about starting the discussion from the higher-level
  design (again)? Especially from the requirements level.

Any thoughts?

Masanori


From: John Purrier <john at openstack.org>
Subject: Re: [Openstack] Network Service for L2/L3 Network
Infrastructure blueprint
Date: Sat, 29 Jan 2011 06:06:26 +0900

You are correct, the networking service will be more complex than the volume service. The existing blueprint is pretty comprehensive, not only encompassing the functionality that exists in today's network service in Nova, but also forward-looking functionality around flexible networking/openvswitch and layer 2 network bridging between cloud deployments.

This will be a longer term project and will serve as the bedrock for many future OpenStack capabilities.

John

-----Original Message-----
From: openstack-bounces+john=openstack.org at lists.launchpad.net
[mailto:openstack-bounces+john=openstack.org at lists.launchpad.net] On Behalf Of Thierry Carrez
Sent: Friday, January 28, 2011 1:52 PM
To: openstack at lists.launchpad.net
Subject: Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

John Purrier wrote:
Here is the suggestion. It is clear from the response on the list that refactoring Nova in the Cactus timeframe will be too risky, particularly as we are focusing Cactus on Stability, Reliability, and Deployability (along with a complete OpenStack API). For Cactus we should leave the network and volume services alone in Nova to minimize destabilizing the code base. In parallel, we can initiate the Network and Volume Service projects in Launchpad and allow the teams that form around these efforts to move in parallel, perhaps seeding their projects from the existing Nova code.

Once we complete Cactus we can have discussions at the Diablo DS about progress these efforts have made and how best to move forward with Nova integration and determine release targets.

I agree that there is value in starting the proof-of-concept work around the network services, without sacrificing too many developers to it, so that a good plan can be presented and discussed at the Diablo Summit.

If volume sounds relatively simple to me, network sounds significantly more complex (just looking at the code, the network manager code is currently used both by nova-compute and nova-network to modify the local networking stack, so it's more than just handing out IP addresses through an API).

Cheers,

--
Thierry Carrez (ttx)
Release Manager, OpenStack


_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to : openstack at lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help : https://help.launchpad.net/ListHelp



