[Openstack] Network Service for L2/L3 Network Infrastructure blueprint

Romain Lenglet romain at midokura.jp
Wed Feb 16 00:22:41 UTC 2011


Hi Erik,

Thanks for your comments.

There doesn't seem to be a consensus yet on "core API + extensions" vs.
multiple APIs.
Anyway, I don't see any issues with specifying a "core API" for network
services, and a "core API" for network agents, corresponding exactly to
NTT's Ishii-san's "generic APIs", and specifying all the non-generic,
plugin-specific operations in extensions.
If the norm becomes to have a core API + extensions, then the network
service spec will be modified to follow that norm. No problem.

The important point we need to agree on is what goes into the API, and what
goes into extensions.

Let me rephrase the criteria that I proposed, using the "API" and
"extensions" terms:
1) any operation called by the compute service (Nova) directly MUST be
specified in the API;
2) any operation called by users / admin tools MAY be specified in the API,
but not necessarily;
3) any operation specified in the API MUST be independent from details of
specific network service plugins (e.g. specific network models, specific
supported protocols, etc.), i.e. that operation can be supported by every
network service plugin imaginable, which means that:
4) any operation that cannot be implemented by all plugins MUST be specified
in an extension, i.e. if one comes up with a counter-example plugin that
cannot implement that operation, then the operation cannot be specified in
the API and MUST be specified in an extension.

Do we agree on those criteria?
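
To illustrate the split these criteria imply, here is a minimal Python sketch (class and method names are hypothetical, not from any proposal): core operations live on an interface every plugin must implement, while optional operations live on extension interfaces that callers probe for before use.

```python
from abc import ABC, abstractmethod

class CoreNetworkAPI(ABC):
    """Operations every plugin must implement (criteria 1 and 3)."""

    @abstractmethod
    def create_port(self, network_id: str) -> str:
        """Create a port in the given network and return its ID."""

class QosExtension(ABC):
    """An optional capability only some plugins provide (criterion 4)."""

    @abstractmethod
    def set_port_bandwidth_limit(self, port_id: str, kbps: int) -> None:
        """Cap the bandwidth of a port; not all backends can do this."""

class BasicPlugin(CoreNetworkAPI):
    """A plugin that implements only the core API, no extensions."""

    def create_port(self, network_id: str) -> str:
        return network_id + "-port0"

def supports(plugin: object, extension: type) -> bool:
    """Callers probe for an extension before relying on it."""
    return isinstance(plugin, extension)
```

Under this model, criterion 4 is enforced structurally: an operation that some plugin cannot implement simply never appears on the core interface.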

I think Ishii-san's proposal meets those criteria.
Do you see any issues with Ishii-san's proposal regarding the split between
core operations and extension operations?
If you think that some operations that are currently defined as extensions
in Ishii-san's proposal should be in the API, I'll be happy to try to give
counter-examples of network service plugins that can't implement them. :)

Regards,
--
Romain Lenglet


2011/2/16 Erik Carlin <erik.carlin at rackspace.com>

>  My understanding is that we want a single, canonical OS network service
> API.  That API can then be implemented by different "service engines" on
> the back end via a plug-in/driver model.  Additional features that may not
> be core or intended for widespread adoption (e.g. something vendor
> specific) are added to the canonical API via extensions.  You can take a look at
> the proposed OS compute API spec
> <http://wiki.openstack.org/OpenStackAPI_1-1> to see how extensions are
> implemented there.  Also, Jorge Williams has done a good write-up of the
> concept here
> <http://wiki.openstack.org/JorgeWilliams?action=AttachFile&do=view&target=Extensions.pdf>.
>
>  Erik
>
>   From: Romain Lenglet <romain at midokura.jp>
> Date: Tue, 15 Feb 2011 17:03:57 +0900
> To: 石井 久治 <ishii.hisaharu at lab.ntt.co.jp>
> Cc: <openstack at lists.launchpad.net>
>
> Subject: Re: [Openstack] Network Service for L2/L3 Network Infrastructure
> blueprint
>
>   Hi Ishii-san,
>
> On Tuesday, February 15, 2011 at 16:28, 石井 久治 wrote:
>
>  Hello Hiroshi-san
>
> >> Do you mean that the former API is an interface that is
> >> defined in OpenStack project, and the latter API is
> >> a vendor specific API?
> > My understanding is that yes, that's what he means.
>
> I also think so.
>
> In addition, I think an open issue is which network functions should be
> defined in the generic API, and which should be defined as
> plugin-specific APIs.
> What do you think?
>
> I propose to apply the following criteria to determine which operations
> belong to the generic API:
> - any operation called by the compute service (Nova) directly MUST belong
> to the generic API;
> - any operation called by users (REST API, etc.) MAY belong to the generic
> API;
> - any operation belonging to the generic API MUST be independent from
> details of specific network service plugins (e.g. specific network models,
> specific supported protocols, etc.), i.e. the operation can be supported by
> every network service plugin imaginable, which means that if one can come up
> with a counter-example plugin that cannot implement that operation, then the
> operation cannot belong to the generic API.
>
>  How about that?
>
>  Regards,
> --
> Romain Lenglet
>
>
>
> Thanks
> Hisaharu Ishii
>
>
> (2011/02/15 16:18), Romain Lenglet wrote:
>
> Hi Hiroshi,
> On Tuesday, February 15, 2011 at 15:47, Hiroshi DEMPO wrote:
> Hello Hisaharu san
>
>
> I am not sure about the differences between generic network API and
> plugin X specific network service API.
>
> Do you mean that the former API is an interface that is
> defined in OpenStack project, and the latter API is
> a vendor specific API?
>
>
> My understanding is that yes, that's what he means.
>
> --
> Romain Lenglet
>
>
>
> Thanks
> Hiroshi
>
>  -----Original Message-----
> From: openstack-bounces+dem=ah.jp.nec.com at lists.launchpad.net
> [mailto:openstack-bounces+dem=ah.jp.nec.com at lists.launchpad.net]
> On Behalf Of 石井 久治
> Sent: Thursday, February 10, 2011 8:48 PM
> To: openstack at lists.launchpad.net
> Subject: Re: [Openstack] Network Service for L2/L3 Network
> Infrastructure blueprint
>
> Hi, all
>
> As we have said before, we have started designing and writing
> POC code for the network service.
>
> - I know that there were several documents on the new network
> service issue that were locally exchanged so far.
> Why not collect them in one place and share them publicly?
>
> Based on these documents, I created an image of the
> implementation (attached). And I propose the following set of
> methods as the generic network service APIs.
> - create_vnic(): vnic_id
> Create a VNIC and return the ID of the created VNIC.
> - list_vnics(vm_id): [vnic_id]
> Return the list of IDs of the VNICs attached to the VM with ID vm_id.
> - destroy_vnic(vnic_id)
> Remove a VNIC from its VM, given its ID, and destroy it.
> - plug(vnic_id, port_id)
> Plug the VNIC with ID vnic_id into the port with ID
> port_id managed by this network service.
> - unplug(vnic_id)
> Unplug the VNIC from its port, previously plugged by
> calling plug().
> - create_network(): network_id
> Create a new logical network.
> - list_networks(project_id): [network_id]
> Return the list of logical networks available for
> project with ID project_id.
> - destroy_network(network_id)
> Destroy the logical network with ID network_id.
> - create_port(network_id): port_id
> Create a port in the logical network with ID
> network_id, and return the port's ID.
> - list_ports(network_id): [port_id]
> Return the list of IDs of ports in a network given its ID.
> - destroy_port(port_id)
> Destroy port with ID port_id.
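
As a rough illustration of the network/port subset of this draft API, a toy in-memory implementation might look like the following (the class name and dictionary-backed behavior are illustrative only; a real plugin would program actual network infrastructure):

```python
import uuid

class InMemoryNetworkService:
    """Toy in-memory implementation of the network/port subset of the
    proposed generic API, tracking state in dictionaries."""

    def __init__(self):
        self._networks = {}  # network_id -> list of port_ids
        self._ports = {}     # port_id -> network_id

    def create_network(self) -> str:
        """Create a new logical network and return its ID."""
        network_id = str(uuid.uuid4())
        self._networks[network_id] = []
        return network_id

    def list_networks(self, project_id: str) -> list:
        # This toy version ignores project scoping and returns everything.
        return list(self._networks)

    def destroy_network(self, network_id: str) -> None:
        """Destroy a network and every port still in it."""
        for port_id in self._networks.pop(network_id):
            del self._ports[port_id]

    def create_port(self, network_id: str) -> str:
        """Create a port in the given network and return its ID."""
        port_id = str(uuid.uuid4())
        self._networks[network_id].append(port_id)
        self._ports[port_id] = network_id
        return port_id

    def list_ports(self, network_id: str) -> list:
        return list(self._networks[network_id])

    def destroy_port(self, port_id: str) -> None:
        network_id = self._ports.pop(port_id)
        self._networks[network_id].remove(port_id)
```

Note that nothing here assumes a particular network model or protocol, which is what lets every plugin implement these operations per the criteria above.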
>
> This design is a first draft.
> So we would appreciate it if you would give us some comments.
>
> In parallel with that, we are writing POC code and uploading
> it to "lp:~ntt-pf-lab/nova/network-service".
>
> Thanks,
> Hisaharu Ishii
>
>
> (2011/02/02 19:02), Koji IIDA wrote:
>
> Hi, all
>
>
> We, NTT PF Lab., also agree to discuss the network service at the
> Diablo DS.
>
> However, we would really like to include the network service in the Diablo
> release because our customers strongly demand this feature. And we think
> that it is quite important to merge the new network service to trunk soon
> after the Diablo DS so that every developer can contribute their effort
> based on the new code.
>
> We are planning to provide source code for the network service in a couple
> of weeks. We would appreciate it if you would review it and give us some
> feedback before the next design summit.
>
> Ewan, thanks for making a new entry at the wiki page (*1). We will also
> post our comments soon.
>
> (*1) http://wiki.openstack.org/NetworkService
>
>
> Thanks,
> Koji Iida
>
>
> (2011/01/31 21:19), Ewan Mellor wrote:
>
> I will collect the documents together as you suggest, and I agree that
> we need to get the requirements laid out again.
>
> Please subscribe to the blueprint on Launchpad -- that way you will be
> notified of updates.
>
> https://blueprints.launchpad.net/nova/+spec/bexar-network-service
>
> Thanks,
>
> Ewan.
>
>  -----Original Message-----
> From: openstack-bounces+ewan.mellor=citrix.com at lists.launchpad.net
>
> [mailto:openstack-bounces+ewan.mellor=citrix.com at lists.launchpad.net]
> On Behalf Of Masanori ITOH
> Sent: 31 January 2011 10:31
> To: openstack at lists.launchpad.net
> Subject: Re: [Openstack] Network Service for L2/L3 Network
> Infrastructure blueprint
>
> Hello,
>
> We, NTT DATA, also agree with the majority of folks.
> It's realistic to shoot for the Diablo time frame to have the
> new network service.
>
> Here are my suggestions:
>
> - I know that there were several documents on the new network service
> issue that were locally exchanged so far.
> Why not collect them in one place and share them publicly?
>
>
> - I know that the discussion went a bit into implementation details.
> But now, what about starting the discussion from the higher-level
> design (again)? Especially from the requirements level.
>
>
> Any thoughts?
>
> Masanori
>
>
> From: John Purrier<john at openstack.org>
> Subject: Re: [Openstack] Network Service for L2/L3 Network
> Infrastructure blueprint
> Date: Sat, 29 Jan 2011 06:06:26 +0900
>
>  You are correct, the networking service will be more complex than the
> volume service. The existing blueprint is pretty comprehensive, not only
> encompassing the functionality that exists in today's network service in
> Nova, but also forward-looking functionality around flexible
> networking/openvswitch and layer 2 network bridging between cloud
> deployments.
>
> This will be a longer term project and will serve as the bedrock for
> many future OpenStack capabilities.
>
> John
>
> -----Original Message-----
> From: openstack-bounces+john=openstack.org at lists.launchpad.net
>
> [mailto:openstack-bounces+john=openstack.org at lists.launchpad.net]
> On Behalf Of Thierry Carrez
> Sent: Friday, January 28, 2011 1:52 PM
> To: openstack at lists.launchpad.net
> Subject: Re: [Openstack] Network Service for L2/L3 Network
>
> Infrastructure
>
> blueprint
>
> John Purrier wrote:
>
> Here is the suggestion. It is clear from the response on the list that
> refactoring Nova in the Cactus timeframe will be too risky, particularly
> as we are focusing Cactus on Stability, Reliability, and Deployability
> (along with a complete OpenStack API). For Cactus we should leave the
> network and volume services alone in Nova to minimize destabilizing the
> code base. In parallel, we can initiate the Network and Volume Service
> projects in Launchpad and allow the teams that form around these efforts
> to move in parallel, perhaps seeding their projects from the existing
> Nova code.
>
> Once we complete Cactus we can have discussions at the Diablo DS about
> progress these efforts have made and how best to move forward with Nova
> integration and determine release targets.
>
> I agree that there is value in starting the proof-of-concept work around
> the network services, without sacrificing too many developers to it, so
> that a good plan can be presented and discussed at the Diablo Summit.
>
>
> If volume sounds relatively simple to me, network sounds significantly
> more complex (just looking at the code, the network manager code is
> currently used both by nova-compute and nova-network to modify the local
> networking stack, so it's more than just handing out IP addresses
> through an API).
>
> Cheers,
>
> --
> Thierry Carrez (ttx)
> Release Manager, OpenStack
>
> _______________________________________________
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack at lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help : https://help.launchpad.net/ListHelp
>
>
>
>
>
> Attachments:
> - smime.p7s
>
>
>
>
>

