[tc] Questions for TC Candidates

Graham Hayes gr at ham.ie
Fri Feb 22 11:26:50 UTC 2019


On 21/02/2019 18:04, Sylvain Bauza wrote:
> 

<snip>
>
> I'd be interested in discussing the use cases requiring such important
> architectural splits.
> The main reason why Cells v2 was implemented was to address the MQ/DB
> scalability issue of 1000+ compute nodes.  The Edge thingy came after
> this, so it wasn't the main driver for change.
> If the projects you mention have the same footprints at scale, then yeah
> I'm supportive of any redesign discussion that would come up.
> 
> That said, before stepping into major redesigns, I'd wonder: could
> the inter-service communication be improved in terms of reducing payload?

This is actually orthogonal to cells v2. There are other good reasons to
remove RMQ in some places:

nova control plane <-> compute traffic can be point to point, so an
HTTP request is perfectly workable for things that use call() (cast()
is a different story). This removes a lot of intermediate components
(oslo.messaging, RMQ, persistent connections, etc.). It is not without
its own complexity and potential pitfalls, but I am not going to design
a spec on this thread :)
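
To make that concrete, here is a rough Python sketch of what a
call()-style interaction could look like as a point to point HTTP
request. The endpoint, port and payload are made up for illustration -
this is not an existing Nova API:

    import requests

    def get_instance_diagnostics(compute_host, instance_uuid, token):
        # Point to point: no broker and no persistent AMQP connection to
        # keep alive between the control plane and the compute node.
        # (Hypothetical per-compute-node agent endpoint.)
        url = ("https://%s:8777/v1/instances/%s/diagnostics"
               % (compute_host, instance_uuid))
        resp = requests.post(url,
                             headers={"X-Auth-Token": token},
                             json={},   # call() semantics: block on the reply
                             timeout=30)
        resp.raise_for_status()
        return resp.json()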

For other services using RMQ:

1. Having service VMs connect to RMQ means that if one VM gets
compromised, the attacker could cause havoc on the cloud by deleting
VMs, networks, or other resources. You can mitigate this by running
multiple RMQ services, or combinations of vhosts and permissions (a
sketch of that kind of isolation follows this list), but the service's
resources are still under threat in all cases.

2. Possibly having to open ports from in-cloud workloads to the
undercloud so that RMQ is accessible to the in-cloud services.
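
As a concrete example of the vhost/permission isolation mentioned in
point 1, here is a rough sketch using the RabbitMQ management HTTP API
to give a service its own vhost and a user that can only touch that
vhost. The host, credentials and service names are placeholders:

    import requests

    RABBIT_MGMT = "https://rabbit.example.com:15671/api"  # assumed endpoint
    ADMIN = ("admin", "s3cret")                           # assumed admin creds

    def isolate_service(service, user, password):
        # Dedicated vhost for this service's traffic only.
        requests.put("%s/vhosts/%s" % (RABBIT_MGMT, service),
                     auth=ADMIN).raise_for_status()
        # A user for the service's in-cloud agents / VMs.
        requests.put("%s/users/%s" % (RABBIT_MGMT, user),
                     json={"password": password, "tags": ""},
                     auth=ADMIN).raise_for_status()
        # Permissions scoped to that vhost only, so a compromised credential
        # can't touch other services' queues - though, as noted above, the
        # service's own resources are still exposed.
        requests.put("%s/permissions/%s/%s" % (RABBIT_MGMT, service, user),
                     json={"configure": ".*", "write": ".*", "read": ".*"},
                     auth=ADMIN).raise_for_status()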


This ties into the single agent for all OpenStack services - if we had
a standard agent on machines that do things for OpenStack, we could
have cross-project TLS mutual auth / app credentials / other auth
tooling and do it once, and then just make sure that each image build
script for in-cloud services includes it.
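
As a sketch of the "do it once" part, an agent like that could ship
with a Keystone application credential and talk to service APIs over
mutually authenticated TLS instead of holding an RMQ login. The IDs,
certificate paths and the service endpoint below are placeholders:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    # The agent holds an application credential instead of an RMQ login.
    auth = v3.ApplicationCredential(
        auth_url="https://keystone.example.com/v3",
        application_credential_id="<app-cred-id>",
        application_credential_secret="<app-cred-secret>",
    )

    # Mutual TLS: the agent presents a client cert and verifies the cloud CA.
    sess = session.Session(
        auth=auth,
        verify="/etc/openstack-agent/ca.pem",
        cert=("/etc/openstack-agent/cert.pem",
              "/etc/openstack-agent/key.pem"),
    )

    # Per-project agent code can then call its service's REST API with this
    # session - e.g. a hypothetical health-report endpoint:
    sess.post("https://octavia.example.com/v2/amphorae/heartbeat",
              json={"status": "OK"})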

  
> 
>     From what I understand there was even talk of doing it for Nova so that
>     a central control plane could manage remote edge compute nodes without
>     having to keep an RMQ connection alive across the WAN, but I am not sure
>     where that got to.
> 
> 
> That's a separate use case (Edge) which wasn't the initial reason why we
> started implementing Cells V2. I haven't heard any request from the Edge
> WG during the PTGs about changing our messaging interface because $WAN
> but I'm open to ideas.

It was discussed with a few people from the Nova team in the Edge room
at the Denver PTG, from what I remember.

> -Sylvain
> 
>     > To be clear, the redesign wasn't coming from any other sources but our
>     > users, complaining about scale. IMHO, if we really want to see some
>     > committee driving us about feature requests, this should be the UC and
>     > not the TC.
> 
>     It should be a combination - UC and TC should be communicating about
>     these requests - UC for the feedback, and the TC to see how they fit
>     with the TC's vision for the direction of OpenStack.
> 
>     > Whatever it is, at the end of the day, we're all paid by our sponsors.
>     > Meaning that any architectural redesign always hits the reality wall
>     > where you need to convince your respective Product Managers of the
>     > great benefit of the redesign. I'm maybe too pragmatic, but I
>     > remember so many discussions we had about redesigns that I now feel
>     > we just need hands, not ideas.
> 
>     I fully agree, and it has been an issue in the community for as long as
>     I can remember. It doesn't mean that we should stop pushing the project
>     forward. We have already moved the needle with the cycle goals, so we
>     can influence what features are added to projects. Let's continue to do
>     so.
> 
> 
>     <snip>
> 
