[openstack-dev] OpenStack installer

Kevin Carter kevin.carter at RACKSPACE.COM
Wed Jan 27 22:02:31 UTC 2016


Hello Gyorgy,

A few more responses inline:

On 01/27/2016 02:51 AM, Gyorgy Szombathelyi wrote:
>>
>> Hi Gyorgy,
>>
> Hi Kevin,
>
>> I'll definitely give this a look and thanks for sharing. I would like to ask
>> however why you found OpenStack-Ansible so overly complex that
>> you've taken on the complexity of developing a new installer altogether? I'd
>> love to understand the issues you ran into and see what we can do in
>> upstream OpenStack-Ansible to overcome them for the greater community.
>> Being that OpenStack-Ansible is no longer a Rackspace project but a
>> community effort governed by the OpenStack Foundation, I'd be keen on
>> seeing how we can simplify the deployment offerings we're currently
>> working on today, in an effort to foster greater developer interaction so that
>> we can work together on building the best deployer and operator
>> experience.
>>
> Basically there were two major points:
>
> - containers: we don't need them. For us, there were no real benefits to using them; they just
> added unnecessary complexity. Instead of having 1 mgmt address per controller, it had
> a dozen, installation times were huge (>2 hours) with creating and updating each controller, the
I can see the benefit of both a containerized and a non-containerized 
stack. This is one of the reasons we made the OSA deployment 
solution capable of doing a deployment without containers. It's really 
as simple as setting the variable "is_metal=true". While I understand 
the desire to reduce deployment times, I've found deployments a whole 
lot more flexible and stable when services are isolated, especially as it 
pertains to upgrades.
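
For what it's worth, flipping a service onto metal is an env.d override 
along these lines; a rough sketch only, the file name and skeleton keys 
here are illustrative rather than copied from a specific release:

    # /etc/openstack_deploy/env.d/cinder.yml  (illustrative path)
    container_skel:
      cinder_volumes_container:
        properties:
          is_metal: true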

> generated inventory was fragile (any time I wanted to change something in the generated
> inventory, I had a high chance of breaking it). When I learned how to install without containers,
This is true, the generated inventory can be frustrating when you're 
getting used to setting things up. I've not found it fragile when 
running in production, though. Was there something you ran into on that 
front which caused instability, or were these all learning pains?

> another problem came in: every service listens on 0.0.0.0, so haproxy can't bind to the service ports.
>
As a best practice when moving clouds to production, I'd NOT recommend 
running your load balancer on the same hosts as your service 
infrastructure. One terrible limitation with that kind of setup, 
especially without containers or service namespaces, is the problem that 
arises when a connection goes into a sleep-wait state while a VIP is 
failing over. This will cause imminent downtime for potentially long 
periods of time and can break things like DB replication, messaging, 
etc. This is not something you have to be aware of as you're tooling 
around, but when a deployment goes into production it's something you 
should be aware of. Fencing with Pacemaker and other things can help, but 
they also bring in other issues. Having an external LB is really the way 
to go, which is why HAProxy on a controller without containers is not 
recommended. HAProxy on a VM or standalone node works great! It's worth 
noting that in the OSA stack the bind addresses, which default to 0.0.0.0, 
can be arbitrarily set using a template override for a given service.
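
As a sketch of what that override looks like in user_variables.yml (the 
variable and option names here are an example for nova; other services 
follow the same pattern):

    # user_variables.yml -- illustrative override, address is an example
    nova_nova_conf_overrides:
      DEFAULT:
        osapi_compute_listen: 172.29.236.10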

> - packages: we wanted to avoid mixing pip and vendor packages. Linux's great power was
> always the package management system. We don't have the capacity to choose the right
> revision from git. Also, a .deb package comes with goodies, like the init scripts, proper system
> users, directories, upgrade possibility and so on. Bugs can be reported against .debs.
>
I apologize but I couldn't disagree with this more. We have all of the 
system goodies you'd expect running OpenStack on an Ubuntu system, like 
init scripts, proper system users, directories, etc., and we even have 
upgradability between major and minor versions. Did you find something 
that didn't work? Within the OSA project we're choosing the various 
versions from git for the deployer by default and basing every tag off of 
the stable branches as provided by the various services, so it's not like 
you had much to worry about in that regard. As for the ability to create 
bugs, I fail to see how creating a bug report on a deb from a third party 
would be more beneficial and have a faster turnaround than creating a 
bug report within a given service project, thereby interacting with its 
developers and maintainers. By going to source we're able to fix general 
bugs, CVEs, and anything else within hours, not days or weeks. I also 
question the upgradability of the general OpenStack package ecosystem. 
As a deployer who has come from that space and knows what kinds of 
shenanigans go on in there, using both debs and rpms, I've found that 
running OpenStack clouds of various sizes for long periods of time 
becomes very difficult as packages, package dependencies, patches the 
third party is carrying, and other things change, causing instability and 
general breakage. That said, I agree that package management in Linux has 
always been a strong point, but I've found out the hard way that package 
deployments of OpenStack don't scale or upgrade well. It may be better 
today than it was before; however, color me skeptical.

> And some minor points:
> - Need root rights to start. I don't really understand why it is needed.
You do need root to run the OSA playbooks; however, you could use the 
Ansible "become" mechanism to achieve it. Even in package deployments of 
OpenStack, as provided by the distro, you still need root privileges to 
create users, init scripts, etc.
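
If connecting as root is the sticking point, a minimal sketch of the 
become-based approach looks like this (the user name is just an example):

    # illustrative play header: connect as an unprivileged user, escalate with become
    - name: Run privileged setup as a non-root deploy user
      hosts: all
      remote_user: deploy      # example unprivileged user
      become: yes              # escalate to root for the tasks below
      tasks:
        - name: Create a system user (needs root)
          user:
            name: keystone
            system: yes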

> - I think the role plays are unnecessarily fragmented into files. Ansible was designed with simplicity in mind;
>    now keystone for example has 29 files, lots of them with 1 task.
This is true, some of our roles are rather large, but they do just about 
everything that the service provides. We've found it better to structure 
the roles with includes instead of simply overloading the main.yml. It 
makes the roles easier to debug and to develop, and lets us focus parts 
of a role on the specific tasks a given service may require. While the 
roles could be greatly simplified, we're looking to support as many 
things as possible within a given service, such as Keystone with various 
token provider backends, federation, and Apache + mod_wsgi for the API 
service. I'd like to point out that "simplicity in mind" is the driving 
thought and something that we try to adhere to; however, holding fast to 
simplicity is not always possible when the services being deployed are 
complex. For a deployer, simplicity should be a driver in how something 
works, which doesn't always translate to a simple implementation.
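
To give a feel for the layout, a role's tasks/main.yml ends up being 
little more than a list of includes, roughly like this (file and tag 
names are illustrative, not the exact set we ship):

    # tasks/main.yml -- illustrative include layout
    - include: keystone_pre_install.yml
      tags:
        - keystone-install
    - include: keystone_install.yml
      tags:
        - keystone-install
    - include: keystone_post_install.yml
      tags:
        - keystone-config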

> - The 'must have tags' are also against Ansible's philosophy. No one should need to start a play with a tag
> (tagging should be an exception, not the rule).
I'm not sure what this means. The only thing I can think of is when 
re-bootstrapping the Galera cluster after every node within the cluster 
is down. Not that the tag is required in this case; it's only used to 
speed up the bootstrap process and recover the cluster. We do have a few 
sanity checks in place that will cause a role to hard fail and may 
require passing an extra variable on the command line to run; however, 
the fail output provides a fairly robust message regarding why the task 
is being hard stopped. This was done so that you don't inadvertently 
cause yourself downtime or data loss. In either case, these are the 
exceptions and not the rules. So, like I said, I think I'm missing the 
point here.
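
As a sketch of what those sanity checks look like (the task, variable 
name, and message below are illustrative, not lifted from the role):

    # illustrative hard-fail guard requiring an explicit override
    - name: Refuse to bootstrap a down cluster without an explicit override
      fail:
        msg: >
          The whole cluster appears to be down. Re-run the play with
          '-e galera_ignore_cluster_state=true' only if you understand
          the recovery implications.
      when: not (galera_ignore_cluster_state | default(false) | bool)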

> Running a role doesn't take more than 10-20 secs if it is already
> completed, tagging is just unnecessary bloat. If you need to start something in the middle of a play, then that play
> is not right.
This is plain wrong... Tags are not bloat, and you'll wish you had them 
when you need to rapidly run a given task to recover or reconfigure 
something, especially as your playbooks and roles grow in sophistication 
and capabilities. I will say, though, that we had a similar philosophy in 
our early Ansible adventures; however, we've since reversed that position 
entirely.
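
A trivial sketch of why we keep them: tag the recovery-only bits and you 
can re-run just that slice with --tags instead of replaying everything 
(the tag, group, and helper script below are made up for the example):

    # re-run only this slice with something like:
    #   openstack-ansible galera-install.yml --tags galera-bootstrap
    - name: Bootstrap the cluster from the first node
      command: /usr/local/bin/bootstrap-galera.sh   # illustrative helper script
      when: inventory_hostname == groups['galera_all'][0]
      tags:
        - galera-bootstrap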

>
> So those were the reasons why we started our project; hope you can understand. We don't want to compete,
> it just serves us better.
>
>> All that said, thanks for sharing the release and if I can help in any way please
>> reach out.
>>
> Thanks, maybe we can work together in the future.
>
I too hope that we can work together. It'd be great to get different 
perspectives on the roles and plays that we're creating and that you may 
need to serve your deployments. I'll also note that we've embarked on a 
massive decoupling of the roles from the main OSA repository, which may 
be beneficial to you and your project, or other projects like it. A full 
list of the roles we've done thus far can be seen here [0]. In the Mitaka 
release timeframe we hope to have the roles fully standalone and brought 
into OSA via the ansible-galaxy resolver, which will make it possible for 
developers and deployers alike to benefit from the roles on an `a la 
carte` basis.
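
Once that lands, consuming the roles is essentially a requirements file 
fed to the galaxy resolver; roughly along these lines (the repo URL and 
version are shown only as an example):

    # ansible-role-requirements.yml -- illustrative entry
    - name: os_keystone
      scm: git
      src: https://github.com/openstack/openstack-ansible-os_keystone
      version: master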


If you ever have other questions as you build out your own project, or if 
there's something that we can help with, please let us know. We're almost 
always in the #openstack-ansible channel, and generally I'd say that most 
of the folks in there are happy to help. Take care and happy Ansible'ing!


[0] - https://github.com/openstack?utf8=%E2%9C%93&query=openstack-ansible

>> --
>>
>> Kevin Carter
>> IRC: cloudnull
>>
> Br,
> György
>
>>
>> ________________________________________
>> From: Gyorgy Szombathelyi <gyorgy.szombathelyi at doclerholding.com>
>> Sent: Tuesday, January 26, 2016 4:32 AM
>> To: 'openstack-dev at lists.openstack.org'
>> Subject: [openstack-dev] OpenStack installer
>>
>> Hello!
>>
>> I just want to announce a new installer for OpenStack:
>> https://github.com/DoclerLabs/openstack
>> It is GPLv3, uses Ansible (currently 1.9.x; 2.0.0.2 has some bugs which have to
>> be resolved), and has lots of components integrated (of course there are missing
>> ones).
>> The goal was simplicity, and also operating the cloud, not just installing it.
>> We started with Rackspace's openstack-ansible, but found it a bit complex
>> with the containers. Also, it didn't include all the components we required, so
>> we started this project.
>> Feel free to give it a try! The documentation is sparse, but it'll improve with
>> time.
>> (Hope you don't consider it an advertisement; we don't want to sell this,
>> we just wanted to share our development).
>>
>> Br,
>> György
>>


