[openstack-dev] [tuskar][tripleo] Tuskar/TripleO on Devstack
Ben Nemec
openstack at nemebean.com
Tue Oct 28 19:47:22 UTC 2014
On 10/28/2014 01:34 PM, Clint Byrum wrote:
> Excerpts from Ben Nemec's message of 2014-10-28 11:13:22 -0700:
>> On 10/28/2014 06:18 AM, Steven Hardy wrote:
>>> On Tue, Oct 28, 2014 at 11:08:05PM +1300, Robert Collins wrote:
>>>> On 28 October 2014 22:51, Steven Hardy <shardy at redhat.com> wrote:
>>>>> On Tue, Oct 28, 2014 at 03:22:36PM +1300, Robert Collins wrote:
>>>>>> So this should work and I think its generally good.
>>>>>>
>>>>>> But - I'm curious, you only need a single image for devtest to
>>>>>> experiment with tuskar - the seed - which should be about the same
>>>>>> speed (or faster, if you have hot caches) than devstack, and you'll
>>>>>> get Ironic and nodes registered so that the panels have stuff to show.
>>>>>
>>>>> TBH it's not so much about speed (although, for me, devstack is faster as
>>>>> I've not yet mirrored all-the-things locally, I only have a squid cache),
>>>>> it's about establishing a productive test/debug/hack/re-test workflow.
>>>>
>>>> mm, squid-cache should still give pretty good results. If its not, bug
>>>> time :). That said..
>>>>
>>>>> I've been configuring devstack to create Ironic nodes FWIW, so that works
>>>>> OK too.
>>>>
>>>> Cool.
>>>>
>>>>> It's entirely possible I'm missing some key information on how to compose
>>>>> my images to be debug friendly, but here's my devtest frustration:
>>>>>
>>>>> 1. Run devtest to create seed + overcloud
>>>>
>>>> If you're in dev-of-a-component cycle, I wouldn't do that. I'd run
>>>> devtest_seed.sh only. The seed has everything on it, so the rest is
>>>> waste (unless you need all the overcloud bits - in which case I'd
>>>> still tune things - e.g. I'd degrade to single node, and I'd iterate
>>>> on devtest_overcloud.sh, *not* on the full plumbing each time).
>>>
>>> Yup, I went round a few iterations of those, e.g running devtest_overcloud
>>> with -c so I could more quickly re-deploy, until I realized I could drive
>>> heat directly, so I started doing that :)
>>>
>>> Most of my investigations atm are around investigating Heat issues, or
>>> testing new tripleo-heat-templates stuff, so I do need to spin up the
>>> overcloud (and update it, which is where the fun really began ref bug
>>> #1383709 and #1384750 ...)
>>>
>>>>> 2. Hit an issue, say a Heat bug (not that *those* ever happen! ;D)
>>>>> 3. Log onto seed VM to debug the issue. Discover there are no logs.
>>>>
>>>> We should fix that - is there a bug open? Thats a fairly serious issue
>>>> for debugging a deployment.
>>>
>>> I've not yet raised one, as I wasn't sure if it was either by design, or if
>>> I was missing some crucial element from my DiB config.
>>>
>>> If you consider it a bug, I'll raise one and look into a fix.
>>>
>>>>> 4. Restart the heat-engine logging somewhere
>>>>> 5. Realize heat-engine isn't quite latest master
>>>>> 6. Git pull heat, discover networking won't allow it
>>>>
>>>> Ugh. Thats horrid. Is it a fedora thing? My seed here can git pull
>>>> totally fine - I've depended heavily on that to debug various things
>>>> over time.
>>>
>>> Not yet dug into it in a lot of detail tbh, my other VMs can access the
>>> internet fine so it may be something simple, I'll look into it.
>>
>> Are you sure this is a networking thing? When I try a git pull I get this:
>>
>> [root at localhost heat]# git pull
>> fatal:
>> '/home/bnemec/.cache/image-create/source-repositories/heat_dc24d8f2ad92ef55b8479c7ef858dfeba8bf0c84'
>> does not appear to be a git repository
>> fatal: Could not read from remote repository.
>>
>> That's actually because the git repo on the seed would have come from
>> the local cache during the image build. We should probably reset the
>> remote to a sane value once we're done with the cache one.
>>
>> Networking-wise, my Fedora seed can pull from git.o.o just fine though.
>>
>
> I think we should actually just rip the git repos out of the images in
> production installs. What good does it do sending many MB of copies of
> the git repos around? Perhaps just record HEAD somewhere in a manifest
> and rm -r the source repos during cleanup.d.
I actually thought we were removing git repos, but evidently not.
>
> But, for supporting dev/test, we could definitely leave them there and
> change the remotes back to their canonical (as far as diskimage-builder
> knows) sources.
I wonder if it would make sense to pip install -e. Then the copy of the
application in the venvs is simply a pointer to the actual git repo.
This would also make it easier to make changes to the running code -
instead of having to make a change, reinstall, and restart services you
could just make the change and restart like in Devstack.
I guess I don't know if that has any negative impacts for production use
though.
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
More information about the OpenStack-dev
mailing list