[openstack-dev] [ironic] [tripleo] [kolla] Possible to support multiple compute drivers?
Kevin Benton
blak111 at gmail.com
Tue Sep 15 06:40:09 UTC 2015
> I'm no Neutron expert, but I suspect that one could use either the
> LinuxBridge *or* the OVS ML2 mechanism driver for the L2 agent, along with
> a single flat provider network for your baremetal nodes.
If it's a baremetal node, it wouldn't be running an agent at all, would it?
On Mon, Sep 14, 2015 at 8:12 AM, Jay Pipes <jaypipes at gmail.com> wrote:
> On 09/10/2015 12:00 PM, Jeff Peeler wrote:
>
>> On Wed, Sep 9, 2015 at 10:25 PM, Steve Gordon <sgordon at redhat.com> wrote:
>>
>> ----- Original Message -----
>> > From: "Jeff Peeler" <jpeeler at redhat.com>
>> > To: "OpenStack Development Mailing List (not for usage questions)"
>> > <openstack-dev at lists.openstack.org>
>> >
>> > I'd greatly prefer using availability zones/host aggregates, as I'm
>> > trying to keep the footprint as small as possible. It does appear from
>> > the section "configure scheduler to support host aggregates" [1] that
>> > I can configure filtering using just one scheduler (right?). However,
>> > perhaps more importantly, I'm now unsure whether, given the network
>> > configuration changes required for Ironic, deploying normal instances
>> > alongside baremetal servers is possible.
>> >
>> > [1]
>> > http://docs.openstack.org/kilo/config-reference/content/section_compute-scheduler.html
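For what it's worth, the single-scheduler approach in [1] hinges on the
AggregateInstanceExtraSpecsFilter. A rough, untested Kilo-era sketch, with
the aggregate, flavor, and host names as placeholders:

    # nova.conf on the scheduler node: append the aggregate filter
    [DEFAULT]
    scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,AggregateInstanceExtraSpecsFilter

    # Create an aggregate for the baremetal compute host(s) and tag it
    nova aggregate-create baremetal-hosts
    nova aggregate-add-host baremetal-hosts ironic-compute.example.com
    nova aggregate-set-metadata baremetal-hosts baremetal=true

    # Pin a baremetal flavor to that aggregate via matching extra specs
    nova flavor-key my-baremetal-flavor set aggregate_instance_extra_specs:baremetal=true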
>>
>> Hi Jeff,
>>
>> I assume your need for a second scheduler is spurred by wanting to
>> enable different filters for baremetal vs. virt (rather than
>> influencing scheduling using the same filters via image properties,
>> extra specs, and boot parameters/hints)?
>>
>> I ask because, if not, you should be able to use the hypervisor_type
>> image property to ensure that images intended for baremetal are
>> directed there and those intended for KVM etc. are directed to those
>> hypervisors. The documentation [1] doesn't list ironic as a valid
>> value for this property, but I looked into the code for this a while
>> ago and it seemed like it should work... Apologies if you had
>> already considered this.
>>
>> Thanks,
>>
>> Steve
>>
>> [1]
>>
>> http://docs.openstack.org/cli-reference/content/chapter_cli-glance-property.html
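Setting that property is a one-liner per image. Assuming the Kilo-era glance
CLI and placeholder image names, something like:

    # Route baremetal images to the Ironic-managed compute host
    glance image-update --property hypervisor_type=ironic my-baremetal-image

    # Route virt images to the libvirt/KVM hosts (which report "qemu")
    glance image-update --property hypervisor_type=qemu my-kvm-image

Note that the ImagePropertiesFilter must be enabled in the scheduler's
filter list for the property to be honored.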
>>
>>
>> I hadn't considered that, thanks.
>>
>
> Yes, that's the recommended way to direct scheduling requests -- via the
> hypervisor_type image property.
>
>> It's still unknown to me, though, whether a separate compute service is
>> required, and if it is, how much segregation is needed to make that work.
>>
>
> Yes, a separate nova-compute worker daemon is required to manage the
> baremetal Ironic nodes.
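For concreteness, a minimal, untested sketch of what that second
nova-compute's nova.conf needs on Kilo; the controller hostname is a
placeholder and credentials are elided:

    [DEFAULT]
    compute_driver = nova.virt.ironic.IronicDriver
    # Ironic nodes are claimed whole, so don't overcommit resources
    ram_allocation_ratio = 1.0
    reserved_host_memory_mb = 0
    # On the scheduler's side, Kilo also wants:
    # scheduler_host_manager = nova.scheduler.ironic_host_manager.IronicHostManager

    [ironic]
    admin_username = ironic
    admin_password = ...
    admin_tenant_name = service
    admin_url = http://controller:35357/v2.0
    api_endpoint = http://controller:6385/v1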
>
>> Not being a networking guru, I'm also unsure whether the Ironic setup
>> instructions' use of a flat network is a hard requirement or just one
>> example of a possible configuration.
>>
>
> AFAIK, flat DHCP networking is currently the only supported network
> configuration for Ironic.
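If it helps, that flat provider network gets created along these lines; the
physnet label and addresses are placeholders:

    # Flat provider network for the baremetal nodes
    neutron net-create baremetal-net --shared \
        --provider:network_type flat --provider:physical_network physnet1

    # Subnet with a DHCP pool for Ironic provisioning
    neutron subnet-create baremetal-net 10.0.0.0/24 --name baremetal-subnet \
        --allocation-pool start=10.0.0.10,end=10.0.0.200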
>
>> In a brief out-of-band conversation I had, it does sound like Ironic can
>> be configured to use LinuxBridge too, which I didn't know was possible.
>>
>
> Well, LinuxBridge vs. OVS isn't really about whether you have a flat
> network topology or not. They're just different ways of doing the actual
> switching (Open vSwitch virtual switching vs. standard Linux bridges).
>
> I'm no Neutron expert, but I suspect that one could use either the
> LinuxBridge *or* the OVS ML2 mechanism driver for the L2 agent, along with
> a single flat provider network for your baremetal nodes.
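In ML2 terms that would be something like the following, an untested sketch
with physnet1 as a placeholder; pick one mechanism driver:

    # /etc/neutron/plugins/ml2/ml2_conf.ini
    [ml2]
    type_drivers = flat
    tenant_network_types = flat
    mechanism_drivers = openvswitch
    # or: mechanism_drivers = linuxbridge

    [ml2_type_flat]
    flat_networks = physnet1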
>
> Hopefully an Ironic + Neutron expert will confirm or deny this?
>
> Best,
> -jay
>
--
Kevin Benton