<div dir="ltr">><span style="font-size:12.8000001907349px">I'm no Neutron expert, but I suspect that one could use either the LinuxBridge *or* the OVS ML2 mechanism driver for the L2 agent, along with a single flat provider network for your baremetal nodes.</span><div class="gmail_extra"><br></div><div class="gmail_extra">If it's a baremetal node, it wouldn't be running an agent at all, would it?</div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Sep 14, 2015 at 8:12 AM, Jay Pipes <span dir="ltr"><<a href="mailto:jaypipes@gmail.com" target="_blank">jaypipes@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On 09/10/2015 12:00 PM, Jeff Peeler wrote:<br>
</span><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">
On Wed, Sep 9, 2015 at 10:25 PM, Steve Gordon <<a href="mailto:sgordon@redhat.com" target="_blank">sgordon@redhat.com</a><br></span><span class="">
<mailto:<a href="mailto:sgordon@redhat.com" target="_blank">sgordon@redhat.com</a>>> wrote:<br>
<br>
----- Original Message -----<br></span><span class="">
> From: "Jeff Peeler" <<a href="mailto:jpeeler@redhat.com" target="_blank">jpeeler@redhat.com</a> <mailto:<a href="mailto:jpeeler@redhat.com" target="_blank">jpeeler@redhat.com</a>>><br>
> To: "OpenStack Development Mailing List (not for usage questions)" <<a href="mailto:openstack-dev@lists.openstack.org" target="_blank">openstack-dev@lists.openstack.org</a><br></span><div><div class="h5">
<mailto:<a href="mailto:openstack-dev@lists.openstack.org" target="_blank">openstack-dev@lists.openstack.org</a>>><br>
><br>
> I'd greatly prefer using availability zones/host aggregates as I'm trying<br>
> to keep the footprint as small as possible. It does appear that in the<br>
> section "configure scheduler to support host aggregates" [1], that I can<br>
> configure filtering using just one scheduler (right?). However, perhaps<br>
> more importantly, I'm now unsure with the network configuration changes<br>
> required for Ironic that deploying normal instances along with baremetal<br>
> servers is possible.<br>
><br>
> [1]<br>
><a href="http://docs.openstack.org/kilo/config-reference/content/section_compute-scheduler.html" rel="noreferrer" target="_blank">http://docs.openstack.org/kilo/config-reference/content/section_compute-scheduler.html</a><br>
<br>
Hi Jeff,<br>
<br>
I assume your need for a second scheduler is spurred by wanting to<br>
enable different filters for baremetal vs. virt (rather than<br>
influencing scheduling using the same filters via image properties,<br>
extra specs, and boot parameters (hints))?<br>
<br>
I ask because if not you should be able to use the hypervisor_type<br>
image property to ensure that images intended for baremetal are<br>
directed there and those intended for kvm etc. are directed to those<br>
hypervisors. The documentation [1] doesn't list ironic as a valid<br>
value for this property but I looked into the code for this a while<br>
ago and it seemed like it should work... Apologies if you had<br>
already considered this.<br>
<br>
Thanks,<br>
<br>
Steve<br>
<br>
[1]<br>
<a href="http://docs.openstack.org/cli-reference/content/chapter_cli-glance-property.html" rel="noreferrer" target="_blank">http://docs.openstack.org/cli-reference/content/chapter_cli-glance-property.html</a><br>
<br>
<br>
I hadn't considered that, thanks.<br>
</div></div></blockquote>
<br>
Yes, that's the recommended way to direct scheduling requests -- via the hypervisor_type image property.<span class=""><br>
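As a sketch of what Jay and Steve describe (Kilo-era glance CLI; the image IDs are placeholders), tagging images with the hypervisor_type property so the scheduler's ImagePropertiesFilter routes them appropriately might look like:

```shell
# Tag a deploy image so requests using it land on Ironic-managed
# (baremetal) hosts; BAREMETAL_IMAGE_ID is a placeholder.
glance image-update BAREMETAL_IMAGE_ID --property hypervisor_type=ironic

# Conversely, tag a virt-only image so it is directed to libvirt/KVM
# hypervisors (libvirt reports its hypervisor type as "qemu").
glance image-update KVM_IMAGE_ID --property hypervisor_type=qemu
```

This assumes ImagePropertiesFilter is enabled in the scheduler's filter list, which it is by default.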
<br>
> It's still unknown to me though if a<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
separate compute service is required. And if it is required, how much<br>
segregation is required to make that work.<br>
</blockquote>
<br></span>
Yes, a separate nova-compute worker daemon is required to manage the baremetal Ironic nodes.<span class=""><br>
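A hedged sketch of how that dedicated nova-compute worker is pointed at Ironic (Kilo-era option names; endpoints and credentials below are placeholders for your deployment):

```ini
[DEFAULT]
# This nova-compute worker (and only this one) uses the Ironic virt driver
compute_driver = nova.virt.ironic.IronicDriver
# On the scheduler node's nova.conf: Ironic nodes expose exact resource
# sizes, so the Ironic-aware host manager is used there
scheduler_host_manager = nova.scheduler.ironic_host_manager.IronicHostManager

[ironic]
# Placeholder credentials/endpoints -- substitute your own
admin_username = ironic
admin_password = IRONIC_PASSWORD
admin_url = http://127.0.0.1:35357/v2.0
admin_tenant_name = service
api_endpoint = http://127.0.0.1:6385/v1
```

The KVM compute nodes keep their usual libvirt compute_driver, which is what lets both workloads coexist under one scheduler.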
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Not being a networking guru, I'm also unsure whether the Ironic setup<br>
instructions' use of a flat network is a requirement or just a sample<br>
of a possible configuration.<br>
</blockquote>
<br></span>
AFAIK, flat DHCP networking is currently the only supported network configuration for Ironic.<span class=""><br>
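A minimal sketch of such a flat, DHCP-enabled provider network with the Kilo-era neutron CLI (the physical network label and subnet range are placeholders):

```shell
# Flat provider network mapped to the physical network the baremetal
# nodes are cabled to ("physnet1" must match your ML2 configuration)
neutron net-create baremetal-net --shared \
    --provider:network_type flat \
    --provider:physical_network physnet1

# DHCP-enabled subnet so Ironic nodes can PXE-boot via neutron's DHCP agent
neutron subnet-create baremetal-net 192.0.2.0/24 \
    --name baremetal-subnet --enable-dhcp
```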
<br>
> In a brief out of band conversation I had, it<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
does sound like Ironic can be configured to use linuxbridge too, which I<br>
didn't know was possible.<br>
</blockquote>
<br></span>
Well, LinuxBridge vs. OVS isn't really about whether you have a flat network topology or not. It's just a different way of doing the actual switching (Open vSwitch virtual switching vs. standard Linux bridges).<br>
<br>
I'm no Neutron expert, but I suspect that one could use either the LinuxBridge *or* the OVS ML2 mechanism driver for the L2 agent, along with a single flat provider network for your baremetal nodes.<br>
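If that's right, the ML2 side would only differ in the mechanism driver line; a hedged ml2_conf.ini sketch (the physical network label is illustrative):

```ini
[ml2]
type_drivers = flat
# Either mechanism driver should work for a flat provider network
mechanism_drivers = linuxbridge
# ...or: mechanism_drivers = openvswitch

[ml2_type_flat]
# Physical network label referenced when creating the provider network
flat_networks = physnet1
```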
<br>
Hopefully an Ironic + Neutron expert will confirm or deny this?<br>
<br>
Best,<br>
-jay<br>
<br>
__________________________________________________________________________<span class="im HOEnZb"><br>
OpenStack Development Mailing List (not for usage questions)<br></span><div class="HOEnZb"><div class="h5">
Unsubscribe: <a href="mailto:OpenStack-dev-request@lists.openstack.org?subject:unsubscribe" rel="noreferrer" target="_blank">OpenStack-dev-request@lists.openstack.org?subject:unsubscribe</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" rel="noreferrer" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
</div></div></blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature"><div>Kevin Benton</div></div>
</div></div>