On Tue, 2018-12-04 at 12:08 +0100, Flavio Percoco wrote:
Greetings,
I've been working on a tool that requires creating CoreOS nodes on OpenStack. Sadly, I've hit the user data limit, which has blocked the work I'm doing.
One big difference between CoreOS images and other cloud images out there is that CoreOS images don't use cloud-init but a different tool called Ignition[0], which uses JSON as its serialization format.
The size of the configs that I need to pass to the instance is bigger than the limit imposed by Nova. I've worked on reducing the size as much as possible, and even on generating a compressed version, but the data is still bigger than the limit (144 KB vs. 65 KB).
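For reference, a minimal sketch of the compression step described above: serialize the Ignition config compactly, gzip it, base64-encode it (user data is typically passed base64-encoded), and compare against the API limit. The 65535-byte constant and all names here are illustrative, not taken from Nova's code.

```python
import base64
import gzip
import json

# Assumed limit enforced by Nova's API schema (illustrative constant).
NOVA_USER_DATA_LIMIT = 65535  # bytes

def packed_user_data(config: dict) -> bytes:
    """Serialize an Ignition config compactly, gzip it, and base64-encode it."""
    raw = json.dumps(config, separators=(",", ":")).encode()
    # mtime=0 keeps the gzip output deterministic across runs.
    return base64.b64encode(gzip.compress(raw, mtime=0))

def fits(config: dict) -> bool:
    """True if the packed config fits under the assumed user data limit."""
    return len(packed_user_data(config)) <= NOVA_USER_DATA_LIMIT

small_config = {"ignition": {"version": "2.2.0"}}
print(fits(small_config))  # → True
```

Compression helps a lot with repetitive JSON, but as the thread notes, a sufficiently large or high-entropy config still won't fit.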
I'd like to better understand the nature of the limit: is it because of the field size in the database? Is it purely an API limit? Does it cause problems for some vendors? As far as I can tell, the limit is enforced only by the API schema[1] and not by the DB, which uses a MEDIUMTEXT field.
I realize this has been asked before, but I wanted to get a sense of the current opinion on it. Would the Nova team consider increasing the API limit, given that use cases like this may be more common these days?
I think EC2 only gives you 1/4 of what Nova does (16 KB or so), so it would seem Nova is already being somewhat generous here. I don't see any harm in increasing it so long as the DB supports it (no DB schema change would be required).

I wonder, though, whether pairing the user data with a token that allowed you to download the information from another (much larger) data source would be a better pattern here. Then you could make it as large as you needed.

Dan
[0] https://coreos.com/ignition/docs/latest/
[1] https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/sch...
Thanks,
Flavio