I'm refraining from calling out any particular participant on this (for now), but we've gotten a lot of recent and very enthusiastic answers to people's questions which look very much like the result of blindly pasting those questions into an LLM (e.g. ChatGPT) and then passing the response off as the sender's own experience. This quickly falls apart when there are multiple open source projects with the same name and the LLM gives a response about the "wrong" project, revealing that the person posting it doesn't even know the difference between them.

Users post their questions to this list because they've exhausted other avenues and are hoping to hear responses derived from (human) subscribers' own personal or professional experience. If you've legitimately run the software and actually understand the question, then please do answer. But if users wanted to see the results of pasting their question into ChatGPT, they'd just do that (and likely already have).

I don't want to institute "AI" oriented policies for this list like other forums have been forced to in recent months, so please (everyone!) just make it clear when the information you're posting is the result of consulting another source, and state what source it came from.

Thanks!
-- 
Jeremy Stanley