<font size=2 face="sans-serif">I like the idea of measuring complexity.
I looked briefly at `python -m mccabe`. It seems to measure
each method independently. Is this really fair? If I have a
class with some big methods and I break it down into a larger number of
smaller methods, then the largest method gets smaller, but the number of
methods gets larger. A large number of methods is itself a form of
complexity. It is not clear to me that such a re-organization has
necessarily made the class easier to understand. I can also break one class into
two, but it is not clear to me that the project has necessarily become
easier to understand. While it is true that when you truly make a
project easier to understand you sometimes break it into more classes,
it is also true that you can do a bad job of re-organizing a set of classes
while still reducing the size of the largest method. Has the McCabe
metric been evaluated on Python projects? There is a danger in focusing
on what is easy to measure if that is not really what you want to optimize.</font>
<br>
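<br><font size=2 face="sans-serif">To make the concern concrete, here is a rough sketch (the class and method names are made up, and counting decision points is only a crude stand-in for McCabe's actual path-graph metric) of how splitting one branchy method lowers the per-method score while raising the method count:</font>

```python
import ast

# Illustrative only: count decision points per method as a rough stand-in
# for cyclomatic complexity (decisions + 1). The real mccabe module builds
# a path graph; this sketch just shows the shape of the issue.
DECISIONS = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def per_method_complexity(source):
    scores = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            decisions = sum(isinstance(n, DECISIONS) for n in ast.walk(node))
            scores[node.name] = decisions + 1
    return scores

big = """
class Validator:
    def process(self, x):
        if x < 0:
            x = -x
        if x > 100:
            x = 100
        if x % 2:
            x += 1
        return x
"""

split = """
class Validator:
    def clamp_sign(self, x):
        if x < 0:
            x = -x
        return x
    def clamp_max(self, x):
        if x > 100:
            x = 100
        return x
    def make_even(self, x):
        if x % 2:
            x += 1
        return x
"""

print(per_method_complexity(big))    # {'process': 4}
print(per_method_complexity(split))  # three methods, each of complexity 2
```

<br><font size=2 face="sans-serif">The per-method maximum drops from 4 to 2, but the class now has three methods instead of one, and the summed score rises from 4 to 6 -- a per-method metric sees only the improvement.</font>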
<br><font size=2 face="sans-serif">BTW, I find that one of the complexity
issues for me when I am learning about a Python class is doing whole-program
type inference in my head so that I know what the argument types are. It
seems to me that if you want to measure the complexity of Python code, then
something like the complexity of the argument typing should be taken into
account.</font>
<br>
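<br><font size=2 face="sans-serif">As a sketch of what I mean (this metric and all the names in it are hypothetical, not from any existing tool), one could at least count how many parameters carry no annotation and so force the reader to infer a type:</font>

```python
import ast

# Hypothetical proxy metric (my own invention): the fraction of method
# parameters with no type annotation, as a rough measure of how much
# type inference a reader must do to understand a class.
def untyped_parameter_ratio(source):
    total = unannotated = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            for arg in node.args.args:
                if arg.arg == "self":   # skip the conventional self parameter
                    continue
                total += 1
                if arg.annotation is None:
                    unannotated += 1
    return unannotated / total if total else 0.0

src = """
class Scaler:
    def scale(self, values, factor: float):
        return [v * factor for v in values]
"""
print(untyped_parameter_ratio(src))  # 0.5 -- one of two parameters is unannotated
```

<br><font size=2 face="sans-serif">A class scoring near 1.0 would demand the kind of whole-program reading I describe above, even if every method's McCabe number is small.</font>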
<br><font size=2 face="sans-serif">Regards,<br>
Mike</font>