Sunday, 22 February 2026

Python Class-Level Type Hints

Notice that in this post I'm talking about "standard" Python classes, not about dataclasses. I recently became aware of the possibility of using class-level type hints in your classes. The thing is that when reading the documentation I found it rather confusing. To make sense of it we have to be well aware of the difference between the intent that we express with those class-level hints and their runtime effects. So we have this example in the documentation:


from typing import ClassVar

class BasicStarship:
    captain: str = 'Picard'               # instance variable with default
    damage: int                           # instance variable without default
    stats: ClassVar[dict[str, int]] = {}  # class variable

The 'damage: int' part is the one that I already knew about "class-level type hints" and was clear to me. We declare an attribute and its type, but we don't initialize it. Python treats this as typing information only; it has no runtime effect (other than being added to the class __annotations__): no attribute is created in the class object.
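As an illustrative sketch (the Ship class and damage attribute are my own names, not from the docs), we can check the annotation-only case at runtime:

```python
class Ship:
    damage: int  # annotation only: no attribute is created at runtime

# the annotation is recorded...
print(Ship.__annotations__)      # {'damage': <class 'int'>}
# ...but there is no attribute, neither in the class __dict__ nor reachable by lookup
print('damage' in Ship.__dict__)  # False
try:
    Ship.damage
except AttributeError as e:
    print(e)  # type object 'Ship' has no attribute 'damage'
```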

The 'captain: str = 'Picard'' part is what I could not understand. To me it looks like the normal way of adding a class attribute, only that additionally you indicate the type, so how can the documentation say that it's an "instance variable with default"? Well, again, it's the type-checking meaning vs the runtime effect. I am right that an attribute gets created at the class level (in the class __dict__), just see:


>>> class User:
...     continent: str = "Europe"
...     active = True
...

>>> User.__dict__
mappingproxy({'__module__': '__main__', '__firstlineno__': 1, '__annotations__': {'continent': <class 'str'>}, 'continent': 'Europe', 'active': True, '__static_attributes__': (), '__dict__': <attribute '__dict__' of 'User' objects>, '__weakref__': <attribute '__weakref__' of 'User' objects>, '__doc__': None})

>>> User.continent
'Europe'

>>> User.active
True

But for the type checker, what that typed declaration means is that instances of that class will have a captain (or continent, in my example) attribute. This could feel contradictory, but given how attribute lookup works it's perfectly fine. Initially the 'captain' attribute is created at the class level. If we read it through an instance (my_ship.captain), the lookup mechanism won't find it in the instance, but in the class, and will return it. Then, when we write to it through an instance (not through the class), the write happens on the instance, so a 'captain' attribute is added to the instance. That's fine; indeed, it's very nice: while the attribute is only read, never written, it's shared between instances, kept in the class (saving memory); then, as soon as you write to it, it's shadowed by the instance.


s = BasicStarship()
print(s.captain)       # "Picard" via class lookup
s.captain = "Xuan"     # creates an instance attribute
print(s.__dict__)      # {'captain': 'Xuan'}
print(BasicStarship.__dict__['captain'])  # 'Picard'

We can summarize it like this:

Type hints alone do not create attributes; they only declare intent.
If you want the attribute to exist on the class (and thus be visible via Foo.x), you must assign a default value.

By the way, this is not the first time I see this behaviour of reading values from a "parent object" until we write the value to the object itself, shadowing it. This is just how things work in JavaScript with the [[Prototype]] chain.

I'm not much of a fan of defining instance attributes at the class level. It's true that it makes very explicit that an attribute is part of the public contract of the class, but I think most of the time it's a bit of boilerplate. Type checkers and autocomplete work perfectly fine with the classical style of initializing in the __init__ method, and if an attribute is internal/private and should not be considered part of the public API, we should just follow the convention of starting its name with '_'. So normally I would write the above code like this:



from typing import ClassVar

class AdvancedStarship:
    # stats = {} mypy will complain about this, because it is not a ClassVar
    stats: ClassVar[dict[str, int]] = {}  # class variable
    
    def __init__(self, damage: int, captain: str = 'Picard') -> None:
        self.captain = captain
        self.damage = damage



The case where these class-level type hints feel very useful to me is Protocols: they make it unnecessary to declare the "data part" of the protocol with properties (get/set descriptors), which is the approach I used to follow so far.



from typing import Protocol

class Foo(Protocol):
    x: int  # part of the interface

class Bar:
    def __init__(self):
        self.x = 42  # matches Foo


It's also useful if we have attributes that won't be set in __init__, but in some later method call. This way we make them part of the class contract and initialize them to a default value (probably None), shared by all instances via the class attribute (as we saw with BasicStarship.captain), which then gets shadowed in each instance when it's set to a specific value.

Sunday, 15 February 2026

Logical Assignment Operator and More

I've recently come across the Logical OR Assignment (||=), and the Nullish Coalescing Assignment (??=) operators in JavaScript. They are not a revolution, just a shortcut for the usage of the OR (||) operator and the nullish coalescing operator in assignment situations. We use "||=" for falsy values and "??=" for nullish (null, undefined) values. Let's see:


// for "falsy" values
> let name = "";
> name ||= "default";
'default'
> name ||= "default2";
'default'

// is equivalent to:
> name = ""
> name = name || "default";
'default'
> name = name || "default2";
'default'

// for strict null or undefined values:
> let name = null; // or name = undefined
> name ??= "default";
'default'
> name ??= "default2";
'default'

// is equivalent to:
> name = null;
> name = name ?? "default";
'default'
> name = name ?? "default2";
'default'


Python does not have a 'None coalescing' operator (so obviously it does not have a 'None coalescing assignment' operator either), so as an equivalent we have to use an if-else expression. We have the 'or' operator (which we can use with falsy values), but not an "or assignment" operator. So the equivalent of the above JavaScript is quite a bit more verbose:


# for "falsy" values
> name = ""
> name = name or "default"
'default'
> name = name or "default2"
'default'

# for strict None values:
> name = None
> name = name if name is not None else "default"
'default'
> name = name if name is not None else "default2"
'default'

As the if-else pattern is quite verbose, we can write a simple coalesce function (I've just remembered that such a function is almost standard in SQL) to make the code more straightforward.


def coalesce(value, default_value):
    return value if value is not None else default_value

a = coalesce(a, "default value")

As for other languages, Kotlin has the || operator and the ?: null coalescing (Elvis) operator, but no shortcut form to use during assignment. Ruby has a logical OR assignment operator (||=) that works with nil and false (the only falsy values in Ruby). It feels strange that Ruby does not have a null coalescing operator, so if we want to be strict and deal only with null (nil), we have to use Ruby's rich syntax differently:


# for null coalescing assignment
# like JavaScript: a = a ?? "default" 
# or Kotlin: a = a ?: "default"

a = "default" if a.nil?
# or
a = a.nil? ? "default" : a


Reached this point I think it'll be good to remember what are considered falsy values (those that, when evaluated in a boolean context, are considered as false) in different languages:

  • JavaScript: false, null, undefined, 0, NaN, ""
  • Python: False, None, 0, "", [], {}, set()
  • Ruby: nil, false
  • Kotlin: false. Kotlin does NOT perform truthy/falsy coercion, it's fully, strictly typed: trying to use a non-boolean value in a condition causes a compilation error.

As you can see the main (and very important) difference between JavaScript and Python is that in Python empty containers are falsy.
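A quick illustration of that difference on the Python side (the variable names are just examples):

```python
# every one of these is falsy in Python, including the empty containers
for value in (False, None, 0, "", [], {}, set()):
    assert not value

# the practical idiom this enables: testing a container for emptiness directly
items: list[int] = []
if not items:
    print("no items")  # this branch runs
```

In JavaScript an empty array or object is truthy, so `if (!items)` would not detect emptiness; you'd check `items.length === 0` instead.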

Saturday, 7 February 2026

Python Attribute Lookup and Dunders

I already talked in the past about Python descriptors [1] and [2] (referencing also the complex attribute lookup process). Somehow I've recently realised of how some commonly used attributes are managed with descriptors present in classes or metaclasses. First, I'll paste here the conclusions after an interesting chat with a GPT regarding the attibute lookup process:

1) Instance attribute lookup (obj.attr)

This is (conceptually) what object.__getattribute__(obj, name) does:

a) Check for a data descriptor on the class or its MRO
Search type(obj).__mro__ for name in each class’s __dict__.
If found and it’s a data descriptor (has __set__ or __delete__), return descriptor.__get__(obj, type(obj)).

b) Check the instance’s own dictionary
If obj.__dict__ exists and contains name, return obj.__dict__[name].
Note: If the class defines __slots__ without __dict__, this step may not exist.

c) Check for a non-data descriptor or other attribute on the class/MRO
Search type(obj).__mro__ for name.
If found and it’s a non-data descriptor (has __get__ only), return descriptor.__get__(obj, type(obj)).
Otherwise, return the found value as-is.

d) Fallback: __getattr__
If nothing above produced a value, and type(obj) defines __getattr__(self, name), call it and return its result.

e) Otherwise
Raise AttributeError.
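We can verify the ordering of steps (a), (b) and (c) with a small sketch (the descriptor classes here are my own illustrative names):

```python
class DataDesc:
    def __get__(self, obj, objtype=None):
        return "from data descriptor"
    def __set__(self, obj, value):
        pass  # the mere presence of __set__ makes this a data descriptor

class NonDataDesc:
    def __get__(self, obj, objtype=None):  # __get__ only: non-data descriptor
        return "from non-data descriptor"

class C:
    d = DataDesc()
    n = NonDataDesc()

c = C()
# write to __dict__ directly so DataDesc.__set__ doesn't intercept the assignment
c.__dict__['d'] = "instance"
c.__dict__['n'] = "instance"

print(c.d)  # 'from data descriptor': step (a) beats the instance __dict__
print(c.n)  # 'instance': step (b) beats a non-data descriptor
```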

2) class attribute lookup (C.attr)

Conceptually, type.__getattribute__(C, name) does this:

a) Metaclass MRO — data descriptors first
Search type(C).__mro__. If name is found and it’s a data descriptor (__set__ or __delete__ present), return descriptor.__get__(None, C).

b) Class MRO (C and its bases) — regular attributes & descriptors
Search C.__mro__ (starting with C, then bases):
If found and it’s a descriptor (__get__), return descriptor.__get__(None, C) (note obj=None).
Otherwise, return the raw value.

c) Metaclass MRO — non-data descriptors and other attributes
If found on the metaclass MRO and it's a descriptor, return descriptor.__get__(C, type(C)) (here, the "instance" is the class C). Otherwise return the value.

d) Fallback
If not found and the metaclass defines __getattr__(cls, name), call it.
Else raise AttributeError.

Let's see now some examples of attributes that are indeed descriptors:

__name__ of a class (Person.__name__). One could think that it's just an attribute stored directly in the class object, but if it were that way, I could access it via an instance of the class (person1.__name__), which is not the case. So indeed __name__ is a descriptor in the metaclass (and exactly the same goes for __bases__ or __doc__):


>>> class Person:
...     pass
...     
>>> Person().__name__
Traceback (most recent call last):
    Person().__name__
AttributeError: 'Person' object has no attribute '__name__'

>>> Person.__name__
'Person'

>>> Person.__dict__["__name__"]
Traceback (most recent call last):
    Person.__dict__["__name__"]
    ~~~~~~~~~~~~~~~^^^^^^^^^^^^
KeyError: '__name__'

>>> type(Person).__dict__["__name__"]
<attribute '__name__' of 'type' objects>
>>> type(type(Person).__dict__["__name__"])
<class 'getset_descriptor'>

>>> type(Person).__dict__["__bases__"]
<attribute '__bases__' of 'type' objects>
>>> type(type(Person).__dict__["__bases__"])
<class 'getset_descriptor'>

>>> type(type(Person).__dict__["__doc__"])
<class 'getset_descriptor'>


__class__ of an instance or __class__ of a class. This one does not seem to be based on descriptors, but (my discussion with a GPT was a bit confusing) it seems to be managed specially by the lookup algorithm.


>>> p1 = Person()
>>> p1.__class__
<class '__main__.Person'>

>>> type.__class__
<class 'type'>

>>> type(p1.__dict__["__class__"])
Traceback (most recent call last):
    type(p1.__dict__["__class__"])
         ~~~~~~~~~~~^^^^^^^^^^^^^
KeyError: '__class__'

>>> type(Person.__dict__["__class__"])
Traceback (most recent call last):
    type(Person.__dict__["__class__"])
         ~~~~~~~~~~~~~~~^^^^^^^^^^^^^
KeyError: '__class__'


Dunder attributes. It's interesting to note that there are 2 categories of __dunder__ attributes (those that start and end by "__").
- On one hand we have those like the ones we've just seen, the Special Attributes (Metadata), used to store metadata: __name__, __class__, __bases__, __mro__, __dict__, __module__, __doc__, __annotations__.
- And on the other hand we have Special Methods (Behavioral Hooks), that are used to implement Python's syntactic sugar:

__call__: obj(), Invocation
__getitem__: obj[key]
__setitem__: obj[key] = value
__getattr__: Fallback for missing attributes
__getattribute__: Intercepts all attribute access
__iter__, __next__: Iteration
__str__, __repr__: String representation
__eq__, __lt__, etc: Comparisons
__enter__, __exit__: Context managers
__add__, __mul__, etc: Arithmetic operations

Notice that if you access a Behavioral Hook "on your own" (I mean, you explicitly write obj.__call__() or obj.__iter__()), the normal lookup mechanism applies (checking the instance and its class). However, when it's used in the intended way (when you do obj(), or iter(obj)), the lookup is done only in the class of the object (and if the object is a class, in its metaclass), not in the object itself.
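A small sketch showing this difference (the Greeter class is a hypothetical example):

```python
class Greeter:
    def __call__(self):
        return "class __call__"

g = Greeter()
# this attribute is only reachable through explicit (normal) lookup
g.__call__ = lambda: "instance __call__"

print(g.__call__())  # 'instance __call__': normal lookup finds the instance attribute
print(g())           # 'class __call__': implicit lookup goes straight to the class
```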

Friday, 30 January 2026

Python Multinested Closure

After my previous post (which mentions other related posts with similar details) about Python closure introspection (and a bit of internals), I came across a detail that at first seemed strange to me, but that makes much sense (and made me dive further into the implementation).

Let's say we have these nested functions (with 3 levels of nesting). We return the most nested function (inner_2) that traps variables in the most outer function (becoming a closure):


def outer():
    print("outer")
    x = "a"
    y = "b"
    def inner_1():
        # it's using x
        nonlocal x
        x += "b"
        print(f"inner_1: {x}")
        def inner_2():
            # it's using both x and y
            nonlocal x
            x += "c"
            print(f"inner_2, x:{x} y:{y}")
        return inner_2
    return inner_1

in_1 = outer()
in_2 = in_1()


inner_2 is trapping 2 variables defined in outer: x and y. We can see it by checking its __closure__ and the co_freevars in its __code__ object, and the co_cellvars of the outer function code object:


print(f"in_2.__closure__: {in_2.__closure__}.") # 2 cells, for the x and y values
print(f"in_2.__code__.co_freevars: {in_2.__code__.co_freevars}.") # in_2.__code__.co_freevars: ('x', 'y')

# in_2.__closure__: (cell at 0x78f58889d570: str object at 0x78f58886f930, cell at 0x78f58889d540: str object at 0x5a0cc7144e08).
# in_2.__code__.co_freevars: ('x', 'y').

print(f"outer.__code__.co_cellvars: {outer.__code__.co_cellvars}") # ('x', 'y')

But checking these attributes for the intermediate inner function comes with some surprise:


print(f"in_1.__closure__: {in_1.__closure__}.") # 2 cells, for the x and y values
print(f"in_1.__code__.co_freevars: {in_1.__code__.co_freevars}.") # in_1.__code__.co_freevars: ('x', 'y').
print(f"in_1.__code__.co_cellvars: {in_1.__code__.co_cellvars}") # () 
print(f"outer.__code__.co_cellvars: {outer.__code__.co_cellvars}") # ('x', 'y')

#in_1.__closure__: (cell at 0x78f58889d570: str object at 0x78f58886f930, cell at 0x78f58889d540: str object at 0x5a0cc7144e08).
#in_1.__code__.co_freevars: ('x', 'y').
#in_1.__code__.co_cellvars: ()
#outer.__code__.co_cellvars: ('x', 'y')


inner_1 is trapping x in its closure, which is normal as it's using it, but it's also trapping y, which it's not using. Why? Well, indeed inner_1 is not using y in a direct, visible way, but it needs it: when the inner_2 function object is created, it needs both x and y for its closure. The cells for x and y are created on the heap when outer is executed. outer creates inner_1 and returns it, so when inner_1 is executed and creates inner_2, outer is long gone, so the references to the x and y cells have to be kept somewhere, to put them in inner_2.__closure__. That "somewhere" is inner_1's closure. So yes, even if inner_1 only works directly with x, it also gets y in its closure.

Discussing this with a GPT you get a nice explanation:

This is sometimes described as “transitive closure capture” or “cell promotion/relaying”: an intermediate function (inner_1) must carry closure cells that it doesn’t itself use, so that functions nested within it can close over them.

In other words: If a nested function needs a variable from an outer scope, every function layer in between must carry that variable as a closure cell, even if those intermediate layers don’t use it directly.

Only the immediate lexical parent can provide the closure cells to a newly created function.

The approach followed by Python for creating its closures is rather different from that of JavaScript, and explains the limitation that I mentioned in this post. In Python the compiler checks if a function closes over variables of its outer scopes, and if so, it sets the co_freevars and co_cellvars of the corresponding code objects and adds the necessary instructions so that at execution time cell objects get created and, when the function object is created, its __closure__ can be correctly set, with exactly the cells that it needs. If some "dynamic code" (code compiled dynamically with exec()) tries to access a variable of an outer scope that has not been trapped by the __closure__ of the function that invokes exec, it can't, as it's not there. In JavaScript this is quite different. eval() has access to any variable of the outer scopes, because indeed all functions in JavaScript have access to all their outer scopes through the scope chain. When a function is created, it gets its [[Scope]] property set to the scope (the activation object, I think it's called) of its parent function. So if we have a certain level of nesting when defining functions, we end up with a chain of scopes, and the variable lookup mechanism will search this chain if it does not find a variable in the current scope. This is very powerful, but at the same time it has serious performance implications. Outer scopes are kept alive regardless of whether the inner functions access them or not (because we allow eval to access them, and we don't know what eval will be evaluating). This also involves extra, longer lookups.
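We can see Python's compile-time behavior with a small sketch using eval() (which is subject to the same compile-time scoping as exec()); the function names here are mine:

```python
def outer():
    x = "captured"

    def with_ref():
        x  # this bare reference makes x a freevar, so a cell is created for it
        return eval("x")  # eval sees x because it's in this function's locals

    def without_ref():
        return eval("x")  # x was never captured, so eval can't see it

    return with_ref, without_ref

ok, broken = outer()
print(ok())                          # captured
print(ok.__code__.co_freevars)       # ('x',)
print(broken.__code__.co_freevars)   # ()
try:
    broken()
except NameError as e:
    print(e)  # name 'x' is not defined
```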

Nicely explained by a GPT:

JavaScript keeps the entire lexical scope chain alive, whereas Python collapses scopes into minimal “cell objects” and releases frames as soon as possible.

In JavaScript, every function carries a scope chain because dynamic features like eval() force engines to preserve the full lexical environment at runtime. Python does not need this because its lexical scope is fixed at compile time and not accessible to exec()/eval().

I was wondering how the most powerful and dynamic language that I can think of, ruby, manages this. I have no practical ruby knowledge, so I just asked a GPT, and as expected it follows a very similar approach to JavaScript, keeping sort of a chain of "scopes" that allows eval access to variables in any of them. From a GPT:

Ruby’s closures sit right between Python and JavaScript, but they lean much closer to JavaScript in philosophy:

  • They close over entire lexical scopes, not a minimal set of cell-like variables.
  • Ruby scopes are runtime objects (not a purely compile‑time fiction like Python’s).
  • Blocks, Procs, and lambdas capture the full environment, not a pruned subset.
  • Ruby supports eval within a Binding, which preserves the whole lexical + dynamic scope much like JavaScript’s eval.

Wednesday, 21 January 2026

Python Closure Introspection

I talked some time ago about a minor limitation (related to eval) of Python closures when compared to JavaScript ones. That's true, but the thing is that Python closures are particularly powerful in terms of introspection. In a previous post (and some older ones) I already talked about fn.__code__.co_cellvars, fn.__code__.co_freevars and fn.__closure__; as a reminder, taken from here:

  • co_varnames — is a tuple containing the names of the local variables (starting with the argument names).
  • co_cellvars — is a tuple containing the names of local variables that are referenced by nested functions.
  • co_freevars — is a tuple containing the names of free variables. (And co_code, by the way, is a string representing the sequence of bytecode instructions.)

And the __closure__ attribute of a function object is a tuple containing the cells for the variables that it has trapped (the free variables).


# closure example (closing over the wrapper and counter variables from the enclosing scope)
from typing import Callable

def create_formatter(wrapper: str) -> Callable[[str], str]:
    counter = 0
    def _format(st: str) -> str:
        nonlocal counter
        counter += 1
        return f"{wrapper}{st}{wrapper}"
    return _format

format = create_formatter("|")

print(format("a"))
# |a|

# the closure attribute is a tuple containing the trapped values
print(f"closure: {format.__closure__}")
print(f"freevars: {format.__code__.co_freevars}")
# closure: (cell at 0x731017299ea0: int object at 0x6351ad1bd1b0, cell at 0x731017299de0: str object at 0x6351ad1cd2e8)
# freevars: ('counter', 'wrapper')


A cell is a wrapper object pointing to a value, the trapped variable, it's an additional level of indirection that allows the closure to share the value with the enclosing function and with other closures that could also be trapping that value, so that if any of them changes the value, this is visible for all of them.



def create_formatters(format_st: str) -> tuple[Callable[[str], str], Callable[[str], str]]:
    """
    Creates two formatter closures that share the same 'format_st' free variable.
    One of them can disable the formatting by setting the format string to an empty string.
    """
    def _prepend(st: str) -> str | None:
        nonlocal format_st
        if st == "disable":
            format_st = ""  # example of modifying the closed-over variable
            return None
        return f"{format_st}{st}"

    def _append(st: str) -> str:
        return f"{st}{format_st}"

    return _prepend, _append


prepend, append = create_formatters("!")
print(prepend("Hello"))  
print(append("Hello"))    
# !Hello
# Hello!

prepend("disable")
print(prepend("World"))  # World (since format_st was modified to "")
print(append("World"))   # World


Here you can find a perfect explanation of co_freevars, co_cellvars and closure cells:

Closure cells refer to values needed by the function but are taken from the surrounding scope.

When Python compiles a nested function, it notes any variables that it references but are only defined in a parent function (not globals) in the code objects for both the nested function and the parent scope. These are the co_freevars and co_cellvars attributes on the __code__ objects of these functions, respectively.

Then, when you actually create the nested function (which happens when the parent function is executed), those references are then used to attach a closure to the nested function.

A function closure holds a tuple of cells, one for each free variable (named in co_freevars); cells are special references to local variables of a parent scope, that follow the values those local variables point to.

If we have a function factory that creates a closure, each time we invoke it we'll get a new function object with its __closure__ attribute pointing to its own object (a tuple), but with __code__ pointing to the same code object. So all those instances of the function have the same bytecodes and metainformation, but each instance has its own state (closure cells/freevars).
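A quick sketch of that sharing (make_counter is a hypothetical factory of mine):

```python
def make_counter(start: int):
    count = start
    def bump() -> int:
        nonlocal count
        count += 1
        return count
    return bump

c1 = make_counter(0)
c2 = make_counter(100)

print(c1.__code__ is c2.__code__)        # True: one code object, shared bytecode
print(c1.__closure__ is c2.__closure__)  # False: each call creates fresh cells
print(c1(), c1(), c2())                  # 1 2 101: independent state per instance
```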

The closure "superpowers" that Python features are:

1) As we saw above, we can easily check whether a function is a closure (has cells/freevars) just by checking if its __closure__ attribute is not None (or if its __code__.co_freevars tuple is not empty).

2) We can see "from outside" the values of the closure freevars (the names, the values, and combine both with a simple "show_cell_values" function). And furthermore, we can modify them, just by modifying the contents of the cells in fn.__closure__. It's what we could call "closure introspection".



# combining the names in co_freevars and the values in closure cells to nicely see the trapped values
from typing import Any

def show_cell_values(fn) -> dict[str, Any]:
    return {name: fn.__closure__[i].cell_contents
        for i, name in enumerate(fn.__code__.co_freevars)
    }

def cell_name_to_index_map(fn) -> dict[str, int]:
    return {name: i for i, name in enumerate(fn.__code__.co_freevars)}

def get_freevar(fn, name: str) -> Any:
    name_to_index = cell_name_to_index_map(fn)
    return fn.__closure__[name_to_index[name]].cell_contents

def set_freevar(fn, name: str, value: Any) -> None:
    name_to_index = cell_name_to_index_map(fn)
    fn.__closure__[name_to_index[name]].cell_contents = value
    
    
def create_formatter(wrapper: str) -> Callable[[str], str]:
    counter = 0
    def _format(st: str) -> str:
        nonlocal counter
        counter += 1
        return f"{wrapper}{st}{wrapper}"
    return _format

format = create_formatter("|")

print(f"format cells: {show_cell_values(format)}")
print(f"format 'wrapper' freevar before: {get_freevar(format, 'wrapper')}")
print(format("a"))
# format cells: {'counter': 0, 'wrapper': '|'}
# format 'wrapper' freevar before: |
# |a|

set_freevar(format, 'wrapper', '-')

print(f"format 'wrapper' freevar after: {get_freevar(format, 'wrapper')}")
print(format("a"))
# format 'wrapper' freevar after: -
# -a-

Thursday, 15 January 2026

Methods as Closures

Instances of classes and closures feel like two competing approaches for certain problems. Instances of classes have state and behavior, but that behaviour is normally split across multiple execution units (methods). A closure is a single execution unit (a function) that keeps state through the variables it traps (freevars). When a class has a single method, you can model it as a closure (well, a closure factory, so that each closure instance has its own state). Additionally, languages like Python have callable classes, where you have a main/default execution unit (__call__), so they feel closer to a closure :-)
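As a sketch of that equivalence (Accumulator and make_accumulator are hypothetical names of mine), the same state-plus-single-behavior can be modeled both ways:

```python
class Accumulator:
    """state + a single behavior, modeled as a callable class"""
    def __init__(self, total: int = 0):
        self.total = total
    def __call__(self, n: int) -> int:
        self.total += n
        return self.total

def make_accumulator(total: int = 0):
    """the same thing modeled as a closure factory"""
    def add(n: int) -> int:
        nonlocal total  # 'total' is the closure's state
        total += n
        return total
    return add

acc1 = Accumulator()
acc2 = make_accumulator()
print(acc1(3), acc1(4))  # 3 7
print(acc2(3), acc2(4))  # 3 7
```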

Somehow the other day I realised that in languages like Python or JavaScript, methods of a class can be closures. How? Well, in Python classes are objects (in JavaScript a class is just syntactic sugar for managing functions and prototypes), and we can define classes inside functions, so each time the function runs a new class is created (and returned, so the function becomes a class factory). What happens if a method in one of these internal classes tries to access a variable defined at the outer function level? Well, it will trap it in its closure. Let's see an example where the format method has access to the pre_prefix variable from the enclosing function:


def formatter_class_factory(pre_prefix):
    class Formatter:
        def __init__(self, prefix):
            self.prefix = prefix

        def format(self, tx):
            # this method is accessing the pre_prefix variable from the enclosing scope
            return f"{pre_prefix} {self.prefix}: {tx}"

    return Formatter


MyFormatter = formatter_class_factory("Log")
formatter = MyFormatter("INFO")
print(formatter.format("This is a test message."))  

print(f"closure: {MyFormatter.format.__closure__}")  
print(f"freevars: {MyFormatter.format.__code__.co_freevars}") 
print(f"closure[0].cell_contents: {MyFormatter.format.__closure__[0].cell_contents}") 

# Log INFO: This is a test message.
# closure: (<cell at 0x...: str object at 0x...>,)
# freevars: ('pre_prefix',)
# closure[0].cell_contents: Log

We can write the equivalent JavaScript code and see that it works the same. So JavaScript methods can also access variables present in the scope where the class is defined. What this means is that, same as regular functions, methods also have a scope chain (where the freevars will be looked up).


function formatterClassFactory(prePrefix) {
    class Formatter {
      constructor(prefix) {
        this.prefix = prefix;
      }
  
      format(tx) {
        // this method is accessing the prePrefix variable from the enclosing scope
        return `${prePrefix} ${this.prefix}: ${tx}`;
      }
    }

    return Formatter;
  }
  
  const MyFormatter = formatterClassFactory("Log");
  const formatter = new MyFormatter("INFO");
  console.log(formatter.format("This is a test message.")); 
  // Log INFO: This is a test message.
  
  // Notice that JavaScript does not provide direct closure introspection (no equivalent to __closure__), so we can not translate that part from the Python snippet

By the way, it's interesting how for people coming from class based languages the idea that "closures are poor man's class instances" makes sense, while for people coming from functional languages "class instances are poor man's closures". This is discussed here.

Tuesday, 6 January 2026

Conditional Decorator

After my previous post about decorating decorators I was thinking about some more potential use of this technique, and the idea of applying a decorator conditionally came up. Python supports applying a decorator conditionally using an if-else expression like this:


from functools import wraps

def log_call(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        print(f"In function: {func.__name__}")
        return func(*args, **kwargs)
    return wrapper 

@(log_call if debugging else lambda x: x)
def do_something(a, b):
    return a + b
    

That's pretty nice, but at the same time quite limited. We apply or not apply a decorator based on a condition at the time the function being decorated is defined. But what if we want to decide whether the decorator logic applies based on a dynamic value, each time the decorated function is invoked? We can have a (meta)decorator: conditional, that we apply to another decorator when this decorator is applied, not defined. conditional creates a new decorator that traps in its closure the original decorator and a boolean function (condition_fn) that decides whether the decorator has to be applied. This new decorator receives a function and returns a new function that in each invocation checks (based on condition_fn) if the original decorator has to be applied. Less talk, more code:


from functools import wraps
from typing import Callable

def conditional(decorator, condition_fn: Callable):
    """
    metadecorator: creates a new decorator that applies the original decorator only if `condition_fn` returns True.
    """
    def conditional_deco(fn: Callable):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            if condition_fn():
                return decorator(fn)(*args, **kwargs)
            else:
                return fn(*args, **kwargs)
        return wrapper
    return conditional_deco

@(conditional(log_call, lambda: debugging))
def do_something2(a, b):
    return a + b    

print(f"- debugging {debugging}")
print(do_something2(7, 3)) 
debugging = False
print(f"- debugging {debugging}")
print(do_something2(7, 3))
print("------------------------")

# - debugging True
# In function: do_something2
# 10
# - debugging False
# 10