Wednesday, 21 January 2026

Python Closure Introspection

I talked some time ago about a minor limitation (related to eval) of Python closures when compared to JavaScript ones. That's true, but the thing is that Python closures are particularly powerful in terms of introspection. In this previous post (and some older ones) I already talked about fn.__code__.co_cellvars, fn.__code__.co_freevars and fn.__closure__. As a reminder, taken from here:

  • co_varnames — a tuple containing the names of the local variables (starting with the argument names).
  • co_cellvars — a tuple containing the names of local variables that are referenced by nested functions.
  • co_freevars — a tuple containing the names of free variables.
  • co_code — a string representing the sequence of bytecode instructions.

And the __closure__ attribute of a function object is a tuple containing the cells for the variables that it has trapped (the free variables).


from typing import Any, Callable

# closure example (closing over the wrapper and counter variables from the enclosing scope)
def create_formatter(wrapper: str) -> Callable[[str], str]:
    counter = 0
    def _format(st: str) -> str:
        nonlocal counter
        counter += 1
        return f"{wrapper}{st}{wrapper}"
    return _format

format = create_formatter("|")

print(format("a"))
# |a|

# the __closure__ attribute is a tuple containing the cells for the trapped variables
print(f"closure: {format.__closure__}")
print(f"freevars: {format.__code__.co_freevars}")
# closure: (<cell at 0x731017299ea0: int object at 0x6351ad1bd1b0>, <cell at 0x731017299de0: str object at 0x6351ad1cd2e8>)
# freevars: ('counter', 'wrapper')


A cell is a wrapper object pointing to a value (the trapped variable). It's an additional level of indirection that allows the closure to share the value with the enclosing function and with other closures that could also be trapping that value, so that if any of them changes the value, the change is visible to all of them.



def create_formatters(format_st: str) -> tuple[Callable[[str], str], Callable[[str], str]]:
    """
    creates two formatter closures that share the same 'format_st' free variable.
    one of them can disable the formatting by setting the format string to an empty string.
    """
    def _prepend(st: str) -> str | None:
        nonlocal format_st
        if st == "disable":
            format_st = ""  # example of modifying the closed-over variable
            return None
        return f"{format_st}{st}"

    def _append(st: str) -> str:
        return f"{st}{format_st}"

    return _prepend, _append


prepend, append = create_formatters("!")
print(prepend("Hello"))  
print(append("Hello"))    
# !Hello
# Hello!

prepend("disable")
print(prepend("World"))  # Output: World (since format_st was modified to "")
print(append("World"))   # Output: World
# !Hello
# Hello!


Here you can find a perfect explanation of co_freevars, co_cellvars and closure cells:

Closure cells refer to values needed by the function but are taken from the surrounding scope.

When Python compiles a nested function, it notes any variables that it references but are only defined in a parent function (not globals) in the code objects for both the nested function and the parent scope. These are the co_freevars and co_cellvars attributes on the __code__ objects of these functions, respectively.

Then, when you actually create the nested function (which happens when the parent function is executed), those references are then used to attach a closure to the nested function.

A function closure holds a tuple of cells, one for each free variable (named in co_freevars); cells are special references to local variables of a parent scope, that follow the values those local variables point to.

If we have a function factory that creates a closure, each time we invoke it we'll get a new function object with its __closure__ attribute pointing to its own tuple of cells, but with __code__ pointing to the same code object. So all those instances of the function share the same bytecode and metainformation, but each instance has its own state (closure cells/freevars).
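
We can verify this easily (make_adder here is just an illustrative factory, not something from the post above):

def make_adder(n):
    def _add(x):
        return x + n
    return _add

add1 = make_adder(1)
add2 = make_adder(2)
print(add1.__code__ is add2.__code__)        # True: same bytecode and metainformation
print(add1.__closure__ is add2.__closure__)  # False: each instance has its own cells
print(add1.__closure__[0].cell_contents, add2.__closure__[0].cell_contents)
# 1 2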

The closure "superpowers" that Python features are:

1) As we saw above, we can easily check if a function is a closure (has cells/freevars) just by checking if its __closure__ attribute is not None (or if its __code__.co_freevars tuple is not empty); see the sketch after this list.

2) We can see "from outside" the values of the closure freevars (the names, the values, and we can combine both with a simple show_cell_values function). And furthermore, we can modify them, just by modifying the contents of the cells in fn.__closure__. It's what we could call "closure introspection".
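
For the first point, a minimal check could look like this (is_closure is a name I'm introducing, not a standard function):

def is_closure(fn) -> bool:
    return fn.__closure__ is not None  # equivalently: bool(fn.__code__.co_freevars)

print(is_closure(create_formatter("|")))  # True: _format traps 'counter' and 'wrapper'
print(is_closure(create_formatter))       # False: the factory itself traps nothing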



# combining the names in co_freevars and the values in the closure cells to nicely see the trapped values
def show_cell_values(fn) -> dict[str, Any]:
    return {
        name: fn.__closure__[i].cell_contents
        for i, name in enumerate(fn.__code__.co_freevars)
    }

def cell_name_to_index_map(fn) -> dict[str, int]:
    return {name: i for i, name in enumerate(fn.__code__.co_freevars)}

def get_freevar(fn, name: str) -> Any:
    name_to_index = cell_name_to_index_map(fn)
    return fn.__closure__[name_to_index[name]].cell_contents

def set_freevar(fn, name: str, value: Any) -> None:
    name_to_index = cell_name_to_index_map(fn)
    fn.__closure__[name_to_index[name]].cell_contents = value
    
    
def create_formatter(wrapper: str) -> Callable[[str], str]:
    counter = 0
    def _format(st: str) -> str:
        nonlocal counter 
        counter += 1
        return f"{wrapper}st{wrapper}"
    return _format

format = create_formatter("|")

print(f"format cells: {show_cell_values(format)}")
print(f"format 'wrapper' freevar before: {get_freevar(format, 'wrapper')}")
print(format("a"))
# format cells: {'counter': 0, 'wrapper': '|'}
# format 'wrapper' freevar before: |
# |a|

set_freevar(format, 'wrapper', '-')

print(f"format 'wrapper' freevar after: {get_freevar(format, 'wrapper')}")
print(format("a"))
# format 'wrapper' freevar after: -
# -a-

Thursday, 15 January 2026

Methods as Closures

Instances of classes and closures feel like 2 competing approaches for certain problems. Instances of classes have state and behaviour, but that behaviour is normally split across multiple execution units (methods). A closure is a single execution unit (a function) that keeps state through the variables it traps (freevars). When a class has a single method, you can model it as a closure (well, a closure factory, so that each closure instance has its own state). Additionally, languages like Python have callable classes, where you have a main/default execution unit (__call__), so they feel even closer to a closure :-)
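
To make that resemblance concrete, here's a minimal sketch (Counter and make_counter are just illustrative names) comparing a callable class with a closure factory; both keep a count between calls:

class Counter:
    def __init__(self):
        self.count = 0

    def __call__(self):
        self.count += 1
        return self.count

def make_counter():
    count = 0
    def _count():
        nonlocal count
        count += 1
        return count
    return _count

counter1 = Counter()
counter2 = make_counter()
print(counter1(), counter1())  # 1 2: state lives in an instance attribute
print(counter2(), counter2())  # 1 2: state lives in a closure cell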

Somehow the other day I realised that in languages like Python or JavaScript, methods of a class can be closures. How? Well, in Python classes are objects (in JavaScript a class is just syntax sugar for managing functions and prototypes), and we can define classes inside functions, so each time the function runs a new class is created (and returned, so the function becomes a class factory). What happens if a method in one of these internal classes tries to access a variable defined at the outer function level? Well, it will trap it in its closure. Let's see an example where the format method has access to the pre_prefix variable from the enclosing function:


def formatter_class_factory(pre_prefix):
    class Formatter:
        def __init__(self, prefix):
            self.prefix = prefix

        def format(self, tx):
            # this method is accessing the pre_prefix variable from the enclosing scope
            return f"{pre_prefix} {self.prefix}: {tx}"

    return Formatter


MyFormatter = formatter_class_factory("Log")
formatter = MyFormatter("INFO")
print(formatter.format("This is a test message."))  

print(f"closure: {MyFormatter.format.__closure__}")  
print(f"freevars: {MyFormatter.format.__code__.co_freevars}") 
print(f"closure[0].cell_contents: {MyFormatter.format.__closure__[0].cell_contents}") 

# Log INFO: This is a test message.
# closure: (<cell at 0x...: str object at 0x...>,)
# freevars: ('pre_prefix',)
# closure[0].cell_contents: Log

We can write the equivalent JavaScript code and see that it works the same. So JavaScript methods can also access variables present in the scope where the class is defined. What this means is that, the same as regular functions, methods also have a scope chain (where the freevars will be looked up).


function formatterClassFactory(prePrefix) {
    class Formatter {
        constructor(prefix) {
            this.prefix = prefix;
        }

        format(tx) {
            // this method is accessing the prePrefix variable from the enclosing scope
            return `${prePrefix} ${this.prefix}: ${tx}`;
        }
    }

    return Formatter;
}

const MyFormatter = formatterClassFactory("Log");
const formatter = new MyFormatter("INFO");
console.log(formatter.format("This is a test message."));
// Log INFO: This is a test message.

// Notice that JavaScript does not provide direct closure introspection (no equivalent to __closure__), so we can not translate that part of the Python snippet

By the way, it's interesting how for people coming from class based languages the idea that "closures are poor man's class instances" makes sense, while for people coming from functional languages "class instances are poor man's closures". This is discussed here.

Tuesday, 6 January 2026

Conditional Decorator

After my previous post about decorating decorators I was thinking about some more potential uses of this technique, and the idea of applying a decorator conditionally came up. Python supports applying a decorator conditionally using an if-else expression like this:


from functools import wraps
from typing import Callable

debugging = True

def log_call(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        print(f"In function: {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@(log_call if debugging else lambda x: x)
def do_something(a, b):
    return a + b
    

That's pretty nice, but at the same time quite limited: we apply the decorator (or not) based on a condition evaluated at the time the decorated function is defined. But what if we want to decide whether the decorator logic applies based on a dynamic value, each time the decorated function is invoked? We can have a (meta)decorator, conditional, that we apply to another decorator when that decorator is applied, not when it's defined. conditional creates a new decorator that traps in its closure the original decorator and a boolean function (condition_fn) that decides whether the decorator has to be applied. This new decorator receives a function and returns a new function that on each invocation checks (based on condition_fn) whether the original decorator has to be applied. Less talk, more code:


def conditional(decorator, condition_fn: Callable):
    """
    metadecorator: creates a new decorator that applies the original decorator only if `condition_fn` returns True.
    """
    def conditional_deco(fn: Callable):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            if condition_fn():
                return decorator(fn)(*args, **kwargs)
            else:
                return fn(*args, **kwargs)
        return wrapper
    return conditional_deco

@(conditional(log_call, lambda: debugging))
def do_something2(a, b):
    return a + b    

print(f"- debugging {debugging}")
print(do_something2(7, 3)) 
debugging = False
print(f"- debugging {debugging}")
print(do_something2(7, 3))
print("------------------------")

# - debugging True
# In function: do_something2
# 10
# - debugging False
# 10

Saturday, 3 January 2026

Python MetaDecorator

We know that when using decorators in Python you should always use functools.wraps/update_wrapper on the function returned by the decorator. Apart from setting the __name__, __doc__, __module__... attributes of the new/wrapper function to those of the original one, it also adds a __wrapped__ attribute that points to the original function. What it does not do is add information to the function about the decorator that has been applied. So while we have a way to refer to the original function via __wrapped__, we can not check which decorator (if any) has been applied to the function.

Obviously our decorator could just add a __decorator__ attribute to the wrapper function that it returns, but well, we would have to repeat that logic in each of our decorators, and we can not do anything with already existing decorators. So the nice way to do this would be having a function (let's call it empower()) that we can apply to an existing decorator, obtaining a new decorator that applies the original decorator and then sets the __decorator__ attribute on the decorated function. This empower function is a decorator factory (it receives a decorator and creates a new decorator) and indeed can be applied (at least in some cases, as we'll see later) as a decorator itself when defining the initial decorator, so it could be seen as a sort of meta-decorator (a decorator that decorates and creates decorators).

A function can be decorated by multiple decorators, so shouldn't we better have a __decorators__ attribute with that list of decorators? Well, the __wrapped__ attribute points to the function being decorated in this step (functools.wraps does not check whether the function being decorated has, in turn, its own __wrapped__ attribute). So if we have a chain of decorators we'll have to traverse a chain of __wrapped__ attributes to get to the source function. I mean:



"""
Veryfying that if multiple decorators are applied, functools.__wrapped__ points to the previous decorated function in the chain, not directly to the original function. 
"""
from functools import wraps
def start_call(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        print(f"Starting function: {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

def end_call(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        print(f"Ending function: {func.__name__}")
        return result
    return wrapper 


@start_call
@end_call
def do_something(a, b):
    return a + b

# Example usage
if __name__ == "__main__":
    # do_something has 2 levels of decoration
    print(f"Result: {do_something(5, 10)}")

    print("---------------- Unwrapping decorators ----------------")
    unwrapped1 = do_something.__wrapped__
    print(f"Result: {unwrapped1(5, 10)}")
    print("----------------")
    unwrapped2 = do_something.__wrapped__.__wrapped__
    print(f"Result: {unwrapped2(5, 10)}")

    # Starting function: do_something
    # Ending function: do_something
    # Result: 15
    # ---------------- Unwrapping decorators ----------------
    # Ending function: do_something
    # Result: 15
    # ----------------
    # Result: 15

So that's also the approach I've followed here. I add a __decorator__ attribute to each decorated function, and have a get_decorators helper function that traverses that __decorator__ chain to get all the decorators. My empower decorator provides an additional piece of functionality: if the decorator being decorated does not apply wraps() to the original function, empower does it. Let's see an implementation of this empower decorator and the associated get_decorators() function.


def empower(decorator):
    def empowered_decorator(func):
        decorated_fn = decorator(func)
        decorated_fn.__decorator__ = decorator
        # if the original decorator has not used wraps, we add it here
        if not hasattr(decorated_fn, '__wrapped__') or decorated_fn.__wrapped__ != func:
            wraps(func)(decorated_fn)
        return decorated_fn
    return empowered_decorator

def get_decorators(func):
    decorators = []
    while cur_decor := getattr(func, '__decorator__', None):
        decorators.append(cur_decor)
        func = func.__wrapped__
    return decorators

Given the previously defined end_call decorator and a parameterized version of start_call that takes a prepend string (its full definition appears further below), we can empower them at the time they are applied to a function, like this:


@(empower(start_call(">>>")))
@(empower(end_call))
def do_something2(a, b):
    return a + b

print(do_something2(7, 3))
print(f"decorators applied to do_something2: {[dec.__name__ for dec in get_decorators(do_something2)]}")
# >>> Starting function: do_something2
# Ending function: do_something2
# 10
# decorators applied to do_something2: ['intermediate', 'end_call']

Being used at the time a decorator is applied, rather than at the time a decorator is defined, the empower decorator works naturally both for decorators that expect parameters and for decorators that do not (other than the function being decorated). A decorator that expects parameters does indeed create a new decorator that traps the provided parameters in its closure, to be then invoked with the function to be decorated, so in both cases empower ends up receiving a decorator that just expects a function; see the sketch below.
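
We can quickly check that claim (repeat here is a hypothetical parameterized decorator, not one from this post):

from functools import wraps

def repeat(times: int):
    def _deco(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for _ in range(times):
                fn(*args, **kwargs)
        return wrapper
    return _deco

deco = repeat(3)                  # repeat(3) is itself a plain decorator expecting a function...
print(deco.__code__.co_freevars)  # ('times',): ...with the parameter trapped in its closure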

We can also apply it when a decorator is being defined, but only for decorators that do not expect parameters (other than the function being decorated itself). For applying it at definition time to decorators that expect parameters, we need a different implementation, that I've called empower_dec_with_params. So all in all we have:


# intended to be used when defining a decorator with parameters
def empower_dec_with_params(decorator):
    def outer_decorator(*args, **kwargs):
        def inner_decorator(func):
            decorated_fn = decorator(*args, **kwargs)(func)
            decorated_fn.__decorator__ = decorator
            # if the original decorator has not used wraps, we add it here
            if not hasattr(decorated_fn, '__wrapped__') or decorated_fn.__wrapped__ != func:
                wraps(func)(decorated_fn)
            return decorated_fn
        return inner_decorator
    return outer_decorator
    
@empower_dec_with_params
def start_call(prepend: str = ""):
    def intermediate(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            print(f"{prepend} Starting function: {fn.__name__}")
            return fn(*args, **kwargs)
        return wrapper
    return intermediate

@empower
def end_call(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        print(f"Ending function: {func.__name__}")
        return result
    return wrapper 

@start_call(">>>")
@end_call
def do_something(a, b):
    return a + b

print(do_something(5, 10))
print(f"decorators applied to do_something: {[dec.__name__ for dec in get_decorators(do_something)]}")

# >>> Starting function: do_something
# Ending function: do_something
# 15
# decorators applied to do_something: ['start_call', 'end_call']

Tuesday, 23 December 2025

Awaiting for a Resolved Promise/Future

Almost 4 years ago I wrote this post about some differences between the async machinery in JavaScript and Python. There are many more things I could add regarding the workings of their event loops and more, but today I'll talk about one case I've been looking into lately: awaiting an already resolved Promise/Future, or a function marked as async that does not suspend.

In JavaScript, awaiting an already resolved Promise or a function marked as async that does not suspend will give control to the event loop (so the function that performs the await gets suspended), while in Python the function doing that await will not get suspended; it will run the next instruction without transferring control to the event loop. Let's see an example in JavaScript:


async function fn(){
	console.log("fn started");
	const result = await Promise.resolve("Bonjour");
	console.log("fn, after await, result: " + result);
}

fn();
console.log("right before quitting");
	// output:
	// fn started
	// right before quitting
	// fn, after await, result: Bonjour

In the above code, when we await an already resolved Promise, the then() callback that the compiler added to that Promise (to call back into the state machine corresponding to that async function) is added to the microtask queue, rather than being executed immediately. So the fn function gets suspended and the execution flow continues in the global scope from which fn had been called, writing the "right before quitting" message. Then the event loop takes control, checks that it has a task in its microtask queue and executes it, resuming the fn function and writing the "fn, after..." message.

If we await an async function that does not perform a suspension (getPost(0)), the result is the same (well, if the async function does not suspend it indeed also returns an already resolved promise). Let's see:


function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

async function getPost(id){
	let result
	console.log("getPost started");
	if (id === 0){
		result = "Post 0";	
	}
	else {
		await sleep(1000);
		result = "Post " + id;
	}
	console.log("getPost finished");
	return result;
}

async function main2(){
	console.log("main2 started");
	let result = await getPost(0);
	console.log("main2, after await, result: " + result);
}

main2();
console.log("right before quitting");

	// output:
	// main2 started
	// getPost started
	// getPost finished
	// right before quitting
	// main2, after await, result: Post 0


If we now run an equivalent example in Python, we can see that the behaviour is different: the function continues its execution without suspending.


import asyncio
import sys

async def do_something():
    print("inside do_something")

async def main1():
    print("main1 started")
    asyncio.create_task(do_something())
    ft = asyncio.Future()
    ft.set_result("Bonjour")  # immediately resolved future
    result = await ft
    print("main1, after await, result: " + result)


asyncio.run(main1())
sys.exit()
    # output:
    # main1 started
    # main1, after await, result: Bonjour
    # inside do_something

In main1, create_task creates a task and adds it to a queue in the event loop so that it gets scheduled when the loop has a chance (the next time it gets control). main1's execution continues, creating a future and resolving it; when we await it, as it's already resolved/completed, no suspension happens and the execution continues, writing the "main1, after await" message. Then the main1 function finishes, and the event loop takes control and runs the remaining tasks it had in its queue, our do_something task in this case.

If we now run an example where we await a coroutine (get_post) that does not suspend, the result is the same:


async def get_post(post_id):
    print("get_post started")
    if post_id == 0:
        result = "Post 0"
    else:
        await asyncio.sleep(0.5)  # simulate an async operation
        result = f"Post {post_id}"
    print("get_post finished")
    return result

async def do_something():
    print("inside do_something")

async def main2():
    print("main2 started")
    asyncio.create_task(do_something())
    # the do_something task is scheduled to run when the event loop has a chance
    result = await get_post(0)
    print("main2, after await, result: " + result)

asyncio.run(main2())
    # output:
    # main2 started
    # get_post started
    # get_post finished
    # main2, after await, result: Post 0
    # inside do_something

We invoke the get_post(0) coroutine, which does not do any asynchronous operation that suspends it, but directly returns a value. The await drives the coroutine to completion without ever encountering an unresolved Future, so it just continues with the print("main2, after ...") rather than suspending, and hence there's no control transfer to the event loop until main2 is finished.

A bit related to all this, some days ago I was writing some code where I have multiple async operations running and I wait for any of them to complete with asyncio.wait(), like this:


while pending_actions:
    done_actions, pending_actions = await asyncio.wait(
        pending_actions,
        return_when=asyncio.FIRST_COMPLETED
    )
    for done_task in done_actions:
        # result = await done_task  # would also work
        result = done_task.result()
        # do something with result

In done_actions we have Tasks/Futures that are already complete. That means that to get their results we can use either task.result() or await task. Given that awaiting a resolved/completed Task/Future does not cause any suspension, both options are valid and similar, but with some differences. As I read somewhere:

Internally, coroutines are a special kind of generators, every await is suspended by a yield somewhere down the chain of await calls (please refer to PEP 3156 for a detailed explanation).

This means that even if that await done_task will not transfer control to the event loop causing a suspension (because there's no unresolved Task/Future to wait for), the chain of coroutine calls moves back up to the Task that ultimately controls this coroutine chain, and from there (given that the Future is already resolved, and hence there's no need to get suspended waiting for its resolution) the Task moves forward again down the coroutine chain. So await means some overhead because of this going back and forth. Additionally, invoking result() makes it evident that we are accessing a completed item, while using await makes it feel more as if the item were not complete (and hence we await it).
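
To ground both points, here's a minimal sketch (answer and main are my own names): driving a coroutine by hand like a generator, and reading a completed Task with both result() and await:

import asyncio

async def answer():
    return 42

# a coroutine can be driven by hand, like a generator; the value travels in StopIteration
coro = answer()
try:
    coro.send(None)
except StopIteration as exc:
    print(exc.value)  # 42

async def main():
    task = asyncio.create_task(asyncio.sleep(0, result="done"))
    done, _pending = await asyncio.wait({task}, return_when=asyncio.FIRST_COMPLETED)
    for t in done:
        print(t.result())  # direct access: makes it evident the task is complete
        print(await t)     # also valid: awaiting a finished Task returns immediately

asyncio.run(main())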

Thursday, 18 December 2025

Default Parameters and Method Overriding

In my previous post I discussed how the values of default parameters should be considered an implementation detail, and not part of the contract of the function/method (while the fact of having default parameters does become part of the contract). This has interesting implications regarding inheritance and method overriding.

  • If a method in a base class (or interface/protocol) has a default parameter "timeout = 10", it should be fine for a derived class to override this method with a different default value "timeout = 20".
  • If a method in a base class (or interface/protocol) has no default parameters, it should be OK for a derived class to override that method adding some defaults. If the method of a derived instance is invoked through a reference typed as base, we will be forced to provide all the parameters; it's when it's invoked through a reference typed as derived that the defaults can be omitted.
  • Of course, what is not OK is that a method that in the base class has defaults gets overridden in a derived class by a method that makes them compulsory.

In dynamic languages like Python (or Ruby or JavaScript) the runtime itself gives us total freedom to do (more or less) whatever we want with this kind of thing, but maybe typecheckers have decided to add some restrictions. I've checked with mypy and this is perfectly fine:


class Formatter:
    def format(self, text: str, wrapper: str = "|") -> str:
        return f"{wrapper}{text}{wrapper}"

# NO error in mypy    
class FormatterV2(Formatter):
    def format(self, text: str, wrapper: str = "*") -> str:
        return f"{wrapper}{text}{wrapper}"

# mypy:
# error: Signature of "format" incompatible with supertype "Formatter"  [override]
class FormatterV3(Formatter):
    def format(self, text: str, wrapper: str) -> str:
        return f"{wrapper}{text}{wrapper}"
        

class GeoService:
    def get_location(self, ip: str, country: str) -> str:
        return f"Location for IP {ip} in country {country}"
    
# NO error in mypy 
class GeoService2(GeoService):
    def get_location(self, ip: str, country: str = "FR") -> str:
        return f"Location for IP {ip} in country {country}"

Notice also that a mechanism that adds runtime restrictions, abstract classes/methods (abc.ABC, abc.abstractmethod), does not care about default parameter values (not even if we override a method making a default compulsory). Well, indeed abc's do not care about parameters at all (not even the number of parameters or their names); they only care about method names, not about method signatures.
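
A quick way to see it (Service and ServiceImpl are just illustrative names):

from abc import ABC, abstractmethod

class Service(ABC):
    @abstractmethod
    def fetch(self, url: str, timeout: int = 10) -> str: ...

class ServiceImpl(Service):
    def fetch(self) -> str:  # a completely different signature
        return "data"

service = ServiceImpl()  # no runtime complaint: abc only checks that the name is overridden
print(service.fetch())
# data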

Though from a design perspective what I've said above should be true for both dynamic and static languages, it seems that static languages like Kotlin or C# have decided to be quite restrictive regarding default parameters.

In Kotlin a derived class can not change the value of a default parameter in a method that it's overriding. Indeed, when overriding a method with default parameter values, the default parameter values must be omitted from the signature. The reason for this is that Kotlin manages defaults at compile time: the compiler detects at the callsite that a function is being invoked with a missing parameter, and adds to the call the value that was defined as default in the signature. Being done at compile time, if we have a variable typed as Base that is indeed pointing to a Child, the compiler will choose the default defined in Base, not in Child (we can say that polymorphism does not work for defaults), so to avoid confusion we can not redefine the default in the Child.

The other feature that I mentioned above, having a method in the Child that sets a default for a parameter that had no default in the Base, should work OK, but Kotlin designers decided to forbid it, the main reason being that it would make method overloading confusing. As discussed with a GPT:

Defaults are syntactic sugar for overloads in many static languages.
Allowing derived classes to add defaults would blur the line between overriding and overloading, making method resolution harder to reason about.

I was wondering how Python manages default parameters at runtime. I know that when a function object is created, default values are stored in the function object (which is a gotcha that I explained some time ago). Diving a bit more:

When you define a function, any default expressions are evaluated immediately and stored on the function object:

  • Positional/keyword defaults: func.__defaults__ → a tuple
  • Keyword-only defaults: func.__kwdefaults__ → a dict
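
A quick check of that (fetch is just an illustrative function):

def fetch(url, timeout=10, *, retries=3):
    ...

print(fetch.__defaults__)    # (10,)
print(fetch.__kwdefaults__)  # {'retries': 3}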

And regarding how defaults are applied, if necessary, each time a function is invoked: we could think that maybe the compiler adds some checks at the start of the function, but no, it's not like that. These checks are performed by the call machinery itself. For each call to a function (CALL bytecode instruction) the interpreter ends up calling a C function, and it's this C function that performs the defaults checking:

  • Defaults are stored on the function object when the function is defined.
  • Argument binding (including applying defaults) happens before any Python-level code runs, inside CPython’s C-level call machinery (vectorcall).
  • No default checks are injected into the function body’s bytecode.
  • The bytecode’s CALL ops trigger the C-level call path, which performs binding using __defaults__ and __kwdefaults__.
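
We can indirectly observe that binding really reads __defaults__ at call time by rebinding it (just a sketch to see the machinery, not something to do in real code):

def greet(name, greeting="Hello"):
    return f"{greeting}, {name}"

print(greet("Ann"))           # Hello, Ann
greet.__defaults__ = ("Hi",)  # rebind the defaults tuple stored on the function object
print(greet("Ann"))           # Hi, Ann: the call machinery picked up the new tuple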

Wednesday, 10 December 2025

Default Parameters and Conditional Omission

I've recently come across an interesting idea in the Python discussion forum. What the guy proposes is:

The idea aims to provide a concise and natural way to skip passing an argument so that the function default applies, without duplicating calls or relying on boilerplate patterns.

Using for example a syntax like this:


def fetch_data(user_id: int, timeout: int = 10) -> None:
    """
    Hypothetical API call.
    timeout: network timeout in seconds, defaults to 10
    """
    
timeout: int | None = ...  # Could be an int or None.
user_id: int = 42

fetch_data(
    user_id,
    timeout = timeout if timeout  # passes only if timeout is not None; or if timeout is truthy.
)

In the absence of this feature, we could opt for providing the default value at the callsite:



fetch_data(
    user_id,
    timeout = timeout if timeout else 10 # we repeat here the default value
)

This is repetitive, as that default value is already part of the function signature, and furthermore, it becomes incorrect if the function signature changes. It's this point that feels very interesting, as I had never thought much about how we should understand default parameters from a design point of view. The idea for me is that the values of default parameters should be considered an implementation detail, not part of the contract provided by the function. So if the function decides to change the values of its default parameters, our client code should continue to work, as it should not care about those values. So, if we want to pass a certain value, and it happens to be the current default value, we should pass it anyway, as that default could change in the future. On the other hand, if we don't care about that parameter and just want the function to use its default value, we should not pass it explicitly (which is what we're doing in the previous code and should not). I've further discussed this with a GPT:

Default parameter values are generally considered implementation details, not part of the contract. The contract is:
“If you omit this argument, the function will pick a value for you.”
But what that value is should not be relied upon unless explicitly documented as part of the API guarantee.
If your calling code cares about the value, it should pass it explicitly, even if it happens to match the current default. That way, if the default changes later (which is common in evolving APIs), your code still behaves as intended.

Best Practice Summary:

  • Treat defaults as convenience, not as a contract.
  • Explicit beats implicit when correctness matters.
  • If you rely on a specific value → pass it explicitly.
  • If you’re okay with whatever the function decides → omit the argument.

Having said this, the idea proposed in the forum, having some syntax that easily allowed us to conditionally choose between providing a value or saying "use your default", makes pretty much sense. And it makes sense to wonder if other languages support this feature. I was not aware of any, but I was assuming (given what an extremely powerful and expressive language it is) that maybe Ruby would support it, but no, it does not. But there's another brilliant and lovely language that happens to support it, JavaScript, thanks to a "feature" that is normally considered more a problem than a benefit: the confusing coexistence of null and undefined.

In JavaScript, when we don't explicitly provide a parameter to a function, it takes the undefined value. If that parameter was defined with a default value, it will then use that default rather than undefined (this won't be the case if we pass the null value). So this means that if we want to say "use your default" we can just pass the undefined value:


function sayMessage(person, msg = "Bonjour") {
  console.log(`${person}: ${msg}`);
}

const isEnglish = false;
sayMessage("Xuan", isEnglish ? "Hi" : undefined);
// Xuan: Bonjour

This idea of a syntax that allows conditionally omitting a parameter and forcing the default is indeed related to a previous idea that I discussed in this post: conditional collection literals. And similar to the workaround that I show in that post, I've come up with a simple function, invoke_with_use_default (sorry, I can't come up with a nicer name for it), that can help us emulate this missing feature.


from typing import Any, Callable

USE_DEFAULT = object()  # unique sentinel meaning "use the function's default"

def invoke_with_use_default(fn: Callable, *args, **kwargs) -> Any:
    # note: skipping a positional arg only works if all the positionals after it are skipped too
    args = [arg for arg in args if arg is not USE_DEFAULT]
    kwargs = {key: value for key, value in kwargs.items() if value is not USE_DEFAULT}
    return fn(*args, **kwargs)
    
def generate_story(main_char: str, year: int, city: str = "Paris", duration: int = 5) -> str:
    return f"This is an adventure of {main_char} in {city} in year {year} that lasts for {duration} days"

in_asturies = False
is_short = False

print(invoke_with_use_default(generate_story, 
    "Francois", 
    2025, 
    "Xixon" if in_asturies else USE_DEFAULT,
    duration=(10 if is_short else USE_DEFAULT),
))
# This is an adventure of Francois in Paris in year 2025 that lasts for 5 days

in_asturies = True
print(invoke_with_use_default(generate_story, 
    "Francois", 
    2025, 
    "Xixon" if in_asturies else USE_DEFAULT,
    duration=(10 if is_short else USE_DEFAULT),
))
# This is an adventure of Francois in Xixon in year 2025 that lasts for 5 days    

We could also think of having a decorator function, enable_use_default, that enables a function to be invoked with the USE_DEFAULT directive.


from functools import wraps

def enable_use_default(fn: Callable) -> Callable:
    @wraps(fn)
    def use_default_enabled(*args, **kwargs):
        args = [arg for arg in args if arg is not USE_DEFAULT]
        kwargs = {key: value for key, value in kwargs.items() if value is not USE_DEFAULT}
        return fn(*args, **kwargs)
    return use_default_enabled

@enable_use_default
def generate_story(main_char: str, year: int, city: str = "Paris", duration: int = 5) -> str:
    return f"This is an adventure of {main_char} in {city} in year {year} that lasts for {duration} days"

in_asturies = False
is_short = False
print(generate_story( 
    "Francois", 
    2025, 
    "Xixon" if in_asturies else USE_DEFAULT,
    duration=(10 if is_short else USE_DEFAULT),
))
# This is an adventure of Francois in Paris in year 2025 that lasts for 5 days