Tuesday, 23 December 2025

Awaiting a Resolved Promise/Future

Almost 4 years ago I wrote this post about some differences between the async machinery in JavaScript and Python. There are many more things I could add regarding the workings of their event loops, but today I'll talk about one case I've been looking into lately: awaiting an already resolved Promise/Future, or awaiting a function that is marked as async but does not suspend.

In JavaScript, awaiting an already resolved Promise (or an async function that does not suspend) gives control to the event loop (so the function that performs the await gets suspended), while in Python the function doing that await will not suspend: it runs the next instruction without transferring control to the event loop. Let's see an example in JavaScript:


async function fn(){
	console.log("fn started");
	const result = await Promise.resolve("Bonjour");
	console.log("fn, after await, result: " + result);
}

fn();
console.log("right before quitting");
	// output:
	// fn started
	// right before quitting
	// fn, after await, result: Bonjour

In the above code, when we await an already resolved Promise, the then() callback that the compiler attached to that Promise (to call back into the state machine corresponding to that async function) is added to the microtask queue rather than being executed immediately. So the fn function gets suspended and the execution flow continues in the global scope from which fn had been called, writing the "right before quitting" message. Then the event loop takes control, sees that it has a task in its microtask queue and executes it, resuming the fn function and writing the "fn, after..." message.

If we await an async function that does not perform a suspension (getPost(0)) the result is the same (indeed, an async function that does not suspend still returns an already resolved Promise). Let's see:


function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

async function getPost(id){
	let result
	console.log("getPost started");
	if (id === 0){
		result = "Post 0";	
	}
	else {
		await sleep(1000);
		result = "Post " + id;
	}
	console.log("getPost finished");
	return result;
}

async function main2(){
	console.log("main2 started");
	let result = await getPost(0);
	console.log("main2, after await, result: " + result);
}

main2();
console.log("right before quitting");

	// output:
	// main2 started
	// getPost started
	// getPost finished
	// right before quitting
	// main2, after await, result: Post 0


If we now run an equivalent example in Python we can see that the behaviour is different: the function continues its execution without suspending.


import asyncio
import sys

async def do_something():
    print("inside do_something")

async def main1():
    print("main1 started")
    asyncio.create_task(do_something())
    ft = asyncio.Future()
    ft.set_result("Bonjour")  # immediately resolved future
    result = await ft
    print("main1, after await, result: " + result)


asyncio.run(main1())
sys.exit()
    # output:
    # main1 started
    # main1, after await, result: Bonjour
    # inside do_something

In main1, create_task creates a task and adds it to a queue in the event loop so that it gets scheduled when the loop has a chance (the next time it gets control). main1's execution continues, creating a future and resolving it, and when we await it, as it's already resolved/completed, no suspension happens; the execution continues writing the "main1, after await" message. Then the main1 function finishes, the event loop takes control and runs the remaining tasks it had in its queue, our "do_something" task in this case.

If we now run an example where we await a coroutine (get_post) that does not suspend, the result is the same:


import asyncio

async def get_post(post_id):
    print("get_post started")
    if post_id == 0:
        result = "Post 0"
    else:
        await asyncio.sleep(0.5)  # simulate an async operation
        result = f"Post {post_id}"
    print("get_post finished")
    return result

async def do_something():
    print("inside do_something")

async def main2():
    print("main2 started")
    asyncio.create_task(do_something())
    # the do_something task is scheduled to run when the event loop has a chance
    result = await get_post(0)
    print("main2, after await, result: " + result)

asyncio.run(main2())
    # output:
    # main2 started
    # get_post started
    # get_post finished
    # main2, after await, result: Post 0
    # inside do_something

We invoke the get_post(0) coroutine, which does not perform any asynchronous operation that suspends it, but directly returns a value. The await receives a plain value rather than an unresolved Future, so it just continues with the print("main2, after...") rather than suspending, and hence there is no control transfer to the event loop until main2 finishes.

A bit related to all this: some days ago I was writing some code where I have multiple async operations running and I wait for any of them to complete with asyncio.wait(), like this:


        while pending_actions:
            done_actions, pending_actions = await asyncio.wait(
                pending_actions,
                return_when=asyncio.FIRST_COMPLETED
            )
            for done_task in done_actions:
                # result = await done_task
                result = done_task.result()
                # do something with result

In done_actions we have Tasks/Futures that are already complete. That means that to get their result we can use either task.result() or await task. Given that awaiting a resolved/completed Task/Future does not cause any suspension, both options are valid and similar, but with some differences. As I read somewhere:

Internally, coroutines are a special kind of generator; every await is suspended by a yield somewhere down the chain of await calls (please refer to PEP 3156 for a detailed explanation).

This means that even if that await done_task will not transfer control to the event loop causing a suspension (because there's no unresolved Task/Future to wait for), the chain of coroutine calls still moves back up to the Task that ultimately drives this coroutine chain, and from there (given that the Future is already resolved and hence there's no need to get suspended waiting for its resolution) the Task moves forward again down the coroutine chain. So awaiting means some overhead because of this going back and forth. Additionally, invoking result() makes it evident that we are accessing a completed item, while using await makes it feel more as if the item were not complete yet (and hence we await it).
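As a minimal runnable sketch of that pattern (the fetch coroutine and its timings are made up for illustration), both options in the inner loop retrieve the same value, but result() skips the round trip through the coroutine chain:


import asyncio

async def fetch(i):
    await asyncio.sleep(0.1 * i)
    return f"result {i}"

async def main():
    pending = {asyncio.create_task(fetch(i)) for i in range(3)}
    while pending:
        done, pending = await asyncio.wait(pending, return_when=asyncio.FIRST_COMPLETED)
        for task in done:
            # task is already completed, so both options return immediately
            result = task.result()   # direct access to the stored result
            # result = await task    # also valid: no suspension, but extra overhead
            print(result)

asyncio.run(main())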

Thursday, 18 December 2025

Default Parameters and Method Overriding

In my previous post I discussed how the values of default parameters should be considered an implementation detail, and not part of the contract of the function/method (while the fact of having default parameters does become part of the contract). This has interesting implications regarding inheritance and method overriding.

  • If a method in a base class (or interface/protocol) has a default parameter "timeout = 10", it should be fine that a derived class overrides this method with a different default value "timeout = 20".
  • If a method in a base class (or interface/protocol) has no default parameters, it should be OK for a derived class to override that method adding some defaults. If the method in a derived instance is invoked through a reference typed as base, we will be forced to provide all the parameters; it's when it's invoked through a reference typed as derived that the defaults can be omitted.
  • Of course, what is not OK is that a method that in the base class has defaults gets overridden in a derived class by a method that makes them compulsory.

In dynamic languages like Python (or Ruby or JavaScript) the runtime itself gives us total freedom to do (more or less) whatever we want with this kind of thing, but typecheckers could have decided to add some restrictions here. I've checked with mypy and this is perfectly fine:


class Formatter:
    def format(self, text: str, wrapper: str = "|") -> str:
        return f"{wrapper}{text}{wrapper}"

# NO error in mypy    
class FormatterV2(Formatter):
    def format(self, text: str, wrapper: str = "*") -> str:
        return f"{wrapper}{text}{wrapper}"

# mypy:
# error: Signature of "format" incompatible with supertype "Formatter"  [override]
class FormatterV3(Formatter):
    def format(self, text: str, wrapper: str) -> str:
        return f"{wrapper}{text}{wrapper}"
        

class GeoService:
    def get_location(self, ip: str, country: str) -> str:
        return f"Location for IP {ip} in country {country}"
    
# NO error in mypy 
class GeoService2(GeoService):
    def get_location(self, ip: str, country: str = "FR") -> str:
        return f"Location for IP {ip} in country {country}"

Notice also that the one mechanism that adds runtime restrictions, abstract classes/methods (abc.ABC, abc.abstractmethod), does not care about default parameter values (not even if we override a method making a default compulsory). Well, indeed ABCs do not care about parameters at all (not even the number of parameters or their names); they only care about method names, not about method signatures.
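A quick sketch to see it in action (the class names are made up for the example); the override below changes the whole signature and drops the default, and instantiation still works, since the ABC machinery only verifies that a method with that name is defined:


import abc

class AbstractFormatter(abc.ABC):
    @abc.abstractmethod
    def format(self, text: str, wrapper: str = "|") -> str: ...

class ShoutingFormatter(AbstractFormatter):
    # completely different signature: one parameter less, no default
    def format(self, text):
        return text.upper()

print(ShoutingFormatter().format("bonjour"))  # BONJOUR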

Though from a design perspective what I've said above should hold for both dynamic and static languages, static languages like Kotlin or C# have decided to be quite restrictive regarding default parameters.

In Kotlin a derived class cannot change the value of a default parameter in a method that it's overriding. Indeed, when overriding a method with default parameter values, the default parameter values must be omitted from the signature. The reason for this is that Kotlin manages defaults at compile time. The compiler checks at the callsite that a function is being invoked with a missing parameter and adds to the call the value that was defined as default in the signature. Being done at compile time, if we have a variable typed as Base but that is indeed pointing to Child, it will choose the default defined in Base, not in Child (we can say that polymorphism does not work for defaults), so to avoid confusion Kotlin does not let us redefine the default in the Child.

The other feature that I mentioned above, having a method in the Child that sets a default for a parameter that had no default in the Base, should work OK, but Kotlin designers decided to forbid it, the main reason being that it would make method overloading confusing. As discussed with a GPT:

Defaults are syntactic sugar for overloads in many static languages.
Allowing derived classes to add defaults would blur the line between overriding and overloading, making method resolution harder to reason about.

I was wondering how Python manages default parameters at runtime. I know that when a function object is created the default values are stored in the function object (which is a gotcha that I explained some time ago). Diving a bit more:

When you define a function, any default expressions are evaluated immediately and stored on the function object:

  • Positional/keyword defaults: func.__defaults__ → a tuple
  • Keyword-only defaults: func.__kwdefaults__ → a dict
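We can check this directly on a throwaway function:


def fetch(url, timeout=10, *, retries=3):
    ...

print(fetch.__defaults__)    # (10,)
print(fetch.__kwdefaults__)  # {'retries': 3}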

And regarding how defaults are applied each time a function is invoked, we could think that maybe the compiler adds some checks at the start of the function, but no, it's not like that. These checks are performed by the call machinery itself. For each function call (a CALL bytecode instruction) the interpreter ends up calling a C function, and it's this C function that performs the defaults handling (the dis check after the list below confirms it):

  • Defaults are stored on the function object when the function is defined.
  • Argument binding (including applying defaults) happens before any Python-level code runs, inside CPython’s C-level call machinery (vectorcall).
  • No default checks are injected into the function body’s bytecode.
  • The bytecode’s CALL ops trigger the C-level call path, which performs binding using __defaults__ and __kwdefaults__.
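Indeed, disassembling a function with a default shows a body with plain LOAD_FAST instructions and nothing checking whether the argument was provided:


import dis

def greet(name, greeting="Bonjour"):
    return greeting + ", " + name

dis.dis(greet)
# the body just loads greeting and name with LOAD_FAST;
# there is no instruction testing whether greeting was actually passed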

Wednesday, 10 December 2025

Default Parameters and Conditional Omission

I've recently come across an interesting idea in the Python discussion forum. What the guy proposes is:

The idea aims to provide a concise and natural way to skip passing an argument so that the function default applies, without duplicating calls or relying on boilerplate patterns.

Using for example a syntax like this:


def fetch_data(user_id: int, timeout: int = 10) -> None:
    """
    Hypothetical API call.
    timeout: network timeout in seconds, defaults to 10
    """
    
timeout: int | None = ...  # Could be an int or None.
user_id: int = 42

fetch_data(
    user_id,
    timeout = timeout if timeout  # proposed syntax (not valid Python today): pass timeout only if it is truthy
)

In the absence of this feature, we could opt for providing the default value at the callsite:



fetch_data(
    user_id,
    timeout = timeout if timeout else 10 # we repeat here the default value
)

This is repetitive, as that default value is already part of the function signature, and furthermore, it becomes incorrect if the function signature changes. It's this point that feels very interesting, as I had never thought much about how we should understand default parameters from a design point of view. The idea for me is that the values of default parameters should be considered an implementation detail, not part of the contract provided by the function. So if the function decides to change the values of its default parameters our client code should continue to work, as it should not care about those values. So, if we want to pass a certain value, and it happens to be the current default value, we should pass it anyway, as that default could change in the future. On the other hand, if we don't care about that parameter and just want the function to use its default value, we should not pass it explicitly (which is exactly what the previous snippet does). I've further discussed this with a GPT:

Default parameter values are generally considered implementation details, not part of the contract. The contract is:
“If you omit this argument, the function will pick a value for you.”
But what that value is should not be relied upon unless explicitly documented as part of the API guarantee.
If your calling code cares about the value, it should pass it explicitly, even if it happens to match the current default. That way, if the default changes later (which is common in evolving APIs), your code still behaves as intended.

Best Practice Summary:

  • Treat defaults as convenience, not as a contract.
  • Explicit beats implicit when correctness matters.
  • If you rely on a specific value → pass it explicitly.
  • If you’re okay with whatever the function decides → omit the argument.

Having said this, the idea proposed in the forum, having some syntax that easily allows us to conditionally choose between providing a value or saying "use your default", makes pretty much sense. And it makes sense to wonder if any other languages support this feature. I was not aware of any, but was assuming (based on what an extremely powerful and expressive language it is) that maybe Ruby would support it, but no, it does not. But there's another brilliant and lovely language that happens to support it, JavaScript, thanks to a "feature" that is normally considered more a problem than a benefit: the confusing coexistence of null and undefined.

In JavaScript, when we don't explicitly provide a parameter to a function, it takes the undefined value. If that parameter was defined with a default value, it will then use that default rather than undefined (this is not the case if we pass null). So this means that if we want to say "use your default" we can just pass undefined:


function sayMessage(person, msg = "Bonjour") {
  console.log(`${person}: ${msg}`);
}

const isEnglish = false;
sayMessage("Xuan", isEnglish ? "Hi" : undefined);
// Xuan: Bonjour

This idea of a syntax that allows conditionally omitting a parameter and forcing the default is indeed related to a previous idea that I discussed in this post, conditional collection literals. And similar to the workaround that I show in that post, I've come up with a simple function "invoke_with_use_default" (sorry, I can't come up with a nicer name for it) that can help us emulate this missing feature.


from typing import Any, Callable

USE_DEFAULT = object()  # unique sentinel meaning "let the function use its default"

def invoke_with_use_default(fn: Callable, *args, **kwargs) -> Any:
    args = [arg for arg in args if arg is not USE_DEFAULT]
    kwargs = {key: value for key, value in kwargs.items() if value is not USE_DEFAULT}
    return fn(*args, **kwargs)
    
def generate_story(main_char: str, year: int, city: str = "Paris", duration: int = 5) -> str:
    return f"This is an adventure of {main_char} in {city} in year {year} that lasts for {duration} days"

in_asturies = False
is_short = False

print(invoke_with_use_default(generate_story, 
    "Francois", 
    2025, 
    "Xixon" if in_asturies else USE_DEFAULT,
    duration=(10 if is_short else USE_DEFAULT),
))
# This is an adventure of Francois in Paris in year 2025 that lasts for 5 days

in_asturies = True
print(invoke_with_use_default(generate_story, 
    "Francois", 
    2025, 
    "Xixon" if in_asturies else USE_DEFAULT,
    duration=(10 if is_short else USE_DEFAULT),
))
# This is an adventure of Francois in Xixon in year 2025 that lasts for 5 days    

We could also think of having a decorator function "enable_use_default" that enables a function to be invoked with the USE_DEFAULT directive.


from functools import wraps
from typing import Callable

def enable_use_default(fn: Callable) -> Callable:
    @wraps(fn)
    def use_default_enabled(*args, **kwargs):
        args = [arg for arg in args if arg is not USE_DEFAULT]
        kwargs = {key: value for key, value in kwargs.items() if value is not USE_DEFAULT}
        return fn(*args, **kwargs)
    return use_default_enabled

@enable_use_default
def generate_story(main_char: str, year: int, city: str = "Paris", duration: int = 5) -> str:
    return f"This is an adventure of {main_char} in {city} in year {year} that lasts for {duration} days"

in_asturies = False
is_short = False
print(generate_story( 
    "Francois", 
    2025, 
    "Xixon" if in_asturies else USE_DEFAULT,
    duration=(10 if is_short else USE_DEFAULT),
))



Sunday, 30 November 2025

exec, eval, return and ruby

In this post about expressions I mentioned that Kotlin has return expressions, which is a rather surprising feature. Let's see it in action:


// Kotlin code:
fun getCapital(country: Country?): String {
     val city = country?.capital ?: return "Paris"
     // this won't run if we could not find a capital
     logger.log("We've found the capital")
     return city
}

Contrary to try or throw expressions, which can be simulated (in JavaScript, Python...) with a function [1], [2], there's no way to use a "return() function" to mimic them (it would exit from that function itself, not from the calling one). Well, it came to my mind that maybe we could use a trick in JavaScript with eval() (I already knew that it would not work in Python with exec()), but no, it does not work in JavaScript either.


// JavaScript code:
function demo() {
    eval("return 42;");
    console.log("This will never run");
}

console.log(demo());
// Output: SyntaxError: Illegal return statement


JavaScript gives us a SyntaxError when we try that, because that return cannot work the way we intend (returning from the enclosing function), so it prevents us from trying it. The code that eval compiles and runs is running inside the eval function; it's not as if it were magically placed inline in the enclosing function, so return (or break, or continue) would just return from eval itself, not from the enclosing function, and to prevent confusion, JavaScript forbids it.

The reason why I thought that maybe this would be possible is that, as I had already explained in this previous post, JavaScript's eval() is more powerful than Python's exec(), as it allows us to modify and even add variables to the enclosing function. As a reminder:


// JavaScript code:
function declareNewVariable() {
    // has to be declared as "var" rather than let to make it accessible outside the block
    let block = "var a = 'Bonjour';";
    eval(block);
    console.log(`a: ${a}`)
}

declareNewVariable();
// a: Bonjour


This works because when JavaScript compiles and executes a "block" of code with eval() it gives it access to the scope chain of the enclosing function.

Python could have also implemented this feature, but it would be very problematic in performance terms. Each Python function stores its variables in an array (I think it's the f_localsplus attribute of the internal frame/interpreter object, not to be confused with the higher-level PyFrameObject wrapper), and the bytecode accesses variables by index in that array (using LOAD_FAST, STORE_FAST instructions), not by name. exec() accepts an arbitrary dictionary to be used as locals, meaning that it will access that custom locals dict, or the one created from the real locals, via dictionary lookups (with LOAD_NAME, STORE_NAME). Basically, there's no easy way to reconcile both approaches. Well, indeed exec() could have been designed to receive by default a write-through proxy like the one created by frame.f_locals. That would allow modifying variables of the enclosing function, but it would not work for adding variables to it (see this post). So I guess Python's designers have seen it as more coherent to prevent both cases rather than having one case work (modification of a variable) and another not (addition of a new variable). As for the PyFrameObject stuff that I mention, some GPT information:

In Python 3.11+, the local variables and execution state are stored in interpreter frames (also called "internal frames"), which are lower-level C structures that are much more lightweight than the old PyFrameObject.
When you call sys._getframe() or use debugging tools, CPython creates a PyFrameObject on-demand that acts as a Python-accessible wrapper around the internal frame data. This wrapper is what you can inspect from Python code, but it's only created when needed.

So all in all we can say (well, a GPT says...)

Bottom line: Neither Python’s exec() nor JavaScript’s eval() can magically splice control-flow into the caller’s code. They both create separate compilation units. JavaScript feels “closer” because eval() shares lexical scope, but the AST boundaries still apply.
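Back on the Python side, a quick sketch showing the limitation in action: the assignment performed by exec() lands in a snapshot dict of the locals, not in the fast-locals array of the running frame:


def demo():
    a = "hello"
    exec("a = 'bonjour'")  # writes to a dict copy of the locals
    print(a)  # hello: the real local variable is untouched

demo()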

After all this, one interesting question comes up: is there any language where the equivalent of eval/exec allows us to return from the enclosing function? The answer is yes: Ruby (and obviously it also allows modifying and adding new variables to the enclosing function). Additionally, notice that Ruby also supports return expressions (well, everything in Ruby is an expression).


# ruby code:
def example
  result = eval("return 5")
  puts "This won't execute"
end

example  # returns 5

Ruby's eval is much more powerful than JavaScript's or Python's - it truly executes code as if it were written inline in the enclosing context.

The "as if" is important. It's not that Ruby compiles the code passed to eval and somehow embeds it in the middle of the currently running function. That could be possible I guess in a Tree parsing interpreter, modifying the AST of the current function, but Ruby has long ago moved to bytecode and JIT. What really happens is this

Ruby's eval compiles the string to bytecode and then executes it in the context of the provided binding, which includes:

- Local variables
- self (the current object)
- The control flow context (the call stack frame)

That last part is key. When you pass a Binding object, you're not just passing variables - you're passing a reference to the actual execution frame. So when the evaled code does return, break, or next (Ruby's continue), it operates on that captured frame. Here's where it gets wild.

The Binding object idea (an object that represents the execution context of a function) is amazing. By default (when you don't explicitly provide a binding object) the binding represents the current execution frame, but you can even pass as binding the execution frame of another function!!! You can get access to variables from another function, and if that function is still active (it's up in the call stack) you can even return from it, meaning you can make control flow jump from one function to another one up in the stack chain!

eval operates on a Binding object (which you can pass explicitly), and that binding captures the complete execution context - local variables, self, the surrounding scope, everything. You can even capture and pass bindings around.

Just notice that Python allows a small subset of the binding object functionality by letting us explicitly provide custom dictionaries as locals and globals to exec().
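For example, a trivial sketch of that subset:


code = "greeting = 'Bonjour ' + name"
scope = {"name": "Xuan"}  # used as both globals and locals here
exec(code, scope)
print(scope["greeting"])  # Bonjour Xuan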

Sunday, 23 November 2025

How Exceptions Work

It's been quite a while since I first complained about the lack of safe-navigation and coalesce operators in Python and provided a basic alternative. I've also complained about the lack of try-expressions in Python, and also provided a basic alternative. Indeed, there's no strong reason for having 2 separate functions; I think we can just use do_try for the safe-get and coalesce cases.



from typing import Any, Callable

def do_try(action: Callable, exceptions: type[BaseException] | tuple[type[BaseException], ...] = Exception, on_except: Any | None = None) -> Any:
    """
    simulate 'try expressions'
    on_except can be a value or a Callable (that receives the Exception)
    """
    try:
        return action()
    except exceptions as ex:
        return on_except(ex) if (on_except and callable(on_except)) else on_except

person = Person()  # some Person object whose attribute chain may be broken (e.g. country can be None)
embassy = do_try(lambda: person.country.main_cities[0])


I also complained about how absurd it feels having a get method for dictionaries but not for collections like lists. That means that we end up writing code like this:


x = items[i] if len(items) > i else "default"

Of course, that would not be necessary if we had safe-navigation, but as we don't have it, we can just use the do_try function:


x = do_try(lambda: items[i], on_except="default")

And here comes the interesting part: obviously using do_try means using try-except under the covers, which when compared to using an if conditional seems something to avoid in performance terms, right? Well, I've been revisiting a bit the internals and cost of exceptions. Since version 3.11 Python has zero-cost exceptions. This means that (as in Java) having try-except blocks in your code does not have any performance effect if no exception is thrown/raised; the only costs occur if an exception is actually raised: the "zero cost" refers to the cost when no exception is raised. There is still a cost when exceptions are thrown.

Modern Python uses exception tables. For each function containing try-except blocks an exception table is created linking the try part to the handling code in the except part. Exception tables are created at compile time and stored in the code object. Then at runtime, if an exception occurs, the interpreter consults the exception table to find the handler for the given exception and jumps to it. Obviously creating an exception object, searching the exception table and jumping to the handler has a cost. Given that in Python compilation occurs when we launch the script, just before we can run the code, we can say that this exception table creation also has a runtime cost, but it's minimal as it happens only once per function (when the code object is compiled), not every time the function is executed. That's where the cost happens if an exception is raised: creating the Exception object, unwinding and jumping to the handler.
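We can actually see one of these tables with dis (the function below is just an example):


import dis

def safe_div(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        return 0

dis.dis(safe_div)
# on Python 3.11+ the output ends with an "ExceptionTable:" section
# mapping the offsets of the try body to the offset of the handler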

Throwing/raising an exception felt like a low level mechanism to me, but it's not at all.

Language-level exceptions are software constructs managed by the runtime (JVM for Java, CPython for Python). They do not involve the OS unless the program crashes. So when you use a throw/raise statement in your code there's not any sort of software interrupt; it's just one more instruction (or several). The Python interpreter will come across a RAISE_VARARGS bytecode instruction, and it will search the exception tables of the current function and/or the functions in the call stack, trying to find an exception handler.

Notice that the same happens in Java/JVM. The Java compiler creates an exception table for each method and stores it in the .class file. This table maps bytecode ranges to handlers (catch blocks) and the type of exception they handle. When the class loader loads the class, the JVM stores this table in the method's metadata. Given that the JVM comes with JIT compilation, there's an additional level. When the JIT compiles the method:

The JIT generates native machine code for the method.
It also creates a new exception table for the compiled code, because:
The original bytecode offsets are no longer relevant.
The JIT needs to map native instruction addresses to handler entry points.

This table is stored alongside the compiled code in the JVM’s internal structures.

So once a method has been compiled by the JIT at runtime we'll have two exception tables: the initial one for the bytecode form of the method (which is kept around in case we have to deoptimize from native code back to bytecode), and the table for the native code. Notice that when the JIT compiles the bytecode to native code we'll incur a very small extra cost for the creation of this additional table.

With all the above, using do_try() for safe indexed access seems a bit overkill (unless we're sure that the access is very rarely going to fail and throw), and having a specific convenience function for it makes sense:


from typing import Any, Sequence, TypeVar

T = TypeVar("T")

def get_by_index(sequence: Sequence[T], index: int, default: Any = None) -> Any:
    """
    Safely access an element by index in a sequence, where a sequence is any class supporting __getitem__ and __len__,
    like: list, str, tuple and bytes
    Usage: get_by_index(my_list, 2, default='Not Found')
    """
    return sequence[index] if 0 <= index < len(sequence) else default
    

We could generalize the function for nested access, but once we start chaining if conditions, at some depth of nesting the try-except will probably end up being better for performance.
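A possible shape for that generalization, just as a sketch; here a single try-except covers the whole access chain, which is exactly the situation where it beats a pile of if checks:


from typing import Any

def get_nested(data: Any, *path, default: Any = None) -> Any:
    """Usage: get_nested(config, "db", "hosts", 0, default="localhost")"""
    try:
        for key in path:
            data = data[key]
        return data
    except (KeyError, IndexError, TypeError):
        return default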

Friday, 14 November 2025

FICXixon 2024

I guess as you get older (OK, let's say more mature, to sound nicer) traditions become more and more important. One more year, one more edition of FICXixón is almost here, and as usual I realise I have not yet published my post about the previous edition, so here it goes:

The FICXixón 62 edition took place from November 15th to 23rd. This was a dry November here, which makes me rather angry; I love rainy weather, I've always loved it, but now even more, probably due to my youth memories (and I guess my Galician heritage also plays its part). All in all I attended 6 films, which is my record since I returned to Xixón in 2018. This year I was not so busy at work, and had more time to check the programme and attend screenings. I watched 2 excellent films, a good film, an interesting documentary, and 2 films that were OK but I would not watch again. From the micro-reviews below you can guess which is which.

  • The Antique, Friday 15, 19:00, OCine. The best film I watched in this edition. I was quite in doubt between this one and Bird, but the trailer of "The Antique" looked so good (additionally, a story set in Russia is more appealing to me now than one set in the UK), and in the end I settled on this gorgeous Georgian film. An old flat in a historical building in central Saint Petersburg, the snow-covered streets, an old man approaching the end, a gorgeous Georgian woman, antiquities, social unrest. I think I've said enough.
  • Una Ballena, Saturday 16, 22:00, Teatro Jovellanos. Basque film mixing neo-noir and horror-fantasy. It had excellent reviews, but for some reason it did not work for me. There was an "Encuentro con el público" (a Q&A with the audience) after the screening, where the director and the main actress, the beautiful Ingrid, discussed the film with the audience; some people expressed how (positively) shocked they were by the film, and honestly I felt a bit out of place.
  • La Prisonnière de Bordeaux, Tuesday 19, 21:30, Yelmo Ocimax. When FICXixón 2022 dedicated a retrospective to Patricia Mazuy I watched three of her films, and I loved 2 of them. And I loved even more the "Encuentros con el público" with her; she was so funny and expressive. So when I found out that they were programming her latest film I could not miss it, all the more when it stars the charming Isabelle Huppert, one of my favourite actresses. So I even took the bus to go to the Yelmo cinema located far away in the outskirts, something I'd never done before for a FICXixón film! I did not regret it; the film is the kind of bizarre, funny, melancholic work you could expect from these 2 crazy women.
  • When the Light Breaks, Wednesday 20, 22:00, Teatro Jovellanos. Excellent Icelandic drama. Grieving for a loved one is even worse when you are (and he was) young, and worse still when you have to hide how broken you are because you had a secret relationship. The light sequences at the start and the end of the film are mesmerizing.
  • Que se sepa (Indarkeriaren pi(h)artzunak), Thursday 21, 22:00, Escuela de Comercio. Interesting and necessary Basque documentary about one more of the many episodes of sorrow and pain brought about by the long and bloody conflict in the Basque Country. This time ETA and the Spanish Government join forces to kill an innocent man and destroy his family.
  • Fréwaka, Saturday 22, 19:15, OCine. It was preceded by a Colombian short, "La noche del Minotauro". Irish horror film; it was entertaining, I think, but had so little effect on me that a year later I hardly remember anything about it.

Sunday, 9 November 2025

CGNAT

I've got rather basic networking knowledge, and I've lately come across a problem/limitation I was not aware of and that I think is increasingly common: CGNAT. With my Internet provider (Telecable) in Asturies, my FTTH router (a nice ZTE F6640) has a stable IP. I mean, it's not static, but it rarely changes (even after rebooting the router). So when I recently felt it could be convenient to occasionally connect to one of the computers in my LAN from outside, I thought it would be feasible.

So let's say I want to be able to ssh into my RasPi 5 from downtown while I discuss with my friends how woke ideology is destroying humanity. The DHCP server in my router is configured to provide a fixed IP to all significant devices in my LAN, let's say 192.168.1.5 for my RasPi 5. To make port 22 on my RasPi accessible from outside I have to configure port forwarding in my router. It's just a matter of telling the router "forward incoming connections to one of your ports (let's say 10022) to port 22 on 192.168.1.5". I'd never done it before, but it seems like something that has existed for decades and should just work. So I connected my laptop to my mobile phone hotspot, to simulate the "I'm in the outside world" thing, and tried. And tried, and tried... to no avail.

Checking some forums with similar questions involving other Internet providers in Spain, I came across this fucking technology: CGNAT.

Carrier-grade NAT (CGN or CGNAT), also known as large-scale NAT (LSN), is a type of network address translation (NAT) used by ISPs in IPv4 network design. With CGNAT, end sites, in particular residential networks, are configured with private network addresses that are translated to public IPv4 addresses by middlebox network address translator devices embedded in the network operator's network, permitting the sharing of small pools of public addresses among many end users. This essentially repeats the traditional customer-premises NAT function at the ISP level.

My Internet provider in Asturies continues to use IPv4 (that's not the case in France, where to my surprise I recently found that my provider is using IPv6), and given that it does not have enough public IP addresses for all its customers, it's adding an extra NAT (Network Address Translation) layer.

I had got my router's public address using curl ident.me, which gave me a nice and public 85.152.xxx.yyy address, but if I connect to my fiber router and check inside it, I see a different one: 100.102.x.y. Well, that's not a public IP, and it's an indicator that my ISP is using CGNAT, as explained here.

If it's any of the following, then your router doesn't have a public IP address:

  • 192.168.x.x
  • 10.x.x.x
  • 172.16.x.x through 172.31.x.x
  • 100.64.x.x through 100.127.x.x

The last one is usually indicative of your ISP using CGNAT.

Summing up: my laptop has a 192.168.x.x private IP address. My fiber router faces the outside world with another private IP address (100.102.x.y). Other customers in my area and I are connected to an upstream router in my ISP's network, and it's that router which faces the outside world with the 85.152.xxx.yyy public IP that I can see with ident.me. So in order for a connection from the outside to my RasPi to work, I would also have to set up port forwarding in that upstream ISP router shared with my "neighbours". So, no way...

Well, there's another way (that I have not tried) to set this up, a sort of reverse approach. In the last year I've been using SSH tunnels to connect to some non-public servers at work through a "bastion" work server with a public IP. With a standard SSH tunnel I basically create an SSH connection to that bastion server telling it that any connection that goes through it (through that "tunnel") has to be forwarded to another server. There are also reverse SSH tunnels, where I create an SSH connection to a server (a tunnel) telling that server that any connection it receives on a certain port has to be forwarded to "me" through that tunnel, to a certain port on my machine. So if you have a server on the internet (Azure, AWS...) you could use it to create a reverse SSH tunnel from your PC located behind CGNAT. All this is explained for example here.
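For reference, a sketch of what that reverse tunnel could look like (the hostnames and ports are made up; note that by default sshd binds remote-forwarded ports to the server's loopback interface, so connecting from a third machine requires GatewayPorts to be enabled on the server):


# run on the RasPi behind CGNAT: expose its port 22 as port 10022 on the cloud server
ssh -N -R 10022:localhost:22 user@my-cloud-server

# then, from the cloud server, connect back to the RasPi through the tunnel:
ssh -p 10022 pi@localhost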