Sunday, 29 March 2026

Pipe Operator and Null Safety

I've talked a couple of times [1] and [2] about how beautiful it is to have a pipe operator in a language (though it's not particularly common) and about ways to simulate it in Python. Having a pipe operator makes applying functions to a value as convenient as chaining methods. When chaining methods we can leverage (if available) the safe navigation/optional chaining/elvis (?.) operator to deal with null values. So I've been thinking about null safety and pipes: not applying a function if the value is null, and coalescing to a default value.

In my previous post I mentioned that JavaScript had 2 different proposals for a pipe operator, but one of them has been discarded. I've been checking whether the surviving proposal includes null safety, and the answer is no. Having an additional ?|> operator for null-safe cases, apart from the normal |> operator, was discussed in the early stages, but it was discarded.


// not null-safe, active proposal
user
  |> getProfile(%)
  |> formatProfile(%)
  
// null-safe, has been discarded
value ?|> fn
value |> fn ?? default

It was rejected on the basis that pipelines should be pure syntax for data flow, not control flow.

To my surprise (I was not aware that PHP continues to be used and to evolve), PHP has recently added a pipe operator to the language, and for the moment it also lacks a null-safe version.

For Python decision makers, adding a pipe operator seems to "make the language too complex for beginners"... (you can't imagine how much I hate that all too common kind of "pythonic" reflection...), but as I explained in my previous post we can easily add a pipe function that does the trick (adding such a function to functools has also been requested multiple times, but no luck so far). An implementation is as simple as this:



import functools
from typing import Any, Callable

def pipe(val: Any, *fns: Callable[[Any], Any]) -> Any:
    """
    pipes function calls over an initial value
    """
    def _call(val, fn):
        return fn(val)
    return functools.reduce(_call, fns, val)


And we can use it like this:



from dataclasses import dataclass

@dataclass
class Post:
    id: str
    title: str
    author: str

def get_post(post_id: str) -> Post | None:
    # simulate a function that may return None
    if post_id == "1":
        return Post(id="1", title="First post", author="1")
    else:
        return None

def get_address(person_id: str) -> str | None:
    # simulate a function that may return None
    if person_id == "1":
        return "Rue de La Nation, Paris"
    else:
        return None

pipe("1",
    get_post,
    lambda post: get_address(post.author),
    str.upper,
    print,
)

# RUE DE LA NATION, PARIS


Creating a null-aware equivalent is quite simple. The idea I came up with is having the pipe function (I'll call this variant pipe2) accept not just a sequence of callables, but a sequence of steps, where each step is either a callable, a (flag, callable) tuple or a (flag, value) tuple, with the flag indicating that we have to check for null before applying the callable, or that we have to coalesce to a default value. Let's see the code:



# sentinel values
NULL_SAFE = object()
COALESCE = object()

def pipe2(val: Any, *steps: Callable[[Any], Any] | tuple[Any, Callable[[Any], Any] | Any]) -> Any:
    """
    pipes function calls over an initial value, with support for null safety and coalescing
    """
    def _call(val, step: Callable[[Any], Any] | tuple[Any, Callable[[Any], Any] | Any]) -> Any:
        if callable(step):
            return step(val)
        else:
            option = step[0]
            if option is NULL_SAFE:
                fn = step[1]
                return None if val is None else fn(val)
            elif option is COALESCE:
                default_val = step[1]
                return default_val if val is None else val
            else:
                raise ValueError(f"Invalid option: {option}")

    return functools.reduce(_call, steps, val)

pipe2("2",
    (NULL_SAFE, get_post),
    (NULL_SAFE, lambda post: get_address(post.author)),
    (COALESCE, "Not found"),
    str.upper,
    print,
)

# NOT FOUND


The function is quite minimal. We should add proper error handling to it, throwing meaningful exceptions for each potential incorrect usage. You can just ask a GPT to add it and you'll end up with something like this:


from typing import Any, Callable, Tuple, Union

def pipe2(val: Any, *steps: Union[Callable[[Any], Any], Tuple[object, Any]]) -> Any:
    """
    Pipe value through callables or option-tuples.
    Steps can be:
      - a callable: called as fn(acc)
      - null_safe(fn): tuple (NULL_SAFE, fn) — only call fn if acc is not None
      - coalesce(default): tuple (COALESCE, default) — replace None with default

    Raises TypeError or ValueError for invalid steps.
    """
    def _call(val: Any, step: Union[Callable[[Any], Any], Tuple[object, Any]]) -> Any:
        if callable(step):
            return step(val)

        if not (isinstance(step, tuple) and len(step) == 2):
            raise TypeError("pipe2 steps must be callables or 2-tuples from null_safe/coalesce")

        option, payload = step
        if option is NULL_SAFE:
            if val is None:
                return None
            if not callable(payload):
                raise TypeError("NULL_SAFE payload must be callable")
            return payload(val)

        if option is COALESCE:
            default = payload
            return default if val is None else val

        raise ValueError(f"Unknown pipe2 option: {option!r}")

    return functools.reduce(_call, steps, val)
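
The docstring above refers to null_safe(fn) and coalesce(default) builders. They are not defined anywhere standard; a minimal sketch of those two helpers (the names are my own choice) could be:


def null_safe(fn: Callable[[Any], Any]) -> tuple[object, Callable[[Any], Any]]:
    # builds the (NULL_SAFE, fn) tuple that pipe2 expects
    return (NULL_SAFE, fn)

def coalesce(default: Any) -> tuple[object, Any]:
    # builds the (COALESCE, default) tuple that pipe2 expects
    return (COALESCE, default)

pipe2("2",
    null_safe(get_post),
    null_safe(lambda post: get_address(post.author)),
    coalesce("Not found"),
    str.upper,
    print,
)

# NOT FOUND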


Friday, 20 March 2026

Python Annotated

In Python there is this common mantra that type annotations (type hints) do not have any runtime effect. Well, that's mainly true, as those type hints are not used by the runtime to check whether your type assumptions/restrictions are correct (as the documentation says: "The Python runtime does not enforce function and variable type annotations."). But on the other hand, this type information exists at runtime (it's just that the runtime itself does not use it). Until recently you would use inspect.get_annotations to get that info, a dictionary that gets stored in the __annotations__ attribute (notice that annotations have been improved in Python 3.14, they're now evaluated lazily, and you should now use annotationlib.get_annotations). So that information is available at runtime, and your custom code can make use of it for whatever it sees fit.
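
A quick sketch of what that looks like (inspect.get_annotations is available since Python 3.10; on 3.14+ annotationlib.get_annotations behaves the same for this simple case):

import inspect

def greet(name: str, times: int = 1) -> str:
    return f"hello {name}" * times

print(inspect.get_annotations(greet))
# {'name': <class 'str'>, 'times': <class 'int'>, 'return': <class 'str'>}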

Additionally, the type hints syntax allows us to use any object as a type hint, not just a type. Indeed, Python's grammar does not restrict the content of annotation expressions, as PEP 3107 explicitly states: "Annotations can be any valid Python expression". Put another way: Python does allow arbitrary expressions (hence returning anything) in type annotations. This means that you can use the syntax for defining information (metadata) about these parameters, information that is then used by your custom code for something other than type-checking (for example, stating that a function expects a string following a certain pattern, let's say: create_user(msg: r"^[0-9a-f]{32}$")). That's very nice, but most likely you'll want to combine both the typing information and the extra metadata. With that in mind, the Annotated class (Annotated[T, x]) was introduced some versions ago. T is a type, and type checkers understand the Annotated class and just take care of the T part. The x part is for metadata, which can be any object, and which will be used by your custom code at runtime. Indeed, that x can be multiple values, not just one, I mean: Annotated[str, ValueRange(10, 20), Complexity("high")].

Apart from metadata that applies to the parameters we can have metadata that applies to the function itself or to a class ('cacheable', 'optimized', some sort of privacy mechanism, whatever). Normally we'll use a custom decorator that adds this information as an attribute to the function/class (__cached__, __private__). So all in all we have 2 mechanisms to provide metadata: Annotated for parameters, and decorators for classes/functions themselves.



from dataclasses import dataclass
from typing import Annotated
import annotationlib

# to add metadata to the class or function itself, just use specific decorators
# that add metadata to specific attributes, for example:
def non_critical(func):
    func._non_critical = True
    return func

# metadata for parameters
@dataclass
class ValueRange:
    lo: int
    hi: int

@non_critical
def create_post_2(
    title: Annotated[str, ValueRange(5, 20)],
    content: Annotated[str, ValueRange(5, 100)],
) -> dict:
    return {"title": title, "content": content}

annotations = annotationlib.get_annotations(create_post_2)
print(f"annotations: {annotations}")
# annotations: {'title': typing.Annotated[str, ValueRange(lo=5, hi=20)], 'content': typing.Annotated[str, ValueRange(lo=5, hi=100)], 'return': <class 'dict'>}


In other languages like Kotlin/Java we use annotations for both parameter metadata and function/class metadata. It's important to note that while in Python we can provide any object as metadata (both when using Annotated and when using a decorator and passing any expression as an argument), in Kotlin/Java annotation metadata is managed at compile time, so you are limited to compile-time constants. This means that in Python we have enormous power in what we can provide as metadata.

Kotlin and Java annotations cannot take arbitrary runtime objects as parameters. Their allowed values are strictly limited because annotation arguments must be compile‑time constants and the annotation instances themselves are created by the compiler, not at runtime.
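
To illustrate that, here is a small sketch (my own example) passing arbitrary runtime objects, a compiled regex and a lambda, as Annotated metadata; neither would be a legal annotation argument in Kotlin/Java:

import re
from typing import Annotated

# any runtime object works as metadata: here a compiled regex and a lambda
UserId = Annotated[str, re.compile(r"^[0-9a-f]{32}$"), lambda v: v.lower()]

def get_user(user_id: UserId) -> None:
    ...

print(UserId.__metadata__)
# (re.compile('^[0-9a-f]{32}$'), <function <lambda> at 0x...>)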

The Annotated class (well, what we actually have are instances of _AnnotatedAlias) has an __origin__ attribute (that points to the type hint) and a __metadata__ attribute for the metadata. However, if we only want to get the typing information we can directly use the typing.get_type_hints function.


from typing import get_type_hints

annotations = annotationlib.get_annotations(create_post_2)

print(annotations["title"].__origin__)  # to get the original type hint, which is str in this case
# <class 'str'>

metadata = {key: value.__metadata__
    for key, value in annotations.items() if hasattr(value, "__metadata__")
}
print(f"metadata: {metadata}")
# metadata: {'title': (ValueRange(lo=5, hi=20),), 'content': (ValueRange(lo=5, hi=100),)}

print(f"type hints: {get_type_hints(create_post_2)}")
# type hints: {'title': <class 'str'>, 'content': <class 'str'>, 'return': <class 'dict'>}
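
To close the loop, here is a hedged sketch (my own invention, not a standard mechanism) of how custom code could consume that metadata at runtime: a decorator that enforces the ValueRange bounds from the example above on the length of string arguments:

import functools
import inspect

def validate_ranges(func):
    # read the ValueRange metadata declared via Annotated and check
    # the length of each string argument at call time
    sig = inspect.signature(func)
    annotations = inspect.get_annotations(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            hint = annotations.get(name)
            for meta in getattr(hint, "__metadata__", ()):
                if isinstance(meta, ValueRange) and not (meta.lo <= len(value) <= meta.hi):
                    raise ValueError(f"{name} length must be within [{meta.lo}, {meta.hi}]")
        return func(*args, **kwargs)
    return wrapper

@validate_ranges
def create_post_3(title: Annotated[str, ValueRange(5, 20)]) -> dict:
    return {"title": title}

create_post_3("ok")  # ValueError: title length must be within [5, 20]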

Thursday, 12 March 2026

La Mort de Quentin Deranque, un meurtre raciste / a racist murder

On February 12th, 2026, Quentin Deranque, a 23-year-old French man, was beaten to death (receiving multiple kicks to the head while he lay on the floor) by a far-left, pro-Islam, anti-French militia, a terrorist organization called "la Jeune Garde." Why? Because he was a French patriot, because he loved his country and culture, and because he intended to defend a few French girls belonging to Nemesis, a female organization that tries to raise awareness about the dangers that mass immigration from Muslim countries represents for women's rights and safety.

Of course, most mainstream media (which in France range from far-left to left, except for the excellent CNews) hurried to talk about a "fight" rather than a lynching. When the video of the lynching was made public, they tried to minimize it by claiming he was a far-right militant and an ultra-conservative Catholic. When that failed to silence the scandal, they escalated their lies, calling him a fascist, a racist, and a xenophobe. The far-left political movements that have instigated this violence for decades even dared to call him a "Nazi."

No, he was not any of those things; as I said, he was just a French patriot, a French nationalist. One could also say he was a conservative Catholic. It seems he had turned to Catholicism because of the link he established between French identity and Catholicism. As someone who, even to this day, is not religious, I can clearly see and embrace that link, and I feel deeply grateful for having grown up in a place where Catholicism forms the basis of our moral system (regardless of whether most people consider themselves religious or not) rather than having grown up in a Muslim society.

Quentin was the child of a French father and a Peruvian mother, and he had mixed European and Amerindian features. Unless he was an illiterate idiot (he was a math student who loved philosophy and reading, so that does not seem to be the case), it is obvious that he could not be a racist and that his French nationalism was not based on a "legacy of blood" but on a "legacy of culture."

The fact that Quentin had partial extra-European origins leads to very interesting reflections. We saw his friends on TV; they were devastated by his assassination, yet they paid him tribute with enormous dignity and emotion. Many of these friends were clearly French nationalists, and for them, Quentin, with his 50% Peruvian ancestry, was just another French comrade. This is interesting for those who try to scare us with lies about French nationalism being inherently racist and xenophobic.

The even more interesting reflection is that I firmly believe Quentin's death was a racist crime. His assassins, the ten far-left scumbags who beat him to death, all of them "white" and mostly coming from white, French (anti-France) bourgeois families, most likely focused on him because of his extra-European features. There were two other nationalist guys lying on the floor being kicked, but not with such cruelty. Am I saying that these far-left terrorists, who are supposed to fight against fascism and racism, killed him because he was not 100% white? YES, that is exactly what I am saying.

The far-left movement in Europe, and particularly in France, has become a cult of racial obsession. They have fully traded traditional class struggle for the radical, segregationist dogmas of the decolonial and indigenist movements. For these oikophobes —people who despise their own civilization— anyone of non-European descent is viewed strictly as a perpetual victim of a 'white system.' In their eyes, such a person is 'required' to hate their 'oppressors' and reject every facet of French culture. This means that for these lunatics, someone like Quentin, who chose to assimilate and embrace his French heritage, is seen as the ultimate 'traitor' to their narrative. To these self-loathing bourgeois radicals, Quentin should have been a grievance-filled victim, weaponizing his skin tone against the state. Instead, he chose the dignity of belonging to a national community, a history, and a culture. He was a French nationalist by choice and by love, proving their 'systemic' lies wrong. It was that clarity of spirit, that refusal to be a pawn in their racial war, that the far-left found truly intolerable.

Finally, I'll put here a list with the names and information (just taken from some other blogs) about the assassins. First, three of the pieces of shit that directly kicked Quentin's head to his death:

Three antifa militants from Lyon were formally identified in the lynching of young Quentin.
1- Jacques-Élie Favrot (nicknamed "Jef").
Parliamentary assistant to Raphaël Arnault, M2 student at Sciences Po Saint-Étienne.
Militant in the Jeune Garde Lyon as well as in OSE CGT (the CGT student union in Saint-Étienne).
2- Adrian Besseyre
A very active militant of the Jeune Garde Lyon, born in 2001; he also did an internship at the Assemblée Nationale for Raphaël Arnault.
3- Lelio Le Besson
Member of the security detail of the Jeune Garde Lyon, and now an active militant of "Génération antifasciste", the movement that succeeded the JG.
He studied Risk Management and Pollution Treatment at the IG2E.

Then we have the scumbag that founded the far-left terror group, Raphaël Arnault. In this country in decay called France, a criminal identified as Fiche S (someone considered a serious threat to national security), already convicted of a violent, arbitrary aggression, can get a seat in the National Assembly. This ultra-violent, illiterate piece of shit should be considered one of the "intellectual" authors of this crime.

And then we have LFI, the anti-French, pro-Islam political sect that has funded and empowered this terrorist group and has been sowing hatred in the country for years. Particularly, the leader of the sect, Melenchon, and his main, most violent and ignorant subordinates: Thomas Portes, Bompart, Rima Hassan, Mathilde Panot and Bilongo. They have minimized and even justified the murder, vomited lie after lie about Quentin (plainly calling him a "nazi"), and even made fun of his execution. I hope one day all these traitors and scumbags will rot in hell.

Rest in peace, Quentin. Men die, but dignity is eternal.

Sunday, 22 February 2026

Python Class-Level Type Hints

Notice that in this post I'm talking about "standard" Python classes, not about dataclasses. I recently became aware of the possibility of using class-level type hints in your classes. The thing is that when reading the documentation I found it rather confusing. To make sense of it we have to be well aware of the difference between the intent that we express with those class hints and their runtime effects. So we have this example in the documentation:


class BasicStarship:
    captain: str = 'Picard'               # instance variable with default
    damage: int                           # instance variable without default
    stats: ClassVar[dict[str, int]] = {}  # class variable

The 'damage: int' part is the use of "class-level type hints" that I already knew about, and it was clear to me. We declare an attribute and its type, but we don't initialize it. Python takes this just as typing information; it has no runtime impact (other than being added to that class's __annotations__), and we are not creating an attribute in the class object.
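
A quick check with the BasicStarship class from the documentation example confirms it: only an annotation is recorded for damage, no attribute is created.

print('damage' in BasicStarship.__dict__)   # False: just typing info
print('captain' in BasicStarship.__dict__)  # True: the default created a class attribute
print(BasicStarship.__annotations__)
# {'captain': <class 'str'>, 'damage': <class 'int'>, 'stats': typing.ClassVar[dict[str, int]]}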

The "captain: str = 'Picard'" part is what I could not understand. For me it's like the normal way of adding a class attribute, only that additionally you indicate the type, so how can it be that the doc says it's an "instance variable with default"? Well, it's the type-checking meaning vs the runtime effect. And I am right that we get an attribute created at the class level (in the class __dict__), just see:


>>> class User:
...     continent: str = "Europe"
...     active = True
...

>>> User.__dict__
mappingproxy({'__module__': '__main__', '__firstlineno__': 1, '__annotations__': {'continent': <class 'str'>}, 'continent': 'Europe', 'active': True, '__static_attributes__': (), '__dict__': <attribute '__dict__' of 'User' objects>, '__weakref__': <attribute '__weakref__' of 'User' objects>, '__doc__': None})

>>> User.continent
'Europe'

>>> User.active
True

But for the type checker, what that typed declaration means is that instances of that class will have a captain (or continent, in my example) attribute. This could feel contradictory, but given how attribute lookup works it's perfectly fine. Initially the 'captain' attribute is created at the class level. If we read it through an instance (my_ship.captain), the lookup mechanism won't find it in the instance, but it will find it in the class, and return it. Then, when we write to it through an instance (not through the class), the write is done on the instance, so a 'captain' attribute is added to the instance. That's fine, indeed it's very nice: while the attribute is only being read, not written to, it's shared between instances, kept in the class (saving memory); then, as soon as you write to it, it's shadowed by the instance.


s = BasicStarship()
print(s.captain)       # "Picard" via class lookup
s.captain = "Xuan"     # creates an instance attribute
print(s.__dict__)      # {'captain': 'Xuan'}
print(BasicStarship.__dict__['captain'])  # 'Picard'

We can summarize it like this:

Type hints alone do not create attributes; they only declare intent.
If you want the attribute to exist on the class (and thus be visible via Foo.x), you must assign a default value.

By the way, this is not the first time I've seen this behaviour of reading values from a "parent object" until we write the value to the object itself, shadowing it. This is just how things work in JavaScript with the [[Prototype]] chain.

I'm not much of a fan of defining instance attributes at the class level. It's true that it makes very explicit that an attribute is part of the public contract of the class, but I think most of the time it's a bit of boilerplate. Type checkers and autocomplete work perfectly fine with the classical style of initializing in the __init__ method, and if an attribute is internal/private and should not be considered part of the public API we should just follow the convention of starting its name with '_'. So normally I would write the above code like this:



class AdvancedStarship:
    # stats = {} mypy will complain about this, because it is not a ClassVar
    stats: ClassVar[dict[str, int]] = {}  # class variable
    
    def __init__(self, damage: int, captain: str = 'Picard') -> None:
        self.captain = captain
        self.damage = damage



The case where these class-level type hints feel very useful to me is for Protocols, making it unnecessary to declare the "data part" of the protocol with properties (get/set descriptors), which is the approach I had followed so far.



from typing import Protocol

class Foo(Protocol):
    x: int  # part of the interface

class Bar:
    def __init__(self):
        self.x = 42  # matches Foo
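
And the structural check in action, assuming the two classes above:

def double(foo: Foo) -> int:
    return foo.x * 2

print(double(Bar()))  # 84: Bar satisfies Foo structurally, no inheritance needed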


It's also useful if we have attributes that won't be set in __init__ but in some later method call. This way we make them part of the class contract and initialize them to a default value (probably None), shared by all instances via the class attribute (as we saw with BasicStarship.captain); the attribute then gets added to each instance when it's set to a specific value.
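
A small sketch of that pattern (my own example):

class Job:
    # part of the class contract, but only set for real in run()
    result: str | None = None

    def run(self) -> None:
        self.result = "done"  # from now on, shadowed by an instance attribute

job = Job()
print(job.result)  # None, read from the class
job.run()
print(job.result)  # done, read from the instance
print(Job.result)  # None, the class default is untouched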

Sunday, 15 February 2026

Logical Assignment Operator and More

I've recently come across the Logical OR Assignment (||=) and the Nullish Coalescing Assignment (??=) operators in JavaScript. They are not a revolution, just a shortcut for the usage of the OR (||) operator and the nullish coalescing operator in assignment situations. We use "||=" for falsy values and "??=" for nullish (null, undefined) values. Let's see:


// for "falsy" values
> let name = "";
> name ||= "default";
'default'
> name ||= "default2";
'default'

// is equivalent to:
> name = ""
> name = name || "default";
'default'
> name = name || "default2";
'default'

// for strict null or undefined values:
> let name = null; // or name = undefined
> name ??= "default";
'default'
> name ??= "default2";
'default'

// is equivalent to:
> name = null;
> name = name ?? "default";
'default'
> name = name ?? "default2";
'default'


Python does not have a 'None coalescing' operator (so obviously it does not have a 'None coalescing assignment' operator either), so as an equivalent we have to use an if-else expression. We have the 'or' operator (that we can use with falsy values), but not an "or assignment" operator. So the equivalent code to the above JavaScript is quite a bit more verbose:


# for "falsy" values
> name = ""
> name = name or "default"
'default'
> name = name || "default2"
'default'

# for strict None values:
> name = null
> name = name if name is not None else "default"
'default'
> name = name if name is not None else "default2"
'default'

As the if-else pattern is quite verbose, we can write a simple coalesce function (I've just remembered that such a function, COALESCE, is standard SQL) to make the code more straightforward.


def coalesce(value, default_value):
    return value if value is not None else default_value

a = coalesce(a, "default value")
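
And since in SQL, COALESCE takes any number of arguments and returns the first non-null one, we could generalize it the same way:

def coalesce(*values):
    # return the first value that is not None (or None if all of them are)
    return next((value for value in values if value is not None), None)

print(coalesce(None, None, "default"))  # default
print(coalesce(0, "default"))           # 0: only None triggers the fallback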

As for other languages, Kotlin has the || operator and the ?: (elvis) null coalescing operator, but no shortcut form to use during assignment. Ruby has a logical or assignment operator (||=) that we can use with nil and false (the only falsy values in Ruby). It feels strange that Ruby does not have a null coalescing operator, so if we want to be strict and deal only with null (nil), we have to use Ruby's rich syntax differently:


# for null coalescing assignment
# like JavaScript: a = a ?? "default" 
# or Kotlin: a = a ?: "default"

a = "default" if a.nil?
# or
a = a.nil? ? "default" : a


Having reached this point, I think it's good to remember what are considered falsy values (those that, when evaluated in a boolean context, are treated as false) in different languages:

  • JavaScript: false, null, undefined, 0, NaN, ""
  • Python: False, None, 0, "", [], {}, set()
  • Ruby: nil, false
  • Kotlin: false. Kotlin does NOT perform truthy/falsy coercion, it's fully, strictly typed: trying to use a non-boolean value in a condition causes a compilation error.

As you can see, the main (and very important) difference between JavaScript and Python is that in Python empty containers are falsy.
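
A one-liner to see that difference from Python:

for value in (False, None, 0, "", [], {}, set()):
    assert not value  # all falsy in Python

print(bool([]))  # False, whereas in JavaScript [] is truthy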

Saturday, 7 February 2026

Python Attribute Lookup and Dunders

I already talked in the past about Python descriptors [1] and [2] (referencing also the complex attribute lookup process). I've recently realised how some commonly used attributes are managed with descriptors present in classes or metaclasses. First, I'll paste here the conclusions of an interesting chat with a GPT regarding the attribute lookup process:

1) Instance attribute lookup (obj.attr)

This is (conceptually) what object.__getattribute__(obj, name) does:

a) Check for a data descriptor on the class or its MRO
Search type(obj).__mro__ for name in each class’s __dict__.
If found and it’s a data descriptor (has __set__ or __delete__), return descriptor.__get__(obj, type(obj)).

b) Check the instance’s own dictionary
If obj.__dict__ exists and contains name, return obj.__dict__[name].
Note: If the class defines __slots__ without __dict__, this step may not exist.

c) Check for a non-data descriptor or other attribute on the class/MRO
Search type(obj).__mro__ for name.
If found and it’s a non-data descriptor (has __get__ only), return descriptor.__get__(obj, type(obj)).
Otherwise, return the found value as-is.

d) Fallback: __getattr__
If nothing above produced a value, and type(obj) defines __getattr__(self, name), call it and return its result.

e) Otherwise
Raise AttributeError.
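
A small sketch (my own example) showing the most surprising part, step (a): a data descriptor on the class wins even over the instance's own __dict__:

class Fixed:
    # a data descriptor: it defines both __get__ and __set__
    def __get__(self, obj, objtype=None):
        return "from descriptor"

    def __set__(self, obj, value):
        raise AttributeError("read-only")

class C:
    x = Fixed()

c = C()
c.__dict__["x"] = "from instance"  # sneak a value into the instance dict
print(c.x)  # from descriptor: step (a) runs before the instance __dict__ check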

2) Class attribute lookup (C.attr)

Conceptually, type.__getattribute__(C, name) does this:

a) Metaclass MRO — data descriptors first
Search type(C).__mro__. If name is found and it’s a data descriptor (__set__ or __delete__ present), return descriptor.__get__(None, C).

b) Class MRO (C and its bases) — regular attributes & descriptors
Search C.__mro__ (starting with C, then bases):
If found and it’s a descriptor (__get__), return descriptor.__get__(None, C) (note obj=None).
Otherwise, return the raw value.

c) Metaclass MRO — non-data descriptors and other attributes
If found on the metaclass MRO and it's a descriptor, return descriptor.__get__(C, type(C)) (here, the "instance" is the class C). Otherwise return the value.

d) Fallback
If not found and the metaclass defines __getattr__(cls, name), call it.
Else raise AttributeError.

Let's see now some examples of attributes that are indeed descriptors:

__name__ of a class (Person.__name__). One could think that it's just an attribute directly in the class object, but if it were that way, I could access it via an instance of the class (person1.__name__), which is not the case. So indeed __name__ is a descriptor in the metaclass (and exactly the same goes for __bases__ or __doc__):


>>> class Person:
...     pass
...     
>>> Person().__name__
Traceback (most recent call last):
    Person().__name__
AttributeError: 'Person' object has no attribute '__name__'

>>> Person.__name__
'Person'

>>> Person.__dict__["__name__"]
Traceback (most recent call last):
    Person.__dict__["__name__"]
    ~~~~~~~~~~~~~~~^^^^^^^^^^^^
KeyError: '__name__'

>>> type(Person).__dict__["__name__"]
<attribute '__name__' of 'type' objects>
>>> type(type(Person).__dict__["__name__"])
<class 'getset_descriptor'>

>>> type(Person).__dict__["__bases__"]
<attribute '__bases__' of 'type' objects>
>>> type(type(Person).__dict__["__bases__"])
<class 'getset_descriptor'>

>>> type(type(Person).__dict__["__doc__"])
<class 'getset_descriptor'>


__class__ of an instance or __class__ of a class. This one does not seem to be based on descriptors at first sight (my discussion with a GPT was a bit confusing here), and it seems to be managed specially by the lookup algorithm. Note, though, that the checks below only look into the instance and the class __dict__: __class__ actually lives as a getset_descriptor in object.__dict__, further up the MRO.


>>> p1 = Person()
>>> p1.__class__
<class '__main__.Person'>

>>> type.__class__
<class 'type'>

>>> type(p1.__dict__["__class__"])
Traceback (most recent call last):
    type(p1.__dict__["__class__"])
         ~~~~~~~~~~~^^^^^^^^^^^^^
KeyError: '__class__'

>>> type(Person.__dict__["__class__"])
Traceback (most recent call last):
    type(Person.__dict__["__class__"])
         ~~~~~~~~~~~~~~~^^^^^^^^^^^^^
KeyError: '__class__'


Dunder attributes. It's interesting to note that there are 2 categories of __dunder__ attributes (those that start and end with "__").
- On one hand we have those like the ones we've just seen, the Special Attributes (Metadata), used to store metadata: __name__, __class__, __bases__, __mro__, __dict__, __module__, __doc__, __annotations__.
- And on the other hand we have the Special Methods (Behavioral Hooks), used to implement Python's syntactic sugar:

__call__: ob(), Invocation
__getitem__: ob[key]
__setitem__: ob[key] = value 
__getattr__: Fallback for missing attributes
__getattribute__: Intercepts all attribute access
__iter__, __next__: Iteration
__str__, __repr__: String representation
__eq__, __lt__, etc: Comparisons
__enter__, __exit__: Context managers
__add__, __mul__, etc: Arithmetic operations

Notice that if you access a Behavioral Hook "on your own" (I mean, you explicitly do obj.__call__() or obj.__iter__()), the normal lookup mechanism applies (using the object and its class). However, when used in the intended way (when you do obj(), or iter(obj)), the lookup is done only in the class of the object (and if the object is a class, in its metaclass), not in the object itself.
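
A quick sketch demonstrating this with __call__:

class Greeter:
    pass

g = Greeter()
g.__call__ = lambda: "hi"  # set on the instance only

print(g.__call__())  # hi: explicit access uses the normal lookup and finds it
try:
    g()  # implicit invocation only looks at type(g), so it fails
except TypeError as e:
    print(e)  # 'Greeter' object is not callable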