Sunday 27 December 2020

Constructors and Object Literals

In C# we can combine a constructor expecting parameters with an object initializer (the C# equivalent of an object literal) without a problem, like this:


class Person
{
    public string Name {get; set;}

    public int Age {get; set;}

    public string Country {get; set;}

    public Person(string name, int age)
    {
        this.Name = name;
        this.Age = age;
    }
}


var p3 = new Person("Francois", 2)
            {
                Country = "France"
            };

However, the above constructor+initializer combination is not possible in JavaScript. Fortunately, it's really simple to do something similar with Object.assign:


class Person{
    constructor(name, age){
        this.name = name;
        this.age = age;
    }

    doGreet(greeting){
        return `${greeting}, I'm ${this.name} and I am ${this.age}`;
    }
}

//Obviously this is not valid JavaScript syntax
//let p1 = new Person("Francois", 2){
//	country: "France"
//};


let p1 = Object.assign(new Person("Francois", 2), {country: "France"});

console.log(JSON.stringify(p1, null, "\t"));

//this is not OK, we miss the Person [[prototype]] (hence, the methods) in the resulting object.
//let p0 = {...new Person("Francois", 2), ...{country: "France"}};

Notice that in this case we have to use Object.assign rather than object spread syntax (...), as with spread we would be missing the Person [[Prototype]] (and hence its methods) in the resulting object.
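A quick check of the difference, reusing the Person class above (just a minimal sketch):

let pAssign = Object.assign(new Person("Francois", 2), {country: "France"});
let pSpread = {...new Person("Francois", 2), ...{country: "France"}};

console.log(Object.getPrototypeOf(pAssign) === Person.prototype); //true, still a Person
console.log(pAssign.doGreet("Hi")); //Hi, I'm Francois and I am 2

console.log(Object.getPrototypeOf(pSpread) === Person.prototype); //false, it's a plain object
console.log(typeof pSpread.doGreet); //undefined, the method is gone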

Automatic Properties

Automatic Properties have evolved a bit over the years as new C# versions were released, so I'll summarize their different features here.

The most common, basic case is this:


public string Name {get; set;}

Notice that the C# compiler creates a private backing field for the property: <Name>k__BackingField.

We can also mark the setter as private:


public int Age {get; private set;}

In C# 6 two features were added. The first one is Automatic Property Initializers:


public string Country {get; set;} = "Unknown";

And Getter Only Autoproperties:


    //immutable, can be set also from constructor
    public string Dni {get;} = "Unknown";
    public string ID {get;}

    //use examples:
    public Person(string dni)
    {
        this.ID = "unknownID";
        this.Dni = dni;
    }

In the above case, the compiler no longer generates a setter. The private backing field is set directly, either by the inline initializer or from the class constructor.

Finally, C# 9, paying further attention to the immutability trend, has added init-only setters.


    //immutable, can be set also from constructor and from object literal
    public string MotherLanguage {get; init;} = "Unknown";

    //use examples:
    public Person(){}
    public Person(string motherLanguage)
    {
        this.MotherLanguage = motherLanguage;
    }

    var p2 = new Person
    {
        MotherLanguage = "Francais"
    };

This "init" does not involve any addition at the bytecode level, it's just translated into a normal setter. The interesting thing with it is that it can be set not only inline or in the constructor, but also from an Object literal. Contrary to the previous, "Getter only autoproperties" where I said that if set from the constructor it's done by directly setting the private backing field, here, both from the constructor or the Object literal, the compiler emits a call to the setter.

Regarding my statements about which getters and setters get generated, the backing fields and so on... I've been checking the generated IL with AvaloniaILSpy, an excellent multiplatform frontend for ILSpy. I've been using it on my Ubuntu laptop and it runs nicely.

Saturday 19 December 2020

Extract Parameter Names in JavaScript

After publishing my previous post I realised there was a nice way to improve my "joinCollections" function, getting rid of the 2 "alias" parameters. First let's see the code again:


function joinCollections(collection1, collection2, idSelectorFn1, idSelectorFn2, alias1, alias2){
    let result = [];
    collection1.forEach(it1 => {
        let id1 = idSelectorFn1(it1);
        collection2.forEach(it2 =>{
            if (id1 === idSelectorFn2(it2))
                result.push({
                    [alias1]: it1,
                    [alias2]: it2
                });
        });
    });
    return result;
}

let persons = joinCollections(employees, 
    students, 
    employee => `${employee.firstName}-${employee.lastName}`, 
    student => `${student.firstName}-${student.lastName}`,
    "employee",
    "student"
)

As you can see we are using the same value as the "alias" and as the parameter name in the selector function. Of course we could have chosen different names (employee/emp...), but there's no reason to: the alias and the parameter refer to the same thing, whatever it is we have stored in the collection.
In JavaScript we can get access to the parameter names of a function just by converting the function to a string and extracting the names from its signature. Hence, we can easily extract the alias from the selector function, like this (code specific to an arrow function):
selectorFn.toString().split("=>")[0].trim();


function joinCollections2(collection1, collection2, idSelectorFn1, idSelectorFn2){
    let getAlias = selectorFn => selectorFn.toString().split("=>")[0].trim();
    let [alias1, alias2] = [idSelectorFn1, idSelectorFn2].map(getAlias);

    let result = [];
    collection1.forEach(it1 => {
        let id1 = idSelectorFn1(it1);
        collection2.forEach(it2 =>{
            if (id1 === idSelectorFn2(it2))
                result.push({
                    [alias1]: it1,
                    [alias2]: it2
                });
        });
    });
    return result;
}

let persons = joinCollections2(employees, 
    students, 
    employee => `${employee.firstName}-${employee.lastName}`, 
    student => `${student.firstName}-${student.lastName}`
)

The call to joinCollections2 is now much cleaner.
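Note that this string-based extraction assumes the selector is an arrow function with a single, bare parameter; with parentheses or a default value the raw split would keep extra characters, and with destructuring it would not produce a usable alias at all. A slightly more defensive getAlias (still just a sketch, and still assuming arrow functions) could be:

//copes with "employee =>", "(employee) =>" and "(employee = {}) =>"
let getAlias = selectorFn => selectorFn.toString()
    .split("=>")[0]        //keep only the parameter list
    .replace(/[()]/g, "")  //drop the optional parentheses
    .split("=")[0]         //drop a default value, if any
    .trim();

console.log(getAlias(employee => employee.firstName));   //employee
console.log(getAlias((student) => student.firstName));   //student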

This post has made me think again about the difference between arguments and parameters. It's perfectly explained here:

A parameter is the variable which is part of the method’s signature (method declaration). An argument is an expression used when calling the method

If you check the comments there's one question that also came to my mind: the array-like arguments object that we have inside JavaScript functions, shouldn't it have been called parameters?

No, it should not. That object gives us access to what the function has been passed in that invocation, not to the list of parameters in its signature. The signature can have x parameters while we pass in y values; arguments gives us access to those y values.
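A quick illustration of that distinction (a minimal sketch):

function greet(greeting, name){        //2 parameters in the signature
    console.log(greet.length);         //2, the number of declared parameters
    console.log(arguments.length);     //3, the number of arguments passed in this invocation
    console.log(arguments[2]);         //"extra", received even without a matching parameter
}

greet("Bonjour", "Francois", "extra");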

Sunday 13 December 2020

Join JavaScript Arrays

The other day I needed to join values from 2 JavaScript arrays, much as if I were joining rows from 2 database tables. In C# we can join collections by means of LINQ to Objects and its Join method. In JavaScript I'm not aware of Lodash providing something similar, but it's pretty simple to implement, so that's what I've done.

The most important decision is what to return in the resulting array after joining the 2 collections, so that it's easy to apply .map (select) and .filter (where) on it. The best option that came to mind was putting each joined pair in an object with 2 keys, with each key provided as a sort of "alias" in the call to the join. OK, I'd better show the code:


function joinCollections(collection1, collection2, idSelectorFn1, idSelectorFn2, alias1, alias2){
    let result = [];
    collection1.forEach(it1 => {
        let id1 = idSelectorFn1(it1);
        collection2.forEach(it2 =>{
            if (id1 === idSelectorFn2(it2))
                result.push({
                    [alias1]: it1,
                    [alias2]: it2
                });
        });
    });
    return result;
}

With that, we can easily access the 2 objects being joined for each "row" in order to do additional filtering or for mapping into a single resulting object.

Given the following inputs:


let employees = [
    { 
        firstName: "Terry", 
        lastName: "Adams", 
        role: "FrontEnd Developer"
    },
    { 
        firstName:"Charlotte", 
        lastName:"Weiss", 
        role: "Systems Administrator" 
    },
    { 
        firstName:"Magnus", 
        lastName:"Hedland", 
        role: "Psycologist"
    }, 
    { 
        firstName:"Vernette", 
        lastName:"Price", 
        role: "Shop Assistant"
    }
];

let students = [
    { 
        firstName:"Vernette", 
        lastName:"Price", 
        studies: "Philosophy"
    },
    { 
        firstName:"Terry", 
        lastName:"Earls", 
        studies: "Computer Engineering"
     },
     { 
         firstName:"Terry", 
         lastName:"Adams", 
         studies: "Computer Engineering"
    } 
];

We can obtain a list of Person objects with their full name, job and studies.


let persons = joinCollections(employees, 
    students, 
    employee => `${employee.firstName}-${employee.lastName}`, 
    student => `${student.firstName}-${student.lastName}`,
    "employee",
    "student"
).map(ob => {
    return {
        fullName: `${ob.employee.firstName} ${ob.employee.lastName}`,
        job: ob.employee.role,
        studies: ob.student.studies
    };
});

console.log("-- persons:\n " + JSON.stringify(persons, null, "\t"));

let persons2 = joinCollections(employees, 
    students, 
    employee => `${employee.firstName}-${employee.lastName}`, 
    student => `${student.firstName}-${student.lastName}`,
    "employee",
    "student"
)
.filter(ob => ob.employee.role === "FrontEnd Developer")
.map(ob => {
    return {
        fullName: `${ob.employee.firstName} ${ob.employee.lastName}`,
        job: ob.employee.role,
        studies: ob.student.studies
    };
});

Unlike in SQL, if we want to join more than 2 collections we have to join 2 of them, map the resulting pairs into a plain object, and then do the next join. For example, given a third collection with salary data, to join it to the previous employees and students data we first do the join and map from the previous step, and then do an additional join, like this:


let salaries = [
    {
        job: "FrontEnd Developer",
        amount: 40000 
    },
    {
        job: "Shop Assistant",
        amount: 20000 
    }
];

let persons = joinCollections(employees, 
    students, 
    employee => `${employee.firstName}-${employee.lastName}`, 
    student => `${student.firstName}-${student.lastName}`,
    "employee",
    "student"
).map(ob => {
    return {
        fullName: `${ob.employee.firstName} ${ob.employee.lastName}`,
        job: ob.employee.role,
        studies: ob.student.studies
    };
});

let personsWithSalaries = joinCollections(
    persons,
    salaries,
    person => person.job, 
    salary => salary.job,
    "person",
    "salary"
).map(ob => {
    return {...ob.person, ...ob.salary};
});

As usual, I've uploaded it into a gist.
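By the way, the nested forEach loops make each join O(n·m). That's fine for these small samples, but for bigger collections an index-based variant (same signature, just building a lookup Map over the second collection first) could be sketched like this:

function joinCollectionsIndexed(collection1, collection2, idSelectorFn1, idSelectorFn2, alias1, alias2){
    //index the second collection by its join key, so each item of the first collection
    //only needs a Map lookup instead of a full scan
    let index = new Map();
    collection2.forEach(it2 => {
        let id2 = idSelectorFn2(it2);
        let matches = index.get(id2) || [];
        matches.push(it2);
        index.set(id2, matches);
    });

    let result = [];
    collection1.forEach(it1 => {
        (index.get(idSelectorFn1(it1)) || []).forEach(it2 => {
            result.push({
                [alias1]: it1,
                [alias2]: it2
            });
        });
    });
    return result;
}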

Monday 7 December 2020

Linux Memory Values

Lately I've been trying to understand memory consumption information in Linux, and I'll share my findings here.

My personal laptop has 8 GB of RAM (for sure, I can see it in the BIOS), but the integrated graphics card is taking 2.1 GB (and my BIOS does not allow me to configure and reduce it!!!). The different programs that I've used in Linux to view my memory status only show the physical memory available to the system after discounting those reserved 2.1 GB, hence 5.9 GB, and I have not found a way in Linux to see that the real physical amount of memory in my system is 8 GB.

I have to admit that things are a bit clearer in Windows. Task Manager shows 6 GB in the main view, but at least it also has a "Hardware reserved: 2.1 GB" reading. Furthermore, the Windows "About" page shows: Installed RAM 8.00 GB (5.92 GB usable).

The three main programs that I use to check my memory are the Ubuntu System Monitor and the top and free -m commands.

The information in the System Monitor is pretty straightforward: a 2-colour representation (as when checking hard disk usage) and a "used" percentage. There is also the "Cache" value... which can seem rather more mysterious; we'll understand what it represents in the next paragraphs.

The information displayed by top can be particularly confusing if we just read the first (total) and second (free) values. We should not care much about that "free"; we should read on to the end to find the "avail Mem" value, which is what really matters in terms of how much memory we still have available (many of us would call it "free", but "free" has a different meaning in memory terms).

If we run free -m we really get the full picture, but we need some extra knowledge to interpret it.

              
		total        used        free      shared  buff/cache   available
Mem:           5920        1908        1704         112        2307        3610
Swap:          8092           0        8092

used is what we expect, the memory currently in use by our system. available is, as I've said for top, what we really care about in terms of how much memory our programs can still use without having to resort to swapping. And then we mainly have "free" and "buff/cache" (and "shared", but this is a small value; it represents the space used by tmpfs).

The main point to understand buff/cache and free is to be aware that Linux tries to take advantage of our RAM as much as possible (the same goes for Windows, but as Task Manager does not show any information about it, there's less room for confusion). For that, it keeps in memory pages read from disk, so that if we need them again we can take them from memory rather than reading them again from the slow disk. This is called the page cache (or disk cache). This is roughly what we see in free -m as buff/cache, and "free" is basically: total - used - buff/cache (with the numbers above: 5920 - 1908 - 2307 ≈ 1704; the "shared" tmpfs space is already accounted for inside buff/cache).

In computing, a page cache, sometimes also called disk cache,[2] is a transparent cache for the pages originating from a secondary storage device such as a hard disk drive (HDD) or a solid-state drive (SSD). The operating system keeps a page cache in otherwise unused portions of the main memory (RAM), resulting in quicker access to the contents of cached pages and overall performance improvements. A page cache is implemented in kernels with the paging memory management, and is mostly transparent to applications. Usually, all physical memory not directly allocated to applications is used by the operating system for the page cache. Since the memory would otherwise be idle and is easily reclaimed when applications request it, there is generally no associated performance penalty and the operating system might even report such memory as "free" or "available".

Another interesting point is the swap used value. In my case it's 0, which looks quite natural as I have plenty of available RAM, but sometimes values could be slightly more confusing, as explained here:

Sometimes the system will page something out (for whatever reason). If later that page is moved back to memory for a read operation, the copy in swap space is not deleted. If the same page is later paged out again, without being changed, it can do so without writing to the disk.

Sunday 29 November 2020

Composing Art Actions

I took a pic in my hometown (Xixón) last week that I quite like. Just one more "arty" failed attempt... but well, as I've said, I like it. It's a sort of composition of 3 artistic actions. The first action was someone painting that forest and sky on an electric box (probably this is part of a city council sponsored program). The second action was some teenager (I guess) adding that Anarchy "A" (I hate anarchism... but that (A) emerging as a sort of "rising sun" is a nice intervention). The third artistic action has been me, spotting that painting, and taking the pic with a long exposure while a car passed by, capturing that blurring light trail (I've done this millions of times and I still find it surprising that something so easy can sometimes look so good).

And that's all, quite a short post...

Saturday 21 November 2020

Toulouse, Bâtiments Laids

I really like Toulouse. It was my "adopted city" for years and it's now my second city. I find it really pretty, with a lot of charm. During my first weeks there you could say I fell in love with the city. Beyond the architecture and the heritage, Toulouse is a dynamic, young city with a strong economy (well, covid is catastrophic for the aeronautics industry and the IT companies that depend on it, but I hope we'll pull through).

After those flattering words, I have to admit that Toulouse cannot be compared to Lyon, or even to Bordeaux. Lyon and Bordeaux play in another league, that's indisputable. It's not only the old architecture, but also the modern one. In Lyon and Bordeaux you find plenty of remarkable, or simply pleasant, modern buildings (I'm thinking of the residential buildings in EurAtlantique or Bassin à Filtres, or in Confluences). In Toulouse it's another story: apart from the médiathèque and the new Business School (and they are not that impressive), I cannot think of a single interesting building built in the last 70 years... I think that nowadays Toulouse has the worst architects in France (Taillandier...).

Unfortunately, Toulouse also had some very bad architects over the last century; they left eyesores in the city that are remarkably ugly. I've said many times that in general I like towers and skyscrapers when they meet certain aesthetic standards, but when that's not the case a tower can become extremely ugly (at least, for most people). Below I'll show some examples of much-hated buildings that can be found in Toulouse. I also have to admit that over the years I've got used to them, and I've even come to appreciate most of them.

Cité Roguet, Saint Cyprien, 1960s. Of course it's ugly, but I find it has a certain charm.

Barre Cristal, Saint Cyprien, 1960s (hated by many, but not by me; in fact, I could say I love it).

The tower on rue de Maroc. To me it's disgusting because of its current state, but with a good facade renovation it could work very well.

Place Roquelaine. Yeah, it's really ugly, I can't find anything positive about it...

I started writing this post weeks ago, and by chance I see that 20 Minutes published an article about the Cité Roguet and the Barre Cristal 2 days ago.

This was my first post in French! Sorry for all the mistakes.

Saturday 7 November 2020

Bruxellisation

I've already explained in previous posts that I like skyscrapers (if they follow some basic aesthetic rules), but I'm not a fan of large clusters of them, and much prefer seeing them scattered over the urban landscape at very specific points, as signals and reference points. That said, I found this short video interesting, though I don't quite agree with the idea it seems to convey (probably the video was made by someone from the USA), that the more skyscrapers a city has the more developed and appealing it is.

One interesting concept mentioned in the video, and that was unknown to me by that name, is that of Bruxellisation/Brusselisation:

"The indiscriminate and careless introduction of modern high-rise buildings into gentrified neighbourhoods", which has become a byword for "haphazard urban development and redevelopment".

I guess my main hometown, Xixón, could be seen as a good example of Bruxellisation :-) Not that we have many historical buildings, but in the 60's and 70's many buildings of 8 to 12 stories sprang up like mushrooms, wall to wall with buildings from the early century... Over time I have learnt to enjoy these contrasts and find some charm in them. Additionally, I have to say that many of the 15-story towers that were built in some parts of my town look pretty nice to me now (in particular when compared to the predominantly ugly, cheap towers built in Southern French cities). So while I understand that most people dislike "Bruxellisation", for me it's not a particularly denigrating term. Indeed, I like Brussels a real lot, and not just because of the gorgeous Grand Place and some other nice central areas, but precisely because of these contrasts between nice, old Flemish buildings and the grey, high-rise buildings of the 60's and 70's.

From the same article I also learnt another term, Facadism, defining an important and controversial practice in architecture:

The architectural and construction practice where the facade of a building is designed or constructed separately from the rest of a building, or when only the facade of a building is preserved with new buildings erected behind or around it

The part that I've highlighted is the one that mainly concerns me in this practice, and for the most part I find it a good option in many circumstances. I guess it's normal that I like it, as it can bring about contrasts similar to the ones I've just said can look pretty charming to me.

Monday 2 November 2020

La Colomina 36

Having been a fierce supporter of the Asturian language for many years, I have to admit that I have read very few literature books in Asturian (well, as in any other language, to be honest). That said, there are 3 books that I've really enjoyed: "Carretera ensin barru" more than 10 years ago, "Lluvia d'agostu" 2 years ago, and now "La colomina 36".

La colomina 36, by Nicolás Bardio, is an amazing (if too short) piece of fiction, "un drôle de livre". It's a uchronia in which Asturies in 1970 is one more (and far away) republic of the Soviet Union! The story follows Fabian, an Asturian KGB member in charge of watching the other dwellers of his 8-story block in Mieres (block number 36), hoping to find an enemy of the state among them. It reminds me mainly of the Stasi in East Germany. The story is interesting, maybe slightly slow, but with a really superb ending; the really amazing, exciting, delirious element of the book, though, is the background, that Soviet Socialist Republic of Asturies (RSSA)!

Unfortunately the book does not explain in much detail how Asturies got to that status, but some hints are provided as we read. I would have preferred an intro of several pages explaining that alternate history, but indeed that would be material enough for a different (and longer) book, one that I really hope the author has in mind. Indeed, he previously created a role-playing game, "Depués d'Ochobre/After October", that describes that universe. If you can read Asturian, please visit that url, it's crazy!
I'll summarize here how Asturies seems to have managed to turn into a Soviet Republic, and some of the details provided about everyday life in that Republic:

The 1934 revolution succeeded and, not having had an equivalent in the rest of Spain, Asturies ended up becoming an independent country. A country that would enter WWII in 1942 when Nazi Germany tried to invade it. There are references in the book to the Heroes of both the Revolution and the War. After the war, Asturies decides to join the Soviet Union, with Stalin coming to Asturies to sign the adhesion pact. In a flash of inventiveness the author says that Moscovitas Rialto (a delicious almond paste produced in Uvieu) has its roots in that event!

As interesting as how Asturies got into that situation is what life is like for Asturians. We learn that the Asturian population speaks both Asturian and Russian (in their own way :-), with their home libraries made up of Asturian books and Russian classics. There's a Russian stewardess living in the block who normally flies the Asturies-Praga and Asturies-Moscú routes. One family in the building has one child studying in Moscú and another one in Leningrad. Mieres is the biggest city in Asturies, and the 2 main football teams are the Caudal Lokomotiv de Mieres and the CSKA Xixón :-D Obviously, the Xixón harbour is an important Soviet naval base. Also, there are Spanish refugees that found shelter in Asturies after the Spanish Civil War... There are some more bits of life in Soviet Asturies throughout the book, but these are the main ones that come to my mind now.

If all the above is not enough, the book cover should for sure awaken your interest.

And what to say about the flag and coat of arms taken from the Role game!

Saturday 31 October 2020

Top and Bottom Type

When reading this post by Axel Rauschmayer about any being a "top type" in TypeScript, I found particularly interesting one of the comments, which argues that it's both a top and a bottom type:

1. any is like a top type in that you can assign values of all other types to it: let a: any = x works whatever type x has.
2. any is like a bottom type in that you can assign values with an any type to all other types: let a: any; let x: T = a is allowed for all types T (except never).

This made me revisit the behaviour of the dynamic keyword in C#. It's clear that it behaves as a top type, but I was a bit unsure about how the compiler dealt with it and to what extent it could also be considered a bottom type. Some code to the rescue:



    interface ITalkative
    {
        string SayHi();
    }
    
    class Person: ITalkative
    {
        public string SayHi()
        {
            return "Bonjour";
        }
    }

    class Printer
    {
        public static void Print(ITalkative t)
        {
            Console.WriteLine("[" + t.SayHi() + "]");
        }
    }

    class Cat
    {
        public string SayHi()
        {
            return "Miau";
        }
    }

 
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Started");
            dynamic ob = new Cat();

            try{
                Printer.Print(ob);
            }
            catch(Exception ex){
                Console.WriteLine("1. " + ex.Message);
                //The best overloaded method match for 'Dynamic.Printer.Print(Dynamic.ITalkative)' has some invalid arguments
            }
            
            try{
                Person p = ob;
            }
            catch(Exception ex){
                Console.WriteLine("2. " + ex.Message);
                //Cannot implicitly convert type 'Dynamic.Cat' to 'Dynamic.Person'
            }

            try{
                Printer.Print(ob as Person); //casting returns null and then Print fails
            }
            catch(Exception ex){
                Console.WriteLine("3. " + ex.Message);
                //Object reference not set to an instance of an object.
            }

            try{
                Person p1 = ob as Person;
            }
            catch(Exception ex){
                Console.WriteLine("4. " + ex.Message);
            }

  
        }
    }

I'm particularly interested in the first case. I was not sure if the compiler would allow that, but it does. The compiler lets me invoke a method that expects an ITalkative with just a "dynamic" value, so yes, we can say that dynamic is a bottom type. Notice however that dynamic in C# has nothing to do with duck typing (or TypeScript's structural typing), so it's obvious that this will fail at runtime. The Printer.Print invocation fails because the runtime checks (it's different from a cast error) whether the argument implements ITalkative; as that is not the case, we get a The best overloaded method match for 'Dynamic.Printer.Print(Dynamic.ITalkative)' has some invalid arguments exception.

A decade ago (phew...) I had posted about duck typing in C# and dynamic, but I was probably thinking that we would already get a compiler error in this situation, rather than a runtime one.

Tuesday 13 October 2020

Sin Fin

Sin Fin is a beautiful Spanish film. It combines drama, romance and science fiction. The science fiction element (time travelling from 2015 to 1993) is there just to allow the story to happen; the romance leads to the drama, which is the prevailing element.

A story about how the pursuit of a dream brings about the destruction of the person who has been devoted to you for 20 years, about devoting the rest of your broken life so that the materialization of the dream can erase the nightmare, about correcting in 1 day the errors of 2 decades...

I have a particular appreciation for any art piece (whatever the form) that portrays the passage of time, distant moments of one life, that reminds us of our temporality, that confronts us with how what we do today determines our future, and how maybe one day in that future we'll look at that distant "today" with sorrow and tears or with joy and reverence, with a desire to erase or repeat that day. This film is about making mistakes and being given the chance to travel in time to correct those mistakes, to save a life, to save two lives.

The story starts in 1993, with a boy and a girl of around 20, at that moment in your existence where you have that sacred power to dream and shape your future, where a single day can determine the joys and pains of the ensuing decades. It's such a power and such a responsibility that it gives me vertigo now. I was not aware of that power when I had it; it was not until a couple of years ago that I started to look back at that time in my life with a sort of nostalgia, but a healthy one, with a sort of gratitude for still having a present now, and for not having been so fully aware of how each decision at that time was drawing a path from which there would be no way back...

Twenty years later the energetic, dreamy girl has turned into a depressive adult with no other prospect than putting an end to everything. The genius guy obsessed with creating something "big" is now an unhappy adult with that same obsession, which so far has done nothing but destroy their lives... I'll stop here; it should have been enough to wake up your interest, or to bore you even more than usual.

It's a slightly rainy autumn afternoon in Toulouse, a vacation day in the first year of the pandemic. I'm listening to Yarostan and Les deux minutes de la Haine, 2 bands that I already knew, but that did not fully get my attention until I listened to their stuff in the best record of this year?... and I'm giving these details because I think they play well with the melancholy in this post, and if some day in the future I read back these paragraphs I'll enjoy being reminded of these details, details that make up a life...

Saturday 3 October 2020

Defend Armenia

I'm not religious, but as a proud European I clearly adhere to Eric Zemmour's mantra about the roots of European civilization: "Europe is the Greek Culture, the Roman Culture and Christianism". Yes, we have the Celts, the Germanic tribes... but 90% of what we are as Europeans come from those 3 cultural and moral frameworks.

Armenia, (by the way, the first Christian nation) is part of our Western civilization, as obviously, Greece is. Last month Turkey (the reverese reflection of everything European Civilization stands for, our eternal enemy) was threatening Greece with war as Turkey intends to steal part of Greece's maritime space to leverage the gas fields that are supposed to be there. Most of Europe said nothing, it was only France who clearly stood by our European Brothers. On the other side, Germany, terrified by the enormous amount of Turkish invaders that have settled on the country in the last 50 years and how they (particularly the neo-fascist, paramilitary organization grey wolves) could react setting the country in flames... did as much as possible to avoid any sort of common European action that could bother the neo-sultan Erdogan... 

This last week, Azerbaijan (this is a different country, but Azerbaijanis are a Turkic ethnic group, so they are not much different from the Turkish (scum) people, by the way, obviously both of them are Muslims) has launched an attack on the Armenian people living in the Nagorno Karabakh region. Almost immediatelly Erdogan expressed his support for his ethnic, muslim brothers and made it clear that he would help them by any means.

Turks have a tradition of killing (and torturing, and enslaving, and raping and whatever...) Christians, particularly Armenians and Greeks (but also Serbians, Romanians... well any sort of Europeans... haven't you heard about the Ottoman pirates?) so this new war against Armenia (as the low level conquest war that ultranationalist-islamist Turkish criminals fight everyday in the unfortunate European cities where they have settled). In the last decades the Turkish government had favored to kill and torture Kurds and now they aim to leverage the knowledge that they gained while massacring the Kurds in Northern Syria to slaughter the Armenians. At least, they are sending against the Armenians some of the same Jihadist mercenaries that they paid for to try to annihilate the Kurds. If you think I'm a delirious European nationalist... well, you can see that it's not just me, it's Emmanuel Macron also who says it.


 

In the last decades, cowardice, self-hate and the absolute ignorance of almost any lesson drawn from our history... have fed the decadence of European civilization. Supporting our Armenian and Greek brothers (by any means necessary) against our common enemy should be a first step in regaining our dignity as a people and reconquering our lands.

I've found interesting this article about the influential Armenian diaspora getting ready to support the land of their ancestors. And well, this picture of a 106 years old Armenian woman during the previous chapter of this war (back in the 90's) is an incarnation of courage and dignity.

 


 

Saturday 26 September 2020

JavaScript Arrays Oddities

As with almost everything in the language, arrays in JavaScript are not exactly the same as in other languages. In principle one thinks of an array as a contiguous section of memory, where each item has the same size and hence can be easily located based on its numeric index (you know: startAddress + (index * itemSize))... Well, this is not always the case in JavaScript.

First, in JavaScript arrays can be sparse (rather than dense). This means that if you create an array of a given size (e.g.: new Array(200);) but you only assign a value to some of those 200 positions, the unassigned ones are "holes" and do not take up space. Accessing one of those holes will return undefined, traversing the array with for-of will also return undefined values, but traversing it with .forEach will skip the "holes". If we really have "holes" in the array (rather than having those positions pointing to undefined), the array cannot be just a contiguous piece of memory... Well, this article explains how it's implemented in the V8 engine. You can read there that for small arrays a real array (contiguous memory space) is used, with the empty positions containing a special "hole" value, so there's no saving in memory space. If the array is big, then a low-level array is no longer used, but a dictionary where the numeric indexes are the keys.
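A small example of those behaviours (a minimal sketch you can run in node):

let sparse = new Array(3);   //3 holes, nothing assigned yet
sparse[1] = "b";

console.log(sparse[0]);                        //undefined (it's a hole)
for (let item of sparse) console.log(item);    //undefined, "b", undefined: for-of does not skip holes
sparse.forEach(item => console.log(item));     //only "b": forEach skips the holes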

The article stops there, but there are more oddities with arrays. If you are familiar with Python (or with the recent C# addition, indices) maybe you have tried to use a negative index with your array. That sort of works in JavaScript, but not as you would expect. array[-1] does not refer to the last item in the array, but to a "-1" property (it behaves just as if you had used any other string). So doing ar[-1] = "Bonjour"; will add a property to the object:

 

> let ar = ["a","b"];
> ar;
[ 'a', 'b' ]
> ar[-1];
undefined
> ar[-1] = "Bonjour";
'Bonjour'
> ar[-1];
'Bonjour'
> ar;
[ 'a', 'b', '-1': 'Bonjour' ]
> 

So in a case like that I don't know how it is implemented. Maybe the "normal" part of the array is kept as a low-level array and a dictionary is used only for the non-numeric indexes, or maybe a dictionary is used for both numeric and non-numeric indexes.

One additional comment: both numeric and non-numeric indexes are considered "own properties" of the array. However, holes in a sparse array are not "own properties".

 

> ar;
[ 'a', 'b', '-1': 'Bonjour' ]
> ar.hasOwnProperty(1);
true
> ar.hasOwnProperty(-1);
true
> ar.hasOwnProperty(-2);
false

> let sparseAr = new Array(5);
> sparseAr[2] = "aa";
> sparseAr;
[ <2 empty items>, 'aa', <2 empty items> ]
> sparseAr.hasOwnProperty(0);
false
> sparseAr.hasOwnProperty(2);
true


Saturday 19 September 2020

Canvas Double Buffering

In the past I did a bit of HTML Canvas programming (for example: this), and I never needed to resort to double buffering to avoid flickering. I did not pay much attention to it at the time, but the other day I came across some discussion as to whether double buffering was necessary. Though you can find some discrepancies and some samples claiming to show flickering, the main answer is: no, you don't need to implement double buffering yourself, which leads us to something more interesting, why not?

The fiddle in this answer is rather instructive, but what the author says, Lucky for us every canvas implementation implements it (double buffering) behind-the-scenes for you, could be misleading.

If we have an understanding of how the event loop works in the browser (or in Node), we know that user interactions, timeouts, requestAnimationFrame... enqueue tasks in a macrotask queue (for the current discussion we can omit the microtask queue), and that the event loop takes tasks from that queue one by one, executing the corresponding JavaScript code and updating the DOM (rendering) after each task is completed. This "updating the DOM after the task is completed" is essential for this explanation. From the article:

Rendering never happens while the engine executes a task. It doesn’t matter if the task takes a long time. Changes to the DOM are painted only after the task is complete.

So if we are drawing animations to a canvas (normally based on requestAnimationFrame), each animation frame's JavaScript code will run as part of a task, and when that frame's code is completed the rendering happens. So all the sequential drawing calls that we make on the Canvas context (clear the canvas, draw a rectangle, draw a circle...) won't be visible until the whole sequence of drawing operations is complete (the task is complete) and the rendering-drawing takes place.
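As a minimal sketch of that idea (it assumes a <canvas id="canvas"> element exists on the page): all the drawing calls inside one frame callback belong to the same task, so the user never sees the cleared canvas or a half-drawn frame, only the completed result of each callback.

const ctx = document.getElementById("canvas").getContext("2d");
let x = 0;

function drawFrame(){
    //all of these calls run inside one task; the browser only paints once the callback returns
    ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
    ctx.fillRect(x, 20, 40, 40);                 //a moving square
    ctx.beginPath();
    ctx.arc(x + 20, 100, 20, 0, Math.PI * 2);
    ctx.fill();                                  //and a moving circle

    x = (x + 2) % ctx.canvas.width;
    requestAnimationFrame(drawFrame);            //schedule the next frame as a new task
}

requestAnimationFrame(drawFrame);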

I don't think the canvas element uses a second (hidden) canvas to draw on during the task (JavaScript) execution phase and then draws that canvas on screen during the rendering phase; I rather think that all those drawing operations are recorded and then, during the rendering phase, they are painted to the canvas.

Tuesday 8 September 2020

Promise Inspection Part 2

In my previous post I said that probably there was an easier way than inheritance to implement synchronous inspection in Promises, so that's what this post is about.

This mechanism is used differently from the one shown last week. Rather than receiving an executor function and creating a Promise from it, we receive an existing Promise and chain to it a new Promise that is expanded with the expected inspection methods (isFulfilled(), isRejected(), getValue(), getReason()). The code is pretty straightforward. The new Promise waits for the original Promise and when that one is fulfilled or rejected it sets its inspection properties accordingly.

An interesting point is that the new Promise does not hold the isFulfilled, value, reason... values (used by the corresponding isFulfilled(), getValue()... expansion methods) as data fields (which could then be set from outside), but as variables that get trapped by the closures used for each of those expansion methods. This is an old trick used for creating private fields in JavaScript.

So, here's the code:

 

//returns a new Promise expanded with inspection methods
function enableSyncInspect(pr){
    //we trap these variables in the closure making them sort of private, and allow public access only through the inspection methods that we add to the new promise
    let isFulfilled = false;
    let value = null; //resolution result
    
    let isRejected = false;
    let reason = null; //rejection reason
    
    let isPending = true;

    //create a new promise that gets resolved-rejected by the original promise and gets expanded with inspection methods
    let prWrapper = pr.then(_value => {
        isPending = false;
        isFulfilled = true;
        value = _value;
        return _value;
    }, _reason => {
        isPending = false;
        isRejected = true;
        reason = _reason;
        return _reason;
    });

    prWrapper.isFulfilled = () => {
        return isFulfilled;
    }

    prWrapper.getValue = () => {
        return isFulfilled 
            ? value
            : (() => {throw new Error("Unfulfilled Promise");})(); //emulate "throw expressions"
    }

    prWrapper.isRejected = () => {
        return isRejected;
    }

    prWrapper.getReason = () => {
        return isRejected
            ? reason
            : (() => {throw new Error("Unrejected Promise");})(); //emulate "throw expressions"
    }

    prWrapper.isPending = () => {
        return isPending;
    }

    return prWrapper;
}


Which we can use like this:

 

function formatAsync(msg){
    return new Promise((resFn, rejFn) => {
        console.log("starting format");
        setTimeout(() => {
            console.log("finishing format");
            resFn(`[[${msg}]]`);
        }, 2000);
    });
}

function printValueIfFulfilled(pr){
    if (pr.isFulfilled()){
        console.log("Promise resolved to: " + pr.getValue());
    }
    else{
        console.log("Promise NOT resolved yet");
    }
}

//async main
(async () => {
    let pr1 = formatAsync("Bonjour");
    let syncInspectPr = enableSyncInspect(pr1);

    console.log("isPending: " + syncInspectPr.isPending());

    //this fn runs in 1 second (while the async fn takes 2 seconds) so the promise won't be fulfilled at that point
    setTimeout(() => printValueIfFulfilled(syncInspectPr), 1000);

    let result = await syncInspectPr;
    console.log("result value: " + result);
    
    printValueIfFulfilled(syncInspectPr);

})();

//Output:
// starting format
// isPending: true
// Promise NOT resolved yet
// finishing format
// result value: [[Bonjour]]
// Promise resolved to: [[Bonjour]]


As usual I've uploaded it into a gist.

Tuesday 1 September 2020

Promise Inheritance and Synchronous Inspection

The other day, taking a look at the advanced Promises provided by bluebirdjs, I thought about how to implement a very small and basic part of the functionality, the Synchronous Inspection, which basically stores the value resolved by a Promise and allows accessing it synchronously (read the doc for a real explanation). There are other ways to implement it, but I decided to use inheritance, and it has been interesting enough to post it here.

In a previous post I already made use of Promise inheritance, but I did not need to take into account the resolve and reject callbacks passed to the executor function received by the constructor. In this case I wanted to override those resolve-reject callbacks, so that I could add my additional logic (setting the isFulfilled and value, or isRejected and reason values) and then invoke the original ones (the parent's, let's say) so that they perform their magic of invoking the continuations (the "then"-"catch" handlers). These callbacks are not exposed as methods of the Promise class (one could think of them as protected methods in another language...), so in order to get hold of them we create an internal Promise with an executor function that just gives us a reference to those original callbacks, and then we call the original executor function, passing it resolve-reject callbacks that perform our extra logic and then invoke the originals. Our overridden "then" method delegates to the internal Promise's method, so that all the logic performed by the Promise class to set the continuations is run.

The explanation above is really confusing, so you better just check the code of my SyncInspectPromise class:

 


class SyncInspectPromise extends Promise{
    constructor(executorFn){
        //compiler forces me to do a super call
        super(() => {});

        this._isFulfilled = false;
        this._value = null; //resolution result
        
        this._isRejected = false;
        this._reason = null; //rejection reason
        
        this._isPending = true;
        
        //we need to be able to invoke the original resFn, rejFn functions after performing our additional logic
        let origResFn, origRejFn;
        this.internalPr = new Promise((resFn, rejFn) => {
            origResFn = resFn;
            origRejFn = rejFn;
        });

        let overriddenResFn = (value) => {
            this._isPending = false;
            this._isFulfilled = true;
            this._value = value;
            origResFn(value);
        };

        let overriddenRejFn = (reason) => {
            this._isPending = false;
            this._isRejected = true;
            this._reason = reason;
            origRejFn(reason);
        };

        executorFn(overriddenResFn, overriddenRejFn);
    }

    isFulfilled(){
        return this._isFulfilled;
    }

    getValue(){
        return this.isFulfilled() 
            ? this._value
            : (() => {throw new Error("Unfulfilled Promise");})(); //emulate "throw expressions"
    }


    isRejected(){
        return this._isRejected;
    }

    getReason(){
        return this.isRejected()
            ? this._reason
            : (() => {throw new Error("Unrejected Promise");})(); //emulate "throw expressions"
    }

    isPending(){
        return this._isPending;
    }

    then(fn){
        //we set the continuation to the internal Promise, so that invoking the original res function
        //will invoke the continuation
        return this.internalPr.then(fn);
    }

    catch(fn){
        //we set the continuation to the internal Promise, so that invoking the original rej function
        //will invoke the continuation
        return this.internalPr.catch(fn);
    }

    finally(fn){
        return this.internalPr.finally(fn);
    }
}


And we can use it like this:

 

function sleep(ms){
    let resolveFn;
    let pr = new Promise(res => resolveFn = res);
    setTimeout(() => resolveFn(), ms);
    return pr;
}

function printValueIfFulfilled(pr){
    if (pr.isFulfilled()){
        console.log("Promise resolved to: " + pr.getValue());
    }
    else{
        console.log("Promise NOT resolved yet");
    }
}

(async () => {
    let pr1 = new SyncInspectPromise(res => {
        console.log("starting query");
        setTimeout(() => {
            console.log("finishing query");
            res("hi");
        }, 3000);
    });
    console.log("isPending: " + pr1.isPending());

    //this fn runs in 1 second (while the async fn takes 3 seconds) so the promise won't be fulfilled at that point
    setTimeout(() => printValueIfFulfilled(pr1), 1000);

    let result = await pr1;
    console.log("result value: " + result);
    
    printValueIfFulfilled(pr1);

})();

//output:
// starting query
// isPending: true
// Promise NOT resolved yet
// finishing query
// result value: hi
// Promise resolved to: hi

As usual I've uploaded it into a gist.

Sunday 23 August 2020

Method Composition

Every now and then I find something that reminds me again of what a beautiful language JavaScript is. This time I came up with a not very frequent need, method composition. Function composition is quite popular, and libraries like lodash-fp with their auto-curried, iteratee-first, data-last approach make composition really powerful.

In this case I'm talking about composing methods. It may seem a bit odd, but I came up with a case where it could be rather useful. I have several methods that modify the object on which they are invoked (yes, in spite of the immutability trend I continue to use mutable objects on many occasions), and I'm calling several of them in a row, to the point where it would be useful to add a new method to the class that just performs all those consecutive calls. Adding a new method to the "class" is as easy as adding the new function to the prototype of the "class" (if I just wanted to add it to a specific instance I would add it to the instance's __proto__ (or [[Prototype]])). In order to invoke the sequence of methods we have to bear in mind that each method can have a different number of parameters, but Function.length comes to our rescue.

It's more useful to just see the code than to keep explaining it, so here goes the factory function that composes the different methods:

 

function composeMethods(...methods){
    //use a normal function, not an arrow, as we need "dynamic this"
    return function(...args) {
        methods.forEach(method => {
            let methodArgs = args.splice(0, method.length);
            method.call(this, ...methodArgs);
        })
    }
}

And given a class like this:

 

class Particle{
    constructor(x, y, opacity){
        this.x = x || 0;
        this.y = y || 0;
        this.opacity = opacity || 1;
    }

    move(x, y){
        this.x += x;
        this.y += y;
    }

    fade(v){
        this.opacity += v;
    }
}

We'll use it this way to compose the "move" and "fade" methods into a new "moveAndFade" method:

 

//composing "move" and "fade" and adding it as a new "moveAndFade" method
Particle.prototype.moveAndFade = composeMethods(Particle.prototype.move, Particle.prototype.fade);

let p1 = new Particle();
console.log(JSON.stringify(p1));

//x, y, v
p1.moveAndFade(4, 8, -0.1);
console.log(JSON.stringify(p1));

p1.moveAndFade(2, 3, -0.2);
console.log(JSON.stringify(p1));

// {"x":0,"y":0,"opacity":1}
// {"x":4,"y":8,"opacity":0.9}
// {"x":6,"y":11,"opacity":0.7}


I've uploaded it into a gist.
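One caveat worth keeping in mind: Function.length only counts the parameters declared before the first default value, and ignores rest parameters, so methods composed this way should stick to plain parameter lists. A quick check:

function plain(a, b){ }
function withDefault(a, b = 1){ }
function withRest(a, ...rest){ }

console.log(plain.length);        //2
console.log(withDefault.length);  //1, the defaulted parameter is not counted
console.log(withRest.length);     //1, the rest parameter is not counted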

Saturday 22 August 2020

Fix Broken Folder

I'll share this experience here in case it can help anyone. This morning I was copying a bunch of films on Ubuntu from my laptop to an external NTFS disk. The copy got stuck at 50% and after a while I cancelled it. The USB disk seemed to be still in use but I disconnected it anyway (as so many times before). When connecting it again it looked fine, save for the "Films" folder into which I was copying the files. Opening the folder from the UI showed it as empty. From the command line I could cd into it, but doing an ls would return reading directory '.': Input/output error

I did not panic, as all the other folders with important stuff (already backed up on other disks), like personal pictures, documents... seemed intact, so I was only losing like 400 films... (many of them scattered over other old, smaller external disks). Anyway, managing to recover the folder would be nice...

The first thing I found was the badblocks command, to verify that the disk was physically OK (I honestly thought it was; as all the other folders looked good I was more inclined to think of a file system error). badblocks was taking too long, so I did a fast search and found that it could take up to 70 hours!!! and just to verify something, not to fix the disk...

I read in passing something that I already knew but had been overlooking: even if NTFS disks have been perfectly supported in Linux for a very long while... it's Microsoft who has full knowledge of the file system, so I thought I could check whether Windows was able to recognize the folder.

Rebooting my laptop on Windows and trying to navigate the folder I got this error:
The File or Directory is Corrupted and Unreadable

A fast search brought up this article, that recommended doing a chkdsk /f on the problematic disk.

After a few seconds the first messages showed up, stating that some errors had been found and fixed, which was quite reassuring. After less than 10 minutes (for a 2 TB disk at about 30% usage) the procedure was finished and the contents of my "Films" folder were back again! I'm putting below some of the fix messages that chkdsk returned:

Saturday 8 August 2020

Reconcile the French People

There's much talking (but no acting at all) in France about the Islamisation, communitarism-separatism (immigrant communities rejecting to integrate and assimilate, and creating their own close, xenophobic and medieval communities), the skyrocketing levels of criminality and violence (now we call it "ensauvagement"), the ridiculous laxity of the justice system (particularly when the criminals are part of a so called "minority")... and bla, bla, bla...

Regarding this separatism, there's a nice and well thought (as usual, it's interesting how much someone like me that has swung to the center-right continues to respect these "traditional left" guys) set of writings/drawings in Charlie Hebdo issue 1461, 600 jours pour rabibocher les Francais/600 days to reconcile the French people. From all of them, there's one by Riss that particularly stands out, so I'm translating it here (sorry for my lack of translation skills):

Travelling educates the young people

Young people between 15 and 25 years old having committed repeated criminal acts will be condemned to spend the prison sentence foreseen for adults by the Penal Code not in prison, but in a foreign country, in a French NGO. 10 months of prison will become 10 months to be spent in New Delhi taking care of people infected with leprosy or tuberculosis. 12 months of prison will become 12 months in a health center in a slum in Haiti. The aim is to move these young people away from their environment and to make them discover new horizons. At the end of their sentence, purged this way, they'll get a job offer in France, related to the activities conducted in the NGO that took them in and educated them.

As I have little trust in some humans I'm not sure to what extend this could work, but for sure it's a better option than the ridiculous prison sentences in a comfortable French prison and the social aids that the tax payers will pay them once they are out committing crimes again... The other options that I would favor: removing their French nationality if they are bi-nationals, penal labour or plain-good-old capital punishment... I guess are not acceptable by our naive European societies...

 

 

Sunday 2 August 2020

Retry Async Function

The other day I had an async function that could fail (rejecting its promise) quite often and I wanted to retry it a few times if needed. In the end I wanted to generate a new function with the retry functionality in it. I had done this in the past with pure promises and it had been a bit more complex (it was something along the lines of what you can find here), but with async-await the code is so simple that I'm not sure why I'm posting it here. Anyway:

 

//wrap setTimeout with an async-await friendly function
function sleep(ms){
    console.log("sleep starts");
    let resolveFn;
    let pr = new Promise(res => resolveFn = res);
    setTimeout(() => {
        console.log("sleep ends");
        resolveFn();
    }, ms);
    return pr;
}

//fn: function returning a Promise
//if the Promise is rejected we'll retry up to "attempts" times, with a "timeout" in between
//returns a new function with retry logic
function addRetryToAsyncFn(fn, attempts, timeout){
    return async (...args) => {
        while (attempts--){
            try{
                return await fn(...args);
            }
            catch(ex){
                console.log("attempt failed");
                if (!attempts)
                    throw ex;
            }
            await sleep(timeout);
        }
    };
}

//given an async function: getAndFormatNextTicket
//we'll use it like this:
let formattedTicket = await addRetryToAsyncFn(getAndFormatNextTicket, 5, 1000)("Francois");
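To see it in action end to end, here's a small self-contained example (the post doesn't show getAndFormatNextTicket, so I'm faking a flaky async function that fails on its first two calls; that part is just an assumption for the demo):

let calls = 0;
async function getAndFormatNextTicket(name){
    calls++;
    if (calls < 3)
        throw new Error("service unavailable");
    return `[[Ticket for ${name}]]`;
}

//async main
(async () => {
    let formattedTicket = await addRetryToAsyncFn(getAndFormatNextTicket, 5, 1000)("Francois");
    console.log(formattedTicket);
    //logs "attempt failed" (plus the sleep messages) twice, then:
    //[[Ticket for Francois]]
})();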

Thanks to async/await the retry code for an async function is basically the same as the one for a normal function:

 

function addRetry(fn, attempts){
    return (...args) => {
        while (attempts--){
            try{
                return fn(...args);
            }
            catch(ex){
                console.log("attempt failed");
                if (!attempts)
                    throw ex;
            }
        }
    };
}

You can see a full example here.

Saturday 1 August 2020

Tour In Nova, Bordeaux

Last week I did a day trip to Bordeaux (the so beautiful "La belle-endormie"). As the train approached Saint Jean (the gorgeous main train station, nicely refurbished a few years ago) I could see in the distance a new addition to the city's skyline, the Tour In Nova. Bordeaux is a low-rise city, with whole neighbourhoods made up of 1-2 story buildings (though not as low-rise as Toulouse, probably the lowest-rise mid-size city in the world). This listing of Bordeaux buildings seems quite accurate to me. As you can see it got 3 (unappealing, I can say) administrative towers in the 70's, peaking at 90 meters, and then some residential towers that I assume are mainly in Grands Ensembles.

However, in the last few years some new, interesting "towers" (in France anything above 40 meters is a tower) and modern neighborhoods have been or are being built. La Cité du Vin is a prominent example. This building is located in the new Bassins à Flot area, where you'll find tons of 8-9 story residential (and some office) buildings with interesting shapes, roofs and facades.

The other most notable development area is the Euratlantique district, close to the main train station. I've seen this area emerge over the last few years and had been feeling a bit disappointed so far, but the Tour In Nova has been a great surprise. Though not reaching 60 meters in height, it's clearly visible when you enter the city, and the upper block looks really, really nice to me. It reminds me of the BEC tower in Bilbao (which is taller, but not as beautiful). It's a real shame that the crazy French security laws concerning high-rise buildings, which force you to keep a dedicated security team in place in buildings higher than 60 meters (well, it's a simplification, but you get the idea), prevented it from getting some additional floors (with 5-6 extra floors to reach 75-80 meters it would certainly stand out quite a bit more). Anyway, had it not been for that law, it would have been some "rednecks against modernity" citizens group fighting against such a "skyscraper"...

I cannot close this post without mentioning the main reason for my last trip to Bordeaux, Les Bassins de Lumières. Some French people still have amazing ideas, living up to the glorious cultural traditions of the country. Turning an enormous submarine base built by Fascist Italy and Nazi Germany during the occupation into a massive digital art center (opened just one month ago) is for sure one of those amazing ideas. Bordeaux is well worth a visit of several days, and this new cultural space is well worth an afternoon.

Thursday 23 July 2020

UEFI and the GPT

As I said in my previous post, I've not kept myself much up to date with the evolution of hardware for the last decade, so I had missed another huge change (not as noticeable as the SSD-NVMe thing, but highly important anyway). I'm talking about the modern UEFI and GPT partitions vs the old BIOS and MBR. You can read a short and clarifying article here.

In the past, we had the BIOS, and in order to boot our OS's it needed an MBR in the first sector of the disk. This MBR could store information for up to 4 primary partitions along with the boot loader. In order to have more than 4 partitions you needed to set one of those partitions as an extended partition.

Modern computers (since 2011 I think) come with UEFI rather than the old BIOS. UEFI can work with an MBR, but it's intended to work with a GPT. This new partitioning scheme is much more powerful, allowing for up to 128 primary partitions. There's still an MBR in the first sector of the disk, the Protective MBR, in case you want to use the disk with a BIOS based computer. After that you have the GPT laid out over the next disk sectors (with a backup copy right at the end of the disk).

UEFI systems use an EFI System Partition that is normally the first disk partition and contains:

An ESP contains the boot loaders or kernel images for all installed operating systems (which are contained in other partitions), device driver files for hardware devices present in a computer and used by the firmware at boot time, system utility programs that are intended to be run before an operating system is booted, and data files such as error logs.

Usually USB drives and external disks come with an MBR partitioning scheme, but if you intend to use one as a bootable USB device with multiple partitions you may find it useful to switch it to a GPT scheme.

Sunday 12 July 2020

My Laptop Flies

I've recently bought a new laptop, nothing out of the ordinary: 8 GBs of RAM, a 512 GB SSD, 4 physical cores - 8 logical ones... in principle it's just a bit better than the laptop I got at work 2 years ago, so I was shocked by how fast it boots, the Windows 10 logon screen shows up in no time! I then installed Ubuntu, and it's the same, it hardly takes 10 seconds for the laptop to be fully operational. I assume the Windows 10 on my work laptop is bloated with some corporate stuff... but anyway, that cannot account for the difference, my personal laptop boots like 10 times faster! When I got my current work laptop I thought the SSD would make a huge difference (all my personal laptops so far had had magnetic hard drives), and well, it obviously works faster, but I did not get the "Quantum Leap" feeling that I'm getting now with my new one.

I did not think more about it, just enjoyed the situation :-), but a few days ago, when visiting the local shop for another reason (yes, I'm an old school idiot that whenever possible favors buying stuff in a physical store rather than online, Fuck you Amazon and the like!), I mentioned it to the owner and he told me "yes, it's really amazing, it's all thanks to the new NVMe thing, and yes, your laptop comes with that for sure".

Far back in the past I used to be quite into hardware (northbridge-southbridge chipsets, the different buses, the internals of processors and GPUs...) but that has not been the case for at least the last decade. Before buying this laptop I did some "research" to make sure I was buying something "decent". The 8 GBs of RAM and 512 GB SSD had been clear to me for a long while, but I learned that I should discard i3 processors and go for an i5 (or in my case an AMD Ryzen 5), and that I should have a USB C port (ideally it should be USB C charging enabled, but in the end few laptops support that so far). I was surprised to note that some processors do not implement SMT (Hyper-Threading in Intel's parlance...), so you don't get the physical vs logical/virtual cores thing. You'll still find laptops for sale with 4 physical/4 logical cores, and even laptops with 2 physical/4 logical cores... So the 4 physical/8 logical combo also became a must-have for me. The other must-haves were: a 14" screen, weight below 1.5 Kg and an ethernet port. As for this last requirement, I honestly cannot understand how someone can buy a laptop without an ethernet connector (yes, you can use a USB-ethernet adapter, but anyway...). Both in Asturies and in France, with the same laptop, my WiFi speed would never go above 50 Mbps, while the ethernet speed would be around 160-190 Mbps, very close to the official limit of 200 Mbps for both internet providers. On the other hand, a touch screen was not on my list. My last two laptops both have touch screens and I hardly ever use them.

So I was totally unaware of this NVMe thing, and it seems to me an improvement as big as (if not bigger than) the introduction of multicore processors. This article is a pretty nice introduction. As a summary, we all know that SSDs are much faster than magnetic HDDs (with their moving parts), but until recently we were accessing them through a slow SATA bus that was acting as a bottleneck. Now we're using the NVMe protocol over the PCI Express bus. This takes advantage of the parallelism provided by SSDs, and we end up with this (from the article):

NVMe SSD reads and writes data literally four times faster than the SATA SSDs found in previous generations. Not only that, but it locates them 10 times as fast (seek). That’s on top of the four- to five-fold improvement in throughput and ten-fold improvement in seek times that was already provided by SATA SSDs when compared to hard drives.

Thursday 2 July 2020

Throw Expressions

I was not aware until a few days ago that C# (since version 7) features throw expressions, meaning that we can throw exceptions in an expression, not just in a statement. This is very useful combined with the "?" and "??" operators. I've put up an example:

 

class Salary
{
   public string Payer {get; private set;}

    public int Amount {get; private set;}

    public Salary(string payer, int amount)
     {
        this.Payer = payer ?? throw new ArgumentNullException(nameof(payer));
        this.Amount = amount;
     }

     public void IncreaseByPercentage(int percentage)
     {
         this.Amount = percentage > 5
            ? this.Amount + (this.Amount * percentage / 100)
            : throw new Exception("Increase percentage is too low"); 
     }
}

This nice feature is missing so far in JavaScript, though I think it's been proposed. For the moment we can use an arrow function to get a similar effect, though the resulting code is quite a bit more verbose.

 

class Salary
{

    constructor (payer, amount)
    {
        //does not compile "throw expressions" are not supported so far
        //this.payer = payer ?? throw new Error("ArgumentNullException: payer");
        
        this.payer = payer || (() => {throw new Error("ArgumentNullException: payer");})();
        this.amount = amount;
    }

     increaseByPercentage(percentage)
     {
         this.amount = percentage > 5
            ? this.amount + (this.amount * percentage / 100)
            : (() => {throw new Error("Increase percentage is too low");})(); 
     }
}
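
One way to reduce the verbosity is to extract that immediately invoked arrow into a tiny reusable helper (just a sketch; the raise function name is my own, it's not part of any standard API):

 

//hypothetical helper: turns a throw into an expression
const raise = message => { throw new Error(message); };

class Salary
{
    constructor (payer, amount)
    {
        //raise() is a normal function call, so it can sit on the right-hand side of ??
        this.payer = payer ?? raise("ArgumentNullException: payer");
        this.amount = amount;
    }
}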

It's interesting to note that there's a new feature proposed for C# 9, Simplified null parameter validation, that would allow us to rewrite the constructor above like this:

 

public Salary(string payer!, int amount)
     {
        this.Payer = payer;
        this.Amount = amount;
     }

Thursday 25 June 2020

Cancellable Async Function

Standard JavaScript Promises do not provide a mechanism for cancellation. It seems like there have been long discussions about it, but for the moment nothing has been done. On the other hand, Bluebird.js provides a very powerful cancellation mechanism.

One major point is to clarify what cancellation means for us. Does it mean that it never resolves or that it gets rejected? Bluebird authors went through this thought process, and while in version 2.0 cancelling was rejecting, in 3.0 it means that it never resolves:

The new cancellation has "don't care" semantics while the old cancellation had abort semantics. Cancelling a promise simply means that its handler callbacks will not be called.
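
For reference, this is roughly how Bluebird's own cancellation is used (a minimal sketch assuming Bluebird 3.x, where cancellation has to be enabled explicitly via Promise.config):

 

//assumes Bluebird 3.x is installed (npm install bluebird)
const Promise = require("bluebird");
Promise.config({cancellation: true});

let pr = new Promise((resolve, reject, onCancel) => {
    let timer = setTimeout(() => resolve("done"), 2000);
    //optional cleanup callback, invoked if the promise gets cancelled
    onCancel(() => clearTimeout(timer));
});

pr.then(value => console.log("resolved: " + value));

//with the "don't care" semantics the then handler above will simply never run
pr.cancel();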

With Bluebird you can cancel a Promise chain, which can be very useful and which I won't try to implement myself... I was thinking of a simpler case: we have an async function performing multiple async calls via await and we would like to ask that function to stop "as soon as possible", meaning that it waits for the current async call to complete and does not perform the remaining ones. .Net makes use of the Cancellation Token concept, and I'll use something similar for the case I've just described. I'll allow both rejecting and just "abandoning". It mainly comes down to this function:

 

function checkCancelation(cancelationToken){
    if (cancelationToken && cancelationToken.reject){
        console.log("throwing");
        throw new Error("Rejection forced");
    }
    if (cancelationToken && cancelationToken.cancel){
        console.log("cancelling");
        //return a Promise that never resolves
        return new Promise(()=>{});
    }

    return false;
}

//to be used like this:
//let result = await (checkCancelation(cancelationToken) 
//        || getResultAsync());


that we'll use like this:

 

function getLastMessageId(){
 return new Promise(res => {
  setTimeout(() => res(111), 1500);
 });
}

function getMessageText(id){
 return new Promise(res => {
  setTimeout(() => res("this is the last message"), 1500);
 });
}


function formatText(txt){
 let formattedTxt = `[[${txt}]]`; 
 return new Promise(res => {
  setTimeout(() => res(formattedTxt), 1500);
 });
}

async function getLastMessageFormatted(cancelationToken){
    let id = await (checkCancelation(cancelationToken) 
        || getLastMessageId());
    console.log("ID obtained");
    
    let txt = await (checkCancelation(cancelationToken)
        || getMessageText(id));
    console.log("Message obtained");

    let msg = await (checkCancelation(cancelationToken)
        || formatText(txt));
    console.log("Message formatted");

    return msg;
} 

(async () => {
    console.log("-- test 1");
    let msg = await getLastMessageFormatted();
    console.log("message: " + msg);

    
    console.log("-- test 2");
    let cancellationToken = {};
    //reject after 1 second
    setTimeout(() => cancellationToken.reject = true, 1000);
    try{
        msg = await getLastMessageFormatted(cancellationToken);
        console.log("message: " + msg);
    }
    catch (ex){
        console.error("Exception: " + ex.message);
    }

    console.log("-- test 3");
    cancellationToken = {};
    //cancel after 1 second
    setTimeout(() => cancellationToken.cancel = true, 1000);

    //when cancelling we return a simple Promise that won't keep the program running 
    //(it's real IO/timer calls, not the Promise itself, that keep the node.js event loop running)
    //If we want to keep the program running longer, we just use this keep-alive timeout
    setTimeout(() => console.log("keep alive finished"), 10000);
    try{
        msg = await getLastMessageFormatted(cancellationToken);
        console.log("message: " + msg);
    }
    catch (ex){
        console.error("Exception: " + ex.message);
    }
})();

 

I've uploaded the above code to a gist.

Another simple case that I've implemented is having a Promise and preventing its "then handlers" from executing. Notice that I'm cancelling from the initial promise (not from the one returned by then), while Bluebird's promise chain cancellation is invoked on the last promise returned (and goes all the way up the chain to cancel the active one, which I guess means that in Bluebird Promises are doubly linked). As you can see, I use 2 different strategies for creating a new "cancellable Promise" from the original one.

 

function formatTextAsync(txt){
 let formattedTxt = `[[${txt}]]`; 
 return new Promise(res => {
  setTimeout(() => res(formattedTxt), 1500);
 });
}

function createCancellablePromiseStrategy1(pr){
 let cancelled = false;

 let cancellablePromise = new Promise(res => {
  pr.then(value => {
   if (!cancelled){
    res(value);
   }
   //else we never resolve
   else{
    console.log("promise has been cancelled");
   }
  })
 });
 
 cancellablePromise.cancel = () => cancelled = true;
 return cancellablePromise;
}

function createCancellablePromiseStrategy2(pr){
 let cancelled = false;

 //if the function run by "then" returns a promise, the promise initially returned by "then" is resolved
 // when that other promise is resolved
 let cancellablePromise = pr.then((value) => {
  return new Promise(res => {
   if (!cancelled){
    res(value);
   }
   //else we never resolve
   else{
    console.log("promise has been cancelled");
   }
  });
 });
 
 cancellablePromise.cancel = () => cancelled = true;
 return cancellablePromise;
}


//let creationStrategy = createCancellablePromiseStrategy1;
let creationStrategy = createCancellablePromiseStrategy2;

let pr1;

const operation = async () =>{
 pr1 = creationStrategy(formatTextAsync("hi"));
 let result = await pr1; //if cancelling pr1 will never resolve, so the next line won't run
 console.log("result: " + result);
};

operation();
pr1.cancel();

 

Sunday 21 June 2020

Arsen (aka OEGP's lost recording)

If you've read this post you know how much I love that amazing Canadian (Québécois) ultra-dark screamo band, One Eyed God Prophecy. Many bands have been inspired by them (and by Uranus, who have managed to achieve much more of a cult status), but honestly I'd never found any band that sounded 100% like them, that had a song I could place in the middle of the OEGP LP and believe it had been produced by them.

I've finally found that band! Arsen (aka Konig the Monster). This German band released stuff in the 2002-2003 period, right when I was starting to move from Screamo to other sorts of sounds, so they went unnoticed by me. I've recently found out about them and they are absolutely amazing. Almost all their songs are a brutal display of darkness, rage, violence and sadness; there's one, Erde Meldet Sich Zuruck, that particularly stands out and obsesses me. It's as if these guys had found a lost OEGP recording and re-recorded it!

Reading the interview linked above you'll see that there's a bunch of bands associated with the smart guys (and lady) that made up this act (and in turn, some of those bands are associated via other members with highly praised bands like Tristan Tzara, Louise Cyphre...). I already knew one of these excellent bands, Saligia, which, even if I did not get fully hooked on them, really surprised me with their very Uranus-like sound. The other band, Republic of Dreams, has been a beautiful discovery. They've been around for many years, playing dark, violent, German ("German" here implies that characteristic metallic touch) Screamo. Their last split with another amazing new band, Alles Brennt, is particularly worthy of a deep listen.
Enjoy the darkness!!!

Saturday 13 June 2020

Lazy Objects via JavaScript Proxy

Related to my last post about Lazy promises, the idea of creating a generic Lazy object came to my mind. A long while ago I had already tinkered with Lazy objects in JavaScript. At the end of that post I mention that sometime in the future I should try a different and much more transparent approach, using proxies, as I had done some weeks earlier in C#. Well, it has taken more than 7 years to turn that "in the future" into the present... but finally, here it is.

The idea is pretty simple: we create a Proxy object with get and set traps. The first time get or set is invoked, the lazy object has to be created. For that, the get and set trap functions keep as closure state the constructor function and the parameters to use in order to create the lazy object. When we create a Proxy we provide as first parameter the object that we are proxying. In this case such an object does not exist, the proxy is on its own just to allow us to later create the object, so I was intending to pass null as the target, but the runtime won't allow that (the Proxy constructor requires an object), so I pass an "empty object": {}.
I'm thinking now that I could have used that object to hold the constructor and arguments, rather than keep them as closure variables... well, both approaches are equally valid.

I paste the code here, and you can also find it in this gist.

 


//Create a Proxy for the lazy construction of an object
function buildLazyObject(constructorFn, ...args){
    let internalObject = null;
    //we don't care about the target, but the runtime does not allow a null one, so let's pass an "empty object" {}
    return new Proxy({}, {
        get: function(target, property, receiver){
              internalObject = internalObject || (() => {
                console.log("Creating object");
                return new constructorFn(...args);
            })();
            
            //this way, if it's a method, internal calls to other methods of the class would go through the proxy also
            //for this case it makes no sense
            //return internalObject[property];
            
            let item = internalObject[property];
            if (typeof(item) === "function"){
                return function(...args){
                    return item.call(internalObject, ...args);
                };
            }
            else{
                return item;
            }
        },

        set: function(target, property, value, receiver){
            internalObject = internalObject || (() => {
                console.log("Creating object");
                return new constructorFn(...args);
            })();
            
            internalObject[property] = value;
            //the set trap must return true to signal that the assignment succeeded
            //(otherwise strict mode code would throw a TypeError on assignment)
            return true;
        }
            
    });
}

//--------------------------------------------

class Person{
    constructor(name, age){
        this.name = name;
        this.age = age;
    }       

    sayHello(){
        return "Bonjour, Je suis " + this.name + " et j'ai " + this.age + ", ma ville est " + (this.city || "inconnu");
    }
}

let p1 = buildLazyObject(Person, "Francois", 2);
console.log("Proxy created");

console.log(p1.sayHello());

console.log("Second call");

console.log(p1.sayHello());

let p2 = buildLazyObject(Person, "Didier", 4);
console.log("Proxy created");
p2.city = "Marseille";
console.log("after set value");

console.log(p2.sayHello());
 
// Proxy created
// Creating object
// Bonjour, Je suis Francois et j'ai 2, ma ville est inconnu
// Second call
// Bonjour, Je suis Francois et j'ai 2, ma ville est inconnu
// Proxy created
// Creating object
// after set value
// Bonjour, Je suis Didier et j'ai 4, ma ville est Marseille
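
As a sketch of the alternative approach mentioned above (my own variant, buildLazyObject2, not the version in the gist), the proxied target itself can carry the constructor, the arguments and the created instance, instead of keeping them as closure variables. It reuses the Person class from the code above:

 

//alternative sketch: the target object holds the construction state
function buildLazyObject2(constructorFn, ...args){
    let target = {constructorFn, args, instance: null};

    let getInstance = () => {
        if (!target.instance){
            console.log("Creating object");
            target.instance = new target.constructorFn(...target.args);
        }
        return target.instance;
    };

    return new Proxy(target, {
        get(tgt, property, receiver){
            let item = getInstance()[property];
            //bind methods to the real instance, as in the version above
            return typeof item === "function"
                ? (...callArgs) => item.call(getInstance(), ...callArgs)
                : item;
        },
        set(tgt, property, value, receiver){
            getInstance()[property] = value;
            return true;
        }
    });
}

let p3 = buildLazyObject2(Person, "Simone", 3);
console.log(p3.sayHello());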