Friday, 30 November 2012

prototype, [[Prototype]], inheritance chains, constructors...

The JS101 section in dailyjs is always worth reading. However much I think I know about something, I always find there some nuance of the subject at hand that I hadn't realized and that makes me mull it over. Reading their entry about prototypes was no different.

Prototypes were a confusing matter for me for a long time. While the idea of a prototype chain to look up methods and properties is simple, powerful and beautiful, it took me time to realize that the prototype chain (inheritance chain) is based on the [[Prototype]] property (accessible in many environments through the non-standard __proto__ property), and that the prototype property, which we find only on Function objects, is not part of that chain (though it is used for setting it up...). Bear in mind, then, that functions have both a prototype (pointing to an object whose constructor property points back to the function) and a [[Prototype]] that sets up their "inheritance chain". This prototype chain for functions contains two objects: Function.prototype and Object.prototype.
(function(){}).__proto__.__proto__ === Object.prototype;
I like the name they use in the article for referring to [[Prototype]]: internal prototype, to distinguish it from prototype.
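We can check both properties side by side (runnable in any environment that exposes __proto__):

```javascript
function Foo(){}

// the prototype property: a normal object whose constructor points back to Foo
console.log(Foo.prototype.constructor === Foo); // true

// the [[Prototype]] (here through the non-standard __proto__): the lookup chain
console.log(Foo.__proto__ === Function.prototype);         // true
console.log(Foo.__proto__.__proto__ === Object.prototype); // true

// an instance's [[Prototype]] is set from the function's prototype property
var f = new Foo();
console.log(f.__proto__ === Foo.prototype); // true
```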

As I said, __proto__ is not standard, but if we want to get hold of the [[Prototype]] object, we can do it using ES5's Object.getPrototypeOf.
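For instance:

```javascript
var arr = [];
// the standard ES5 way of reading the internal prototype...
console.log(Object.getPrototypeOf(arr) === Array.prototype); // true
// ...equivalent to the non-standard __proto__, where available
console.log(Object.getPrototypeOf(arr) === arr.__proto__);   // true
```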

One very important point is that Object.prototype.__proto__ is null; that's how the interpreter finds the end of the prototype chain. This used to be the only object without a prototype chain, but with the advent of Object.create(), we're now allowed to create other objects with their [[Prototype]] set to null. I already mentioned an odd JS behavior related to this in this previous post.
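A quick check of both facts:

```javascript
// Object.prototype ends the chain...
console.log(Object.getPrototypeOf(Object.prototype)); // null

// ...but Object.create lets us build other chain-less objects
var bare = Object.create(null);
console.log(Object.getPrototypeOf(bare)); // null
console.log(typeof bare.hasOwnProperty);  // "undefined", nothing is inherited
```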

There's one interesting point mentioned in the dailyjs article when they talk about Object.create as the only way to obtain an object without a prototype chain (that is, with [[Prototype]] pointing to null). We might think of achieving the same by setting constructorFunction.prototype to null, but contrary to what we might expect, that does not give us a null [[Prototype]] when creating an instance; it makes it point to Object.prototype instead. I guess it's one of those corner cases explained somewhere in the ECMAScript specification.
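We can verify that corner case directly (F is just an illustrative name):

```javascript
function F(){}
F.prototype = null; // not an object, so it gets ignored at construction time
var o = new F();
console.log(Object.getPrototypeOf(o) === Object.prototype); // true, not null
```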

To make sure we understand how prototype and [[Prototype]] work, we can do some checks like these (all of them are true):

given: function f1(){};
  • f1.prototype.__proto__ === Object.prototype;
    (the prototype of a function is just a normal object, so its __proto__ is Object.prototype)
  • Function.prototype.__proto__ === Object.prototype;
  • f1.__proto__ === Function.prototype;
    (notice that functions are instances of Function)

Anyway, there are some cases that can't be inferred, and that work that way just cause the standard says so. These are some of them:

  • Function is an odd object: it's an instance of Object, but its __proto__ is set to an empty function, so:
    Function.__proto__ === Object.prototype; is false
    Moreover, I found somewhere this interesting additional info:
    Function.prototype is a function that always returns undefined and can accept any number of arguments. But why? Maybe just for consistency: every built-in constructor's prototype is like that, Number.prototype is a Number object, Array.prototype is an Array object, RegExp.prototype is a RegExp object, and so on...
  • Indeed, we should not care much about Object.__proto__ and Function.__proto__, as I can't think of them being used in any inheritance chain
This question in SO has additional interesting notes about the above.
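These oddities are easy to verify:

```javascript
console.log(Function.__proto__ === Function.prototype); // true, the empty function
console.log(Function.__proto__ === Object.prototype);   // false

// Function.prototype is itself callable: any arguments, always undefined
console.log(typeof Function.prototype);   // "function"
console.log(Function.prototype(1, 2, 3)); // undefined
```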

Talking about prototype chains, we should mention the instanceof operator:
ob instanceof MyFunction;
that checks if the given constructor function is in the prototype chain of the given object (by walking up the prototype chain and comparing the objects it finds with MyFunction.prototype). We can achieve just the same using Object.prototype.isPrototypeOf, writing MyFunction.prototype.isPrototypeOf(ob);
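Both checks in action (MyFunction is just an illustrative name):

```javascript
function MyFunction(){}
var ob = new MyFunction();

console.log(ob instanceof MyFunction);               // true
console.log(MyFunction.prototype.isPrototypeOf(ob)); // true

// both keep working further up the chain
console.log(ob instanceof Object);                   // true
console.log(Object.prototype.isPrototypeOf(ob));     // true
```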

The best way to complete this post is to add this extraordinary diagram that so perfectly explains all this and that I found here

and to summarize below (also taken from there) what happens when an object is created by a call to a constructor (i.e. var f = new Foo();)

  1. The interpreter creates a new, blank object.
  2. The interpreter sets the new object's underlying prototype to be Foo.prototype.
  3. The interpreter calls Foo in a way that makes this refer to the new object.
  4. When Foo completes, if Foo doesn't return a value (or doesn't return an object), the result of the new expression is the object created in step 1. (If Foo returns an object, that object is used instead; that's an advanced thing most people don't have to do.)
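Those steps can be sketched as a function (simulateNew is my own name, and the final object check is simplified with respect to the spec):

```javascript
function simulateNew(Ctor) {
  var obj = Object.create(Ctor.prototype); // create the object and set its [[Prototype]]
  var result = Ctor.call(obj);             // call the constructor with `this` as the new object
  // keep the constructor's result only if it's an object
  return (result !== null && typeof result === "object") ? result : obj;
}

function Foo(){ this.x = 1; }
var f = simulateNew(Foo);
console.log(f instanceof Foo); // true
console.log(f.x);              // 1
```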

Sunday, 25 November 2012

Le fils de l'autre / The other son

As usual I'm preparing a review for this year's edition of FIC Xixón, but this exceptional film deserves its own separate review. Le fils de l'autre (The other son) by Lorraine Lévy is a profoundly human story, a touching and questioning work dealing with such serious topics as our feeling of identity and belonging to a family, a religion, an ethnic group...

Imagine you're an 18-year-old Israeli guy who suddenly finds out (along with his family) that due to a mistake in the hospital where he was born he got switched with a Palestinian Arab newborn. This would be shocking enough in any other society, but when it happens in a fucked up place where both families are supposed to hate each other (the invaders that threw us off our land vs the ones that send suicide bombers to our cafes) it turns into an astonishing exercise of re-evaluation of oneself and everything around. The idea is brilliant, as is the way it unfolds to show us the lives and the perceptions of "the other" on both sides of the Wall. The selection of the phenotypes of the characters is excellent, tearing apart any ideas of "racial differentiation" between both peoples. The Israeli father (an Israeli army colonel) looks almost like an Arab (dark-eyed and dark-skinned, not the kind of Ashkenazi Jew whose ancestors intermixed with central Europeans and who now would look more like a German than like someone with roots in the Middle East), while the Palestinian father is a clear-eyed Arab. The film is also pretty good at showing the revolting racist ideal that in a way seems to underlie Judaism (a sort of Jus sanguinis applied to ethnicity and religion). The Israeli guy, though having been brought up as a Jew (circumcised, has studied the Torah in depth...) for 18 years, is not a Jew now, cause he has no "Jewish blood", and he will have to go through a complicated process of conversion... On the other side, his Palestinian counterpart is now a Jew, who should have all the rights that involves in the state of Israel. This idea of a religion "for the chosen ones" stands in stark contrast with Islam, which seems like a religion for everyone (apparently this sounds pretty good, but also think that that's why radical Muslims try to force their insane beliefs upon everyone).
This said, I'm quite surprised by the reaction of the brother of the Palestinian guy, who at first vehemently rejects him as "one of the others" (I guess Islam says that if he has grown up as a Muslim, he should be considered a Muslim, regardless of his parents' beliefs).

The reactions of the different members of both families are really instructive:
Both mothers react likewise: though deeply in pain, they quickly seem to accept the situation. On the one side, they now have a new son to worry about and feel proud of; on the other, nothing has changed with regard to the other son: genes or 9 months in your womb are nothing when compared to 18 years of shared love. From the first moment a feeling of confidence springs up that will help them assimilate and deal with this new situation.
For both fathers it's a bit more difficult: a soldier defending Israel from "the others" versus an engineer who has to work as a mechanic cause there are no engineering jobs in the West Bank because of "the others"... two men for whom the conflict has had more personal implications; their whole lives, their professions... have been shaped by these borders. First they get into a discussion about stolen lands and suicide bombers... then the gap seems to narrow, but it's still very difficult to find words to share.
I love the attitude of the two little girls in both families: these young creatures, still not poisoned by this never-ending conflict, react happily, saying "now I have a new brother" and playing together with their Barbies.
As for the main characters of this drama, after the initial deep shock, both guys strive to handle this crazy situation and manage to establish a strong relationship, not only between them but also with their new families (and even with their new cultures).

I could keep writing paragraphs praising this masterpiece for a good while, but I think this is enough to prompt you to watch it at the first chance you get.

Sunday, 18 November 2012

Switch between Tabs and Accordion

Working with jQuery UI I've sometimes hesitated between using an accordion widget or a tabs widget to lay out some data. In both cases the information to be displayed consists of headers and contents, so though the HTML tags expected by jQuery UI to create tabs or accordions are different, it should be pretty simple to switch from one to the other.

The idea is to have an object holding a reference to a container DOM element and to a list of headers/contents items, and exposing 2 methods, one for creating a Tab and one for creating an Accordion. You'll use it like this:

var books = [
  {
    name: "First Book",
    text: "<p>this is the first</p>" +
          "<ul><li>aa</li><li>bb</li></ul>"
  },
  { name: "Second Book", text: "this is the second" },
  { name: "Third Book", text: "this is the third" }
];

$(document).ready(function(){
  var myTabsAccordion = new animatedTabsAccordion("#booksContainer", books);
  $("#accordionBt").click(function(){
    myTabsAccordion.generateAccordion();
  });
  $("#tabsBt").click(function(){
    myTabsAccordion.generateTabs();
  });
});

We can make it look a bit nicer if we show/hide the widgets using some kind of animation. Some months ago I had already developed an animator to show an accordion in an animated way. I've added a hiding animation and extended it to be used with tabs.

You can see the animations on their own here and the animated tabs-accordion here

I've uploaded the final result to GitHub, so you can check the code there.

Tuesday, 13 November 2012

Ubuntu everywhere

Technology evolves so fast that it's quite hard to catch up with all the changes (even more so if you're into a broad range of its aspects, from hardware to high-level development abstractions). So the thing is that some months ago I was taken aback by how easy it now is to create a bootable USB key (even more, a multiboot USB key). The last time I had looked into that had been several years ago, and at that time it was far from trivial and rather error-prone (using dd in Linux and so on; I don't think back then you had many options for doing it from Windows).

So, PendriveLinux was an astonishing discovery that allowed me to create a multiboot key with Ubuntu and several rescue distros. I can boot the Ubuntu ISO from there and do a live session or an install, or run some of the rescue distros if my PC has run into some trouble. Once the mission was accomplished I didn't ponder it much more, until a few weeks ago, when I felt the need to go one step further and have a "fully usable Ubuntu" on my key that I could run from my home laptop, my work laptop and so on. What I mean by this is doing a normal Ubuntu installation to the USB key, and using it normally, installing and removing packages and so on.

The idea seemed quite feasible to me. If creating a bootable USB is now so easy and works so neatly, installing a bootloader and the OS to the USB key rather than to the HD (I mean a normal installation, not just a bootable ISO) should not be a problem. I searched the net seeking confirmation, and it's then that things turned a bit confusing. Most articles seemed to deal with a slightly different scenario: creating a sort of "Live USB" with persistence (space where you can store your settings and even (I think) install new packages). After some good reading I decided to follow this detailed and simple article.

There's a first point not mentioned there that has to be taken into account. If you (as I did) decide to partition your USB key to leave a Data partition there where you can store data without messing with the OS installation, you should use the first partition for data and install Ubuntu to the second partition. Well, let me be more specific, you must do that if you want to use that data partition from Windows. While Windows has no problem with viewing partitions in an external USB Hard Disk, when it comes to USB Keys, it will only work with the first partition, so you better use that first one as Data partition.
On a side note, my own experience tells me that Ubuntu has no problem working with all the partitions in a USB key, but Android (at least version 2.3) will only see the first partition of a SD card.

So, I used the Startup Disk Creator utility mentioned in the article, and in a few minutes I had my bootable USB. Nevertheless, the persistence option did not seem to work, cause though I had no problem installing a new package and running it (I installed node.js), once I did a restart the installation was gone (so I guess it just got installed in memory, which confirms the idea that this sort of installation is just a "Live USB").

I decided then to try what I really wanted from the beginning: doing a normal installation, but targeting my USB key instead of the hard disk. Well, the process is as simple as it sounds. Boot a live Ubuntu ISO, start the installation and, when presented with the menu to select a target partition, choose your USB key from the combo box. That's all there is to it; the difference with the other install explained above is that, as this is a normal install, it takes much longer (while the "Live USB" creation that I did before is very fast).

Once installed, I could happily verify that my "portable Ubuntu installation" works nicely on both my home laptop and my work laptop (32-bit Ubuntu runs sweetly on both my 32-bit and my 64-bit machines). What surprises me most about all of this is that I used to think that at installation time Linux checked what drivers (kernel modules) were needed for the specific hardware, copied them to disk and generated some sort of list to load them in the following start-ups (obviously, it would also check for hardware changes at boot time, to load or ask for the corresponding new "drivers"). Well, things seem to be quite a bit more dynamic: at each start-up the OS checks the hardware and dynamically loads the needed kernel modules (so there's no list with the kernel modules needed for the hardware present at installation time). This process is done by udev, and is explained here

When your system boots, one of the startup scripts starts up Udev and prods the kernel. The kernel then sends back a stream of events for all the hardware detected when it started up ("add this device", "add that device", ...). Udev picks up these events as if the devices had just been added, creates the appropriate /dev nodes, sends off the appropriate notifications to HAL, and so on.

I guess this also means that Ubuntu copies a whole lot of different kernel modules to disk (not just the ones needed at install time) so that they can be used if needed.

All this is really interesting and makes me wonder if it could also be done with Windows, or whether its installation determines the hardware on which that "image" can run.

Monday, 12 November 2012

The Day

Long story short, The Day is a total must-see (indeed, if more people shared my dubious taste it could end up being a cult piece).

The main elements are common to recent dystopian films starting with "The Road": ultra-dark photography, a bunch of survivors being chased not by zombies but by other (dehumanized) humans aiming to feed on them, and a lack of information about how and why the world has come to this gloomy present (in fact, it would be complicated to explain what the real problem is, as there does not seem to be any radiation and the climate seems OK (at least it rains)... but both crops and animal life (excluding humans) appear to be missing).

What sets this film ahead of the pack is its pure intensity and how it combines it with themes like redemption, revenge and deception. The hard and tortured character played by Ashley Bell (and how well she performs it) is also one of the film's strong points. Also noteworthy for me is the presence of the beautiful Shannyn Sossamon (I mainly know her for her roles in Wristcutters and Catacombs, but it seems she's done a good deal of interesting films in between).

Well, I don't feel like extending this post any more; take it just as a quick gift I'm giving you :-) to prompt you to treat yourself by watching it.

Saturday, 3 November 2012

Safe dereferencing in C#

There's one feature in Groovy that I really miss in other languages: the safe dereference operator (name = p?.Name).
It's amazing how such a simple thing spares you from cluttering your code. This C# (or Java) code:

if (person != null && person.Address != null)
Console.WriteLine("person.Address.City: " + person.Address.City);

can be written in Groovy using the "?." safe dereference operator just like this:

println person?.Address?.City

In the last days I've had to write in C# a good bunch of ugly looking code like the above, which made me raise 2 questions:

  1. Why haven't the cool guys in the C# team added something like "?." to the language?
  2. Can't I find a workaround?

For the second question, I quickly came up with an apparently rather usable solution, but once I started to code it I realized it was not that perfect because of the differences between reference types, value types and nullable types... In fact, maybe that's the reason for not having "?." in the language itself.
Anyway, I think my solution is usable enough to share it here and add it to my toolbox:

public static TResult SafeGet<T, TResult>(this T target, Func<T, TResult> getter) where T : class
{
  //Type of conditional expression cannot be determined because there is no implicit conversion between 'TResult' and '<null>'
  //return target != null ? getter(target) : null;
  return target != null ? getter(target) : default(TResult);
}

that we can use like this:

Console.WriteLine("p1.Address.City: " + 
   p1.SafeGet(item => item.Address)
     .SafeGet(item => item.City));

instead of this:

  string city = null;
  if (p1 != null && p1.Address != null)
   city = p1.Address.City;
  Console.WriteLine("p1.Address.City: " + city);

Some notes about this simple code:

  • I'm using the "where T:class" generic constraint, forcing T to be a Reference type. I would also like to accept Nullable types there, but these are considered value types, so I have no way to indicate that in the constraint, which means that unfortunately this solution won't work for Nullable types
  • I initially intended to return null:
    return target != null ? getter(target) : null;
    but then we would get the infamous compilation error:

    Type of conditional expression cannot be determined because there is no implicit conversion between 'TResult' and '<null>'

    The compiler can't be sure that the type being returned there can be null (again for this we would need to be able to indicate a nullable constraint)... so in the end I've changed it to default

I then had a look to see what solutions others had thought up for this same issue, and found one guy following a very similar approach

You can download a sample here