Saturday 25 May 2019

Huawei Campus

Innovation and imitation: those two forces shape our present and our future. There's a general tendency to disdain imitation, but I strongly dissent. If something is good, imitating it is a smart decision. As long as the original is good and the copy is good, imitation is a pretty good approach, much better than creating something new but worse. I think we've all heard of examples of replicas of iconic architectural landmarks, even of entire neighbourhoods: a small "Tour Eiffel", a neighbourhood inspired by the Venice canals... things like that.

As usual, modern China has taken this to the next level. Already in 2007 they opened, in one more of those multi-million-inhabitant cities with odd names, a neighbourhood that tries to be a replica of Paris. It seems it has not been particularly successful, but anyone who wants to enjoy the marvels of Haussmannian Parisian architecture on the other side of the world now has the option. It seems they have also built similar neighbourhoods imitating London and other famous cities.

If that seems cool/crazy, what Huawei has done with their new campus in Dongguan is just mesmerizing:

12 separate "towns." Each section is modeled after a different major European city, and each city can hold around 2,000 people.

Some of the cities used for inspiration for the campus include Paris; Verona, Italy; Granada, Spain; and Bruges, Belgium.

It's absolutely amazing, and what shocked me the most is the German Castle dominating the artificial lake. Wow, that's beautiful!!!

Sunday 19 May 2019

C#, JSON, dynamic...

I've been doing some JSON stuff in .NET (Json.NET) lately, slightly beyond plain serializing and deserializing. It's basic stuff anyway, but it'll be useful (to me) to dump it here.

First, the basic distinction between the two main classes that we use when dealing with JSON in .NET: JsonConvert and JObject. When we already have a .NET class that matches the JSON string, we use JsonConvert.DeserializeObject&lt;MyType&gt;. If that's not the case, we use JObject.Parse to deserialize into a JObject, which, with the power of dynamic, gives us a very convenient way to read and manipulate that JSON structure. The only source of confusion is that JsonConvert.DeserializeObject&lt;Object&gt;(item), JsonConvert.DeserializeObject&lt;dynamic&gt;(item) and JsonConvert.DeserializeObject(item) all return a JObject, so they are equivalent to calling JObject.Parse().
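A quick sketch of that equivalence (assuming a project referencing Newtonsoft.Json):

```csharp
using System;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

class EquivalenceDemo
{
    static void Main()
    {
        var json = @"{'name':'Iyan'}";

        // all three calls hand us back a JObject at runtime
        var fromConvert = JsonConvert.DeserializeObject(json);
        var fromConvertObject = JsonConvert.DeserializeObject<object>(json);
        var fromParse = JObject.Parse(json);

        Console.WriteLine(fromConvert.GetType().Name);       // JObject
        Console.WriteLine(fromConvertObject.GetType().Name); // JObject
        Console.WriteLine(fromParse.GetType().Name);         // JObject
    }
}
```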

The casing difference between camelCase JSON and PascalCase properties in C# objects continues to make things a bit messy. Out of the box, when deserializing with JsonConvert.DeserializeObject, Json.NET will nicely try to find in our .NET class a public property that matches either camel (Person.name) or Pascal (Person.Name) with the camelCase JSON property ("{'name':'Iyan'}"). So we don't need the horror of changing the standard naming convention.

class PersonCamel
{
 public string name {get;set;}
 public int age {get;set;}
 public override string ToString()
 {
  return this.name + " - " + this.age;
 }
}

class Person
{
 public string Name {get;set;}
 public int Age {get;set;}
 public override string ToString()
 {
  return this.Name + " - " + this.Age;
 }
}

var jsonPerson = @"{
 'name':'Eric',
 'age': 49
}";

var personCamel = JsonConvert.DeserializeObject<PersonCamel>(jsonPerson);
Console.WriteLine(personCamel.ToString());
//Eric - 49

var person = JsonConvert.DeserializeObject<Person>(jsonPerson);
Console.WriteLine(person.ToString());
//Eric - 49

When serializing our C# class to JSON, by default JsonConvert will use the property names exactly as they are declared in our class. So if we want to avoid the pain of ending up with JSON strings in PascalCase, we'll have to help the serializer a bit with one of these two techniques:

JsonConvert.SerializeObject(person, new JsonSerializerSettings
{
 ContractResolver = new DefaultContractResolver
 {
  NamingStrategy = new CamelCaseNamingStrategy()
 }
});


JsonConvert.SerializeObject(person, new JsonSerializerSettings 
{ 
 ContractResolver = new CamelCasePropertyNamesContractResolver() 
});
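For example, with the Person class above, either settings object produces camelCase output:

```csharp
using System;
using Newtonsoft.Json;
using Newtonsoft.Json.Serialization;

class CamelCaseDemo
{
    class Person
    {
        public string Name { get; set; }
        public int Age { get; set; }
    }

    static void Main()
    {
        var person = new Person { Name = "Eric", Age = 49 };

        // PascalCase properties get written as camelCase JSON
        var json = JsonConvert.SerializeObject(person, new JsonSerializerSettings
        {
            ContractResolver = new CamelCasePropertyNamesContractResolver()
        });
        Console.WriteLine(json);
        // {"name":"Eric","age":49}
    }
}
```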

Let's now move to more interesting stuff with JObject, dynamic and explicit/implicit conversions (I had already talked about this in this post). Let's see some code:

var jsonResult = @"{
'result':'OK',
'users': [
 {
 'name':'xuan',
 'age': 45
 }, 
 {
 'name':'Francois',
 'age': 47
 }
  ]
}";

var jsonPerson = @"{
 'name':'Eric',
 'age': 49
}";

void TestJObjQuerying(string jsonResult, string jsonPerson)
{
 Console.WriteLine("TestJObjQuerying");
 
 dynamic resultJObj = JsonConvert.DeserializeObject(jsonResult);
 //to be able to use a linq query on the array we need to cast as JArray
 //then, as JArray implements IEnumerable of JToken we need to cast to dynamic again
 Console.WriteLine(String.Join(",", ((JArray)(resultJObj.users)).Select(it => (((dynamic)it).name))));
 
 
 
 dynamic personJObj = JObject.Parse(jsonPerson);
 
 var name1 = personJObj.name;
 Console.WriteLine(name1.GetType().Name); //JValue
 
 string name2 = personJObj.name; //implicit conversion
 Console.WriteLine(name2.GetType().Name); //String
 
 var name3 = (string)(personJObj.name); //explicit conversion
 Console.WriteLine(name3.GetType().Name); //String
 
 Func<dynamic, string> myFunc = jObj => jObj.name; //implicit conversion for the return type
 var name4 = myFunc(personJObj);
 Console.WriteLine(name4.GetType().Name); //String
 
}

TestJObjQuerying(jsonResult, jsonPerson);

We know that resultJObj.users is a JArray, but we need to hint the compiler with a cast, otherwise the compiler will treat it just as dynamic (a member of a dynamic object is itself dynamic). JArray provides the Select method not as an instance method, but as an extension method available because it implements IEnumerable&lt;JToken&gt;. Extension methods and dynamic do not play well together, so we need that cast for the compiler to bind the call to Linq.Enumerable.Select. That Select returns an IEnumerable&lt;JToken&gt;, so in order for the access to the name property to work we need to cast each item to dynamic.

The next lines in the method are a reminder of how implicit and explicit conversions make JObjects and dynamic work like magic.

A long while ago I posted about a very interesting possibility: deserialize into a JObject and then take one part of that structure for which we do have a .NET type and convert it to that type by means of ToObject. In my old sample I was passing a serializer to ToObject because of the camelCase to PascalCase difference, but now, same as I've just explained for JsonConvert, this is handled automatically.

 dynamic obj = JObject.Parse(jsonResult);
 var personJObject = obj.users[0];
 var person = personJObject.ToObject<Person>(); //camelCase to PascalCase works automatically

The inverse procedure can also be useful. We have a .Net class and we want to serialize it to json with some additional fields. We'll use FromObject like this:

var person = new Person{
 Name = "Jean",
 Age = 52
};
dynamic jPerson = JObject.FromObject(person);
jPerson.City = "Marseille";
Console.WriteLine(jPerson.ToString());
 
//{
//  "Name": "Jean",
//  "Age": 52,
//  "City": "Marseille"
//}

The problem with the above is that there isn't an out-of-the-box way to serialize the JObject to camelCase JSON (naming strategies apply to .NET class members, not to JObject property names). You'll have to use a more elaborate technique like the one described here.
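Just as an illustration (my own quick sketch, not necessarily the technique from the linked article), one brute-force option is to recursively replace every property with a camelCased copy before serializing:

```csharp
using System;
using System.Linq;
using Newtonsoft.Json.Linq;

static class JsonCasing
{
    // Recursively renames every JObject property to camelCase.
    // Sketch only: assumes non-empty property names.
    public static void ToCamelCase(JToken token)
    {
        if (token is JObject obj)
        {
            // snapshot the properties, since we replace them while iterating
            foreach (var prop in obj.Properties().ToList())
            {
                ToCamelCase(prop.Value);
                var newName = char.ToLowerInvariant(prop.Name[0]) + prop.Name.Substring(1);
                // Json.NET clones prop.Value here because it already has a parent
                prop.Replace(new JProperty(newName, prop.Value));
            }
        }
        else if (token is JArray arr)
        {
            foreach (var item in arr)
                ToCamelCase(item);
        }
    }
}
```

With the jPerson JObject from before, calling JsonCasing.ToCamelCase(jPerson) and then jPerson.ToString() would give us "name", "age" and "city" keys.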

Sunday 12 May 2019

Map and parseInt

It had been quite a long time since I last stumbled upon some unexpected JavaScript behaviour. Well, indeed this is not an issue with the language itself, but the combination of two infrequently used parameters in two frequently used functions.

I was using Array.prototype.map and parseInt and was getting some crazy outputs:

let items = [1.2, 0.3, 2.4, 3.7];
console.log(items.map(parseInt).join(" - "));
//1 - NaN - NaN - NaN

Even crazier, when using an arrow function that invokes parseInt, it worked as expected:

console.log(items.map(it => parseInt(it)).join(" - "));
//1 - 0 - 2 - 3

So what's going on? Well, a first Google search brought up this explanation. Just read it, but summing up: I had missed that parseInt accepts a second parameter, radix (which we rarely need), and that map (same as filter, forEach...) passes a second argument, the index, that in many cases we don't use. Combining both, we get this unexpected behaviour.
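Expanding the calls that map actually makes shows where the index sneaks in as parseInt's radix:

```javascript
// map passes (value, index, array) to its callback, so items.map(parseInt)
// effectively performs these four calls:
const results = [
 parseInt(1.2, 0), // radix 0 falls back to the default (10) -> 1
 parseInt(0.3, 1), // radix 1 is not a valid radix -> NaN
 parseInt(2.4, 2), // "2" is not a valid base-2 digit -> NaN
 parseInt(3.7, 3)  // "3" is not a valid base-3 digit -> NaN
];
console.log(results.join(" - "));
//1 - NaN - NaN - NaN
```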

One can come up with more examples of similar unexpected behaviour, for example with a format function receiving an optional second parameter:

const format = (st, wrapSt) => (wrapSt || "") +  st.toUpperCase() + (wrapSt || "");
console.log(format("bonjour"));
//BONJOUR
console.log(format("bonjour", "|"));
//|BONJOUR|

items = ["bonjour", "ca va", "hey"];
console.log(items.map(format).join(" - "));
//BONJOUR - 1CA VA1 - 2HEY2

Wednesday 8 May 2019

WebAssembly Update

Somehow the other day I came across an article about WebAssembly. I had vague memories of having read about it some years ago and even posting about it. Well, it seems that in these four years WebAssembly has changed and evolved a lot. What I explained in that post, about the WebAssembly binaries describing an AST that at runtime would be compiled into bytecode for the same virtual machine that runs the JavaScript code, no longer applies at all. Now the WebAssembly binaries contain bytecode for a stack-based VM that is different from the VM that runs the JavaScript code. Notice that all modern JavaScript engines are register based.

This series about JavaScript and WebAssembly is a really excellent reading, particularly this entry.

Once it reaches the browser, JavaScript source gets parsed into an Abstract Syntax Tree.

Browsers often do this lazily, only parsing what they really need to at first and just creating stubs for functions which haven’t been called yet.

From there, the AST is converted to an intermediate representation (called bytecode) that is specific to that JS engine.

In contrast, WebAssembly doesn’t need to go through this transformation because it is already an intermediate representation. It just needs to be decoded and validated to make sure there aren’t any errors in it.

In the first paragraph I've just said that all modern JavaScript engines/VMs are register based. So far, for me, the terms engine and VM had been interchangeable. The thing is that after reading this article, maybe that's not always correct. The article is mainly about V8 adding an interpreter to its combination of baseline and optimizing JIT compilers, but what I find really important is the picture where it seems to say that when not using the interpreter, the JavaScript code gets parsed and JIT compiled to native code without passing through a bytecode representation. So I guess in that case we should not talk about a VM. This quite surprised me because it's a new scenario to add to the two that were familiar to me:

  • Source code (C#, Java) gets compiled to bytecode for a stack-based virtual machine (and stored in an assembly or .class file) before running the code. Then the runtime either interprets and JITs (JVM HotSpot) or only JITs (.NET) that bytecode.
  • The runtime parses the (JavaScript) source code into a bytecode representation (for a register-based VM), and then interprets or JITs that bytecode.

And now we have the V8 runtime JIT compiling directly from JavaScript to native code without going through a bytecode intermediate representation. Interesting... By the way, in this post from some years ago I explained the interpreter and basic/optimizing JIT combinations that JavaScript runtimes were using at that time.

Thursday 2 May 2019

Generic Builder

At the beginning of this previous post I showed one common pattern for immutable classes in C#: having a SetPropertyX method in the immutable class that returns a copy of the object with the new property value (I think this is called chained setters). Another more basic approach is using the Builder pattern. I say more basic because you are not changing certain properties, you are passing the values for all the properties. This means we'll have constructors that can take a large number of parameters, and that is one of the classic use cases for the Builder pattern. You can see a Java example here, and I put my own C# sample below:

using System;

namespace Builder
{
    public class Person
    {
        public string Name {get; private set;}
        public int Age {get; private set;}

        public Person(string name, int age)
        {
            this.Name = name;
            this.Age = age;
        }
    }

    public class PersonBuilder
    {
        private string Name;
        private int Age;

        public PersonBuilder SetName(string name)
        {
            this.Name = name;
            return this;
        }

        public PersonBuilder SetAge(int age)
        {
            this.Age = age;
            return this;
        }

        public Person Build()
        {
            return new Person(this.Name, this.Age);
        }
    }
}

var person = new PersonBuilder()
    .SetAge(20)
    .SetName("Xuan")
    .Build();


Manually writing a Builder class for each immutable class that you want in your application can be a bit of a repetitive pain, so could we have something more generic? Well, here's my try at building a GenericBuilder in C#.

public class GenericBuilder<T>
{
    //requires using System.Collections.Generic, System.Linq and System.Reflection
    private Dictionary<string, object> parametersDic = new Dictionary<string, object>();

    public GenericBuilder<T> SetValue(string key, object param)
    {
        //the values are given in PascalCase but the constructor params are in camelCase
        this.parametersDic[this.toCamelCase(key)] = param;
        return this;
    }

    public T Build()
    {
        var ctors = typeof(T).GetConstructors();
        var ctor = ctors.FirstOrDefault(it => it.GetParameters().Length == this.parametersDic.Count);
        if (ctor != null)
        {
            var orderedParams = new List<object>();
            foreach (var param in ctor.GetParameters())
            {
                orderedParams.Add(this.parametersDic[param.Name]);
            }
            return (T) ctor.Invoke(orderedParams.ToArray());
        }
        else
        {
            throw new Exception("No Suitable Constructor Found");
        }
    }

    private string toCamelCase(string st) => char.ToLower(st[0]) + st.Substring(1);
}
	

That we can use like this to build an instance of the aforementioned Person class:

var person = new GenericBuilder<Person>()
    .SetValue("Age", 1)
    .SetValue("Name", "Francois")
    .Build();

We set values by passing the property name and the value, and store them in a Dictionary. When we finally want to create the object, we invoke the constructor via reflection, passing the values in the order expected by the constructor.

This is the first time that I make use of ConstructorInfo to create an object; so far I'd always used Activator.CreateInstance. For this case we need GetConstructors to obtain the constructor, because we don't know the parameter order and hence we cannot use Activator.CreateInstance. In other cases both options would be equally valid, for example:

//Factory method creating an object that expects a string as parameter to its constructor
public static T ItemFactory1<T>(string id)
{
    return (T) Activator.CreateInstance(typeof(T), new [] {id});
}

public static T ItemFactory2<T>(string id)
{
    return (T) typeof(T).GetConstructor(new [] {typeof(string)})
        .Invoke(new [] {id});
}

I guess Activator.CreateInstance has to locate the constructor in the same (costly) way that GetConstructor does, and it has to do it each time we want to create an object. If we use GetConstructor, we can call it once and store the ConstructorInfo for ensuing calls, so if we are going to create multiple objects it's the better option. This is mentioned in one of the comments here.
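A minimal sketch of that caching idea (the class and member names are just mine): the ConstructorInfo lookup happens once per closed generic type, in the static field initializer, and every later Create call just invokes it.

```csharp
using System;
using System.Reflection;

public class CachedFactory<T>
{
    // looked up once per closed generic type, then reused on every call
    private static readonly ConstructorInfo Ctor =
        typeof(T).GetConstructor(new[] { typeof(string) });

    public static T Create(string id)
    {
        return (T)Ctor.Invoke(new object[] { id });
    }
}
```

For example, CachedFactory&lt;Uri&gt;.Create("http://www.google.com") builds a Uri through its string constructor without repeating the reflection lookup.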