Sunday, 29 September 2013

Windows Services, WCF Named Pipes and Windows Sessions

In the last few months, while working on a project at work, I've come across a good number of pretty interesting things (interesting and challenging things in Production projects tend to involve stress and hair loss... so well, let's say that I tend to wear a cap or a wool cap more and more often :-) One of these items has to do with the odd relationship between Windows Services and WCF Named Pipes. As usual, I'm not revealing anything new here, just mainly putting together the different pieces of information given by others that helped us solve the problem.

So, let's say that you have a Windows Service that wants to communicate with other processes (let's call them Activities) running on the same machine (in our case the Service was also launching these processes (activities), but that's not relevant to this specific problem). Well, this kind of IPC scenario seems a perfect fit for named pipes. Our processes (and also the Windows Service) are .NET applications, so we should be able to set up this solution pretty easily by using WCF with the NetNamedPipeBinding.

The idea seems pretty straightforward: we want the Windows Service to establish communication with the Activity processes, so these processes will work as servers and the Windows Service as the client. This means that each of our Activity processes will contain a self-hosted WCF server (System.ServiceModel.ServiceHost) with an endpoint using a NetNamedPipeBinding, basically:

ServiceHost host = new ServiceHost(typeof(MyActivityImplementation));
NetNamedPipeBinding binding = new NetNamedPipeBinding();
// the endpoint address must be a net.pipe URI, not a raw pipe name
host.AddServiceEndpoint(typeof(IActivity), binding, "net.pipe://localhost/MyActivity");
host.Open();

The problem with the above is that your Windows Service won't be able to open the connection to the Named Pipe. There are several posts and StackOverflow entries discussing this issue:


Really valuable pieces of information, all of them pointing to the same solution: use a Callback Contract. This Callback Contract thing is really ingenious and comes as a neat surprise when you're so used to the Web (Services/APIs) world and its connectionless nature, with requests always started by the client. With a Callback Contract we're able to reverse the process, so that the server can call the client through a callback. So, rather than A always making requests to B, we can have B make an initial request to A; that request will carry a callback object that later on A can use for making requests to B. We've thus created a bidirectional channel, where A can send requests to B and B can send requests to A. We'll have two contracts then: one for the methods that B will invoke in A, and another for the methods that A will invoke in B. Obviously WCF makes callbacks available only for those bindings that can really support bidirectional communication, that is, NetTcpBinding and NetNamedPipeBinding.

This was not intended to be a post about WCF and Callback Contracts, so if you want to see some code just google for it.
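Still, just to give a flavour of the idea, this is a minimal sketch of what a duplex (callback) contract pair could look like in our scenario; all the type and member names here are hypothetical, not the ones from the real project:

```csharp
using System.ServiceModel;

// Contract hosted by the Windows Service (the WCF server once we reverse the roles).
// The CallbackContract property is what wires the two contracts together.
[ServiceContract(CallbackContract = typeof(IActivityCallback))]
public interface IRegistration
{
    [OperationContract]
    void Register(string activityId);
}

// Contract implemented by each Activity process (the WCF client).
public interface IActivityCallback
{
    [OperationContract]
    void DoWork(string command);
}
```

Inside the `Register` operation the service can capture the callback channel with `OperationContext.Current.GetCallbackChannel<IActivityCallback>()` and keep it around to invoke `DoWork` on that Activity later on.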
Once we've found a workaround that allows our Windows Service and normal processes to happily talk to each other, the next point is to understand why the initial approach was failing. In the links above there's some confusing information concerning this, but the paragraph below seems like the correct explanation to me:

If you are running on Windows Vista or later, a WCF net.pipe service will only be accessible to processes running in the same logon session (e.g. within the same interactive user's session) unless the process hosting the WCF service is running with the elevated privilege SeCreateGlobalPrivilege. Windows Services run in their own logon session, and have the privilege SeCreateGlobalPrivilege, so self-hosted and IIS-hosted WCF net.pipe services are visible to processes in other logon sessions on the same machine. This is nothing to do with "named pipe hardening". It is all about the separation of kernel object namespaces (Global and Local) introduced in Vista... and it is the change this brought about for security on shared memory sections which causes the issue, not pipe security itself. Named pipes themselves are visible across sessions; but the shared memory used by NetNamedPipeBinding to publish the current pipe name is in Local namespace, not visible to other sessions, if the server is running as normal user without SeCreateGlobalPrivilege.

Well, admittedly, those references to separation of kernel object namespaces and shared memory sections left me in shock; I didn't have a clue what they were talking about.

Let's start off by understanding Kernel Objects and Kernel Object namespaces. When I think of Windows Kernel Objects I mainly think in terms of handles and the per-process handle table, but we have to notice that many of these Kernel Objects have a name, and this name can be used to get a handle to them. Because of Remote Desktop Services (bear in mind that this is not just for remote sessions; the switch-user functionality is also based on RDS), the namespace for these named objects was split into Global and Local in order to avoid clashes.

OK, so far so good, but how does that relate to Named Pipes and Shared Memory Sections? This excellent article explains most of it. I'll summarize it below:

The URI-style name used by WCF for the endpoint addresses of NetNamedPipeBinding has little to do with the real name that the OS will assign to the Named Pipe object (which in this case will be a GUID). Obviously WCF has to go down to the Win32 level for all this pipe communication, so how does the WCF client machinery know, based on the .NET pipe name, the name of the OS pipe that it has to connect to (the GUID I've just mentioned)?

The server will publish this GUID in a Named Shared Object (a Memory Mapped File object). The name of this Named File Mapping Object is obtained with a simple algorithm from the NetNamedPipeBinding endpoint address, so the client uses this same algorithm to generate the name of the File Mapping Object, open it and read the GUID stored there.

And it's here where the problem lies. A normal process running in a Windows Session other than 0 (and usually we'll want our normal processes running in the Windows Session of the logged-in user that started them, rather than in session 0) can't create a Named File Mapping Object in the Global namespace, so it'll create it in its Local namespace (corresponding to its Windows Session). Later on, when the Windows Service (which always runs in Session 0) tries to get access to that File Mapping, it'll try to open it from the Global namespace rather than from the namespace local to that process. This means that it won't be able to find the object and the whole thing will fail.

That's why we have to sort of reverse our architecture using Callback Contracts. The Windows Service will create a File Mapping Object in the Global namespace (contrary to a normal process, a Windows Service is allowed to do this) and then the client process will open that File Mapping from the Global namespace (it can't create a file mapping there, but it can open one). Now the client process has the GUID for the name of the pipe and can connect to it. Once the connection is established, the Windows Service can send requests to the client process through the Callback Contract object.

My statements above are backed by that MSDN article that I've previously linked about Kernel Object Namespaces (and also by some painful trial and error). This paragraph contains the final piece of information that I needed to put the whole puzzle together:

The creation of a file-mapping object in the global namespace, by using CreateFileMapping, from a session other than session zero is a privileged operation. Because of this, an application running in an arbitrary Remote Desktop Session Host (RD Session Host) server session must have SeCreateGlobalPrivilege enabled in order to create a file-mapping object in the global namespace successfully. The privilege check is limited to the creation of file-mapping objects, and does not apply to opening existing ones. For example, if a service or the system creates a file-mapping object, any process running in any session can access that file-mapping object provided that the user has the necessary access.
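The create-vs-open asymmetry described in that quote can be sketched directly with the managed memory-mapped file API (available since .NET 4); the mapping name here is hypothetical:

```csharp
using System.IO.MemoryMappedFiles;

class GlobalMappingSketch
{
    static void Main()
    {
        // In the Windows Service (session 0, holds SeCreateGlobalPrivilege):
        // creating a mapping in the Global namespace succeeds.
        using (var map = MemoryMappedFile.CreateNew(@"Global\MyPipeGuidMap", 1024))
        {
            // Running this same CreateNew from a normal user session without
            // SeCreateGlobalPrivilege would throw an access-denied exception.

            // In a client process in any other session: opening an existing
            // Global mapping is NOT a privileged operation, so this works.
            using (var existing = MemoryMappedFile.OpenExisting(@"Global\MyPipeGuidMap"))
            {
                // read the published pipe GUID from the shared memory here...
            }
        }
    }
}
```

This mirrors what NetNamedPipeBinding does under the hood: the privileged party creates the Global mapping, everyone else just opens it.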

Friday, 27 September 2013

Street Crap

For a few years now I've been very much into different forms of Street Art. Though I'm mainly interested in pieces with a social/political meaning, the truth is that I can very much appreciate Street Art works just for their aesthetic appeal, even when no message is intended (it's astonishing how much life a few stickers and a few marker strokes can cast on a plainly dead city wall). This said, there are places that are beautiful enough on their own and do not need any addition; such an addition would be vandalism rather than art. Furthermore, if that addition is plainly bland and tasteless, it ends up being purely grotesque.

A few days ago I found one of the most flagrant displays of pure stupidity and disrespect of that kind. Some idiotic scumbag decided to leave a shitty graffiti on one of the cute old buildings on the imposing Toompea Hill in the delightful city of Tallinn.

I really felt like rewarding the author/s of such an exhibition of mental diarrhoea by shoving all the sprays used in such a felony up their ass.

Friday, 6 September 2013

Node Scope, Html Classes and more

Time for one of my typical listings of interesting notes/tricks/whatever that I want to have easily accessible for further reference.

  • If you've ever been unsure about how scope between different modules works in Node.js, you should read this brilliant explanation; it makes it crystal clear

    Unlike the browser, where variables are by default assigned to the global space (i.e. window), in Node variables are scoped to the module (the file) unless you explicitly assign them to module.exports. In fact, when you run "node myfile.js" or "require('somefile.js')" the code in your file is wrapped as follows: (function (exports, require, module, __filename, __dirname) { // your code is here });

  • I'd never been aware of how incorrectly I was using the term "CSS class" until I read this eye-opener. So I think we should speak like this: we assign html classes to html elements, and style is applied to those elements by means of CSS rules consisting of a Class Selector and a declaration block. I fully agree with the author that using the correct naming is not just a matter of being pedantic:

    This isn't just pedanticism. By using the phrases "CSS class(es)" or "CSS class name(s)" you're not only being imprecise (or just plain wrong), you're tying the presentational context/framing of "CSS" to class names which implies and even encourages the bad practice of using presentational class names.

  • One of my first thoughts after learning about CSS Animations was how useful it would be to create CSS rules from JavaScript, you can read a good explanation here.
  • Even if you've never written a jQuery plugin you'll probably have wondered what the jQuery.fn thing is. Well, it's as simple as this: jQuery.fn is an alias for jQuery.prototype (yes, in the end jQuery is just a function, so it has a prototype property). This explanation is excellent.
  • I'll start by saying that I'm not the least interested in languages that compile to JavaScript, be it Dart, CoffeeScript, TypeScript... JavaScript is a beautiful language and I can't understand people not wanting to use it for normal web development. Nevertheless, this asm.js thing is quite different stuff: it comes with the promise of allowing us to run in our browsers things that were not thought for the web, and at a decent speed. You can read this beautiful explanation by John Resig or this quite detailed one (but admittedly quite harder to digest).
  • While doing a debugging session in Visual Studio and having a look at the Threads window, I noticed that one of the threads was running with Priority set to Highest. I was not creating any thread explicitly; there was a bunch of worker threads being created by a WCF Service Host, so what could this be? Well, pretty easy: the .NET runtime will always create at minimum 3 threads for an application, the Main thread, a Debugger thread (a helper thread to work along with the debugger) and the Finalizer thread (and depending on the .NET version you can also have a separate GC thread). So it struck me that it had to be the Finalizer thread that was running at Highest priority. This question on StackOverflow confirms it.

Sunday, 1 September 2013

Some Notes on Concurrency

My last assignment at work compelled me to go through a fast and pleasant upgrade of my very basic and rusty knowledge of concurrent programming, so there are some thoughts that I'd like to write about, and even though they're rather unconnected, I'm not going to refrain from throwing them into a messy post...

A basic understanding of concurrency (threads, locks...) should be fundamental for any software developer, and now that multi-core processors are everywhere, such knowledge is even more important. However, as more and more of our time as developers is spent writing code in a thread-unaware language like JavaScript, we are less and less exposed to concurrency constructs. In a way we could say that this is a time of concurrency-ready hardware and concurrency-unready developers...

A multi-core reality

The omnipresence of multi-core processors not only increases the number of situations where using multiple threads would be beneficial (think of CPU bound scenarios), but also adds new elements to account for:
  • Spinlocks I was totally unaware of this synchronization construct until I realized it had been added to .Net 4.0.

    In software engineering, a spinlock is a lock which causes a thread trying to acquire it to simply wait in a loop ("spin") while repeatedly checking if the lock is available. Since the thread remains active but is not performing a useful task, the use of such a lock is a kind of busy waiting.

    What should come to mind after reading that is that a thread doing a busy-wait to avoid a context switch only makes sense on a multi-core processor (or a multi-processor machine). I'll give an example. Let's say Thread1 acquires a spinlock and gets scheduled out before completing its operation. Then Thread2 tries to acquire the spinlock and gets blocked there doing a busy wait. As there's only one core, and this busy wait is using it, Thread1 cannot be scheduled to complete its action and release the spinlock until Thread2's quantum finishes and a context switch happens. So in this case a spinlock is completely counterproductive. You can confirm here that what I'm saying is correct.
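For reference, this is the basic usage pattern of the SpinLock struct that .NET 4.0 added (just a minimal sketch; the critical section here is a placeholder):

```csharp
using System.Threading;

class SpinLockSketch
{
    // Note: SpinLock is a mutable struct, so keep it in a field and never copy it.
    private static SpinLock _spinLock = new SpinLock();
    private static int _counter;

    static void Increment()
    {
        bool lockTaken = false;
        try
        {
            _spinLock.Enter(ref lockTaken);
            // ...a very short critical section; anything long defeats the
            // whole point of spinning instead of blocking...
            _counter++;
        }
        finally
        {
            if (lockTaken) _spinLock.Exit();
        }
    }
}
```

The `ref bool lockTaken` dance exists so the lock is reliably released in the `finally` block even if an exception is thrown while entering.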

  • You should be aware that the same instruction can run at exactly the same moment on 2 different cores. The other day I found a bug in some code because, in order to generate a pseudo-random string (that would be used for creating a named pipe), the current date down to the millisecond was being used (something like yyyyMMddHHmmssFFF). Well, on occasion, under heavy load testing, a thread would fail to create the named pipe because another thread had already created it. This means that the 2 threads had run the instruction MyDateToStringFormatter(DateTime.Now); on their respective cores at just the same millisecond!!!

Atomicity, Thread Safety, Concurrent Collections

Another essential point we need to take into account when working on multithreaded applications is the atomicity of operations. For example, it should be clear that in a multithreaded environment with several threads reading/writing from/to a normal Dictionary, this operation is unsafe:

if (myDictionary.ContainsKey("myKey"))
    myVar = myDictionary["myKey"];

because the first instruction (checking that the key exists) could return true, but before we run the second instruction another thread could remove that key (and obviously on multi-core systems the chances increase).

Fortunately, .NET 4 introduced a whole set of thread-safe collections, the Concurrent Collections, which means that we can easily fix that problematic code by using a ConcurrentDictionary this way:

myDictionary.TryGetValue("myKey", out myVar);

But there are cases that are not so obvious. For example, is there any problem if different threads running in parallel add elements to a normal List? An apparently innocent myList.Add(item);? That Add call is far from atomic. Adding an element to a list involves checking the size of the list and resizing it if necessary, so thread1 could be resizing the list, and before it has time to set its new size thread2 could run its own Add and start a new resize... It's a common question on StackOverflow.

With this in mind, you set out to use a ConcurrentList but get slapped by the fact that such a collection does not exist. Well, pondering it one realizes that such a collection would make little sense [good discussion here]. If several threads can be reading/writing from/to the List, you won't want to access its elements by index, as what you inserted as element 3 could now be at index 5 due to other threads' work. So maybe what you really need is a ConcurrentQueue or a ConcurrentStack, or maybe you just want a sort of container where you can insert/remove items in a thread-safe fashion and apply the typical Linq to Objects operations... In this case, a fast look at the collections available in the System.Collections.Concurrent namespace gives us the solution: ConcurrentBag. The fact that it's optimized for situations where the same thread is both producing and consuming items from the collection should not confuse you; you can use it for other concurrent scenarios without a problem (you can read more here and here).
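A quick sketch of the difference: the plain List version below is a race waiting to happen, while the ConcurrentBag version is safe (numbers chosen arbitrarily):

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

class BagVsListSketch
{
    static void Main()
    {
        // UNSAFE: concurrent Add on List<T> can lose items or corrupt
        // the internal array during a resize.
        // var list = new List<int>();
        // Parallel.For(0, 10000, i => list.Add(i));

        // SAFE: ConcurrentBag<T> supports concurrent Add/Take.
        var bag = new ConcurrentBag<int>();
        Parallel.For(0, 10000, i => bag.Add(i));
        // bag.Count is reliably 10000 here, every add landed.
    }
}
```

Note there's no indexed access on the bag at all; you just add, take and enumerate, which is exactly why a "ConcurrentList" abstraction adds so little.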

As I already noted here, enumerating a Concurrent Collection is safe; I mean, if another thread modifies the collection while your thread is iterating it, you won't get an InvalidOperationException on your next MoveNext. But something I've found that really caught my attention is that while the Enumerator returned from a ConcurrentDictionary enumerates over the "live" Dictionary:

The enumerator returned from the dictionary is safe to use concurrently with reads and writes to the dictionary, however it does not represent a moment-in-time snapshot of the dictionary. The contents exposed through the enumerator may contain modifications made to the dictionary after GetEnumerator was called.

the Enumerator returned for a ConcurrentBag iterates over a snapshot of the Bag:

The enumeration represents a moment-in-time snapshot of the contents of the bag. It does not reflect any updates to the collection after GetEnumerator was called. The enumerator is safe to use concurrently with reads from and writes to the bag.
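The snapshot behaviour of the bag makes for a neat, deterministic demo: items added during the iteration never show up in it (a small sketch with arbitrary values):

```csharp
using System.Collections.Concurrent;

class SnapshotSketch
{
    static void Main()
    {
        var bag = new ConcurrentBag<int> { 1, 2, 3 };

        foreach (int item in bag)
        {
            // Safe: no InvalidOperationException, and since the enumerator
            // works over a snapshot, these new items are never visited.
            bag.Add(item + 10);
        }

        // The loop ran exactly 3 times; bag.Count is now 6.
    }
}
```

Doing the equivalent over a ConcurrentDictionary would also never throw, but whether the items added mid-iteration get visited is not guaranteed either way, precisely because it enumerates the live dictionary.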

Regarding atomicity, I'll add that atomicity does not just refer to whether we have 1 or 2 C#/Java instructions; it comes down to the machine-level instructions. That's why we have the Interlocked class, and furthermore, that's why this class features a Read method (needed for safely reading 64-bit values on 32-bit systems; otherwise thread1 could read the first 32 bits, and thread2 could overwrite the second block of 32 bits before thread1 had read it, obtaining a "mixed" (torn) value).
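To close, a minimal sketch of that Interlocked usage (the field name is made up for the example):

```csharp
using System.Threading;

class InterlockedSketch
{
    private static long _requestCount;

    static void OnRequest()
    {
        // Atomic increment; the equivalent _requestCount++ compiles to a
        // read-modify-write sequence that can interleave between threads.
        Interlocked.Increment(ref _requestCount);
    }

    static long ReadCount()
    {
        // On a 32-bit CPU a plain read of a long is two 32-bit reads and
        // can observe a torn value; Interlocked.Read is atomic.
        return Interlocked.Read(ref _requestCount);
    }
}
```

On 64-bit processes a plain aligned read of a long is already atomic, but the Interlocked.Read version is correct everywhere.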