Thursday, December 17, 2015

Getting CoreCLR to run on CentOS

At work, we are forced to use CentOS (because reasons), but we also want to use CoreCLR (because other reasons). Unfortunately, CoreCLR doesn't currently work straight out of the box because of various library conflicts and missing dependencies. Fortunately, there is a way to install all of them with a little bash hands-on time.

TL;DR

Here's the short list of libraries. Further below there are concrete bash scripts for installing each.
Where a version is specified, it means that CoreCLR (as of rc2-16317) requires that particular version; others won't work.
  • Through yum: epel-release (this is needed, because libunwind is in epel repo).
  • Through yum: automake, libtool, curl, libunwind, gettext, libcurl-devel, openssl-devel, zlib.
  • libuv: https://github.com/libuv/libuv/archive/v1.4.2.tar.gz
  • libicu 52: http://download.icu-project.org/files/icu4c/52.1/icu4c-52_1-RHEL6-x64.tgz
  • openssl 1.0.0: https://www.openssl.org/source/openssl-1.0.0t.tar.gz

dnvm

First, get DNVM (DotNet Version Manager). This is standard.

 $ curl -sSL https://raw.githubusercontent.com/aspnet/Home/dev/dnvminstall.sh | DNX_BRANCH=dev PROFILE=~/.bashrc sh  
 $ source ~/.dnx/dnvm/dnvm.sh  

Notice the "PROFILE=~/.bashrc" part before "sh". This is needed to make dnvminstall put bootstrap in the right place (i.e. "bashrc", not "bash_profile"), because currently it doesn't.

coreclr

Then, make sure unzip is installed (it usually is, but just in case).

 sudo yum install -y unzip  

Now we can install CoreCLR using dnvm itself:

 dnvm install latest -r coreclr  

For extra thrill, go commando and update from the "unstable dev" channel:

 dnvm install latest -u -r coreclr  

This, however, will not work out of the box because of a bunch of missing libraries. Most of them can be easily installed with yum:

# epel-release repo is needed, because libunwind is in it
sudo yum -y install epel-release
sudo yum -y install automake libtool curl libunwind gettext libcurl-devel openssl-devel zlib

libuv

But libuv isn't available through yum, so it needs to be installed manually:

mkdir ~/libuv
curl -sSL https://github.com/libuv/libuv/archive/v1.4.2.tar.gz | tar zxfv - -C ~/libuv
pushd ~/libuv/libuv-1.4.2
sudo sh autogen.sh && ./configure && make && sudo make install
popd && rm -rf ~/libuv
sudo ldconfig

libicu52

At this point, CoreCLR beta7 is fine. But beta8 and later still require one more thing: libicu52. CentOS 7.2 comes standard with libicu50, and there are no later versions available through yum by default (as of this writing). But even if there were, the upgrade would break some other packages already installed - welcome to the dependency nightmare!
And don't even think of upgrading to 53 or later: apparently CoreCLR specifically needs version 52.

Thankfully, there is a hack to have your cake and eat it, too: install both versions side by side, using LD_LIBRARY_PATH to make both visible.

mkdir ~/libicu52

# Note: the link below is for x64. Check your architecture.
curl -sSL http://download.icu-project.org/files/icu4c/52.1/icu4c-52_1-RHEL6-x64.tgz | tar zxfv - -C ~/libicu52
sudo cp -r ~/libicu52/usr/local/lib /opt/libicu52
rm -r ~/libicu52

# This will set the var locally, use ~/.bashrc or /etc/environment for permanence
export LD_LIBRARY_PATH=/opt/libicu52:$LD_LIBRARY_PATH
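
To make it stick for future shells, one option (just a convenience one-liner, assuming a per-user setup) is to append that export to ~/.bashrc:

# Per-user: append to ~/.bashrc; /etc/environment is the system-wide alternative
echo 'export LD_LIBRARY_PATH=/opt/libicu52:$LD_LIBRARY_PATH' >> ~/.bashrc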

openssl 1.0.0

Next up - openssl. CentOS comes with 1.0.1, but alas, CoreCLR requires 1.0.0, and creating a symlink wouldn't do.

mkdir ~/openssl
curl -sSL https://www.openssl.org/source/openssl-1.0.0t.tar.gz | tar zxfv - -C ~/openssl
pushd ~/openssl/openssl-1.0.0t
./config shared && make && sudo make install
popd && rm -rf ~/libuv
sudo ldconfig

Notice the "./config shared" part. The argument "shared" is essential, otherwise only statically linked version will get installed.

libcurl-gnutls.4

There is already a libcurl.so.4 that comes with CentOS, but CoreCLR is compiled against the gnutls variant for some reason. Since they provide the same API, a mere symlink solves the problem:

 sudo ln -s /lib64/libcurl.so.4 /lib64/libcurl-gnutls.so.4  



And voila! CoreCLR is good to go! (at least up to rc2-16317).
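
As a quick sanity check that the runtime actually got registered, dnvm can list what's installed:

 dnvm list  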



Thursday, May 15, 2014

Dynamic generics in .NET

Ever found yourself in a situation where you're given a System.Type object and need to call some generic method, say M<T>, with that type as argument? Sticky situation.
       static void CallIt( Type type ) {
             this.CallMe<type>(); // How do I do that?!
       }

       static void CallMe<T>() {
             Console.WriteLine( "Boo-ya!" );
       }
You can't just do it directly. Generic instantiation (i.e. adding <T> to the signature) is a compile-time construct, while a Type object only exists at runtime.

It would be preferable, of course, to make the whole CallIt method generic and not pass a Type object at all. But you can't always do that. Examples include all kinds of late binding - from dependency injection to object serialization.
Fortunately, the .NET framework is there to help you. You can make yourself a generic proxy with a non-generic interface, and then create instances of it dynamically at runtime.
Here's an illustration:
       interface IProxy {
             void CallIt();
       }

       class Proxy<T> : IProxy {
             public void CallIt() { CallMe<T>(); }
       }

       static void CallIt( Type type ) {
            var proxyOfT = typeof( Proxy<> ).MakeGenericType( type );
            var proxy = Activator.CreateInstance( proxyOfT ) as IProxy;
            proxy.CallIt();
       }

Notice how I first construct the generic type Proxy<T> dynamically, at runtime, using the Type.MakeGenericType method, and then call Activator.CreateInstance to create an instance of that type.

Of course, because Activator.CreateInstance returns an object, I have to then cast that object to IProxy. I know this cast will succeed, because my class Proxy<T> does implement that interface.

Now all that's left to do is call IProxy.CallIt - and voila! - I am now inside Proxy<T>.CallIt implementation, where I'm free to use the generic parameter as I please.

Of course, to avoid the performance hit of Activator.CreateInstance every time, you probably want to cache the proxy instances (one per type). To make the whole thing thread-safe, I'll use a ConcurrentDictionary for that purpose:

       static readonly ConcurrentDictionary<Type, IProxy> _proxies 
          = new ConcurrentDictionary<Type, IProxy>();

       static void CallIt( Type type ) {
             var proxy = _proxies.GetOrAdd( type, _ => {
                    var proxyOfT = typeof( Proxy<> ).MakeGenericType( type );
                    return Activator.CreateInstance( proxyOfT ) as IProxy;
             } );
             proxy.CallIt();
       }
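
To round it out, here's a hypothetical call site (the concrete types are made up; in real code the Type would come from configuration, reflection, or a serializer):

       // Hypothetical usage - any Type that is only known at runtime will do
       CallIt( typeof( string ) );                  // prints "Boo-ya!" via Proxy<string>
       CallIt( Type.GetType( "System.Guid" ) );     // a type resolved from a string works too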

Thursday, November 15, 2012

Raise events properly


Don't raise an event just by "calling" it. You'll get a NullReferenceException when there are no subscribers. Surprise!
Instead, you should check for null before trying to invoke the delegate:
if ( MyEvent != null ) MyEvent( this, EventArgs.Empty );
But that's also not good enough: in a multithreaded application, the last subscriber might unsubscribe from the event between the null check and the invocation, leaving the delegate field null and causing the same old NullReferenceException.

To safeguard from that, one should cache the value in a local variable first:

var ev = MyEvent;
if ( ev != null ) ev( this, EventArgs.Empty );

I have also found it useful to have an extension method for this case:
public static void Raise ( this EventHandler h, object sender )
{
    if ( h != null ) h( sender, EventArgs.Empty );
}
And then:
MyEvent.Raise ( this );

And also add the generic case:
public static void Raise<T>( this EventHandler<T> h, object sender, T args )
   where T : EventArgs
{
    if ( h != null ) h( sender, args );
}
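
Here's a hypothetical usage sketch (the events and the DoWork method are made up for illustration):

// Made-up events, just to show both overloads in action
public event EventHandler Completed;
public event EventHandler<EventArgs> ItemAdded;

void DoWork()
{
    ItemAdded.Raise( this, EventArgs.Empty );  // generic overload
    Completed.Raise( this );                   // safe even with no subscribers - no null check at the call site
}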

What's AMD and why you need it

Here is a little AMD 101.
AMD, by the way, stands for Asynchronous Module Definition, so saying "AMD module" is a bit of a RAS syndrome, but everybody does it anyway.

First, let's define the problem. Say you're developing a Web application with a fair amount of client side functionality. Or, perhaps, a totally client side application. When your Javascript code is more than "hello world", you may want to split it into several components and, perhaps, multiple files (you know, just for the sake of mental health). Further, you may want to develop some of those components independently as a library. Or, perhaps, you want to use a library that somebody else already developed for you. And then those libraries may want to use other libraries, and so on. You know, very much the way you build other, non-Javascript applications.
But here is the problem: how do you know which Javascript files to include in your page? Obviously, the libraries must have some way of specifying which files they need, and these specifications must propagate from the deepest layers of the libraries all the way to the top. This problem can be solved with some trickery, but that's only half of the problem. The other half is correct order: how do you ensure that the browser doesn't execute a particular script until all its dependencies are loaded? And with the internet being asynchronous and all, scripts can finish loading in any order.

So you must build a sort of dependency graph of all the libraries, and only then go about loading them in the correct order.


A piece of software that does this graph thing is called an "AMD loader".


There are many of those available. The one I'm using is called RequireJS. But, thanks to the great open source community, they all have the exact same standard API.


Here is how it works.

The AMD loader provides two functions: "define" and "require".


The "define" function defines a module. Every time you call it, the AMD loader knows: "aha, here is another module". The first argument of define() is an array that contains names of modules that the module being defined depends upon. The second argument is a function that gets called once all the dependencies are loaded, and the arguments passed to that function will be the values exported from those dependencies, in the same order. The return value of that function is the value that this module wants to export.

 define ( ["jQuery"], function ($) {
   // the module code goes here
   return "my exported value";
 });


The require() function works almost exactly the same way, except that its callback's return value isn't used - nothing gets exported. The require() call is the "top level" of the module hierarchy.
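
For example, an application's "top level" entry point might look like this (assuming "myModule" is a module defined elsewhere, like the one above):

 require ( ["jQuery", "myModule"], function ($, exported) {
   // runs only after jQuery and "myModule" (and their own dependencies) have loaded;
   // "exported" is whatever "myModule" returned from its define() callback
   console.log( exported ); // e.g. "my exported value"
 });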