Get Up and Running With SQL Server Express, Clojure, SQL Korma and Local Jars

Just a short and sweet little post to help others get up to speed accessing SQL Server Express 2008 with Clojure’s SQL Korma library.

Set Up SQL Server Express

I’m using SQL Server 2008 Express. To configure your DB server go to Start > All Programs > Microsoft SQL Server 2008 > Configuration Tools > SQL Server Configuration Manager. Under SQL Server Network Configuration select Protocols for SQLEXPRESS. On the panel on the right side of the screen, make sure TCP/IP is Enabled, then right click it and select Properties. Select the IP Addresses tab, and make sure you have the following settings:

  1. For IP Address 127.0.0.1
    • Active: Yes
    • Enabled: Yes
    • TCP Dynamic Ports: Make sure this entry is empty.
    • TCP Port: Make sure this entry is empty.
  2. For IPAll:
    • TCP Dynamic Ports: Make sure this entry is empty.
    • TCP Port: 1433

Enable SQL Server mixed mode authentication (SQL Korma doesn’t do integrated/Windows authentication). Run regedit.exe and go to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10.SQLEXPRESS\MSSQLServer. Change the LoginMode to 2, and restart the SQL Server Express service.
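If you prefer doing this from an elevated command prompt instead of regedit, a one-liner like the following should set the same value (assuming your instance really is named MSSQL10.SQLEXPRESS):

reg add "HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10.SQLEXPRESS\MSSQLServer" /v LoginMode /t REG_DWORD /d 2 /f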

Okay, now that we can create SQL accounts on SQL Server Express, let’s enable the SA account with SQLCMD:

sqlcmd -S .\SQLEXPRESS -E
ALTER LOGIN sa ENABLE
GO
ALTER LOGIN sa WITH PASSWORD = 'aPassword'
GO
exit

Maybe you don’t want to enable the SA account, but rather create a new non-sysadmin account, as it’s a security risk to use SA for your apps. I just used it here, as it was the shortest way to get a SQL account ;-)
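If you’d rather go the dedicated-account route, a minimal SQLCMD sketch might look like this (the login, password and database names are placeholders):

CREATE LOGIN myAppUser WITH PASSWORD = 'aStrongPassword'
GO
USE myDatabase
GO
CREATE USER myAppUser FOR LOGIN myAppUser
GO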

Load The SQL Server JDBC Driver Into a Local Artifactory Repository

Download JFrog’s Artifactory. Go to Artifactory’s bin folder and run InstallService.bat. Then launch Artifactory with artifactory.bat, and browse to http://localhost:8081/artifactory/webapp. Grab the Microsoft SQL Server JDBC Driver 3.0 and load it into a local Artifactory repository with the following settings:

  • GroupId: sqljdbc4
  • ArtifactId: sqljdbc4
  • Version: 3.0

The reason we’re using Artifactory is that Leiningen demands that all your dependencies come from a repository somewhere. Since Microsoft’s JDBC driver isn’t available on a public repository like Clojars, we make it available from a local repository. Your other option is to install it directly into your local Maven repository, but trust me, this is time consuming and something you want to avoid.

Now add your local repository to Lein’s defproject: :repositories {"ext-release-local" "http://localhost:8081/artifactory/ext-release-local/"}. You can see I chose to load sqljdbc4.jar into the ext-release-local repository. Also add the SQL Server JDBC driver as a dependency: [sqljdbc4/sqljdbc4 "3.0"].
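Put together, a minimal project.clj sketch could look like this (the project name and Clojure version are placeholders, so adjust to taste):

(defproject korma-sqlserver-demo "0.1.0"
  :dependencies [[org.clojure/clojure "1.4.0"]
                 [sqljdbc4/sqljdbc4 "3.0"]]
  :repositories {"ext-release-local" "http://localhost:8081/artifactory/ext-release-local/"})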

Make a Nice Clojure SQL Korma

And finally drop SQL Korma as a dependency into your Leiningen project, and do the rest of its configuration as specified on GitHub: [korma "0.3.0-RC4"]. If you’re new to SQL Korma, like I am, you might think that (defentity …) will also create your database objects for you. Not so! SQL Korma doesn’t have this feature currently (version 0.3.0); hopefully they’ll add it in the future. So make sure your DB objects exist and that the defentity statements correctly map to them.
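To tie it all together, here’s a minimal sketch of pointing Korma at the SQL Server Express instance we set up earlier. I’m assuming korma.db’s mssql helper here (if your version lacks it, pass a raw JDBC spec map instead), and the database and table names are placeholders:

(ns my-app.core
  (:use korma.db korma.core))

(defdb db (mssql {:host     "127.0.0.1"
                  :port     1433
                  :db       "my_database"
                  :user     "sa"
                  :password "aPassword"}))

;; Maps to an existing table named "customer"; defentity won't create it.
(defentity customer)

;; Fetch all rows from the customer table.
(select customer)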

Sounds like a few simple steps, but it took me unnecessarily long to piece all the above together and get to the point of accessing SQL Server Express 2008 with SQL Korma. Hopefully the above will help others to reach SQL Server and Korma Nirvana, one time!

 


My New Chomma, Clojure

A few months ago I decided that it’s time to learn a new programming language that’s completely outside my normal frame of reference. After watching a video on InfoQ about the future of programming languages, I decided Clojure would make an interesting choice. I bought the Pragmatic Programmers’ Clojure book, and what can I say, we’ve become good friends these days.

I think I’m taking to the functional programming paradigm, and I like the idea of pure functions without shared state. Even when I program in C#, I tend to make heavy use of its closure-like constructs: delegates, events, anonymous methods, and lambdas, using methods as data. Learning Clojure is quite a challenge if you’ve been conditioned by years of statically typed, imperative programming.

A Very Basic Clojure Work Flow For Beginners

What I’ll describe here is a very basic workflow that will enable you to write some trivial Clojure apps and work through some of the examples you might find in books and on the web. Some of these tasks aren’t made that clear in the available material, as the authors usually skip to the more exciting parts of the language.

1. Get Counterclockwise for Eclipse
2. Create an executable command line script (in my case I called it repl.sh) to launch the Clojure REPL (Read-Eval-Print Loop, i.e. the interactive console) with the required libraries referenced in the class path (Clojure runs on the Java VM):

java -cp .:src:lib/JLine/jline-0_9_5.jar:lib/clojure-1.2.1.jar:lib/clojure-contrib.jar jline.ConsoleRunner clojure.main

This adds the necessary packages to the Java class path, and launches Clojure. JLine adds functionality to the Clojure REPL, like being able to press the up arrow to retrieve the previous command.

3. Compose your Clojure application by grouping related functions in the same namespaces. The first difference you’ll notice between Clojure and an imperative language like Java or C# is that it doesn’t have classes, only namespaces or packages. You define a namespace with the ns function:

(ns algorhythm.test.geometry.trigonometry-tests)

This tells Clojure to switch to that namespace, creating it if it doesn’t exist, and to define all subsequent functions under it. To import another namespace and make its functions available, you add a :use clause to the ns form:

(:use clojure.contrib.test-is algorhythm.geometry.trigonometry algorhythm.geometry.geometric-vector)

4. Start by writing some unit tests for your Clojure application. Or if you don’t do Test Driven Development (TDD), you can skip straight to step 7 and write your actual implementation.

(ns algorhythm.test.geometry.trigonometry-tests
  (:use clojure.contrib.test-is
        algorhythm.geometry.trigonometry
        algorhythm.geometry.geometric-vector))

(deftest find-longest-vertex-should-find-the-longest-vertex
  (let [triangle {:vertex-y (struct-map geometric-vertex :length 65 :link (struct-map vertex-link :angle 90 :vertex-name "vertex-x"))
                  :vertex-x (struct-map geometric-vertex :length 99 :link (struct-map vertex-link :angle 20 :vertex-name "hypotenuse"))
                  :hypotenuse (struct-map geometric-vertex :length 91 :link (struct-map vertex-link :vertex-name "vertex-y"))}]
    (is (= 99 (find-longest-vertex triangle)))))

First we create and switch to the namespace where we’re going to define our unit tests. Then we tell Clojure to reference the clojure.contrib library, where Clojure’s unit test framework is located. You then declare your unit test functions with (deftest …), and do assertions with (is …).

5. Launch Clojure’s REPL from the terminal and load your unit test .clj files with (load-file …).

jacquesd@ubuntu:~> ./repl.sh

Clojure 1.2.1
user=> (load-file "source/algorhythm/test/geometry/trigonometry_tests.clj")

6. Run all loaded unit tests on the Clojure REPL with (run-tests). They will fail.

user=> (run-tests)

7. Now, write your required functions and repeat the cycle from step 5. More specifically, in our example we should be writing algorhythm.geometry.trigonometry and algorhythm.geometry.geometric-vector, which are required by our example unit test; a sketch follows below.
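To give you an idea of where that leads, here’s one possible (hypothetical) implementation of find-longest-vertex that satisfies the example test, assuming each vertex in the triangle map carries a :length:

(ns algorhythm.geometry.trigonometry)

;; Return the length of the longest vertex in the triangle map.
(defn find-longest-vertex [triangle]
  (apply max (map :length (vals triangle))))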

Okay, that’s my 2c to help your Clojure baby steps along. Preferably you’d opt for a proper project build system, like Leiningen, instead of manually loading and executing files through the Clojure REPL. But that’s a story for another day…


Zippy Tips Working With ServiceStack, Backbone.js, jQuery & MonoDevelop on Mac

Okay, just some random nigglies I’m experiencing and have sort of solved, working with ServiceStack, Backbone.js, Mono & MonoDevelop on Mac.

MonoDevelop & XSP dev web server

Not sure who else is experiencing this issue, but my MonoDevelop and the XSP dev web server get confused sometimes, after a while. When I click play in my solution, I launch two projects: a web service based on ServiceStack and an ASP.NET project based on Backbone.js. For some weird reason, after N number of times relaunching these 2 web projects from MonoDevelop, MonoDevelop loses track of the web service’s XSP process. The problem then is that I’m unable to re-launch the web service project with a new version.

So… in the terminal do "lsof -i -P | grep [port-number-of-xsp-project-website]". This will give you the XSP process ID. Then, again in the terminal, do a "sudo kill [pid]" to kill the ghost XSP process.

Great now you can continue launching XSP from MonoDevelop.

Cross Domain/Site Scripts with jQuery & Backbone.js

Riiight, so, I was doing the whole preflight thing with jQuery’s $.ajax. In Firebug I could see the OPTIONS request being made and the server returning a 200 OK, with the following headers: Access-Control-Allow-Origin and Access-Control-Allow-Methods. BUT, $.ajax never made the actual request to PUT or POST the data to my ServiceStack web service. Well, it turns out another header is required in the web server’s response to the OPTIONS request: Access-Control-Allow-Headers, with a value of Content-Type.

So when using ServiceStack, make sure you set your GlobalResponseHeaders in your AppHost’s Configure(…) method:

public override void Configure(Container container)
{
    SetConfig(new EndpointHostConfig
    {
        GlobalResponseHeaders =
        {
            { "Access-Control-Allow-Origin", "*" }, // You probably want to restrict this to a specific origin
            { "Access-Control-Allow-Methods", "PUT, GET, POST, DELETE, OPTIONS" },
            { "Access-Control-Allow-Headers", "Content-Type" }
        },
    });
}

Saving Models In Backbone.js

When you want to update a model, make sure you call model.save({ anUpdatedProperty: newValue, anotherUpdatedProperty: newValue }), instead of just model.save(), otherwise boggerrol will happen.
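For example (the robot model instance and its property names here are made up):

// Pass the changed attributes explicitly so Backbone sets
// them and persists the model in one call.
robot.save({ Name: "R2D2", IntelligenceRating: 8.5 });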

When your Backbone.js app talks to a web service that serializes data into a CamelCase format, like C#’s properties, then Backbone’s collection.get(…) won’t work for you, because your model’s id property will be named “Id” and not “id”. To get around this, add the idAttribute to your Backbone.js Model to reroute id to your chosen property on the model object:


var theModel = Backbone.Model.extend({
    idAttribute: "Id"
});

Cool. That’s it for now. Catch you on the flip side!


Discover Dynamic Object Creation In Ruby

Let me quickly explain Ruby‘s dynamic object creation. When I talk about dynamic object creation, I’m referring to instantiating a new object instance from meta-data, using a class (also referred to as a Type in .NET) name, or a class meta-data object. In languages like C# and Java you would use reflection to dynamically invoke objects like this. Ruby has two equivalents, depending on whether you’re invoking an object from a class’s name or a class meta-data object. Invoking an object from a class meta-data object is very straightforward:

class_meta_obj = Module1::Module2::Module3::SomeClass
return class_meta_obj.new

All you do to instantiate a new object instance from a class definition is call new on it.

Invoking an object from a class name is more cumbersome than I think is necessary, because you first need to load each module in the full class name:

class_name = "Module1::Module2::Module3::SomeClass"
result_class_meta_obj = class_name.split('::').inject(Object)
{ |result_class_meta_obj, item|
    result_class_meta_obj.const_get item
}
return result_class_meta_obj.new

First we split the module & class hierarchy on the separator "::". This gives us each individual module and class name in an Array. For each item in the array, the block receives the result returned by the previous iteration (result_class_meta_obj) and the current module or class name (item). The argument passed to the inject method (Object) seeds the first iteration’s result (result_class_meta_obj).

On the class/module meta object we send the current module/class’s name to const_get. This returns the current module/class’s meta data object, that then becomes the latest result. Each class and module name is a constant in Ruby that points to its corresponding class/module definition. Now that we have the class definition meta data object, we can invoke its constructor the same way as in the first example.
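Wrapped into a little helper (the method name is my own), the whole thing reads like this:

# Resolve a fully qualified class name to its class object,
# then instantiate it.
def build_instance(class_name)
  class_name.split('::').inject(Object) { |klass, name| klass.const_get(name) }.new
end

some_obj = build_instance("Module1::Module2::Module3::SomeClass")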


Hey NHibernate, Don’t Mess With My Enums!

So I’ve been using Fluent NHibernate for a short while now. Initially I had to overcome some minor challenges, but since I got those out of the way it’s been pretty smooth sailing. One thing that stands out, which required more tinkering and time than I would’ve liked, is the way NHibernate handles the .NET enum type. Natively, NHibernate allows you to save your enum’s value as a string or number property/column in the referencing object’s table. In other words, by default it doesn’t allow you to map your enum to its own separate table and then let your objects refer to it through an association/foreign key. To NHibernate, enums are primitive values, not “entity objects” (logically speaking, ignoring the technical internal mechanics of .NET’s enum). I would argue that enums can be both: a primitive string or number, or a more complex entity. Under certain circumstances an enum can be viewed as a simple “object” that consists of two properties:

  • An Id, represented by the enum member’s number value
  • And a name, represented by the enum member’s string name.

I’ve found that it’s very convenient to use the “entity object” version of enums for very simple, slow changing look-up data with a fair amount of business logic attached to it. For instance, in a credit application app you might only support 3 or 4 types of loans, but you know that over the app’s life the company won’t add more than 2 or 3 new types of loans. Adding a loan type requires some additional work, and isn’t merely a matter of inserting a new loan type into a look-up table. The reason is that a fair amount of the app’s business logic, mainly in the form of conditional logic statements, must also be adapted to accommodate the new loan type. From a coding perspective it’s very convenient to use enum types in these cases, because you can refer to the various options through DRY, strongly typed members with a simultaneous string and number representation. So instead of

var loan = loanRepository.FindById(234);
var loanType = loanTypeRepository.FindById(123);

// ...

if (loan.Type == "PersonalLoan")
{
    // ...
}

rather do

var loan = loanRepository.FindById(234);

if (loan.Type == LoanType.Personal)
{
    //...
}

Okay, schweet, you get the point. Next logical question: how do you get NHibernate to treat your enums as objects with their own table, and not as primitive values? To do this you have to create a generic class that can wrap your enum types, and then create a mapping for this enum wrapper class. I call this class Reference:

public class Reference<TEnum>
{
    private TEnum enm;

    public Reference(TEnum enm)
    {
        this.enm = enm;
    }

    public Reference() {}

    public virtual int Id
    {
        get { return Convert.ToInt32(enm); }
        set { enm = (TEnum)Enum.Parse(typeof(TEnum), value.ToString(), true); }
    }

    public virtual string Name
    {
        get { return enm.ToString(); }
        set { enm = (TEnum)Enum.Parse(typeof(TEnum), value, true); }
    }

    public virtual TEnum Value
    {
        get { return enm; }
        set { enm = value; }
    }
}

The Reference class is pretty straight forward. All it does is translate the contained enum into an object with three properties:

  • Id – the integer value of the enum member.
  • Name – the string name of the enum member.
  • Value – the contained enum member.

You might wonder why I didn’t bother to restrict the allowed generic Types to enums. Well, it so happens that .NET generics don’t allow you to restrict generic type declarations to enums. They allow you to restrict generic types to structs, and all sorts of other things, but not to enums. So you will never be able to get an exact generic restriction for the Reference class. So I thought, aag what the hell, if I can’t get an exact restriction, then what’s the point anyway? I’ll have to trust that whoever is using the code knows what he’s doing. (A struct constraint plus a runtime check does get you close, though; see the sketch below.)
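If you do want a safety net, a sketch of that compromise might look like this (my own variation, not part of the original class):

using System;

// Closest available compile-time restriction: TEnum must be a
// value type. The enum check itself has to happen at runtime.
public class GuardedReference<TEnum> where TEnum : struct
{
    public GuardedReference()
    {
        if (!typeof(TEnum).IsEnum)
            throw new ArgumentException("TEnum must be an enum type.");
    }
}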

Now, for example, instead of directly using the LoanTypes enum, the Loan class’s Type property will be a Reference object, with its generic type set to the LoanTypes enum:

public class Loan
{
    // ...
    public Reference<LoanType> Type { get; set; }
    // ...
}

This is not completely tidy, because to a degree the limitations of the data access infrastructure, i.e. NHibernate, force us to adopt a compromise solution that wouldn’t be necessary if we changed to something else. In other words, things from the data infrastructure layer spill into the domain.

What’s left to do is (1) create a mapping for Reference<LoanType>, and (2) get NHibernate to use the right table name, i.e. LoanType, instead of Reference[LoanType]. Here’s the Fluent NHibernate mapping for Reference<LoanType>:

public class LoanTypeMap: ClassMap<Reference<LoanType>>
{
    public LoanTypeMap()
    {
        Table(typeof(LoanType).Name);
        Id(loanType => loanType.Id).GeneratedBy.Assigned();
        Map(loanType => loanType.Name);
    }
}

The above Fluent NHibernate mapping tells NHibernate to use whatever value property Id has for the primary key, and not generate one for it. You also have to explicitly specify the table’s name you’d like NHibernate to use, because you want to ignore “Reference” as part of the table name, and only use the enum type name.

And that’s it. You will now have a separate table called LoanType, with the foreign keys of other classes’ tables referencing the LoanType enum’s table. Just keep in mind that this solution might not always be feasible. For example, it might not work too well when you write a multilingual application. Also, should you want a pretty description for each enum member, for example “Personal Loan” instead of “PersonalLoan”, you’ll have to throw in some intelligent text parsing that splits a text string before each uppercase character. Hopefully this post gave you another option to map your enum types with NHibernate.
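As a rough idea, that parsing can be as simple as a regex that inserts a space before every uppercase letter that isn’t at the start of the word. The helper below is just a sketch of mine, not part of the mapping code:

using System.Text.RegularExpressions;

public static class EnumPrettyPrinter
{
    // Inserts a space before each uppercase letter that isn't the
    // first character: "PersonalLoan" -> "Personal Loan".
    public static string ToPrettyName(string enumMemberName)
    {
        return Regex.Replace(enumMemberName, @"\B[A-Z]", " $0");
    }
}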


REST Web Services with ServiceStack

Over the past month I ventured deep into the alternative side of the .NET web world. I took quite a few web frameworks for a test drive, including OpenRasta, Nancy, Kayak and ServiceStack. All of the aforementioned support Mono, except OpenRasta, which has it on its road-map. While kicking the tires of each framework, some harder than others, I saw the extent of just how far .NET has grown beyond its Microsoft roots, and how spoiled .NET developers have become with a long list of viable alternative .NET solutions from the valley of open source.

ServiceStack really impressed me, with its solid mix of components that speak to the heart of any modern C# web application. From Redis NoSQL and lightweight relational database libraries, right through to an extremely simple REST and SOAP web service framework. As the name suggests, it is indeed a complete stack.

Anyways, enough with the marketing fluff, let’s pop the bonnet and get our hands dirty. What I’m going to show you isn’t anything advanced. Just a few basic steps to help you to get to like the ServiceStack web framework as much as I do. You can learn the same things I’ll be explaining here by investigating the very complete ServiceStack example applications, but I thought some extra tidbits I picked up working through some of them should make life even easier for you.

Some Background Info On REST

I’m going to show you how to build a REpresentational State Transfer (REST) web service with ServiceStack. RESTful web services declare resources that have a URI and can be accessed through HTTP methods, or verbs (GET, PUT, POST and DELETE), mapping to our domain services and entities. This is different from SOAP web services, which require you to expose methods RPC style, ignorant of the underlying HTTP methods and headers. Implementing a REST resource and its HTTP methods in ServiceStack requires the use of two classes, RestServiceAttribute and RestServiceBase.

Another feature of REST is that data resources are encoded in either XML or JSON. However, the latest trend is to encode objects in JSON for its brevity and smaller size, rather than its more clunky counterpart, XML. We will therefore follow suit and do the same. Okay, I think you’re ready now to write your first line of ServiceStack code.

Create a Web Service Host with AppHostBase

The first thing you have to do is specify how you’d like ServiceStack to run your web services. You can choose to either run your web services from Internet Information Services (IIS) or Apache, or from the embedded HTTP listener based web server. Both of these approaches require you to declare a class that inherits from AppHostBase:

public class AppHost: AppHostBase
{
    public AppHost()
        : base("Robots Web Service: It's alive!", typeof(RobotRestResource).Assembly) {}

    public override void Configure(Container container)
    {
        SetConfig(new EndpointHostConfig
        {
            GlobalResponseHeaders =
            {
                { "Access-Control-Allow-Origin", "*" },
                { "Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS" },
            },
        });
    }
}

Class AppHost‘s default constructor makes a call to AppHostBase‘s constructor that takes 2 arguments. The first argument is the name of the web app, and the second argument tells ServiceStack to scan the Assembly where class RobotRestResource is defined for REST web services and resources.

AppHostBase‘s Configure method must be overridden, even if it’s empty, otherwise you’ll get an exception. If you plan on making cross domain JavaScript calls from your web user interface (i.e. your web interface is written in JavaScript and hosted on a separate web site from your web services) to your REST resources, then adding the correct global response headers is very important. Together the two Access-Control-Allow headers tell browsers that do a preflight OPTIONS request that their cross domain request will be allowed. I’m not going to explain the internals, but any Google search on this topic should yield sufficient info.

Now all that’s left to do is to initialize your custom web service host in Global.asax‘s Application_Start method:

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        new AppHost().Init();
    }
}

The last thing you might be wondering about, before we move on, is the web.config of your ServiceStack web service. For reasons of brevity I’m not going to cover this, but please download ServiceStack’s examples and use one of their web.configs. The setup required to run ServiceStack from IIS is really minimal, and very easy to configure.
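For reference, the handler registration at the heart of those web.configs looks roughly like this; treat it as a sketch and defer to the example apps for the authoritative version:

<system.web>
  <httpHandlers>
    <add path="*" type="ServiceStack.WebHost.Endpoints.ServiceStackHttpHandlerFactory, ServiceStack" verb="*" />
  </httpHandlers>
</system.web>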

Define REST Resources with RestService

Now that we’ve created a host for our services, we’re ready to create some REST resources. In a very basic sense you could say a REST resource is like a Data Transfer Object (DTO) that provides a suitable external representation of your domain. Let’s create a resource that represents a robot:

using System.Collections.Generic;
using System.Runtime.Serialization;

[RestService("/robot", "GET,POST,PUT,OPTIONS")]
[DataContract]
public class RobotRestResource
{
    [DataMember]
    public string Name { get; set; }

    [DataMember]
    public double IntelligenceRating { get; set; }

    [DataMember]
    public bool IsATerminator { get; set; }

    [DataMember]
    public IList<string> Predecessors { get; set; }

    public IList<Thought> Thoughts { get; set; }
}

The minimum requirement for a class to be recognized as a REST resource by ServiceStack is that it must inherit from IRestResource and have a RestServiceAttribute with a URL template, and that’s it. ServiceStack doesn’t force you to use the DataContractAttribute or DataMemberAttribute. The only reason I used them in the example is to demonstrate how to exclude a member from being serialized to JSON when it’s sent to the client. The Thoughts member will not be serialized, and the web client will never know the value of this object. I had a situation where I wanted a member on my resource for internal use in my application, but I didn’t want to send it to clients over the web service. In this situation you have to apply the DataContractAttribute to your resource’s class definition, and the DataMemberAttribute to each property you want to expose. And that’s it, nothing else is required to declare a REST resource for ServiceStack.

Provide a Service for Each Resource with RestServiceBase

Each resource you declare requires a corresponding service that implements the supported HTTP verb-methods:


public class RobotRestService: RestServiceBase<RobotRestResource>
{
    public override object OnPut(RobotRestResource robotRestResource)
    {
        // Do something here & return a new RobotRestResource,
        // or any other serializable object, if you like.
        return new RobotRestResource();
    }

    public override object OnGet(RobotRestResource robotRestResource)
    {
        // Do some things here ...
        // Return the list of RobotRestResources
        // here, or any other serializable
        // object, if you like.

        return new []
        {
            new RobotRestResource(),
            new RobotRestResource()
        };
    }
}

In order for ServiceStack to recognize a class as the service for a resource, you have to inherit from RestServiceBase, specifying the resource class as the generic type. RestServiceBase provides virtual methods for each REST-approved HTTP verb: OnGet for GET, OnPut for PUT, OnPost for POST and OnDelete for DELETE. You can selectively override each one that your resource supports.

Each HTTP-verb method may return one of the following results:

  1. Your IRestResource DTO object. This will send the object to the client in the specified format, JSON or XML.
  2. ServiceStack.Common.Web.HtmlResult, when you want to render the page on the server and send that to the client.
  3. ServiceStack.Common.Web.HttpResult, when you want to send a HTTP status to the client, for instance to redirect the client:
    var httpResult = new HttpResult(new object(), null, HttpStatusCode.Redirect);
    httpResult.Headers[HttpHeaders.Location] = "https://openlandscape.wordpress.com";
    return httpResult;
    

And that’s it. Launch your web site, and call the OnGet method at /robot?format=json, or if you prefer XML, /robot?format=xml. To debug your RESTful service API I can highly recommend the Poster Firefox plug-in. Poster allows you to manually construct HTTP commands and send them to the server.

You might be wondering what the purpose is of the RobotRestResource that gets passed to each HTTP-verb method. Well, that is basically an aggregation of the posted form parameters and URL query string parameters. In other words, if the submitted form has a field name corresponding to one of RobotRestResource’s properties, ServiceStack will automatically assign the parameter’s value to the supplied RobotRestResource. The same applies for query strings: given the query string ?Name=TheTerminator&IsATerminator=true, robotRestResource’s Name will be assigned the value “TheTerminator” and IsATerminator will be true.
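You can try this from the command line too; something like the following should invoke OnGet with those parameters bound (host and port borrowed from the self-hosting example in the next section):

curl "http://localhost:82/robot?format=json&Name=TheTerminator&IsATerminator=true"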

Using ServiceStack’s Built-In Web Service as a Service Host

The above discussion assumed that you’ll be hosting your ServiceStack service in IIS or with mod_mono in Apache. However, ServiceStack has another pretty cool option available: self hosting. That’s right, services can be independently hosted on their own and embedded in your application. This might be useful in scenarios where you don’t want to be dependent on IIS. I imagine something like a Windows service, or similar, that also serves as a small web server to expose a web service API to clients, without the need for lengthy and complicated IIS setup procedures.

var appHost = new AppHost();
appHost.Init();
appHost.Start("http://localhost:82/");

To start the self hosted ServiceStack service you configure your host as usual, and then call Start(…), passing the URL (with a free port) where the web server will be accessed.

Why Use ServiceStack

For me one of the big reasons for choosing ServiceStack is that it has a solid library to build web services running on Mono. However, after using it for a while I found its easy setup and simple conventions very refreshing after the often confusing and cumbersome configuration of Windows Communication Foundation (WCF) web services. ServiceStack also does a much better job of RESTful services than WCF’s current implementation. I know future versions of WCF will enable a more mature RESTful architecture, but for now it’s pretty much RPC hacked into REST. Another bonus was the complete set of example apps that were a great help to quickly get things working. So if you’re tired of WCF’s heavy configuration and you’re looking for something to quickly implement mature RESTful web services, then definitely give ServiceStack a try.


Fluent NHibernate on PostgreSQL

When you write your first Fluent NHibernate application with Mono/.NET based on the Getting started tutorial, you eventually discover that you require a few extra assembly-dll references that aren’t mentioned. For my Postgres (PostgreSQL) project my references are:

[Image: Fluent NHibernate references]

I won’t go into the detail of the matter, other than to say that the errors you get when these references are missing don’t give you a very clear indication as to what exactly is absent.

To configure Fluent NHibernate to work with Postgres you will need the following:

var connectionStr = "Server=127.0.0.1;Port=5432;Database=the_db;User Id=user_name;Password=password;"
ISessionFactory sessionFactory = Fluently
 .Configure()
 .Database(PostgreSQLConfiguration.Standard.ConnectionString(connectionStr))
 .Mappings(m => m.FluentMappings.AddFromAssemblyOf<TypeOfFluentNHibernateMapping>())
 .ExposeConfiguration(BuildSchema)
 .BuildSessionFactory();

private static void BuildSchema(Configuration config)
{
    // This NHibernate tool takes a configuration (with mapping info in)
    // and exports a database schema from it.
    var dbSchemaExport = new SchemaExport(config);
    // dbSchemaExport.Drop(false, true);
    dbSchemaExport.Create(false, true);
}

TypeOfFluentNHibernateMapping is a class that inherits from FluentNHibernate.Mapping.ClassMap<T>. This tells Fluent to load all ClassMappings from the assembly where this type is defined.
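If you don’t have a mapping yet, a minimal one could look like this (the Person entity and its properties are made-up placeholders; properties are virtual so NHibernate can proxy them):

using FluentNHibernate.Mapping;

public class Person
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
}

public class PersonMap : ClassMap<Person>
{
    public PersonMap()
    {
        Id(p => p.Id);
        Map(p => p.Name);
    }
}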

BuildSchema(…) creates the database’s schema based on the specified mapping configuration, and recreates the tables and the rest of it in the database specified by the connection string. I included the (commented-out) call to the schema export’s Drop method because the code originates from my unit tests, where I drop & recreate the database on each test run.

So far I like Fluent NHibernate; the only complaint I have is the way NHibernate (not Fluent) handles enums. It assumes you want to use the enum member’s string name. The way I like to store my enums is to have a separate table for them.