Tuesday, April 10, 2012

F# Agents and SQLCLR

Recently, I was presented with a business problem: two different MSSQL databases need to be updated and kept in sync as quickly as possible. This means that when data is inserted into one database's table, that same data needs to be replicated in the other database's table. Oh, and I can't modify the application that's making the inserts to one of the databases because it's legacy code that uses ADO (not ADO.NET). :-(
Okay, so the obvious solution is to have a service wake up on a scheduled time interval, poll the database table, figure out what data has changed, keep track of it, and update the other database table. That would certainly work, but I wanted to achieve a more reactive result. Not to mention that polling is arguably inefficient and requires a great deal of overhead to keep track of changes. Then I did a little research on reactive approaches. A quick Google search revealed SqlDependency and Change Tracking, which are both good approaches but not really what I had in mind. So I did what any functional programmer would do: I went to the blackboard and started to draw boxes and functional arrows. I quickly realized that all I needed was something (perhaps an Agent or Actor) to listen for changes (or a message) from one table and update the other table with the received message. Hmmmm... so how do I send a message to an Agent from MSSQL? And then it hit me... SQLCLR. I remember reading about it some time ago back in 2005. I even remember writing a stored procedure in C# just because I could. That was basically the extent of my knowledge on the subject. However, things are different now... it's 2012 and I have the power of F# at my disposal. :-)
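The Agent half of that whiteboard sketch can be expressed with F#'s built-in MailboxProcessor. Here's a minimal, self-contained version of the idea; the handler and the message format below are placeholders for illustration, not the code that ended up in production:

```fsharp
open System

// Minimal sketch of the Agent idea: a MailboxProcessor that receives the
// inserted row data as a string and hands it to whatever updates the other
// table. The handler is a stand-in for the real database update.
let startReplicationAgent (handle: string -> unit) =
    MailboxProcessor<string>.Start(fun inbox ->
        let rec loop () = async {
            let! rowData = inbox.Receive()   // wait for a message from the trigger
            handle rowData                   // e.g. INSERT the row into the second table
            return! loop () }
        loop ())

// Usage: collect the messages in a list so the sketch is self-contained.
let received = ResizeArray<string>()
let agent = startReplicationAgent received.Add
agent.Post "value1,value2"
```

The interesting question is then how the message gets posted to the agent in the first place, which is where SQLCLR comes in.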
I know I'm going to need a trigger so that I can get access to the inserted table, which I'll need to parse and send to my Agent. So how do I write an F# SQLCLR trigger? No clue, but a quick Google search revealed an excellent blog post on how to write an F# SQLCLR stored procedure. All I had to do was tweak the process a bit to create a trigger instead. Lo and behold, the F# trigger was born.

open System
open System.Data
open System.Data.Sql
open System.Data.SqlClient
open System.Data.SqlTypes
open System.IO
open System.Net
open System.Runtime.Serialization.Formatters.Binary
open Microsoft.SqlServer.Server
open System.Text

type Sample =
    static member SendIt() =
        // TriggerContext exposes metadata about the firing trigger (action, columns, etc.)
        let triggContext = SqlContext.TriggerContext

        // "context connection=true" reuses the in-process connection the trigger runs on
        use conn = new SqlConnection("context connection=true")
        conn.Open()

        let sqlComm = conn.CreateCommand()
        sqlComm.CommandText <- "SELECT fieldname, fieldname FROM inserted"

        use dr = sqlComm.ExecuteReader()
        let inserted_data =
            seq {
                while dr.Read() do
                    yield (dr.[0] :?> string) + "," + (dr.[1] :?> string) }
        // Create a request using a URL that can receive a post.
        let request = WebRequest.Create("http://localhost:50245/")
        // Set the Method property of the request to POST.
        request.Method <- "POST"
        // Create POST data and convert it to a byte array.
        let postData = Seq.head inserted_data
        let bf = new BinaryFormatter()
        let ms = new MemoryStream()
        bf.Serialize(ms, postData)
        let byteArray = ms.ToArray()
        // Set the ContentType property of the WebRequest.
        request.ContentType <- "application/x-www-form-urlencoded"
        // Set the ContentLength property of the WebRequest.
        request.ContentLength <- byteArray.Length |> int64
        // Get the request stream.
        let dataStream = request.GetRequestStream()
        // Write the data to the request stream.
        dataStream.Write(byteArray, 0, byteArray.Length)
        // Close the Stream object.
        dataStream.Close()
        // Send that puppy!!
        request.GetResponse() |> ignore

It's worth noting that the trigger code above is not the final code that was used; it's merely the result
of a 15-20 minute exploration session in FSI. Definitely one of the reasons why I love F#.
So that's cool... we have this trigger that we can install in MSSQL 2005 or 2008. Oh, and I should
warn you if you're interested in doing this sort of thing: you must target your F# code
to .NET 2.0. This is imperative because the MSSQL CLR host only understands .NET 2.0.
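For reference, installing the trigger looks roughly like the T-SQL below. This is a hypothetical sketch: the database, assembly name, path, table, and column of the INSERT are all placeholders. Because the trigger makes an outbound network call, the assembly needs at least EXTERNAL_ACCESS permission (which typically means marking the database TRUSTWORTHY), and you'll also likely need to register FSharp.Core as an assembly in the database.

```sql
-- Hypothetical deployment script; names and paths are placeholders.
ALTER DATABASE MyDb SET TRUSTWORTHY ON
GO
-- EXTERNAL_ACCESS (or UNSAFE) is required for the outbound HTTP call.
CREATE ASSEMBLY FSharpTrigger
FROM 'C:\path\to\FSharpTrigger.dll'
WITH PERMISSION_SET = EXTERNAL_ACCESS
GO
CREATE TRIGGER SendIt ON dbo.MyTable
AFTER INSERT
AS EXTERNAL NAME FSharpTrigger.Sample.SendIt
GO
```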

Okay, so if you're still with me, then you're probably wondering what's behind the address in the
web request above. http://localhost:50245 is an available port on the same machine where one of the databases is hosted. Behind that address is an HttpListener, and behind that is an F# Agent. Some time ago Tomas Petricek wrote a really cool HttpAgent and demoed it over at Skills Matter.

You can head on over to his blog to grab the code. The code below gives you an idea of how you might implement a listener to accept the trigger request.

open System
open System.IO
open System.Net
open System.Net.Mail
open System.Net.NetworkInformation
open System.Runtime.Serialization.Formatters.Binary
open System.Threading

let generate_next_available_tcpPort =
    let properties = IPGlobalProperties.GetIPGlobalProperties()
    properties.GetActiveTcpConnections()
    |> Seq.map (fun (c: TcpConnectionInformation) -> c.LocalEndPoint.Port)
    |> Seq.max   // highest port currently in use
    |> (+) 1     // one above it is assumed to be free

let url = sprintf "http://localhost:%i/" generate_next_available_tcpPort

let convert_stream_toBytes (stream: Stream) =
    // .NET 2.0 has no Stream.CopyTo, so copy the request body manually
    let memoryStream = new MemoryStream()
    let buffer = Array.zeroCreate<byte> 4096
    let rec copy () =
        let bytesRead = stream.Read(buffer, 0, buffer.Length)
        if bytesRead > 0 then
            memoryStream.Write(buffer, 0, bytesRead)
            copy ()
    copy ()
    memoryStream.ToArray()

(* Computation for sending email *)
let sendMail (from: string) (To: string) (subject: string) (body: string) =
    let client = new SmtpClient(Host = "localhost", DeliveryMethod = SmtpDeliveryMethod.Network, Port = 25)
    let mm = new MailMessage()
    mm.From <- new MailAddress(from)
    mm.To.Add(To)
    mm.Subject <- subject
    mm.Body <- body
    client.Send(mm)

(* Http Listener *)
let server = HttpAgent.Start(url, fun server -> async {
    while true do
        let! ctx = server.Receive()
        ctx.Response.Reply("Message Forwarded")
        let stream = ctx.Request.InputStream
        let memStream = new MemoryStream()
        let binForm = new BinaryFormatter()
        let arrBytes = convert_stream_toBytes stream
        memStream.Write(arrBytes, 0, arrBytes.Length)
        memStream.Seek((0 |> int64), SeekOrigin.Begin) |> ignore
        let obj: string = binForm.Deserialize(memStream) |> fun x -> x :?> string
        sendMail "noreply@F#Agent.com" "destination@destination.com" "BackEnd SQL Message" obj})
// Stop the HTTP server and release the port

Very cool! We're able to receive the request from a SQLCLR Trigger.
Note: You'll notice that the agent loop immediately returns the response back to MSSQL so that the trigger can complete, thus allowing the data to actually be inserted. The round trip from the MSSQL trigger to the port and back takes about one second, which is more than acceptable in my scenario. However, be careful not to do all the agent work first and only then return the response, because the MSSQL insert statement could time out.
Once I received the raw data as a string, I simply emailed it to myself just to show that I could do whatever
I wanted with the data once it was inside my Agent. So far the approach has turned out to be a really
nice alternative to polling. And the future possibilities are endless now that we're able to get the data as it's being inserted, updated, or deleted. :-)

Until next time...


  1. Thanks! I had a lot of fun putting this one together.
    It was recently brought to my attention that SQL 2008 understands .NET 2.0 & 3.5. I've not confirmed that, but it'd be worth a shot if you're targeting SQL 2008.

    1. Hia Frederick!

      Yes, that is correct: SQL Server 2008 understands .NET 3.5 (and SQL Server 2012 can use .NET 4.0). Cool post!!

      Anyway, if you really want to send "stuff" from SQL Server to external apps, you should have a look at SQL Server Service Broker and the functionality called External Activator. Your post gave me inspiration - so I'll see if I can hack something together for this using F# instead of the supplied C# External Activator app.


    2. @Niels Awesome! Thanks for the tip (and comment)! I'll definitely take a look at External Activator. :)

  2. Nice to see your excitement about F#; however, I wonder why you bother to code functionality that SQL Server can handle natively (Replication/Mirroring/HADR). The same goes for your export pipeline: in my experience SQL data is most efficiently handled on the DB engine, especially filtering. As soon as I start dealing with set-based data at the row level, my warning lights go on. Would you mind explaining the motivation to do such tasks in F# and not within SQL Server?

  3. @Unknown - you're absolutely right. When it comes to keeping two databases in sync, or filtering data in general, SQL definitely handles that sort of thing (natively) much better. My reasoning for coding the "replication" was simply that I didn't have the privileges or time to set up transactional replication or introduce an HADR-specific solution. I just needed to prove that the "replication" could be done with minimal privileges scoped to just triggers. :)

    As for the pipeline - it's really meant to address dealing with data from any data source, i.e. relational, key/value, document, graph, file on disk, etc.

    Albeit a little misleading, the sample illustrated in the post shows SQL for familiarity with .NET developers; it happens to be one of many data sources used with the pipeline in practice.

    At my day job we have tons of customers who all have specific business rules for how their data should be exported from our system and what format that data should be exported in. Prior to the birth of the Pipeline, each business rule for each customer was represented as a stored procedure, and any time there was a change (often) the stored procedure needed to change. This became cumbersome: we didn't have a SQL person on staff to handle every change in a timely manner, the developers did not like to context switch from C# to apply the fix, and finally we did not like having the business rules in SQL because our back end was in the process of switching from relational to a more polyglot solution. The nice thing about the Pipeline is that it is just a sequence of functions. Since writing the post I've discovered some interesting techniques for optimizing certain functions in the pipeline, which ultimately came down to introducing a new function to run instead of the old one. So you can imagine that a function like filtering could be extremely complex and dependent on ever-changing business rules. In the end, maintenance across all customer business rules is much more productive because functions can be expressed in F# or C#.
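    The "sequence of functions" idea can be sketched in a few lines of F#. The rules and data below are made-up examples for illustration, not real customer rules:

    ```fsharp
    // Sketch of the Pipeline idea: each business rule is just a function from
    // rows to rows, and a customer's export is the composition of its rules.
    let filterLargeOrders orders =
        orders |> List.filter (fun (_, total) -> total >= 100)

    let formatAsCsv orders =
        orders |> List.map (fun (customer: string, total: int) -> sprintf "%s,%d" customer total)

    // Swapping in an optimized or updated rule is just replacing one function.
    let exportPipeline = filterLargeOrders >> formatAsCsv

    let output = exportPipeline [ "alice", 250; "bob", 40; "carol", 125 ]
    // output = ["alice,250"; "carol,125"]
    ```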

    If you have any more questions or comments please feel free to continue the conversation. :)