What does Fa Sho! mean? Well, if you consult Google or Urban Dictionary you might find that it means
"For Sure!" In this case it means using Sho with F#. After watching the cool demo some time ago I started playing around with Sho from time to time. I've come to realize that Sho does an awful lot of useful and interesting things; it's almost like FSChart on steroids. Recently I bought Visualize This and decided to give some of the samples a go using F# and Sho, err, Fa Sho! :-)
But before I could get to the samples I had to get my feet wet with the library. Fortunately, F# is amazing at letting me do that so I ported the C# sample code to a much shorter version illustrated below.
#I @"C:\Program Files (x86)\Sho 2.0 for .NET 4\bin"
#r "ShoArray.dll"
#r "ShoViz.dll"
#r "MathFunc.dll"
#r "MatrixInterf.dll"
open ShoNS.Array
open ShoNS.MathFunc
open ShoNS.Visualization
let x = ArrayRandom.RandomDoubleArray(10, 10)
let product = x * x.T // multiply the random matrix by its transpose
let figure = ShoPlotHelper.Figure()
figure.Bar(x)
Which produces the following output.
If you were to look at the sample C# code provided in the download and compare it with this,
you'd definitely be able to see why F# is supreme (IMO). :-)
Until next time...
Tuesday, March 20, 2012
Simple Data Pipelines in F# Part 2
My apologies for the extremely long delay between posts. So many prototypes have emerged since my last post.
It's one of the reasons why I love my job: I have the freedom to work with and prove all different kinds of technologies like Mono, Erlang, OCaml, and my latest favorite, Haskell. Thanks to these lecture videos and this really great book I scored for $8.00 at a secondhand bookstore, I'm actually starting to understand Haskell! But enough about me... let's get back to this simple data pipeline.
If you recall in the last post we laid out the pipeline steps into distinct function calls or applications.
Fetch -> Filter -> Format -> Archive
Having the pipeline steps laid out like this gives us a clean way to plug in new logic at any time, which leads to a number of good things, especially an easier way to maintain the app and introduce new functionality.
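The chaining above can be sketched with plain functions. This is a minimal sketch under my own names (Ctx, StepResult, bind, and the toy steps are illustrations, not the post's actual driver code); it just shows how steps shaped as context -> result compose so that a failure short-circuits the rest:

```fsharp
// Toy stand-ins for the real pipeline types (assumed names).
type Ctx = Map<string, string>

type StepResult =
    | StepOk of Ctx
    | StepFailed of string

// Run the next step only when the previous one succeeded.
let bind (step: Ctx -> StepResult) (prior: StepResult) =
    match prior with
    | StepOk ctx -> step ctx
    | StepFailed _ -> prior

// Toy steps that just record that they ran.
let fetch   ctx = StepOk (Map.add "Fetch"   "done" ctx)
let filter' ctx = StepOk (Map.add "Filter"  "done" ctx)
let format  ctx = StepOk (Map.add "Format"  "done" ctx)
let archive ctx = StepOk (Map.add "Archive" "done" ctx)

let result =
    StepOk Map.empty
    |> bind fetch |> bind filter' |> bind format |> bind archive
```

If any step returned StepFailed, every step after it would be skipped and the failure would fall out the end of the chain.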
Which brings me to how I initially implemented this functionality. To get that flexibility I need a way to dynamically load some logic that can be executed, with any exceptions handled along the way. Late binding comes to mind, so basically I'll need to use reflection to load an assembly and execute some code. I'll do this by introducing an abstraction for each pipeline step, starting with Fetch.
type IFetchData =
    abstract FetchData : PipelineContext -> System.Collections.IEnumerable

type IApplyFilters =
    abstract ApplyFilters : PipelineContext -> System.Collections.IEnumerable -> System.Collections.IEnumerable

type IFormatData =
    abstract FormatData : PipelineContext -> System.Collections.IEnumerable -> unit

type IArchiveData =
    abstract ArchiveData : PipelineContext -> unit
Remember, the PipelineContext is just an F# record which holds a Dictionary<TKey,TValue>: each key is a pipeline step name and each value is the result of that step's computation. The context gets passed through the pipeline so that each step has access to what's been done in the pipeline thus far. So let's look at how we might implement the fetch pipeline step.
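I don't show the record's definition in this post, but as a sketch it might look like the following (the field names config and accum are assumptions taken from how the code below uses ctx.config and ctx.accum):

```fsharp
open System.Collections.Generic

// Hypothetical shape of the context record described above:
// config holds per-run settings, accum collects each step's
// output keyed by the pipeline step name.
type PipelineContext =
    { config : Dictionary<string, obj>
      accum  : Dictionary<string, obj> }
```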
In F# we'd simply express something like the following:
open System
open System.Data
open System.Data.SqlClient

type SampleFetchData() =
    // Map a column to a string, substituting "0" for database nulls.
    let map (reader: IDataReader) (fieldname: string) : string =
        let index = reader.GetOrdinal(fieldname)
        if not <| reader.IsDBNull(index) then
            reader.[fieldname].ToString()
        else
            "0" // means null in the config

    interface IFetchData with
        member self.FetchData ctx =
            let startDate, endDate =
                if ctx.config.ContainsKey("StartDate") && ctx.config.ContainsKey("EndDate") then
                    ctx.config.Item "StartDate" :?> string, ctx.config.Item "EndDate" :?> string
                else
                    "04/01/2011", "12/01/2011 11:59:59"
            let fetch_query =
                String.Format(@"SELECT t.Name, t.ActiveDate
                                FROM Table t
                                WHERE t.ActiveDate BETWEEN '{0}' AND '{1}'",
                              startDate, endDate)
            seq {
                use connection = new SqlConnection(String.Format("Data Source={0}", ctx.config.Item "connString"))
                connection.Open()
                use command = new SqlCommand(fetch_query, connection)
                use reader = command.ExecuteReader()
                while reader.Read() do
                    yield { Name = map reader "Name"
                            ActiveDate = map reader "ActiveDate" }
            } :> System.Collections.IEnumerable
Okay, that's kind of cool! It allows us to fetch data any way we want so long as we return it as an IEnumerable. The example above happens to hit SQL Server, but there's nothing stopping us from hitting MongoDB or RavenDB. Now that we've implemented the code for Fetch we just need to build it as a .dll and the pipeline infrastructure will pick it up and execute it!
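To make that concrete, here's a sketch of a non-SQL plugin: an in-memory source, handy for unit testing the pipeline without a database. The Row record, the IFetchSource name, and the simplified FetchData signature are my own for this standalone snippet; the real plugin would implement IFetchData against the PipelineContext as above.

```fsharp
// Types redeclared locally (and simplified) so this sketch runs standalone.
type Row = { Name : string; ActiveDate : string }

type IFetchSource =
    abstract FetchData : unit -> System.Collections.IEnumerable

// Same contract shape as the SQL version, but the data lives in memory.
type InMemoryFetchData(rows: Row list) =
    interface IFetchSource with
        member self.FetchData () =
            (rows :> seq<Row>) :> System.Collections.IEnumerable
```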
We're almost there; we just need to look at the code behind construct_fetch() from the first post.
open System
open System.Reflection

module PipelineBuilder =
    // Reflection helpers
    let getinstance (instance: Type) = Activator.CreateInstance(instance)

    let get_referenced_assm (asmName: string) =
        AppDomain.CurrentDomain.GetAssemblies()
        |> Seq.filter (fun (asm: Assembly) -> asm.GetName().Name = asmName)
        |> Seq.head

    let fetch_types (contract: string) (defAsm: Assembly) =
        let matches =
            seq { for t in defAsm.GetTypes() do
                    if t.GetInterface(contract) <> null then
                        yield (t, t.GetInterface(contract)) }
        if not (Seq.isEmpty matches) then
            let concreteType, contract = Seq.head matches
            Some(concreteType, contract)
        else
            None

    (* Fetch Plugin *)
    let construct_fetch () : (PipelineContext -> PipelineResult option) =
        fun (ctx: PipelineContext) ->
            let pipeline_step = fetch_types "IFetchData" (get_referenced_assm "SampleFetchData")
            match pipeline_step with
            | Some(t, _) ->
                Some(
                    try
                        let instance = getinstance t :?> IFetchData
                        let result = instance.FetchData ctx
                        ctx.accum.Add("FetchData", result)
                        PipelineStepSuccess(ctx)
                    with ex ->
                        let msg = PipelineStepException(ex.Message) :?> PipelineStepException
                        PipelineStepFailure(msg))
            | None -> None
I've highlighted the important part and omitted the other pipeline steps since their implementation is very similar to construct_fetch() above. Allow me to frame out what's going on here: construct_fetch() takes no arguments and returns a function which accepts one argument (a PipelineContext) and produces a PipelineStepSuccess or PipelineStepFailure as a PipelineResult, which is expressed for clarity below.
exception PipelineStepException of string

type PipelineResult =
    | PipelineStepSuccess of PipelineContext
    | PipelineStepFailure of PipelineStepException
    | EmptyPipelineResult
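Consuming a PipelineResult is then just a pattern match. Here's a self-contained sketch (the context is simplified to a bare record so the snippet runs on its own; the describe function is my own illustration):

```fsharp
exception PipelineStepException of string

// Simplified stand-in for the real context, just for this sketch.
type PipelineContext = { accum : Map<string, string> }

type PipelineResult =
    | PipelineStepSuccess of PipelineContext
    | PipelineStepFailure of PipelineStepException
    | EmptyPipelineResult

// Turn a step result into a short status line.
let describe result =
    match result with
    | PipelineStepSuccess ctx -> sprintf "ok: %d step result(s)" ctx.accum.Count
    | PipelineStepFailure err ->
        (match (err :> exn) with
         | PipelineStepException msg -> sprintf "failed: %s" msg
         | e -> sprintf "failed: %s" e.Message)
    | EmptyPipelineResult -> "nothing ran"
```

Note the same downcast idiom as construct_fetch() when building the failure value, since the F# exception constructor is typed as exn.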
That's essentially all there is to it. Plugins like Fetch can be written in F# or C#, which keeps the
team happy: they can continue in C# as I ease them into F#.
I think the last post in this series will be a link to all the code. :-)
Until next time...