Sunday, 19 October 2008

I’m very excited that Silverlight 2 has been released. I’m too new to Microsoft to claim even the slightest involvement, but it’s wonderful to see the excitement both within Microsoft and outside of it.
It will be interesting to see what will happen with the (already great) uptake of Silverlight by the market.

As I spend most of my days knee-deep in Xaml nowadays, I am always looking for things that will help me be more productive. Quite often I want to select a complete Xaml tag. It’s way too much effort to select it with the mouse, so I often use the Ctrl+M, Ctrl+M shortcut to collapse a tag and then select it. However, a few days ago I took five minutes to automate this.


When you put your cursor somewhere in the Grid tag and use my macro, you end up with this:


You can even put your cursor in the end tag. I bound the macro to Ctrl+Q and it has made my life that much better!

The macro is as simple as this (the body below is reconstructed from the steps described underneath it; the calls are standard EnvDTE automation):

    Sub SelectXMLTagContents()
        ' collapse the tag under the cursor
        DTE.ExecuteCommand("Edit.ToggleOutliningExpansion")
        ' jump to the first column
        DTE.ActiveDocument.Selection.StartOfLine()
        ' select the collapsed line
        DTE.ActiveDocument.Selection.LineDown(True)
        ' expand again; the selection now covers the whole tag
        DTE.ExecuteCommand("Edit.ToggleOutliningExpansion")
    End Sub

It collapses a tag, jumps to the first column, selects the line and then does an uncollapse.

For those using a tool like Karl’s ‘Xaml Power Toys’, it might also be a worthwhile addition to their shortcuts.

Sunday, 19 October 2008 21:17:36 (Romance Standard Time, UTC+01:00)
 Tuesday, 04 March 2008

Ever since I first got into Workflow Foundation, I've taken a fancy to state machines. Once you wrap your head around them, they are a natural fit for most business processes.
The main problem everybody seems to be having with workflow, though, is the versioning story. There is none!
That might be a bit harsh: you can certainly version your workflows, but to tell you the truth, you will be in a world of hurt.

The sample solution can be downloaded at the end of the post. It contains two workflows and a console application that you can play with.

Why is this updating so tough?
The workflow template is serialized to the persistence store. Any change to the workflow (adding or removing an activity) will make it impossible to deserialize the workflow again. It's serialized as a blob, so there is no easy transformation. I've written extensively about the problems surrounding updating workflows here.
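To see the problem for yourself, here is a minimal sketch (it assumes a runtime configured with SqlWorkflowPersistenceService; the variable names and the exact exception type, which depends on what you changed, are not from the sample):

```csharp
// 1. run a workflow that persists (e.g. it hits a long DelayActivity and unloads)
// 2. add or remove an activity in the workflow definition and rebuild
// 3. try to rehydrate the persisted instance:
try
{
    WorkflowInstance instance = workflowRuntime.GetWorkflow(instanceId);
    instance.Resume();
}
catch (Exception ex)
{
    // the serialized blob no longer matches the changed workflow definition
    Console.WriteLine("Could not rehydrate: " + ex.Message);
}
```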

Your options pretty much consist of running side by side (which gives you a world of even more hurt, because now you have your data exchange services to version as well, plus the activity library you have built) or using dynamic changes to alter the structure.
The latter is your best bet, but it is so much work that it takes away from the flexibility and speed of development that workflow brings to the table.

In my previous post I concluded that you would be best off just destroying your old workflow and creating a new one. I stand by that! Today I was finally able to revisit the problem, and I hacked together a solution that might be interesting to people.

This solution has the following restriction:

It will only work for state machines that are waiting inside a state for an event-driven activity, not inside an event-driven activity. In other words: it is only able to update workflows that have entered a state and started waiting, not ones that have executed a few activities and are now waiting on some other input within a sequence.

Luckily for me, that is no problem at all, and it should not be a problem for you either. State machines should be modeled such that waiting happens when a state is entered, never inside a sequence. You can model waits inside a sequence, but I would suggest you keep those delays short (minutes, as opposed to days/months/years).

My goal here is to be able to do a relatively easy update, where I have control over how I update (what to do with state, etc.) and get my delays initialized to the correct timeouts again. So, in Workflow1 I had a delay of 11 months, with 8 months left. When I start Workflow2 and update, I need to have 8 months left again, not 11.

Getting the delays right is the hard part.

I use some nice reflection to get to the actual type of a workflow instance. I described how to do that here. However, I was being silly. It's much easier:

            Workflow1 oldWF = workflowRuntime.GetRootActivity(instance) as Workflow1;

Made possible by these extensions:

    public static class WFExtensions
    {
        public static object GetExecutor(this WorkflowRuntime workflowRuntime, WorkflowInstance instance)
        {
            return workflowRuntime.GetType().InvokeMember(
                "Load", BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.InvokeMethod, null, workflowRuntime,
                new object[] { instance.InstanceId, null, instance });
        }

        public static object GetRootActivity(this WorkflowRuntime workflowRuntime, WorkflowInstance instance)
        {
            object executor = workflowRuntime.GetExecutor(instance);
            return executor.GetType().GetField("rootActivity",
                    BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.GetField).GetValue(executor) as CompositeActivity;
        }
    }

So, here goes:

  1. Get to your old workflow instance. In my sample I use types Workflow1 and Workflow2.
                WorkflowInstance instance = runtime.GetWorkflow(g);
                WorkflowRuntime workflowRuntime = runtime;
                Workflow1 oldWF = workflowRuntime.GetRootActivity(instance) as Workflow1;
                if (oldWF == null)
                    return;     // not the workflow type we expected
                object executor = workflowRuntime.GetExecutor(instance);
                instance.Suspend("asdf");   // suspend; we must not unload, otherwise the database record would be unlocked

    I suspend the workflow so it does not get in the way, but I cannot unload it, or worse: terminate it. That would kill the record in the database.

  2. Create a new workflow, of your desired type, and copy the workflowInstanceID to it:
                // get a handle to the instanceid property
                DependencyProperty instanceidDP = (DependencyProperty)executor.GetType().GetField("WorkflowInstanceIdProperty",
                    BindingFlags.NonPublic | BindingFlags.Static | BindingFlags.Instance).GetValue(executor);
                // create new wf2, not starting it yet
                WorkflowInstance newWFInstance = workflowRuntime.CreateWorkflow(typeof(Workflow2));
                Workflow2 newWF = workflowRuntime.GetRootActivity(newWFInstance) as Workflow2;
                // copy the guid
                newWF.SetValue(instanceidDP, instance.InstanceId);
  3. Build up a list of activities that are on timers and remember their name and when they expire:
                Dictionary<string, DateTime> activitiesExpireList = new Dictionary<string, DateTime>();
                // the timers live in the TimerCollectionProperty of the root activity
                TimerEventSubscriptionCollection subscriptions = (TimerEventSubscriptionCollection)
                    oldWF.GetValue(TimerEventSubscriptionCollection.TimerCollectionProperty);
                foreach (TimerEventSubscription subscription in subscriptions)
                {
                    // find out which activity was subscribed
                    var x = from queueInfo in instance.GetWorkflowQueueData()
                            where subscription.QueueName.GetType().Equals(queueInfo.QueueName.GetType())
                            where subscription.QueueName.CompareTo(queueInfo.QueueName) == 0
                            select new { ExpiresAt = subscription.ExpiresAt, Activities = queueInfo.SubscribedActivityNames };
                    foreach (var combination in x)
                        foreach (string activityname in combination.Activities)
                            activitiesExpireList.Add(activityname, combination.ExpiresAt);
                }

    The weird part is that the queue names are mostly guids (for delays, at least).

  4. Call a method on your new type. See how cool it is that we can actually communicate with it this way, instead of having to go through communication services!
                // allow new workflow to read information from old workflow to init itself.
                newWF.Update(oldWF, instance, activitiesExpireList);
  5. Copy the new workflow to the root activity of our executor. Ouch... yeah... don't worry.
                // copy the new root activity to the executor
                executor.GetType().GetField("rootActivity",
                    BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.GetField).SetValue(executor, newWF);
  6. Last bits:
                // start it up
                newWFInstance.Start();
                newWFInstance.Unload(); // overwrites the current record in the persistence store
                instance.Abort();       // kills off our original
                newWFInstance = runtime.GetWorkflow(g);
                StateMachineWorkflowInstance statemachine = new StateMachineWorkflowInstance(runtime, g);
                // still need to unload (or restart the runtime) to get all timers registered correctly!
                Console.WriteLine("updated " + newWFInstance.InstanceId);

    You can see me starting and unloading, then killing our old instance. Finally, I am trying to be smart by using the StateMachineWorkflowInstance to do a transition to a new state on the new workflow. The new state can be determined by the new workflow (which has knowledge of these things) but is usually the same as in your old workflow. (This was built so that you could rename a state.)

  7. That's it. In the Workflow2 class, I have an Update method, which sets a boolean to true. The initialization activity looks for it in an if/else and does nothing if it is set to true. All the delays in the new workflow have an initTimeout method like so:
            private void initTimeout(object sender, EventArgs e)
            {
                DelayActivity delay = (DelayActivity)sender;
                if (activitiesExpireList.ContainsKey(delay.Name))
                {
                    delay.TimeoutDuration = activitiesExpireList[delay.Name].Subtract(DateTime.Now.ToUniversalTime());
                }
            }
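One edge case worth noting: if the old timer had already expired by the time you update, the subtraction yields a negative TimeSpan. A defensive variant of the handler (a sketch, assuming you want expired timers to fire immediately):

```csharp
private void initTimeout(object sender, EventArgs e)
{
    DelayActivity delay = (DelayActivity)sender;
    DateTime expiresAt;
    if (activitiesExpireList.TryGetValue(delay.Name, out expiresAt))
    {
        TimeSpan remaining = expiresAt.Subtract(DateTime.Now.ToUniversalTime());
        // an already-expired timer should fire right away instead of getting a negative duration
        delay.TimeoutDuration = remaining > TimeSpan.Zero ? remaining : TimeSpan.Zero;
    }
}
```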

I have uploaded the complete sample here.

When you run it, you can press 'c' to create a new workflow of type Workflow1. Then you can press 'u' and paste in the guid of the workflow just created. It will update the workflow. Pressing 'b' will break and unload the workflow.
Your created workflow has this state:


Where the delay is 40 seconds. Workflow2 has the same state, but with a delay of only 10 seconds.

As a test, you can see that after updating you will have a Workflow2 running (there is another activity present that prints out debug information). The delay was set correctly.

Obviously, you might want to deal with the delays your own way. Because you have all the information in your workflow codebehind, you can think of your own rules on how the delay timeouts should be set.

Realize that touching the internals of WF like this is not what Microsoft envisioned and should be done with care.

Have fun, and let me know what you think.


Tuesday, 04 March 2008 23:56:35 (Romance Standard Time, UTC+01:00)
 Thursday, 28 February 2008

Finally wrapping up.

This is the eighth of a series about using Workflow Foundation to control your UI Logic in a WPF application. The full table of contents:


In the first post, the complete solution was presented. I am presenting a solution that uses workflow as the controller part in your MVC-inspired WPF application. It is inspired by the thought that you do not need complex frameworks, because WPF already gives you great power (routed eventing, resources). So, no IOC is used, no event aggregator, etcetera: it's taken care of by WPF and WF, a natural fit.
The solution is very decoupled, and I feel it's a great advantage to be able to visualize your control logic.
In the previous post we looked at injecting objects and retrieving them.


CAB (and other systems) uses an event aggregator to publish events. Subscribers (other controllers) can subscribe to a specific 'topic' using a string to identify it. This works well, but it does mean yet another communication method is introduced.

Since every workflow/controller is added to the workflow runtime, we could easily ask for all the loaded workflows and send these a message. However, since all adapters subscribe to a weak event manager to manage communication, I thought I'd stick to that pattern.

The BroadcastCommandMessage was created for the adapter to react to, checking whether its controller is interested in it. If it is, the message is transformed into a command message and sent to the controller.

I have not yet built an activity to do this.
The Bankteller sample has a CustomerQueueController. When it gets or loses focus, it wants to tell 'someone' (just someone who will listen) that it has a popular command to (un)register. The BanktellerLogic controller will use this information to put the command in a list, and the view decides to make a menu item for it. You see, I do not believe that the CustomerQueueController should be able to decide that a menu is to be created out of it. It just wants to let the world know about a command.

        private void RegisterCommands(object sender, EventArgs e)
        {
            new BroadcastCommandMessage(this.WorkflowInstanceId, "RegisterPopularCommand",
                /* ... */);
        }

        private void UnRegisterCommands(object sender, EventArgs e)
        {
            new BroadcastCommandMessage(this.WorkflowInstanceId, "UnRegisterPopularCommand",
                /* ... */);
        }
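On the receiving side, each adapter can check whether its controller is interested before forwarding anything. A rough sketch of such a handler (the member names here are assumptions, not the actual sample code):

```csharp
private void OnBroadcastCommand(BroadcastCommandMessage message)
{
    // ignore broadcasts that originated from our own workflow instance
    if (message.SourceInstanceId == instance.InstanceId)
        return;

    // forward only the commands this controller actually implements
    if (implementedCommands.Contains(message.CommandName))
    {
        commandSvc.PostCommand(new SimpleCommandMessage(instance.InstanceId, message.CommandName, message.Parameter));
    }
}
```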


That concludes this series for now.

I hope you enjoyed it. I hope you take away the feeling that it is pretty easy to build an MVC system using WPF and WF, and that the presented solution is about as decoupled as it gets.

Thursday, 28 February 2008 02:16:34 (Romance Standard Time, UTC+01:00)
 Tuesday, 26 February 2008

Holy crap, this is starting to be a long series!

This is the seventh of a series about using Workflow Foundation to control your UI Logic in a WPF application. The full table of contents:


In the first post, the complete solution was presented. I am presenting a solution that uses workflow as the controller part in your MVC-inspired WPF application. It is inspired by the thought that you do not need complex frameworks, because WPF already gives you great power (routed eventing, resources). So, no IOC is used, no event aggregator, etcetera: it's taken care of by WPF and WF, a natural fit.
The solution is very decoupled, and I feel it's a great advantage to be able to visualize your control logic.
In the previous post, we talked about injecting controllers to manage specific parts of your screen, very CAB-alike.

IOC - Inversion of Control

Inversion of Control is a pattern that turns things upside down when it comes to getting at dependencies. Let's say you have a class, and to do its work it needs a helper class (maybe a communication service). Instead of having your class create that service explicitly, we can have your class simply ask for it and have someone else supply it. This is where Dependency Injection comes from: just state what a class needs to work and have a container 'inject' those dependencies.
Doing it this way makes for a more maintainable application and allows you to better manage the lifetime of helper classes and services. You might want to get back the same service instance every time!
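As a tiny illustration of the difference (the types here are hypothetical, not from the sample):

```csharp
// without IOC: the class news up its own dependency, hard-wiring the concrete type
public class TightlyCoupledEditor
{
    private readonly CustomerService service = new CustomerService();
}

// with dependency injection: the dependency is handed in from the outside,
// so a container (or a test) decides which instance is used and how long it lives
public class CustomerEditor
{
    private readonly ICustomerService service;

    public CustomerEditor(ICustomerService service)
    {
        this.service = service;
    }
}
```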

Using an MVC approach to construct your application, you might feel the same need. Maybe you are building an application that allows editing pieces of information about a customer: her details, her address, etc.
These pieces are implemented in different views. All the views that belong to that one customer should use the same instance of the 'customer' object.

Inject and retrieve object into resources - activity

In this system, that is easily done, although possibly more explicitly than many great IOC containers (Windsor, Spring.Net, StructureMap) would like it.

Just have one controller create the object and inject it into its resources. Because of the way resource lookup works, all the controllers that live 'below' this controller (are nested within it) will be able to retrieve it.


Here I have dragged in the 'InjectObjectAsResource' activity and have bound a public field on my workflow to the 'Service' property of the activity. Well, maybe Service is a bad name, but I just expect you to use it with services most of the time. Also, the activity might better have been called InjectInstanceAsResource, but I guess I didn't.
I used a type as the resource key this time, instead of a string.

I bet you can figure out how the retrieve activity works ;-)

Tip: since the activity does not know what type of object you want to create, if you let the binding introduce the field or property to your code, it will be typed as object. Just change that to your own type.

The retrieve will work for all controllers that can reach the resource dictionary of the controller that did the inserting. So, that is equivalent to the CAB term 'child workitem'.
If you need to share on a global level as well, just make the inserting happen on the application resources instead of the adapter resources. It cannot be too hard.
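In WPF terms, the difference between scoped and global sharing is just which dictionary you write to. Roughly (a sketch, where adapter is the controller adapter ContentControl and ICustomerService is a hypothetical service):

```csharp
// adapter-level: visible to this controller and everything nested below it
adapter.Resources[typeof(ICustomerService)] = customerService;

// application-level: visible everywhere
Application.Current.Resources[typeof(ICustomerService)] = customerService;

// a nested controller retrieves it by walking up the element tree
var service = (ICustomerService)adapter.TryFindResource(typeof(ICustomerService));
```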


I think this mechanism illustrates the way you can use WPF to meet most of your CAB needs. I use it here from a workflow, but that has nothing to do with the core concept.
I find that the explicit, visual call to inject or retrieve, without having to write code to do so, can be beneficial when building systems in a team. There is no need to guess where an object comes from; it is all very much in your face.

Tuesday, 26 February 2008 13:00:22 (Romance Standard Time, UTC+01:00)
 Monday, 25 February 2008

This is the sixth of a series about using Workflow Foundation to control your UI Logic in a WPF application. The full table of contents:


In the first post, the complete solution was presented. I am presenting a solution that uses workflow as the controller part in your MVC-inspired WPF application. It is inspired by the thought that you do not need complex frameworks, because WPF already gives you great power (routed eventing, resources). So, no IOC is used, no event aggregator, etcetera: it's taken care of by WPF and WF, a natural fit.
The solution is very decoupled, and I feel it's a great advantage to be able to visualize your control logic.
In the previous post, we talked about decoupling through commands.

This time, we will look at how to inject a controller into a subview.

The InjectControllerAsDataTemplate activity

It's all very nice and dandy to have one controller manage its main view, but what happens if part of that main view is different, and should be managed by a completely different controller?

Let's look at ModuleView in the BankTeller sample:

<UserControl x:Class="BankTellerViews.ModuleView" ... >
    <StackPanel Orientation="Horizontal">
        <StackPanel Orientation="Vertical">
            <ContentPresenter ContentTemplate="{DynamicResource userinfo}" />
            <ContentPresenter ContentTemplate="{DynamicResource customerlist}" />
        </StackPanel>
        <StackPanel Orientation="Vertical">
            <ContentPresenter ContentTemplate="{DynamicResource customerinfo}" />
            <ContentPresenter ContentTemplate="{DynamicResource customersummary}" />
        </StackPanel>
    </StackPanel>
</UserControl>

You can see that ModuleView really only determines the way this screen is built up; the individual pieces are left empty.

When we open up the ModuleLogic controller, we wish to inject controllers with the same names that we used here:


What happens exactly? Well, you selected a controller type, through the convenient type browser, and set a specific resource key (in this case we used a string: userinfo). The adapter is notified by this activity to do something with it. It will create a DataTemplate in code and just set it as a resource (or replace it, if it already exists).

This means that a deeply nested view could easily define a contentpresenter and a higher level controller could inject a controller for it.
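Building such a DataTemplate in code might look roughly like this (ControllerAdapter and its ControllerTypeProperty are assumed names for the adapter control, not taken from the sample):

```csharp
// create a template whose visual tree is an adapter hosting the injected controller
DataTemplate template = new DataTemplate();
FrameworkElementFactory factory = new FrameworkElementFactory(typeof(ControllerAdapter));
factory.SetValue(ControllerAdapter.ControllerTypeProperty, controllerType);
template.VisualTree = factory;

// register (or replace) it under the requested resource key;
// the ContentPresenter bound to that key picks it up via DynamicResource
adapter.Resources[resourceKey] = template;
```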

Monday, 25 February 2008 16:12:47 (Romance Standard Time, UTC+01:00)

This is the fifth of a series about using Workflow Foundation to control your UI Logic in a WPF application. The full table of contents:

  • Workflow as controller: Introducing <M,V,C> where M: ViewModel, V : WPF, C : WF
  • Part II, starting the application, and the adapter
  • Intermezzo: new sample application
  • Part III, your first view
  • Part IV, decoupling view from controller
  • Part V, marshalling commands from WPF to WF
  • Part VI, Injecting a controller in a subview / workspace
  • Part VII, IOC on the cheap: injecting and retrieving objects
  • Part VIII, Broadcasting for all to see

    Whoops, I guess I was a bit over-enthusiastic in the previous post, because I already explained the hooking mechanism in enough detail.

    It boils down to registering the adapter as a global command handler; when a command reaches it, it creates a command message and sends that to the workflow.
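The registration itself is one line per handler on the adapter; a sketch using the standard WPF CommandManager API (CmdExecuted and CmdCanExecute are the command sinks shown in the previous post):

```csharp
// let every routed command that reaches the adapter hit our sinks
CommandManager.AddExecutedHandler(adapter, CmdExecuted);
CommandManager.AddCanExecuteHandler(adapter, CmdCanExecute);
```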

    Monday, 25 February 2008 15:58:49 (Romance Standard Time, UTC+01:00)
     Friday, 22 February 2008

    This is the fourth of a series about using Workflow Foundation to control your UI Logic in a WPF application. The full table of contents:

  • Workflow as controller: Introducing <M,V,C> where M: ViewModel, V : WPF, C : WF
  • Part II, starting the application, and the adapter
  • Intermezzo: new sample application
  • Part III, your first view
  • Part IV, decoupling view from controller
  • Part V, marshalling commands from WPF to WF
  • Part VI, Injecting a controller in a subview / workspace
  • Part VII, IOC on the cheap: injecting and retrieving objects
  • Part VIII, Broadcasting for all to see

    In the first post, the complete solution was presented. I am presenting a solution that uses workflow as the controller part in your MVC-inspired WPF application. It is inspired by the thought that you do not need complex frameworks, because WPF already gives you great power (routed eventing, resources). So, no IOC is used, no event aggregator, etcetera: it's taken care of by WPF and WF, a natural fit.
    The solution is very decoupled, and I feel it's a great advantage to be able to visualize your control logic.
    In the previous post, I talked about the various ways to show a view, and actually already talked about the decoupling mechanism: commands.

  • I'm very lucky to have received some good comments from Wekemf about tight coupling. I urge you to read those comments and maybe chime in.

    In this post I will not return to that subject, but will quickly address two important activities that help in configuring your system quickly, the SetMainContent activity and the SetDataContext activity, plus sending WPF commands to WF's HandleCommand.

    SetMainContent activity

    A controller adapter is a normal WPF ContentControl. Its job is to participate in the visual tree on behalf of our workflow controller class. To actually attach a view to it, we need to set its Content property.

    As shown in the previous post, you can just set one in xaml yourself, but it's more logical to let the workflow decide on the view. Of course, the best approach is totally up to you.

    I usually use the state initialization activity to set up a view for us. I drag in the SetMainContent activity and choose a type from the referenced assemblies.
    If it weren't for this step, the controller assembly would not need a reference to the view assembly at all. I found it very cool to be able to select a type with the type browser and just have it show up.

    The type browser is located in an assembly I have put in the externalAssemblies folder. It is a project not started by me. The code did not work when I got my hands on it, but I managed to fix it by using a hammer. Check out this post to learn more about this great design-time experience!

    If you have a business need to decouple even further, you would need to adjust the SetMainContent activity, and instead of sending a real Type, send a string key or whatever. Then you would create some mapping functionality to map that key to the actual view.

    When the adapter gets notified by the SetMainContentMessage that it needs to set content, it will just create the view (using reflection) and place it as its own content.
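In the adapter, the handling might look as simple as this (the message and member names are assumptions, not the actual sample code):

```csharp
private void OnSetMainContent(SetMainContentMessage message)
{
    // instantiate the view type the workflow selected; the view assembly
    // is only known here, not referenced by the controller logic
    object view = Activator.CreateInstance(message.ViewType);
    this.Content = view;  // the adapter is a ContentControl
}
```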

    SetDataContext activity

    I do not like MVP at all, where the presenter talks back to the view directly (using an interface or something). I feel it's way too 'pushy' and way too much work. I believe in databinding (especially WPF bindings; I think Microsoft got it right this time). Your view should just bind to your domain objects. In many cases, it's better to create a wrapper for the domain objects, so you have the opportunity to supply some shortcut properties or view-specific stuff: you might have a list of products, and you want the view to display the sum of the prices. That is a great opportunity for the viewmodel to expose a 'Sum' property that the view can simply bind to.

    The object that is used as a ViewModel should live with the controller, which will be able to communicate with it.
    I usually create a public class, simply called ViewModel, and have the controller inject that class with domain objects.

    The SetDataContext activity is very similar to the SetMainContent activity, in that it lets the adapter know it has to set a DataContext on itself.
    You configure the SetDataContext activity simply by choosing a field or property of your controller.

    In small sample applications, I have used the 'invoking' event, to hook up some code that actually initializes the ViewModel object.

    Sending WPF commands to the Workflow: HandleCommandActivity

    The HandleCommandActivity is really what makes using the solution so easy. I have blogged about it already extensively, and I will just summarize here:

    Workflow has a difficult communication story. You need to define your incoming and outgoing calls in an ExternalDataExchangeService. Then you have to hook up events in your workflow to listen to incoming calls/events. It is not possible to listen to the same events in two different states without using the very difficult correlation technique.

    This is not necessary for our usage. I have created the HandleCommand activity to just listen to a queue with a specific name. That name is defined by the command we are listening to. So, if you want your workflow to react when you send it the string 'workflowRules', you would just drag in the HandleCommand activity and configure its Command property to read 'workflowRules'. No need to set up a special event for it.

    The commandService class has a PostCommand method, that you can call to put a message on the queue. That's all there is to it.
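Conceptually, PostCommand just drops the message on a workflow queue whose name is the command string. A sketch of what the service does (not the actual implementation):

```csharp
public void PostCommand(SimpleCommandMessage message)
{
    WorkflowInstance instance = runtime.GetWorkflow(message.InstanceId);
    // the HandleCommand activity listens on a queue named after the command
    instance.EnqueueItem(message.CommandName, message, null, null);
}
```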

    So, when we receive a WPF command, we cast it to a RoutedUICommand. The command name is used to form a SimpleCommandMessage, which can be used as input to the PostCommand method.

        #region command sinks
        private void CmdExecuted(object sender, ExecutedRoutedEventArgs e)
        {
            string commandname = (e.Command as RoutedUICommand).Name;
            PostCommand(commandname, e.Parameter);
        }

        private void PostCommand(string commandname, object Parameter)
        {
            if (implementedCommands.Contains(commandname))
            {
                commandSvc.PostCommand(new SimpleCommandMessage(instance.InstanceId, commandname, Parameter));
            }
        }

        private void CmdCanExecute(object sender, CanExecuteRoutedEventArgs e)
        {
            string commandName = (e.Command as RoutedUICommand).Name;
            if (implementedCommands.Contains(commandName))
            {
                e.CanExecute = commandSvc.CanExecute(new SimpleCommandMessage(instance.InstanceId, commandName));
            }
        }
        #endregion

    As you can see, I first check whether the workflow even implements such a command. If it does not, sending it to the workflow would just be wasted effort.
    Also, check out the CmdCanExecute method. It actually makes it possible for the workflow to put rules on the HandleCommand activity that are used to figure out whether a command can be executed. For instance, if you are not authorized to do something, the command never reports CanExecute, so the button that hooks up to it stays dimmed!

    I hope that clears up some questions. Let me know what you think!

    Friday, 22 February 2008 11:24:20 (Romance Standard Time, UTC+01:00)
     Thursday, 21 February 2008

    I got a mail yesterday from a German student asking about the future of workflow and my thoughts on it. I will share the thread. It was written in a hurry, so take it for what it is. Leave a comment to give him another viewpoint.

    Read from bottom to top.


    my reply:

    What you are describing does indeed sound like a typical WF application, and it is absolutely suitable for that.

    Custom activities: don't be afraid. Just create one that is a wrapper around your huge OLE API. Creating an activity is little more than deriving from Activity and overriding the Execute method.
    Put some properties on there and off you go.
    Or create multiple activities that do different things to the ole object.
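A minimal custom activity really is that small. For example, a wrapper around a single call into an external API (OleApiGateway is a hypothetical helper of yours, not a real library):

```csharp
public class CallOleApiActivity : Activity
{
    // the function to invoke; shows up as a property in the designer
    public string FunctionName { get; set; }
    public object Result { get; set; }

    protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
    {
        // delegate the actual work to your own wrapper around the OLE API
        Result = OleApiGateway.Invoke(FunctionName);
        return ActivityExecutionStatus.Closed;  // tell the runtime we're done
    }
}
```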

    It sounds to me like you want to re-host the workflow designer. That is certainly doable, and there is a project from someone you can download that actually did that. However, it was in need of more debugging. I don't have the URL here. Sorry.

    What WF is not, is a magical system that requires no development. It is really meant to be a foundation, which a developer uses and builds upon to create a system that really suits the client's wishes. So that means configuring it, creating external data exchange services and building custom activities. Only then will you create a system that your client can use in the way you described. You need to mold it to behave like you want.

    In our case, it was definitely the developers that created the workflows. Best we could hope for was that business analysts could understand it (and they did). However, I've always felt it was possible to create a system that they could use directly.


    -----Original Message-----
    From: Sven
    Sent: woensdag 20 februari 2008 20:56
    To: me
    Subject: Re: Some questions about the future of WF

    Hi Ruurd,

    thank you very much for your in-depth statement! I had not expected this detailed level ;-)

    Actually as a part of my project I have to evaluate if WWF fits into an existing CRM Application.
    It should be possible for solution partners (customizing the application for their customers) without in-depth programming knowledge for example to "wire together" some custom activites to visually build for example the processing of an incoming mail, a little workflow for some little approval process (like you press a button inside the application on an address form, the workflow gets some field from the current record, decides based on the field which e-mail model to use, sends the email and finally writes some information to the same record, like "e-mail XY sent") or things like that... (sounds like a classic
    But their could be use for some "state machines", too. Like there is a WF-Service running and dispatching incoming mails to different employees...

    Is WWF suitable for this ? These things could be done today in the application by coding some huge VB-Scripts, there is a huge OLE-API in the application...

    What I missed is a "CustomOLEActivity" to call whatever function in an application with OLE-API (there are a lot on the market) and to simply return some values...
    (the ExternalDataExchange/CommunicationActivity with wca.exe-Tool-way looks like beeing very complicated - at least if you have to build a CustomActivity for a huge OLE-API, or have I missed something out ?)

    On my "first look" the designer looked a bit complicated (even for people with some advanced knowledge, i have to target not the computer dummies, but also not the programmers on the other hand, some level "between", lets say "System Administrators"), but perhaps you can give me from your own experiance some hints in which direction I have to go for this...
    (who is editing the workflows in your big project?)

    Implementing everything "from scratch" looks like an even bigger effort... (would be the other choice...)

    Thanks a lot for your help and guidance !


    my reply:
    > Basically, I see quite a few problems surrounding WF. It is very
    > shielded, the designer is not very good still and there is no good
    > update strategy (updating long running persisted workflows to new
    > versions). I think that last issue is one of the biggest problems it
    > has, although it hasn't gotten much publicity.
    > However, as a platform, it does what it should do very well. They are
    > going to use it as the biztalk workflow engine and are already using
    > it as the human workflow engine for sharepoint.
    > I feel we are moving toward an industry that needs to mature (the IT
    > development industry I mean). It is looking for DSL's and other ways
    > to make developing software a more manageable and predictable process.
    > Workflow has a definite place in that eco-system, where you can
    > visualize the flow of your program. This means you have an artifact
    > that will actually help a developer communicate with a business analyst or a client.
    > To be concrete:
    > So, why do I think developers have been slow to take it up: a
    > difficult programming model and some serious issues that are not well understood yet.
    > It is a radically different approach to building software, and it
    > takes time for ppl to feel confident with it.
    > Is there a future: I say _yes_. If you understand the problems of
    > todays WF framework, you can already build great things, and I've
    > heard about some of the stuff that Microsoft is doing on the next
    > version, which will alleviate some big problems. Since we need this
    > kind of technology to build better software, there is definitely a future for it.
    > Is it already used in the industry: Well, I have used it, but I have
    > yet to hear of big projects using it. Then again, Biztalk is used
    > extensively and the WF engine is every bit as powerful. (rules engine maybe slightly less).
    > Sorry, no example possible...)
    > I do not think it will disappear.
    > Kind Regards,
    > Ruurd Boeke
    > -----Original Message-----
    > From: Sven
    > Sent: Wednesday, 20 February 2008 19:48
    > To: me
    > Subject: Some questions about the future of WF
    > Hello!
    > I'm a computer student from the university of applied sciences of
    > Emden, Germany.
    > Actually I'm working on a project dealing with the Windows Workflow
    > Foundation.
    > It was introduced one and a half years ago, but I do not see many
    > implementations or books about it, so I wonder why it has been adopted so
    > slowly by developers.
    > What do you think about this? (just some thought will be helpful for
    > me!) Is there a "future"? Or will this stay a "Microsoft internal" affair?
    > Is this already used in the industry ? Where ?
    > (If you could give me some examples from your experience this would be
    > very helpful for my work)
    > Is this really a technology to build on or might it disappear slowly
    > like other "cool" stuff in the past ?
    > Thank you very much in advance for any hint!
    > Sincerely,
    > Sven

    Thursday, 21 February 2008 17:38:13 (Romance Standard Time, UTC+01:00)  #    Comments [2]  |  Trackback

    This is the third of a series about using Workflow Foundation to control your UI Logic in a WPF application. The full table of contents:

  • Workflow as controller: Introducing <M,V,C> where M: ViewModel, V : WPF, C : WF
  • Part II, starting the application, and the adapter
  • Intermezzo: new sample application
  • Part III, your first view
  • Part IV, decoupling view from controller
  • Part V, marshalling commands from WPF to WF
  • Part VI, Injecting a controller in a subview / workspace
  • Part VII, IOC on the cheap: injecting and retrieving objects
  • Part VIII, Broadcasting for all to see
  • Recap

    In the first post, the complete solution was presented. I am presenting a solution that uses workflow as the controller part in your MVC-inspired WPF application. It is inspired by the thought that you do not need complex frameworks, because WPF already gives you great power (routed eventing, resources). So, no IOC is used, no event aggregator, etcetera: it's taken care of by WPF and WF, a natural fit.
    The solution is very decoupled, and I feel it's a great advantage to be able to visualize your control logic.
    In the previous post, I showed a wizard style application.

    This post follows Part II, starting the application and the adapter. In that post we started our shell and explained how the adapter communicates with a workflow instance, how it can react to commands (normal RoutedUICommands from WPF controls) and how it reacts to events from the command service.

    We will now continue, by looking at a simple view.

    View responsibility

    Let's first look at how we perceive a view in the MVC paradigm.
    A view should be nothing more than the visualization of your data. The only authority it has is to decide how to represent a piece of data on the screen. That means it should not contain any business logic. Be very strict about this: the responsibility of a view is the visualization of data.

    So, let's take a look at a common scenario where these lines may blur.
    Take a list of products and let's say that if we have a new product-line that has been introduced within the past month, we want to use another background color, to alert our customers to this new hot product.

    We could solve this in our binding perhaps (let's just assume that is easy), but we should not do that. That would mean the view is deciding when a product is new and hot. It should not.
    The only thing the view should do is create the two visual representations of products and use a datatemplate selector to decide which one applies. The datatemplate selector could be injected by our controller. Another way to solve this is for the controller to put this information in the ViewModel itself: for example, add a boolean 'new' property that the view uses.

    If you do not do it this way, and you are embedding logic inside of your view, you will quickly end up with scattered logic, never knowing where something is defined. Changing rules becomes hard and your application will break at some point.
    Now, I understand, and have done many times, that sometimes you just do not have the time to do it right. But always remember that in the long run, you will get burned. Try to setup a situation where it is easy to do the right thing, by making it easy to use datatemplate selectors or use the viewmodel.
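    To make the selector approach concrete, here is a minimal sketch. Everything in it is an assumption for illustration: a hypothetical Product class, an IsHot rule injected by the controller, and two templates the view registers under the keys 'hotProductTemplate' and 'regularProductTemplate'. The point is that the "what is hot?" decision lives outside the view; the view only supplies the visuals.

    ```csharp
    using System;
    using System.Windows;
    using System.Windows.Controls;

    // Hypothetical selector: the controller owns the "what is hot?" rule,
    // the view only supplies the two visual representations.
    public class HotProductTemplateSelector : DataTemplateSelector
    {
        // injected by the controller when it hands this selector to the view
        public Func<Product, bool> IsHot { get; set; }

        public override DataTemplate SelectTemplate(object item, DependencyObject container)
        {
            Product product = item as Product;
            FrameworkElement element = container as FrameworkElement;
            if (product == null || element == null)
                return base.SelectTemplate(item, container);

            // pick the template key based on the injected rule
            string key = (IsHot != null && IsHot(product)) ? "hotProductTemplate"
                                                           : "regularProductTemplate";
            return element.TryFindResource(key) as DataTemplate;
        }
    }
    ```

    Swapping the rule (or the templates) now never touches the other side, which is exactly the separation argued for above.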

    View decoupling

    MVC advocates not letting your view have any knowledge whatsoever of the controller. It does this because tight coupling of the view to the controller will destroy maintainability and flexibility. If you couple them tightly, you are unable to swap controllers or views. Most importantly, if you couple the view to the controller (by making it call specific methods on the controller), it becomes harder to maintain and refactor.

    There are certainly approaches that do couple view to controller. If you look at the very powerful Caliburn framework, you will see that it has 'action messages' that directly call methods on the controller. I have yet to work with it extensively, so I cannot be sure, but it feels to me there should be a very explicit layer between view and controller, which defines how the view will communicate with the controller.

    Our goals in this project are to use the tools WPF provides us to communicate with the rest of the system. We do so with Commands.
    A command can be seen as a message that is passed up (and down) the visual tree. Since our adapter lives just above the view and is part of the visual tree, it will have the opportunity to react to the command.

    When building a view, you should also explicitly define all the interactions that view expects to have with the outside world. Do that in a static class like so:

        public static class ImportantWizardInteractions
        {
            public static readonly RoutedUICommand Next;
            public static readonly RoutedUICommand Back;

            public static readonly RoutedUICommand GotoClientScreen;
            public static readonly RoutedUICommand GotoAdresScreen;
            public static readonly RoutedUICommand GotoRoleScreen;
            public static readonly RoutedUICommand GotoCarScreen;

            public static readonly RoutedUICommand Save;
            public static readonly RoutedUICommand SaveYes;
            public static readonly RoutedUICommand SaveNo;

            static ImportantWizardInteractions()
            {
                Next = new RoutedUICommand("Next", "Next", typeof(ImportantWizardInteractions));
                Back = new RoutedUICommand("Back", "Back", typeof(ImportantWizardInteractions));

                GotoClientScreen = new RoutedUICommand("GotoClientScreen", "GotoClientScreen", typeof(ImportantWizardInteractions));
                GotoAdresScreen = new RoutedUICommand("GotoAdresScreen", "GotoAdresScreen", typeof(ImportantWizardInteractions));
                GotoRoleScreen = new RoutedUICommand("GotoRoleScreen", "GotoRoleScreen", typeof(ImportantWizardInteractions));
                GotoCarScreen = new RoutedUICommand("GotoCarScreen", "GotoCarScreen", typeof(ImportantWizardInteractions));

                Save = new RoutedUICommand("Save", "Save", typeof(ImportantWizardInteractions));
                SaveYes = new RoutedUICommand("SaveYes", "SaveYes", typeof(ImportantWizardInteractions));
                SaveNo = new RoutedUICommand("SaveNo", "SaveNo", typeof(ImportantWizardInteractions));
            }
        }

    By being explicit about your interactions like this, you will be able to unit test more easily as well.

    Use in your view like this:

    <Button Command="{x:Static local:ImportantWizardInteractions.GotoClientScreen}">Client</Button>

    A command is great for buttons and other stuff, but how do you, for instance, communicate that a customer was selected from a listview?

    1. Bind to a SelectedCustomer property on the viewmodel; when a customer is selected, the property changes and the controller can pick that up.
    2. More explicitly though: use the SelectionChanged event and use the codebehind of your view as a translation layer to talk to the outside world:

           private void ListBox_SelectionChanged(object sender, SelectionChangedEventArgs e)
           {
               // translate the view-specific event into a command
               CustomerQueueInteractions.SelectNewCustomer.Execute(e.AddedItems, this);
           }

    You can call me on that. It's not a very elegant solution. I'd rather be able to do away with the codebehind of a view entirely. But using the codebehind is actually fine: it is part of the view, and it should not be allowed to do anything else than to act as a translator for view specific things to commands.
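    For completeness, option 1 can be sketched as well. This is a minimal hand-rolled sketch, assuming a hypothetical Customer domain class; the project's actual viewmodels may look different. The view binds ListBox.SelectedItem to SelectedCustomer, and the controller subscribes to PropertyChanged.

    ```csharp
    using System.ComponentModel;

    // Minimal sketch of option 1: the view binds to SelectedCustomer,
    // the controller listens for the PropertyChanged notification.
    public class CustomerListViewModel : INotifyPropertyChanged
    {
        private Customer selectedCustomer;

        public Customer SelectedCustomer
        {
            get { return selectedCustomer; }
            set
            {
                if (selectedCustomer == value) return;
                selectedCustomer = value;
                OnPropertyChanged("SelectedCustomer");
            }
        }

        public event PropertyChangedEventHandler PropertyChanged;

        private void OnPropertyChanged(string propertyName)
        {
            PropertyChangedEventHandler handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }
    ```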

    So, how to show a view

    Well, building a view is nothing more than deriving from UserControl and doing your thing: using commands and going wild on the visuals. (Try to animate everything!!! Your client loves it.)

    It depends now how you want to show it.

    1. Let's say you're building a project where you don't care about fancy composition and pluggable modules, and you just want your shell to show your view. The shell might have the following code:

      <c:GenericWorkflowAdapter WorkflowController="{x:Type logic:AControllerForYourView}" />

      I am assuming you do want a controller around your view.

      Here a controller is instantiated and its content is set to your view. Easy.
    2. Let's say we want our controller to choose what view it uses. That seems to me to be the nicest way to go about it. We will again put a controller in the visual tree, but will not set a view already:

      1. <c:GenericWorkflowAdapter WorkflowController="{x:Type logic:ImportantClientWizard}" />

      Then, in the workflow, we might use some fancy logic to determine which view we will show (perhaps looking at the role of the user). To actually set a view, we will use the SetMainContentActivity. Drag that to the canvas and select a type.


      Selecting a type is easy, because of the typebrowser I included:

    3. Yet another way, suitable for 'subviews', is to define a contentpresenter on some view anywhere:

       <ContentPresenter ContentTemplate="{DynamicResource CurrentWizardScreen}" />

    And use the InjectViewAsDataTemplate activity in a controller to place a contenttemplate in the resources section with the same resourcekey.
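    At its core, an InjectViewAsDataTemplate-style activity only needs to do something like the following. This is a hedged sketch (the activity's real implementation is not shown in this post, and the method signature is an assumption): wrap a view type in a DataTemplate and publish it under the agreed resource key, so the ContentPresenter's DynamicResource reference picks it up.

    ```csharp
    using System;
    using System.Windows;

    // Hypothetical core of injecting a view as a data template: build a
    // DataTemplate whose visual tree instantiates the view type, then place
    // it in the adapter's resources under the key the presenter listens to.
    static void InjectViewAsDataTemplate(FrameworkElement adapter, Type viewType, string resourceKey)
    {
        DataTemplate template = new DataTemplate();
        template.VisualTree = new FrameworkElementFactory(viewType);
        adapter.Resources[resourceKey] = template; // e.g. "CurrentWizardScreen"
    }
    ```

    Because DynamicResource re-resolves when the resource changes, replacing the template swaps the subview on the fly.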


    I'll follow up with another take on decoupling the view from the controller, by looking at the SetDataContext activity and talking a bit more about the viewmodel.

    Thursday, 21 February 2008 11:57:02 (Romance Standard Time, UTC+01:00)  #    Comments [4]  |  Trackback
     Wednesday, 20 February 2008

    This is an intermezzo from the MVC with WF series. I have added a new sample to the project, which I hope demonstrates the flexibility of using WF.

    The rest of this series can be found here:

  • Workflow as controller: Introducing <M,V,C> where M: ViewModel, V : WPF, C : WF
  • Part II, starting the application, and the adapter
  • Intermezzo: new sample application
  • Part III, your first view
  • Part IV, decoupling view from controller
  • Part V, marshalling commands from WPF to WF
  • Part VI, Injecting a controller in a subview / workspace
  • Part VII, IOC on the cheap: injecting and retrieving objects
  • Part VIII, Broadcasting for all to see

    It is a simple 4 screen 'wizard' where logic determines that it should skip one screen. Also, when the save button is hit, a popup will show that asks if you are sure. If you are, you will be sent to the first screen, otherwise you will return to your last screen. It has buttons on the left that determine where you can go, as well as 'next' and 'previous' buttons.
    All of this was done with a minimum of code and a maximum of dragging and dropping activities. The whole reason for doing this, is that when you now get a new feature request ("We have a new screen that sits in between the client and adres screen!!!"), using WF it will be dead-simple to add it.

    I have uploaded the executable here, just in case you don't feel like opening up the project and building yourself.
    The application looks like this:


    And when you reach the 'Car' screen, it will look like this:


    Hitting the Save button here:


    A few things to notice about this sample:

    • There are 2 controllers doing their job here:
      • The usersettings controller, with a view on top. It allows you to check a checkbox. Doing so makes you an administrator. Notice how, when you do so, you are able to browse to the 'Role' screen. You see, if you are not an administrator, you are not allowed to enter the role screen.
      • The 'ImportantWizardController', which handles the mainview. It shows a few buttons (Client, Adres, Role and Car) on the left, which will allow you to go to the screens you have already passed. It also shows a previous and next screen button. Finally, it defines a contentpresenter where our subviews will be injected.
    • The buttons react immediately. Go to the Car screen, and then check your checkbox to make yourself Administrator. This means you have the right to visit the Role screen, and it immediately pops up.
    • No codebehind. Well, almost none: the only codebehind is on the ImportantWizardControl to 'load data' (actually returning an empty client, but you get the drift).
      It felt really cool to build this plumbing without coding.
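    The "buttons react immediately" behavior comes from WPF's CanExecute routing. Here is a hedged sketch of how the adapter could answer those queries; all names (implementedCommands, currentlyEnabledCommands) are assumptions for illustration, not the project's exact fields. A button bound to a RoutedUICommand enables itself only while the workflow's state machine is actually waiting for that command.

    ```csharp
    using System.Windows.Input;

    // Hypothetical CanExecute handler on the adapter: a command is enabled
    // only while some HandleCommand activity in the workflow awaits it, so
    // when the state machine moves, the buttons follow automatically.
    private void CmdCanExecute(object sender, CanExecuteRoutedEventArgs e)
    {
        RoutedUICommand command = e.Command as RoutedUICommand;
        if (command != null && implementedCommands.Contains(command.Name))
        {
            e.CanExecute = currentlyEnabledCommands.Contains(command.Name);
            e.Handled = true;
        }
    }
    ```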

    Let's look at the steps to produce this application:

    1. I added a Controller project (type workflow), a Domain project (with a few very simple classes), a shell project that will be used to start us up and a view project which holds the views we are going to use:
    2. I created a ClientService and a UserService class which will be classes used by our workflows:
          public class UserService
          {
              public bool IsAdministrator { get; set; }
          }

          public class ClientService
          {
              public Client CurrentClient { get; set; }
          }

    3. Then the shell was used to inject our main view and also inject a global userservice class:

        1 <Window x:Class="EditLogicShell.Window1"
        2     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        3     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        4     xmlns:logic="clr-namespace:EditLogicControllers;assembly=EditLogicControllers"      
        5     xmlns:c="clr-namespace:ControllersAdapters;assembly=ControllersAdapters"
        6     Title="Window1" Height="300" Width="300">
        7     <Window.Resources>
        8         <logic:UserService x:Key="globalUserService" />
        9     </Window.Resources>
       10     <StackPanel>
       11         <Border BorderThickness="1" BorderBrush="Black" Background="Beige">
       12             <c:GenericWorkflowAdapter WorkflowController="{x:Type logic:ManageUserSettingsController}" />
       13         </Border>
       15         <c:GenericWorkflowAdapter WorkflowController="{x:Type logic:ImportantClientWizard}" />
       16     </StackPanel>
       17 </Window>

      Note line 8, where we place a UserService instance in the resources section.
      On Line 12 we start our UserSettings controller
      On Line 15, we start our main wizard.

    4. Let's not go into the usersettings controller; it's just too simple. It injects a view and sets the datacontext to the userservice class, which it retrieved using the 'RetrieveObjectFromResources' activity.
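       The lookup such a RetrieveObjectFromResources-style activity performs could be as small as this. A hedged sketch (the activity's internals are not shown in this post, and the method shape is an assumption): TryFindResource walks up the logical tree and falls back to the application resources, so the UserService placed in Window.Resources under "globalUserService" is visible from any adapter below it.

       ```csharp
       using System.Windows;

       // Hypothetical resource lookup: starting at the adapter element,
       // TryFindResource searches up through parents, the window, and
       // finally Application.Resources, returning null if nothing matches.
       static object RetrieveObjectFromResources(FrameworkElement adapter, string resourceKey)
       {
           return adapter.TryFindResource(resourceKey);
       }
       ```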

    5. I then asked our designer (yup, that was me too... could you tell??) to design our individual views. Everywhere the designer knew he had to interact with the system, a command was created in a static class. That class turned out to be like this:

          public static class ImportantWizardInteractions
          {
              public static readonly RoutedUICommand Next;
              public static readonly RoutedUICommand Back;

              public static readonly RoutedUICommand GotoClientScreen;
              public static readonly RoutedUICommand GotoAdresScreen;
              public static readonly RoutedUICommand GotoRoleScreen;
              public static readonly RoutedUICommand GotoCarScreen;

              public static readonly RoutedUICommand Save;
              public static readonly RoutedUICommand SaveYes;
              public static readonly RoutedUICommand SaveNo;

              static ImportantWizardInteractions()
              {
                  Next = new RoutedUICommand("Next", "Next", typeof(ImportantWizardInteractions));
                  Back = new RoutedUICommand("Back", "Back", typeof(ImportantWizardInteractions));

                  GotoClientScreen = new RoutedUICommand("GotoClientScreen", "GotoClientScreen", typeof(ImportantWizardInteractions));
                  GotoAdresScreen = new RoutedUICommand("GotoAdresScreen", "GotoAdresScreen", typeof(ImportantWizardInteractions));
                  GotoRoleScreen = new RoutedUICommand("GotoRoleScreen", "GotoRoleScreen", typeof(ImportantWizardInteractions));
                  GotoCarScreen = new RoutedUICommand("GotoCarScreen", "GotoCarScreen", typeof(ImportantWizardInteractions));

                  Save = new RoutedUICommand("Save", "Save", typeof(ImportantWizardInteractions));
                  SaveYes = new RoutedUICommand("SaveYes", "SaveYes", typeof(ImportantWizardInteractions));
                  SaveNo = new RoutedUICommand("SaveNo", "SaveNo", typeof(ImportantWizardInteractions));
              }
          }

      And on individual screens, the commands were used like this:
                  <Border Background="Beige" BorderThickness="1" BorderBrush="Black" DockPanel.Dock="Left" Width="120">
                      <StackPanel>
                          <Label FontWeight="Bold">Previous screens</Label>
                          <Button Command="{x:Static local:ImportantWizardInteractions.GotoClientScreen}">Client</Button>
                          <Button Command="{x:Static local:ImportantWizardInteractions.GotoAdresScreen}">Adres</Button>
                          <Button Command="{x:Static local:ImportantWizardInteractions.GotoRoleScreen}">Role</Button>
                          <Button Command="{x:Static local:ImportantWizardInteractions.GotoCarScreen}">Car</Button>
                      </StackPanel>
                  </Border>
      (This is the list of buttons that are shown on the left hand side of the screen)
    6. The views were passed to the developer (guess who) and the following state machine was created:

      Obviously a thing of beauty.

      1. The state initialization will retrieve the userservice, load data, set our maincontent to the mainView and set our next state to clientDetails

      2. The clientDetails state has an initialization as well: it will set the datacontext to our customer and inject the ClientView as a datatemplate. Then it waits for only one command: Next. If that is triggered, it will simply move to the AdresDetails State.
        When moving out of a state, the state finalization is triggered, which will remove the view from the resources.
        Note how great it is never to have to think about that cleanup code again, it is always executed when moving out of a state.

      3. The adresDetails state has a few more commands it will listen to. When moving 'Next', a piece of logic is executed:
        There is a declarative rule in the IF/ELSE that goes a little something like this: this.GlobalUserService.IsAdministrator == True
        That rule is automatically put in the rules repository and can be used by others. It determines if the next screen will be the Role screen or the CarDetails screen.

      4. Role is simple.

      5. CarDetails also reacts to the Save-command. When it gets triggered, it will inject our popup into the resources section, and move on to the save state.

      6. The Save state will react to 'SaveYes' and 'SaveNo'. It will remove the popup from the resources, and go to another state.

    7. The views are dead simple, just binding to properties. However, the ImportantWizardMainView does require our attention. It has this definition for our subview:

                  <!-- our subview, uses name: CurrentWizardScreen -->
                  <ContentPresenter
                      Content="{Binding RelativeSource={RelativeSource FindAncestor, 
                      AncestorType={x:Type local:ImportantWizardMainView}, AncestorLevel=1}, Path=DataContext}" 
                      ContentTemplate="{DynamicResource CurrentWizardScreen}" />

      Apparently, when using a ContentTemplate, the datacontext is not inherited, so I have to set the Content up to react to the changing datacontext of our main screen. When the DataContext of ImportantWizardMainView changes, the Content of our presenter changes to match it. (Leave a comment if you know a simpler way to do this.)

    8. Also interesting is that I used a Grid on that view, with two children that overlay each other. The other child is our popup screen:

              <!-- our popup lives on top of that -->
              <ContentPresenter ContentTemplate="{DynamicResource PopupScreen}" />

      When we set a datatemplate with the name PopupScreen, it will be shown on top of our regular screen. I like it!
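    For reference, the declarative rule used in the AdresDetails state (step 6) could also be expressed as a code condition. This is a minimal sketch, assuming the workflow exposes the retrieved user service as a GlobalUserService property; you would wire the method to the IfElse branch as a CodeCondition instead of a declarative rule.

    ```csharp
    using System.Workflow.Activities;

    // Code-condition equivalent of: this.GlobalUserService.IsAdministrator == True.
    // Unlike the declarative rule, this version is not stored in the rules
    // repository, so it cannot be reused by other activities.
    private void IsAdministratorCondition(object sender, ConditionalEventArgs e)
    {
        e.Result = this.GlobalUserService.IsAdministrator;
    }
    ```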

    I have added a new activity, InjectViewAsDataTemplate. We already had the InjectControllerAsDataTemplate, but there are times you don't want a whole controller.

    I've replaced the original project file with the most recent. It can be found here.
    If you are interested in seeing more about this subject, please leave a short comment!


    Wednesday, 20 February 2008 11:55:27 (Romance Standard Time, UTC+01:00)  #    Comments [18]  |  Trackback
     Tuesday, 19 February 2008

    This is the second of a series about using Workflow Foundation to control your UI Logic in a WPF application. The full table of contents:

  • Workflow as controller: Introducing <M,V,C> where M: ViewModel, V : WPF, C : WF
  • Part II, starting the application, and the adapter
  • Intermezzo: new sample application
  • Part III, your first view
  • Part IV, decoupling view from controller
  • Part V, marshalling commands from WPF to WF
  • Part VI, Injecting a controller in a subview / workspace
  • Part VII, IOC on the cheap: injecting and retrieving objects
  • Part VIII, Broadcasting for all to see
    I thought it best to just put out that TOC, to force myself to actually write these short posts ;-)


    In the previous post, the complete solution was presented. I am presenting a solution that uses workflow as the controller part in your MVC-inspired WPF application. It is inspired by the thought that you do not need complex frameworks, because WPF already gives you great power (routed eventing, resources). So, no IOC is used, no event aggregator, etcetera: it's taken care of by WPF and WF, a natural fit.
    The solution is very decoupled, and I feel it's a great advantage to be able to visualize your control logic.

    Starting the application / Shell

    The term 'Shell' is used to indicate a startable 'host' for your application. In WPF, that is probably your App.xaml view. In there, you point to a startupUri of a window. We do not need anything different for our application, but we do need to start the workflow runtime.

    I have chosen not to build a generic application.start method, because I am still thinking about threading. For now, I have chosen to use the ManualWorkflowSchedulerService to let the workflow instances do their thing. Normally the workflow runtime uses a worker thread to execute the workflow instances in the background. That means that when you send a command to the workflow, it will run on a background thread. That sounds great, but will give you some pain when changing data that is bound to the UI thread. For this first version, I did not want that pain, so I used the ManualWorkflowSchedulerService. Now, the workflow instance will do nothing until we explicitly donate our (UI) thread to it.

    Starting the runtime is simple:

      1         public App()
      2         {
      3             // start a workflow runtime
      4             workflowRuntime = new WorkflowRuntime();
      6             ManualWorkflowSchedulerService manualSvc = new ManualWorkflowSchedulerService(false);
      7             workflowRuntime.AddService(manualSvc);
      9             ExternalDataExchangeService dataSvc = new ExternalDataExchangeService();
    10             workflowRuntime.AddService(dataSvc);
    11             dataSvc.AddService(new CommandService(workflowRuntime));    // add our generic communication service
    15             workflowRuntime.StartRuntime();
    16             workflowRuntime.WorkflowTerminated += new EventHandler<WorkflowTerminatedEventArgs>(workflowRuntime_WorkflowTerminated);
    17             workflowRuntime.WorkflowAborted += new EventHandler<WorkflowEventArgs>(workflowRuntime_WorkflowAborted);
    18             workflowRuntime.WorkflowCompleted += new EventHandler<WorkflowCompletedEventArgs>(workflowRuntime_WorkflowCompleted);
    21             ControllersAdapters.WorkflowRuntimeHolder.SetCurrentRuntime(workflowRuntime);
    23             this.Exit += new ExitEventHandler(App_Exit);
    24         }

    At line 7, the ManualWorkflowSchedulerService is indeed added to the runtime.
    At line 11, our own communication class (CommandService) is added to the runtime. You can interpret the runtime as a global object container: whenever we want to use that commandService singleton, we can just ask the runtime for it.
    Line 15 starts the runtime, and lines 16 through 18 hook up some event handlers; we'll cover those next.
    Line 21 sets this runtime on a static property for the controllerAdapters to fetch. A quick and dirty solution.
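    That static holder could look something like this. A hedged sketch (only SetCurrentRuntime appears in the post; the getter's name and the field are assumptions): a deliberately quick-and-dirty service locator so the adapters can fetch the single runtime without any IOC container.

    ```csharp
    using System.Workflow.Runtime;

    // Hypothetical static holder: the App sets the runtime once at startup,
    // and every GenericWorkflowAdapter fetches it from here later on.
    public static class WorkflowRuntimeHolder
    {
        private static WorkflowRuntime current;

        public static void SetCurrentRuntime(WorkflowRuntime runtime)
        {
            current = runtime;
        }

        public static WorkflowRuntime GetCurrentRuntime()
        {
            return current;
        }
    }
    ```

    The trade-off is the usual one for service locators: it is trivially simple, but it hides the dependency and assumes a single runtime per application.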

    The events that we subscribe to are handled as follows:

      1         void workflowRuntime_WorkflowCompleted(object sender, WorkflowCompletedEventArgs e)
      2         {
      3             ICommandService cmdsvc = workflowRuntime.GetService(typeof(ICommandService)) as ICommandService;
      5             cmdsvc.SendMessage(new InstanceWasRemovedMessage(e.WorkflowInstance.InstanceId));
      6         }
      8         void workflowRuntime_WorkflowAborted(object sender, WorkflowEventArgs e)
      9         {
    10             ICommandService cmdsvc = workflowRuntime.GetService(typeof(ICommandService)) as ICommandService;
    12             cmdsvc.SendMessage(new InstanceWasRemovedMessage(e.WorkflowInstance.InstanceId));
    13         }
    15         void workflowRuntime_WorkflowTerminated(object sender, WorkflowTerminatedEventArgs e)
    16         {
    17             ICommandService cmdsvc = workflowRuntime.GetService(typeof(ICommandService)) as ICommandService;
    19             cmdsvc.SendMessage(new InstanceWasRemovedMessage(e.WorkflowInstance.InstanceId));
    20         }
    22         void App_Exit(object sender, ExitEventArgs e)
    23         {
    24             workflowRuntime.StopRuntime();
    25         }

    As you can see, I fetch the command service from the runtime and ask it to send a message. The command service will 'broadcast' this message to all living controller adapters. When a workflow is finished, either by termination or just because it finished its process, we need to let the adapter know so that it can unsubscribe from the command service's events.

    The adapter

    The GenericWorkflowAdapter is a WPF control that handles the communication between WPF and WF. We will see pieces of it in the upcoming posts, but we'll need to go into a little more detail here.

        /// <summary>
        /// This is a WPF type that can be placed anywhere in your UI tree. It can be configured with a workflow type.
        /// When it is, it will instantiate the Workflow.
        /// This adapter will then be able to pick up WPF commands (RoutedUICommands) and send them to the workflow, as well
        /// as listen to events coming from the runtime, the commandsvc and the workflow instance
        /// </summary>
        public class GenericWorkflowAdapter : ContentControl, IWeakEventListener

    As you can see, it is a ContentControl. The workflow controller is able to place an arbitrary view as its content.
    It has one property, WorkflowControllerProperty (of type Type), which fires off the SetWorkflowController method when it is set.

      1         private void SetWorkflowController(Type type)
      2         {
      3             // actually start the controller!
      4             instance = runtime.CreateWorkflow(type);
      5             instance.Start();
      7             // allow it to do it's thing
      8             threadSvc.RunWorkflow(instance.InstanceId);
    10             Debug.WriteLine(String.Format("Adapter has started workflow instance {0}, of type {1}", instance.InstanceId, type.ToString()));
    13             // we will filter commands to only manage commands that we have defined in our workflow
    14             // so we have to walk recursively through all activities
    15             IEnumerable<System.Workflow.ComponentModel.Activity> flattenedActivities =
    16                 (instance.GetWorkflowDefinition() as System.Workflow.ComponentModel.CompositeActivity).EnabledActivities.
    17                 SelectRecursiveSimple(activity => (activity is System.Workflow.ComponentModel.CompositeActivity) ?
    18                     ((System.Workflow.ComponentModel.CompositeActivity)activity).EnabledActivities :
    19                     new System.Collections.ObjectModel.ReadOnlyCollection<System.Workflow.ComponentModel.Activity>(new List<System.Workflow.ComponentModel.Activity>()))
    20             ;
    22             // let's get the handlecommands
    23             var commands = flattenedActivities.Where(act => act is HandleCommand).Select(act => ((HandleCommand)act).CommandName)
    24             ;
    26             implementedCommands = new ReadOnlyCollection<string>(commands.ToList());
    28             SetupCommandSinks();
    29         }

    As you can see, this method goes to the workflow runtime and asks it to spin up a workflow instance. It then donates its thread to actually 'run' the instance. The workflow instance probably has initialization code attached to it; that code gets run at this point.

    At line 15, I use LINQ to walk recursively through every activity defined in the workflow and pick out the HandleCommand activities. These are activities that wait for a command and act upon it. I need to know which commands this workflow might respond to, so I create a read-only collection of their names. Later, we will only let the adapter pass on commands that are actually implemented by the workflow!
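    The SelectRecursiveSimple extension used at line 15 is not shown in the post. A minimal sketch of what such a depth-first flattening helper could look like (the actual implementation in the download may differ):

```csharp
using System;
using System.Collections.Generic;

public static class EnumerableExtensions
{
    // Flatten a tree depth-first: yield each element, then everything
    // the childSelector returns for it, recursively.
    public static IEnumerable<T> SelectRecursiveSimple<T>(
        this IEnumerable<T> source, Func<T, IEnumerable<T>> childSelector)
    {
        foreach (T item in source)
        {
            yield return item;
            foreach (T child in childSelector(item).SelectRecursiveSimple(childSelector))
            {
                yield return child;
            }
        }
    }
}
```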

    At line 28, there is a call to set up the command sinks:

        private void SetupCommandSinks()
        {
            // set up command sinks
            CommandManager.AddExecutedHandler(this, CmdExecuted);
            CommandManager.AddCanExecuteHandler(this, CmdCanExecute);
        }

    Here you see the simple code that registers this ContentControl to handle RoutedUICommands from WPF. As you can probably guess, when a command reaches these handlers, it is filtered against the 'implementedCommands' collection we built earlier; if it is implemented AND currently enabled, the command is posted to the workflow.
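    The handler bodies themselves are not shown above. A sketch of how that filtering could look inside the adapter class, assuming a command service with invented method names (IsCommandEnabled and PostCommand are not the actual API from the download):

```csharp
private void CmdCanExecute(object sender, CanExecuteRoutedEventArgs e)
{
    RoutedUICommand command = e.Command as RoutedUICommand;
    if (command == null || !implementedCommands.Contains(command.Name))
        return; // not one of ours: let other handlers have a go

    // ask the command service whether the workflow currently enables it
    e.CanExecute = commandSvc.IsCommandEnabled(instance.InstanceId, command.Name);
    e.Handled = true;
}

private void CmdExecuted(object sender, ExecutedRoutedEventArgs e)
{
    RoutedUICommand command = e.Command as RoutedUICommand;
    if (command == null || !implementedCommands.Contains(command.Name))
        return;

    // post the command (plus its parameter) to the workflow
    commandSvc.PostCommand(instance.InstanceId, command.Name, e.Parameter);
    e.Handled = true;
}
```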

    I have hooked up two events, LostFocus and GotFocus, to also send commands to the workflow. If the workflow chooses to do so, it can handle these. I use them to add and remove options in the shell menu.

    The last thing to cover is the ReceiveWeakEvent method. This adapter registers itself with the commandService, and the commandService subscribes the adapter to a few events. It uses weak events to do so, so that the lifespan of this adapter is not tied to the commandService (which lives forever).
    There are a whole host of messages that can be sent in the system, and the ReceiveWeakEvent method implements different behavior for each of them: it looks at the arguments that were passed and checks for a specific type.

    (I might refactor that, to actually put the logic into the messages).
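    A sketch of the type-switching such a ReceiveWeakEvent implementation boils down to (the message types here are illustrative, not the actual ones from the download):

```csharp
public bool ReceiveWeakEvent(Type managerType, object sender, EventArgs e)
{
    // the commandService raises different message types; dispatch on them
    if (e is SetViewMessage)
    {
        this.Content = ((SetViewMessage)e).View;
        return true;
    }
    if (e is SetDataContextMessage)
    {
        ((FrameworkElement)this.Content).DataContext =
            ((SetDataContextMessage)e).ViewModel;
        return true;
    }
    return false; // not handled
}
```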

    That's it for now, in the next post we will actually get our hands dirty and put together our first application!

    Tuesday, 19 February 2008 11:36:14 (Romance Standard Time, UTC+01:00)  #    Comments [0]  |  Trackback
     Monday, 18 February 2008

    In this post and a few upcoming posts, I would like to present a solution I have built that uses Workflow as the controller for your WPF applications. I wish I could call it a framework and think of a great name for it, but it does not aim to solve all your UI-building problems in one go. It does, however, offer a very easy way to build a loosely coupled application driven by WF, and it could serve as a base for your own solution.

    Table of contents

    This is the first of a series about using Workflow Foundation to control your UI Logic in a WPF application. The full table of contents:


    Like I mentioned in a blog post here, Josh Smith writes about using MVC in a WPF application where he does not use a funky IOC-container to help him build a MVC architecture, but uses the WPF framework itself to accomplish most of it.
    This resonated with me, because I had just left a project where the combination of WPF and CAB did not make good on all its promises. The team sometimes felt the combination was overly complex.

    Also, Jeremy Miller writes about implementing all of the different aspects of CAB yourself. While doing so, he reasons (my interpretation) that it's best to build the simplest solution that precisely fits your problem, instead of using big-time frameworks that abstract away so much that you start to feel constrained.

    A great little post by Rob Teixeira concludes that most frameworks are way too complex to really use.

    I have had a bit of experience using WF on the server side, but have always thought WF would be an excellent fit for the UI as well. When building complex UIs, I would like nothing better than to be able to invite a business analyst to sit next to me and just show him what will happen when a button is pushed.
    I have had a team build a large UI for a LOB application. Although at first glance it looked very simple, there was always much more going on than you would expect. Having a visual representation of the flow of actions in your program is a good thing.

    This project aims to provide the most straightforward plumbing possible to get the job done. It tries to be explicit and make it easy for you (the developer) to do the right thing. The hope is that the use of WF gives your application some sort of DSL feel.

    So, what does this mean

    First, what does the solution not do:

    • It is explicitly not an IOC based solution (but perhaps you don't need that)
    • It is not a complete eventing mechanism (although controllers are able to communicate just fine)
    • It is not a finished solution (I might have called it a framework then!)

    What it is, is this:

    • It is a suggestion for how you could very easily use workflows as a controller
    • It combines some fun tricks I've learned, that will facilitate us here
    • It uses the native power of WPF, so there are no new concepts to learn just because you have a ShinyNewFramework: if you understand WPF, you understand how to hook things up
    • It uses the native power of WF to create your controller logic. This translates into a very descriptive use case with easy handoff between developers, and opens up the possibility of letting your business analysts create the first draft themselves! WF always feels like a cheap-ass DSL to me.
    • It is one adapter class, a couple of activities and a command service. Very easy to understand and adjust to fit your own needs
    • It facilitates loose coupling to the extent that your views and your controllers do not need references to each other
    • It is message based
    • Excellent testability, because of loose coupling and messages.


    Show us the goods

    I have uploaded the goods zipped here.
    It needs .net framework 3.5.

    In the previous post, I explained how you could combine the controller and WPF in one project. It turns out this does not work as well as it should: sometimes I get build errors that are not real. It's fine to have logic and views separate for the real sample, but the shell now consists of two projects as well, which may seem like overkill.

    I only unit tested one small view, to show how you can go about testing bindings and testing the controller separately. I use TypeMock for this, so you might need to unload that project. (I'm still considering TypeMock; it is pretty expensive for a one-man shop.)

    What's in it

    The real stuff is very small.

    • project ControllersAdapters, with only one file: a ContentControl which acts as an adapter to your controllers [8 kb dll]
    • project WorkflowCommunications, which has the service we use to translate in and out of the controller, plus 6 custom activities that do specific things [27 kb dll]

    That is all you need.
    I have loosely implemented the BankTeller application from CAB, or rather from the SmartClientContrib 1.1 WPF for CAB. I did not look too closely at their implementation details; I just copied the xaml and the domain model and built part of it myself, to discover what was needed to build a real application.


    The sample consists of a Shell, Domain, Logic, Views and Test project. It demonstrates how one could go about building such an application. I will follow up with a more detailed look at it. Suffice it to say, implementing it was a breeze.

    The thing with the BankTeller application is that its logic is too simple. So it mostly demonstrates hooking up views and datacontexts.

    Just to give you a quick glance at what logic in a workflow looks like:


    (Here you see what happens when a new customer is selected in the listview: it checks whether the customer is not null, and then sets a customer info view and a customer summary view. If it is null, the views are removed from the visual tree.)


    Go into more detail, please

    Well, I will follow up with more posts if there is interest. This post has dragged on long enough, so I will keep it very short for now.

    The concepts are:

    1. Use WPF resources as an excellent container for objects. Resource lookups work hierarchically, so it's actually pretty powerful on its own. There are two activities, Inject- and RetrieveObjectFromResource, that will put or retrieve an arbitrary object into the resource section of the adapter. This could be a service or something else.
    2. Use WF as an event aggregator. All workflows are registered with the runtime, and all adapters subscribe (with weak events) to their workflow. So it's easy to send messages around.
    3. Use WPF Commands to communicate from the View to the Controller. Commands go upstream. I have made it easy for a controller to handle a command (just drag a HandleCommand to the screen). I've also made it possible to use rules to determine if the command 'CanExecute'. So you could do a command 'AcceptCustomer' and bind it to a button. The Controller will determine if the command can be executed. (When the customerqueue is empty in the sample, the button to accept a customer is disabled automatically).
    4. Use WPF DataTemplates to inject UI by the Controller. The View can sprinkle ContentPresenters around (with ContentTemplates bound to DynamicResources). The controller will choose what piece of UI to inject as the resource. (cool stuff!)
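    Point 4, in XAML terms, could look something like this (the resource key is invented for the example); the controller then injects a DataTemplate under that key at runtime:

```xml
<!-- the view reserves a spot; the controller decides what goes there -->
<ContentPresenter Content="{Binding SelectedCustomer}"
                  ContentTemplate="{DynamicResource CustomerDetailTemplate}" />
```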

    The most important class is the GenericWorkflowAdapter that can be placed into the UI like this:

    <c:GenericWorkflowAdapter WorkflowController="{x:Type l:ShellLogic}" />

    Here we tell it to use the workflow controller ShellLogic as its 'boss'.
    The adapter hooks into the RoutedUI commands coming from WPF, and when a command arrives that the workflow wants to react to, it sends it on to the workflow. The workflow will react to it.

    Then, there is the CommandService, which defines the communication between the workflow and the runtime. The adapters use it to send messages to their workflow; the workflow uses it to communicate back to the adapters.

    There are custom activities to do specific UI things: setting a controller in the UI, putting an object in the resources, setting the datacontext of a view to your ViewModel, and actually setting the Content of the adapter to its View.

    More details will follow.


    In order to pull this off, a few things were hacked:

    • I created a much easier way to register commands on a workflow. Just drag a HandleCommand into an EventDriven activity, set its CommandName (the string it will react to) and you're off. The normal WF paradigm says you have to create an interface and possibly even implement correlation. Not productive for what we are trying to achieve.
    • Getting the workflow to communicate back to the adapter causes a clone of the message to be made. Since we don't want that, I implement ICloneable to return 'this'. Works well, but you have been warned.
    • At one point I use a delegate that is passed to the workflow, that lets it get data from the commandService on the fly.
    • In order to use the custom activities, I needed to let the user (you) select types (what view you wish to inject, what controller you want to instantiate). I've had to jump through hoops to get it working. See this blog post.
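    The clone hack from the second bullet is essentially a one-liner. A sketch, with an invented message type name:

```csharp
using System;

[Serializable]
public class UiMessage : EventArgs, ICloneable
{
    // WF clones data crossing the workflow boundary; by returning 'this'
    // we share the single instance with the adapter instead of a copy.
    // It works, but it deliberately subverts the isolation WF intends.
    public object Clone()
    {
        return this;
    }
}
```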

    What is next

    I've had great fun implementing this. After a few refactorings, it turned out to be extremely simple. I'm interested in seeing what you think. If there is some interest from the community, it could easily be taken to the next level. However, at this point it was just a nice experiment for me. Let me know what you think of the idea!!


    Monday, 18 February 2008 12:19:33 (Romance Standard Time, UTC+01:00)  #    Comments [2]  |  Trackback
     Thursday, 14 February 2008

    Just ran into a little bug in Visual Studio/msbuild and could not find any answers in the forums, so I thought I'd put it up here for reference:

    When you want to use both workflow classes and WPF classes in one project, you will run into some strange behaviour. Let's walk through it together.
    If you want to skip the newbie stuff, jump to step 11 to see the bug.

    1. create a WPF project.
    2. unload the project and choose to edit the project file
    3. somewhere in the beginning of the file, you will find the following line:
      That is how Visual Studio identifies this as a WPF application project; when you add an item, you will be able to choose a WPF item.
    4. You wish to be able to compile WF items, so add to the bottom of the file the correct import for the WF tasks:
      <Import Project="$(MSBuildExtensionsPath)\Microsoft\Windows Workflow Foundation\v3.5\Workflow.Targets" />
    5. Note that WPF in the past needed the import of winfx, but with framework 3.5 you don't need that anymore!
    6. At this point you are able to copy a workflow or activity into your project and compile, but you also want to be able to add WF items to your project, so scroll to the top of the file again.
    7. Add the Guid that identifies a WF project ({14822709-B5A1-4724-98CA-57A101D1B079};) to your ProjectTypeGuids tag. The complete tag should be on one line (!) and look like this:

    8. Save the file and reload the project.
    9. Before adding the first workflow to the project, first add references to System.Workflow.Runtime, System.Workflow.Activities and System.Workflow.ComponentModel.
    10. Add your first glorious workflow to the project.
    11. Be prepared to be disappointed: the project will not compile. Mine gave this error:

      Error    1    Error reading resource file 'j:\Users\Ruurd\Documents\Visual Studio 2008\Projects\WF_and_WPF_combined\WpfApplication1\obj\Debug\WpfApplication1.obj.Debug.WpfApplication1.g.resources' -- 'The system cannot find the file specified. '    J:\Users\Ruurd\Documents\Visual Studio 2008\Projects\WF_and_WPF_combined\WpfApplication1
    12. Notice the weird path of the resource file: it is looking for a name with dots instead of path separators. Strange.
      In Windows Explorer, go to the obj/Debug folder and create a copy of the .g.resources file with that weird name. I wanted to automate it though, so go to the properties of your project and use this as your pre-build script:
      IF EXIST "$(ProjectDir)obj\$(ConfigurationName)\$(TargetName).g.resources" (copy /-Y "$(ProjectDir)obj\$(ConfigurationName)\$(TargetName).g.resources" "$(ProjectDir)obj\$(ConfigurationName)\$(TargetName).obj.$(ConfigurationName).$(TargetName).g.resources") ELSE (echo "placeholder" > "$(ProjectDir)obj\$(ConfigurationName)\$(TargetName).obj.$(ConfigurationName).$(TargetName).g.resources")

      This checks whether you already have a g.resources file and, if so, copies it. Otherwise it will generate a placeholder file with the correct name. At least the project will build without problems.

    I have not tested this a lot yet. It seems to me that when the placeholder is created, there could be resources that cannot be found. During some quick and dirty tests, I have not had any problems yet and everything works just fine.

    Hope this helps someone out there.

    update: weird stuff. I have this running just fine in a couple of projects, but I have one project that throws an exception during a rebuild (not a build) in the CompileWorkflowTask. In other 'combined' projects, I can happily build and rebuild using the steps above.

    This is probably caused by WPF renaming the project file to a tmp_proj file, while the CompileWorkflowTask validates its parameters like so:

            if ((string.Compare(this.ProjectExtension, ".csproj", StringComparison.OrdinalIgnoreCase) != 0) &&
                (string.Compare(this.ProjectExtension, ".vbproj", StringComparison.OrdinalIgnoreCase) != 0))
            {
                base.Log.LogErrorFromResources("UnsupportedProjectType", new object[0]);
                return false;
            }

    The logging statement is the one giving the pain.


    Thursday, 14 February 2008 13:44:34 (Romance Standard Time, UTC+01:00)  #    Comments [9]  |  Trackback
     Wednesday, 13 February 2008

    I'm working on a sweet project at the moment using both WPF and WF. One of my custom activities has a property of type Type, and it would be cool for the user of the activity to be able to select a type in the designer, just as happens in the WF designer when I choose a type. However, no type picker popped up.

    So I went googling and found that Daniel Cazzulino also ran into this problem and created a fantastic little project to harness the power of the real WF typebrowser. He writes about it on this blogpost and later moves the project to code project. You can find the article and his download code here.

    However, as you can read in the comments, something was broken. Looking through the code, small though it is, I did not want to spend time at this point understanding the System.ComponentModel namespace in that much detail ;-) (although when working with WF you will soon need to customize property pickers, so I will have to look into it someday soon).
    Daniel himself points to the patterns & practices Entlib library: it offers the same functionality. I downloaded their source code, and I'm quite sure they just took Daniel's code and improved upon it a bit. However, with all the Entlib references, the project felt a bit heavy.

    What I have done is rip out all the references to Entlib that I do not care about, reused a few files from Daniel's original solution and worked around a few shortcomings. Nothing fancy; I just hacked at it until it worked.



    Now, since I have used some code (without license) by Daniel and code by the Entlib group, I'm not sure whether I can publish a derivative without getting into trouble. However, I've read their license, and I think it's okay.

    You can download the project here, don't ask for changes because I'm not interested in spending more time on it. All credits go to Daniel.

    (Also, find out how to create your own typefilters in his post).

    Have fun with it. Leave a comment if you find it useful.

    Wednesday, 13 February 2008 13:30:24 (Romance Standard Time, UTC+01:00)  #    Comments [9]  |  Trackback
     Monday, 03 September 2007

    This is turning into a hassle. I must confess that I feel that Microsoft does not have a good story on this one!

    When thinking about versioning within the realm of workflow, there are a few things you have to know:

    • You will need to use strong signing for your processes, the activities, the External Data Exchange services and the items you put on the queue (we use these to correlate commands to queues, bypassing the weirdness of correlation in WF)
    • What is persisted to the datastore is a blob. That blob is created using serialization surrogates and use the normal binary serialization format. However, because of the surrogates, it is difficult (although not impossible) to touch your workflow instance directly, instead of going through the runtime. The surrogates are there for a reason: the serialization process of a workflowinstance is not a straight-forward process: all the activity contexts have to be serialized as well, as do the dependency properties etc.
    • The blob does not only persist your fields, but persists the complete structure of your running instance, called a template. So all the activities (initialized or not) are in that template.
    • Timers and their delays are persisted in a separate list by the surrogate. So, if your workflow instance is in a delay with 9 days left, this information is written in a timerCollectionList, with a guid pointing to the delay (remember, that delay is instantiated in a particular activityContext). It is not simple to correlate these. They are the main problem when you wish to just update your process.

    Microsoft does not offer a smart way to upgrade version 1.0 of your workflow instance to 2.0. When you have a version 1.0 instance in your database and make one little change to your process, rehydration will not work, failing with an index-out-of-bounds exception: remember that the persisted blob contains the full template of the instance. So when you change your process and add or remove an activity anywhere, the rehydration process tries to map the persisted template onto a type in your assembly and fails because of the different activity tree.

    Therefore, you can do two things:

    1. Run both assemblies side by side
    2. Use workflow changes to migrate your current version to a 2.0-compatible version.

    Let's start by discussing option 2. Say you have created version 2.0 of your workflow and try to rehydrate a 1.0 instance. Since you strong-named your assemblies, the runtime will throw an exception because it cannot find your old assembly. You can place an assembly redirect in your config, telling the CLR to use the version 2.0 assembly to instantiate your 1.0 blob. This will then fail because of the changed structure of your template.
    The solution at hand is to use workflow changes to get in there and change the structure of that 1.0 template to match that of a 2.0 version. You can of course only do this by loading the old assembly side by side, but now you only have to do that once, during an update batch. After that, your normal application is able to use your 2.0 assembly to instantiate your 1.0 (but now structurally modified) instance.

    The problem here, is that you have to build big workflow change scripts. I have not yet seen someone automate that (do a diff on the templates and generate the workflow changes). If that were available and rock-stable, this might be a good strategy to take. Until then, it's way too much work. (Let me know if this turns out to be super-simple!)

    Option 1 is bad as well. Sure, loading your old assemblies is possible. But what Microsoft forgot is that I want to change my external data exchange service as well (if only in version number), and also the objects that I put on my queues. Since your old 1.0 process expects a 1.0 service to talk to, or 1.0-version commands, it will not be able to communicate!! This can be mitigated by adding the 1.0 external service to the runtime when loading the 1.0 assemblies, and maybe only using BCL types on the queues, but it's really a shame to have to do that. Certainly when you have processes that last 5 years and you have 20 versions to keep up with.....

    My advice is to really try to understand the way your application will use workflow. For us, I was able to make these assumptions:

    • There are a few states that need to be monitored by a delay activity of, say, 20 days. When, after those 20 days, our process has not moved out of that state, something needs to happen.
    • Most states do not need that. Therefore, I can actually bring the process to a completed state. That is not what I would prefer; however, since the state of a process can be derived from my domain objects, I can always construct the process at will and bring it into the correct state (with the new version!!). By completing the processes whenever possible, I will have a much smaller number of processes to deal with in the datastore.
    • Most importantly: I found out that after processing an external event, the process will always return to a state. It will never start a delay of more than a few minutes within a sequence. So I can guarantee that my workflow, when persisted, is not waiting inside a while-loop or whatever. All long delays are the first child of an eventdriven activity.

    The last point reduces our problem big time, because it would be nearly impossible to build an update for a workflow instance that is waiting in the middle of a sequence. Basically, this is what we will be doing in our project: build an update batch that loads version 1.0 instances, kills them, creates 2.0 instances and writes those back to the database with the same guid.

    The steps to build an update batch are:

    1. use workflow tracking or something similar to write the version of your process to your datastore. We have an Oracle persistence layer; when we built it, we added a 'type' column to the database and write the full name of the type in there (which includes the version number).
    2. load your old assembly so you can instantiate the blob
    3. instantiate the blob using reflection to directly get access to your instance
    4. do an export of your fields and other stuff you need. You know your processes intimately, so this should not be a problem
    5. delete the row from the datastore  (remember to start a transaction!)
    6. create a new type, using the runtime and the guid of the old instance
    7. call an import event or whatever, that the process will use to bring itself to the correct state
    8. persist

    The hard part is the delays. Basically, you can find the list of timers using reflection. However, it is cumbersome to correlate the guids to the correct delay activities. My solution would be the following: during state changes within your process, keep a dictionary of the state name and the moment (DateTime) you transitioned into it. When importing, use this list and the delay's InitializeTimeoutDuration event to set up your timer: normally the timeout would be DateTime.Now.Add(timeoutLength); this time, you base it on the original transition DateTime, so your delay activity will not have been 'reset'.
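    A sketch of that bookkeeping, with invented names (stateEnteredAt would be filled during normal state transitions, then exported and re-imported by the update batch; the 20-day duration and state name are examples):

```csharp
// record the moment of every state transition
private readonly Dictionary<string, DateTime> stateEnteredAt =
    new Dictionary<string, DateTime>();

private void OnStateChanged(string stateName)
{
    stateEnteredAt[stateName] = DateTime.Now;
}

// wired to the delay's InitializeTimeoutDuration event; after an import,
// compute the remaining time relative to the original transition moment
private void delay_InitializeTimeoutDuration(object sender, EventArgs e)
{
    DelayActivity delay = (DelayActivity)sender;
    TimeSpan full = TimeSpan.FromDays(20);
    TimeSpan elapsed = DateTime.Now - stateEnteredAt["WaitingState"];
    TimeSpan remaining = full - elapsed;
    delay.TimeoutDuration = remaining > TimeSpan.Zero ? remaining : TimeSpan.Zero;
}
```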

    It's not pretty, and it will be necessary to put constraints on your processes. But it might work just fine for you! Let me know..

    Monday, 03 September 2007 18:59:38 (Romance Standard Time, UTC+01:00)  #    Comments [7]  |  Trackback
     Wednesday, 09 May 2007

    One very hot issue in workflow foundation is the trouble you run into when you want to handle an external event in your workflow multiple times, with the correct activity invoked based on some arbitrary piece of data. This can be very helpful if you want to use just one event, for instance 'procesCommand', and issue different commands to your workflow. So, based on the eventargs of the procesCommand event, a different activity will be executed. With such a system, very rapid development is possible, and I like the idea of issuing my workflow 'commands'.

    With the standard 'handle external event activity' [HEEA], this is entirely possible. However, its usage is seriously hampered by the need to set up correlation beforehand. Let's say you have a statemachine, and in some state we have a few eventdriven activities. The first child activity of these eventdriven activities is always a HEEA. Since we just want to raise one event, each HEEA is configured with the same event on your external data service. To let the system know which HEEA should react when you raise your 'procesCommand' event, you have to set up a correlation token.
    You will thus set up one correlation token for each HEEA that should react in your state. This can be done during the state initialization, so the defining of the token happens some place other than the configuring of the HEEA itself. The whole process is very cumbersome with more than a few HEEAs to configure, especially because, when you configure a HEEA to use a correlation token, Visual Studio presents you with a dropdown list of all the tokens it knows about, including the ones from other states.

    I do not like this mechanism. It's very error-prone, counter intuitive and basically a load of crap.

    A great solution would be to build your own version of the HEEA, which is just configured with a string identifying the command it will react to. Seems easy enough: you implement IEventActivity and possibly IActivityEventListener&lt;QueueEventArgs&gt;, and you're done! There is a great example in the SDK that does this for the filesystem. However, while doing this I found out that when the queue name is not unique, only the first IEventActivity gets the Subscribe call. This means that setting up multiple event activities with the same queue name (for instance 'procesCommandQueue') is very hard.

    Enter the correlation service: the WF team created an elaborate service that works by registering 'followers' (the HEEA activities that will not get the Subscribe call) and delegating to the first HEEA (the one that did get the Subscribe call) the responsibility of notifying its followers when a message is picked up from the queue.
    (In case you haven't read Essential Windows Workflow Foundation by Dharma Shukla and Bob Schmidt: you should, it has an essential introduction to queues, which are the foundation of workflow, which, coincidentally, is the title of their book ;-) ).

    That correlation service is not something to be proud of, and also not something you would want to build yourself. But without it, only your first procesCommand activity will be able to react to your event, and it might not be the one that should react!

    This explains the cowardly piece in the title of this post: by making each queue name unique, all your problems go away. Therefore, when you set up your queue in the Initialize of your activity, name it 'procesCommandQueue' + this.CommandToReactTo. That gives it a unique name.
    Then, instead of raising an event, just put your eventargs on the queue of your workflow instance like this: instance.EnqueueItem('procesCommandQueue' + CommandYouWantToIssue, yourEventArgs, null, null).

    Since all your procesCommand handlers are subscribed to different queues, the correct one will pick up the eventargs and execute, and all is fine. Thus, the cowardly, but perfectly acceptable, way to go about this problem is to tackle it from the outside instead of the inside.
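    A sketch of what the queue setup in a custom activity's Initialize could look like (CommandToReactTo is the property suggested above; error handling omitted):

```csharp
protected override void Initialize(IServiceProvider provider)
{
    base.Initialize(provider);

    WorkflowQueuingService queuingService =
        (WorkflowQueuingService)provider.GetService(typeof(WorkflowQueuingService));

    // the command name makes the queue name unique, so every handler
    // gets its own Subscribe call and no correlation service is needed
    string queueName = "procesCommandQueue" + this.CommandToReactTo;
    if (!queuingService.Exists(queueName))
    {
        queuingService.CreateWorkflowQueue(queueName, true);
    }
}
```

    From the outside you then enqueue directly: instance.EnqueueItem("procesCommandQueue" + command, yourEventArgs, null, null).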

    Wednesday, 09 May 2007 21:08:00 (Romance Standard Time, UTC+01:00)  #    Comments [4]  |  Trackback
     Saturday, 28 April 2007

    Microsoft has gone to great lengths to keep you from touching your instance directly, forcing you to always use the WorkflowInstance class to manipulate your workflow from the outside. They have done this to make sure the integrity of the system is maintained. It is, however, a major pain in the ass. ;-)
    I have been looking into solving a serious deficiency of Windows Workflow Foundation: updating existing workflows in your database. Microsoft does not have a good story on that one, and the WF architecture seems designed to make it as hard as possible to actually pull off! One approach would be to get to your instance, use reflection to get to private and public fields, and create your new type, setting those fields as you go. Therefore, I have been looking into getting to my persisted instance.

    Please be very careful with the following technique. Do not use it lightly, because it does allow you to do things that will break the integrity. It _will_ get you into trouble, if you do not watch out.

    We are going to fetch a persisted workflow, but do not want the workflow runtime interfering. So:

    Database db = DatabaseFactory.CreateDatabase("##Your database##");
    byte[] act = (byte[])db.ExecuteScalar(CommandType.Text,
    "select state from InstanceState where uidInstanceID = '## the guid you are interested in ##'");

    This will directly query your database for some guid. (I am using Entlib 3.0 here.)
    The byte array returned is formatted with the BinaryFormatter. To be able to serialize activities, the WF team does not use the Serializable attribute, but uses surrogates. A surrogate is a class that has intimate knowledge of how to serialize a certain type. Looking at the class ActivitySurrogate, you will notice that they have quite a bit going on there. Most important seems to be the private sealed class ActivitySurrogateRef, which defines fields like these:

    private Activity cachedActivity;
    private Activity cachedDefinitionActivity;
    private EventHandler disposed;
    private string id = string.Empty;
    private int lastPosition;
    private object memberData;
    private object[] memberDatas;
    private string[] memberNames;
    private string rulesMarkup;
    private Type type;
    private string workflowChanges;
    private Guid workflowChangeVersion = Guid.Empty;
    private string workflowMarkup;

    So, it is clear our binaryformatter is going to have to reuse this logic. Thankfully, we can use the ActivitySurrogateSelector. This selector has logic that allows it to select the correct surrogate for each type. We set it like this:

    BinaryFormatter formatter = new BinaryFormatter();
    formatter.SurrogateSelector = ActivitySurrogateSelector.Default;

    Now it is time to deserialize our byte array. However, it is gzipped! So, let's decompress it:

    MemoryStream stream = new MemoryStream(act);
    stream.Position = 0;
    using (GZipStream stream2 = new GZipStream(stream, CompressionMode.Decompress, true))
    {
        // here we can finally deserialize
    }

    Inside that using-statement, we will deserialize. Here is the line that does the magic:

    T activity = (T)Activity.Load(stream2, workflowRuntime.CreateWorkflow(typeof(T)).GetWorkflowDefinition(), formatter);

    The 'T' is the type of your workflow.
    I am using the static Load functionality of Activity to load. We first pass in our unzipped stream (stream2) and then we have to pass the workflowdefinition of the type we are deserializing. Finally, we are using our own formatter, with the activity surrogate attached.

    Obviously, the use of the workflowruntime to get to the workflowdefinition is not pretty. Ugly, even. However, I have not been able to circumvent it: passing null will result in SerializationExceptions.
    I will update this post if I find a better way. Please leave a comment if you know of one!
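    Putting the pieces together, the whole round-trip might look like this (a sketch, assuming the Entlib query from above has already produced the byte array; `LoadPersistedWorkflow` is my own hypothetical method name):

    ```csharp
    // Sketch: decompress and deserialize a persisted workflow instance.
    // Assumes an initialized WorkflowRuntime; 'state' is the blob from
    // the InstanceState table.
    public static T LoadPersistedWorkflow<T>(byte[] state, WorkflowRuntime workflowRuntime)
        where T : Activity
    {
        BinaryFormatter formatter = new BinaryFormatter();
        formatter.SurrogateSelector = ActivitySurrogateSelector.Default;

        using (MemoryStream stream = new MemoryStream(state))
        using (GZipStream unzipped = new GZipStream(stream, CompressionMode.Decompress, true))
        {
            // Activity.Load needs the workflow definition of the type
            // we are deserializing, hence the ugly CreateWorkflow call.
            return (T)Activity.Load(unzipped,
                workflowRuntime.CreateWorkflow(typeof(T)).GetWorkflowDefinition(),
                formatter);
        }
    }
    ```

    Note that this creates a throwaway workflow instance just to obtain the definition, exactly the ugliness described above.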

    I can imagine that it is useful to be able to get to your instance directly. For now, the only way to get to it seems to be the moment the runtime releases it to the persistence store.

    I hope this helps someone!

    Saturday, 28 April 2007 17:43:52 (Romance Standard Time, UTC+01:00)  #    Comments [4]  |  Trackback
     Wednesday, 11 April 2007

    My project is migrating a big (very big) WinForms application to a WPF (XBAP) application. At this point, only the front-end is touched. The team will eventually move on to migrating the data-driven, procedural backend to a process-centered, domain-driven, WF (Workflow Foundation) managed solution.

    For this, we are looking for a few experienced C# developers. Obviously, WPF knowledge is a big plus. The project will last for quite a few months. When you walk away, you will have a deep understanding of WPF, CAB and WF.

    If this sounds like your cup of tea, please leave a comment or mail me directly. The project is based in The Hague.

    Wednesday, 11 April 2007 20:17:14 (Romance Standard Time, UTC+01:00)  #    Comments [8]  |  Trackback
     Tuesday, 10 April 2007

    I'm also heading up a team that will be migrating existing business processes to a workflow process layer. It's exciting, because Workflow Foundation (WF) allows me to model an entire process, instead of building small pieces and connecting them in code. This offers superior insight into the real business process and thus gives flexibility and power, because for the first time I can really sit down with an analyst and explain 'code'. (Our UML diagrams are outdated ;-) ). Because we then have a common understanding of the process, we can feel at ease when modifying it.

    Currently I'm working on having the workflow determine the 'actions' that a user (or machine) can perform in some state. WF has the ability to show the possible state transitions, and that seems to be the logical piece of information I need to query and present to our client-side code (which will then enable/disable certain commands in the screens). However, it is completely useless because of two things:
    1. it does not take into account the role a user is in;
    2. it will just display the possible transitions, but not the HandleExternalEventActivities (HEEA) that lead to them.

    Therefore, I have built my own query. I'm aware that I've probably overlooked some hidden-away functionality, but until then, my code will do perfectly fine!

    Given a workflow instance, I will first retrieve the waiting queues. Then I will iterate the WorkflowQueueInfo objects. In my case, only HEEA activities handle the queues; your workflow might differ. I will find that HEEA using the GetActivityByName method. Then I will check if it has roles assigned to it, and simply check if the given role is in that array.
    Next, I will look up the correlation token that might be used. If it is, I'm most interested in the correlation property. I will put that combination into my own struct (ProcesCommando), add it to the list and return it!

    ReadOnlyCollection<WorkflowQueueInfo> queues = instance.GetWorkflowQueueData();
    foreach (WorkflowQueueInfo info in queues)
    {
        foreach (string subscribedActivity in info.SubscribedActivityNames)
        {
            HandleExternalEventActivity heea =
                instance.GetWorkflowDefinition().GetActivityByName(subscribedActivity)
                    as HandleExternalEventActivity;

            Debug.Assert(heea != null,
                "Currently only expecting HandleExternalEventActivities");

            #region check roles
            if (heea.Roles != null)
            {
                // there are roles defined, so we need to check if the given role is included
                bool inRole = false;
                // TODO: use predicate
                foreach (WorkflowRole workflowRole in heea.Roles)
                {
                    if (workflowRole.Name.Equals(role.Name))
                    {
                        inRole = true;
                    }
                }

                if (!inRole)
                {
                    continue; // next subscribed activity.
                }

                // apparently the webworkflowrole does not implement equals and gethashcode
                // correctly, so we can't do a 'contains':
                // if (!heea.Roles.Contains(role))
                // {
                //     // it does not, so this subscribed activity should never be executed
                //     continue;
                // }
            }
            #endregion

            #region possible correlation
            string correlatie = String.Empty;
            if (heea.CorrelationToken != null)
            {
                // there is a correlationtoken, so let's get the correlationproperty
                EventQueueName queuename = info.QueueName as EventQueueName;
                CorrelationProperty[] corProps = queuename.GetCorrelationValues();

                Debug.Assert(corProps.Length == 1,
                    "Currently expecting exactly one correlation value");
                correlatie = corProps[0].Value.ToString();
            }
            #endregion

            ProcesCommands.Add(new ProcesCommando(heea.EventName, correlatie));
        }
    }

    return ProcesCommands;
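
    The ProcesCommando struct itself is not shown above; a minimal version might look like this (a sketch; the property names are my assumption):

    ```csharp
    // Sketch of the ProcesCommando struct used above: it pairs the event
    // name of a HandleExternalEventActivity with an optional correlation value.
    public struct ProcesCommando
    {
        private readonly string eventName;
        private readonly string correlation;

        public ProcesCommando(string eventName, string correlation)
        {
            this.eventName = eventName;
            this.correlation = correlation;
        }

        public string EventName { get { return eventName; } }
        public string Correlation { get { return correlation; } }
    }
    ```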


    Tuesday, 10 April 2007 19:27:21 (Romance Standard Time, UTC+01:00)  #    Comments [8]  |  Trackback