Thursday, 28 February 2008

After writing so much about my own MVC implementation for WPF, I'm happy to see the birth of Prism. This is what the site has to say about it:

"Prism" addresses the challenges around building complex enterprise WPF applications. As the complexity increases and the teams grow, the application becomes increasingly difficult to maintain. Using "Prism" enables designing a composite application that is composed of many discrete, loosely coupled modules. These modules can be developed, tested and deployed by separate teams.
It provides the following benefits:

  • Provides complete support for WPF
  • Dynamically composes user interface components
  • Application modules are developed, tested and deployed by separate teams
  • Allows incremental adoption
  • Provides an integrated user experience

"Prism" is not a port of previous smart client offerings; instead it is a new deliverable that is optimized for WPF. It aims to deliver a simplified approach that is more easily adoptable.

Very exciting!! Although I will of course be disappointed if there is not a good way to integrate WF into it.

Thursday, 28 February 2008 18:03:16 (Romance Standard Time, UTC+01:00)  #    Comments [2]  |  Trackback

Just a random piece of code that I thought was handy: when you are experimenting with XML, you probably want to see the XML quickly and easily. For instance, when you are using the DataContractSerializer to serialize a type, you want to see how the result looks. But it gets printed on one line!! That's not useful.

Use something like the following code:

            MemoryStream m = new MemoryStream();
            XmlTextWriter tw = new XmlTextWriter(m, Encoding.UTF8);
            tw.Formatting = Formatting.Indented;
            tw.Indentation = 4;
            tw.IndentChar = ' ';
            s.WriteObject(tw, p);
            tw.Flush();
            m.Position = 0;
            StreamReader sr = new StreamReader(m);
            string strOutput = sr.ReadToEnd();
            Debug.WriteLine(strOutput);
 
Where s represents your DataContractSerializer and p the object you are serializing.
This outputs glorious XML to the debug output window:
<Person xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.datacontract.org/2004/07/DomainModel">
    <FirstName>Ruurd</FirstName>
    <LastName>Boeke</LastName>
    <Orders>
        <Order>
            <Amount>5</Amount>
            <ProductID>10</ProductID>
        </Order>
        <Order>
            <Amount>12</Amount>
            <ProductID>11</ProductID>
        </Order>
        <Order>
            <Amount>2</Amount>
            <ProductID>1</ProductID>
        </Order>
    </Orders>
</Person>
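For convenience, here is the same trick as a self-contained helper; the Person type below is a trimmed stand-in for the data contract that produced the output above:

```csharp
using System;
using System.IO;
using System.Runtime.Serialization;
using System.Text;
using System.Xml;

[DataContract(Namespace = "http://schemas.datacontract.org/2004/07/DomainModel")]
public class Person
{
    [DataMember] public string FirstName { get; set; }
    [DataMember] public string LastName { get; set; }
}

public static class XmlDump
{
    // serialize any object graph with DataContractSerializer and return indented XML
    public static string ToIndentedXml(object graph)
    {
        DataContractSerializer s = new DataContractSerializer(graph.GetType());
        MemoryStream m = new MemoryStream();
        XmlTextWriter tw = new XmlTextWriter(m, Encoding.UTF8);
        tw.Formatting = Formatting.Indented;
        tw.Indentation = 4;
        tw.IndentChar = ' ';
        s.WriteObject(tw, graph);
        tw.Flush();
        m.Position = 0;
        return new StreamReader(m).ReadToEnd();
    }
}
```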
Thursday, 28 February 2008 15:06:40 (Romance Standard Time, UTC+01:00)  #    Comments [2]  |  Trackback

Finally wrapping up.

This is the eighth of a series about using Workflow Foundation to control your UI Logic in a WPF application. The full table of contents:

Recap

In the first post, the complete solution was presented. I am presenting a solution that uses workflow as the controller part in your MVC-inspired WPF application. It is inspired by the thought that you do not need complex frameworks, because WPF already gives you great power (routed eventing, resources). So no IoC is used, no event aggregator, etcetera: it's taken care of by WPF and WF, a natural fit.
The solution is very decoupled, and I feel it's a great advantage to be able to visualize your control logic.
In the previous post we looked at injecting and retrieving objects.

Broadcasting

CAB (and other systems) uses an event aggregator to publish events. Subscribers (other controllers) can subscribe to a specific 'topic', using a string to identify it. This works well, but it does mean yet another communication mechanism is introduced.

Since every workflow/controller is added to the workflow runtime, we could easily ask for all the loaded workflows and send each of them a message. However, since all adapters subscribe to a weak event manager to handle communication, I thought I'd stick to that pattern.

The BroadcastCommandMessage was created for the adapter to react to, checking whether its controller is interested in it. If it is, the message is transformed into a command message and sent to the controller.

I have not yet built an activity to do this.
The BankTeller sample has a CustomerQueueController. When it gains or loses focus, it wants to tell 'someone' (just whoever will listen) that it has a popular command to (un)register. The BankTellerLogic controller will use this information to put the command in a list, and the view decides to make a menu item for it. You see, I do not believe that the CustomerQueueController should be able to decide that a menu is to be created out of it. It just wants to let the world know about a command.

        private void RegisterCommands(object sender, EventArgs e)
        {
            commandSvc.SendBroadcast(
                new BroadcastCommandMessage(this.WorkflowInstanceId, "RegisterPopularCommand",
                   CustomerQueueInteractions.AcceptNextCustomerFromQueue));
        }
        private void UnRegisterCommands(object sender, EventArgs e)
        {
            commandSvc.SendBroadcast(
                new BroadcastCommandMessage(this.WorkflowInstanceId, "UnRegisterPopularCommand",
                   CustomerQueueInteractions.AcceptNextCustomerFromQueue));
        }

 

That concludes this series for now.

I hope you enjoyed it. I hope you take away the feeling that it is pretty easy to build an MVC system using WPF and WF, and that the presented solution is about as decoupled as it gets.

Thursday, 28 February 2008 02:16:34 (Romance Standard Time, UTC+01:00)  #    Comments [0]  |  Trackback

I'm a big reflection fan. However, reflection is slow and (for certain actions) requires full trust. That's a big issue, since it means you cannot use it in an unsigned XBAP, for instance.

People are finding some cool ways to work around reflection where they can.

Ayende talks about the performance implications of creating objects here, and concludes that reflection might be slow, but you should ask yourself if you really care. If you do, he shows how to use dynamic methods to do the creation for you.

Roger Alsing takes it one step further, and uses Linq to build an expression tree to access private fields. Radical!! I love it. Read it here.
Also note his performance gains:

This approach is also much faster than Reflection.FieldInfo.
I made a little benchmark where I accessed the same field a million times.

The Reflection.FieldInfo approach took 6.2 seconds to complete.
The Compiled lambda approach took 0.013 seconds to complete.
That’s quite a big difference.

But keep in mind that actually compiling the expression is many times slower than reflection.
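For reference, a compiled field accessor along the lines Roger describes can be sketched like this (the Account type and its field are made up for the example):

```csharp
using System;
using System.Linq.Expressions;
using System.Reflection;

public class Account
{
    private int balance = 42;
}

public static class FieldReader
{
    // Build 'instance => instance.<fieldName>' as an expression tree,
    // compile it once, and reuse the resulting delegate. The compile step
    // is expensive, but each subsequent call is close to direct field access.
    public static Func<T, TField> Create<T, TField>(string fieldName)
    {
        FieldInfo field = typeof(T).GetField(fieldName,
            BindingFlags.Instance | BindingFlags.NonPublic);
        ParameterExpression instance = Expression.Parameter(typeof(T), "instance");
        Expression<Func<T, TField>> lambda =
            Expression.Lambda<Func<T, TField>>(
                Expression.Field(instance, field), instance);
        return lambda.Compile();
    }
}
```

Usage: `var read = FieldReader.Create<Account, int>("balance");` and then `read(someAccount)` in the hot loop.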

Thursday, 28 February 2008 01:41:48 (Romance Standard Time, UTC+01:00)  #    Comments [2]  |  Trackback
 Tuesday, 26 February 2008

Holy crap, this is starting to be a long series!!

This is the seventh of a series about using Workflow Foundation to control your UI Logic in a WPF application. The full table of contents:

Recap

In the first post, the complete solution was presented. I am presenting a solution that uses workflow as the controller part in your MVC-inspired WPF application. It is inspired by the thought that you do not need complex frameworks, because WPF already gives you great power (routed eventing, resources). So no IoC is used, no event aggregator, etcetera: it's taken care of by WPF and WF, a natural fit.
The solution is very decoupled, and I feel it's a great advantage to be able to visualize your control logic.
In the previous post, we talked about injecting controllers to manage specific parts of your screen, much like CAB does.

IOC - Inversion of Control

Inversion of Control is a pattern that turns the usual way of obtaining dependencies upside down. Let's say you have a class that needs a helper class (maybe a communication service) to do its work. Instead of having your class create that service explicitly, we can have your class simply ask for it and have someone else supply it. This is where Dependency Injection comes from: just state what a class needs to work, and have a container 'inject' those dependencies.
Doing it this way makes for a more maintainable application and allows you to better manage the lifetime of helper classes and services. You might want to get back the same service instance every time!

Using an MVC approach to construct your application, you might feel the same need. Maybe you are building an application that allows editing pieces of customer information: for instance, her details, her address, etc.
These pieces are implemented in different views. All the views that belong to that one customer should use the same instance of the 'customer' object.
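As a minimal illustration of the pattern (the interface and class names here are invented for the example), the class declares its dependency and receives it from outside instead of creating it:

```csharp
using System;

// The consumer states what it needs; it never calls 'new' on the service itself.
public interface ICommunicationService
{
    void Send(string message);
}

public class CustomerEditor
{
    private readonly ICommunicationService comms;

    // the dependency is 'injected' through the constructor
    public CustomerEditor(ICommunicationService comms)
    {
        if (comms == null) throw new ArgumentNullException("comms");
        this.comms = comms;
    }

    public void Save()
    {
        comms.Send("customer saved");
    }
}
```

A container such as Windsor or StructureMap would typically be configured to hand every consumer the same ICommunicationService instance, which is exactly the lifetime management mentioned above.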

Inject and retrieve object into resources - activity

In this system, that is easily done, although possibly more explicitly than many great IoC containers (Windsor, Spring.NET, StructureMap) would like.

Just have one controller create the object and inject it into its resources. Because of the way resource lookup works, all the controllers that live 'below' this controller (that are nested within it) will be able to retrieve it.

image

Here I have dragged in the 'InjectObjectAsResource' activity and bound a public field on my workflow to the 'Service' property of the activity. Well, maybe Service is a bad name, but I just expect you to use it with services most of the time. Also, the activity might better have been called InjectInstanceAsResource, but I guess I didn't.
I used a type as the resource key this time, instead of a string.

I bet you can figure out how the retrieve activity works ;-)

Tip: since the activity does not know what type of object you want to create, if you let the binding introduce the field or property to your code, it will be typed as object. Just change that to your own type.

The retrieve will work for all controllers that can reach the resource dictionary of the controller that did the inserting. So that is equivalent to the CAB term 'child workitem'.
If you also need to share on a global level, just make the insertion happen in the application resources instead of the adapter resources. It cannot be too hard.
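Under the covers this is plain WPF resource lookup; a rough sketch of what the inject and retrieve activities ultimately cause to happen (the helper and its method names are mine, not the actual activity code):

```csharp
using System.Windows;

public static class ResourceSharing
{
    // 'inject': write the instance into the owner's resource dictionary
    public static void Inject(FrameworkElement owner, object key, object instance)
    {
        owner.Resources[key] = instance;
    }

    // 'retrieve': TryFindResource walks up the logical tree and then
    // falls back to application resources, so any element nested below
    // the injecting adapter can see the instance
    public static T Retrieve<T>(FrameworkElement requester, object key) where T : class
    {
        return requester.TryFindResource(key) as T;
    }
}
```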

Conclusion

I think this mechanism illustrates the way you can use WPF to meet most of your CAB needs. I use it here from a workflow, but that has nothing to do with the core concept.
I find that the explicit, visual call to inject or retrieve, without having to write code to do so, can be beneficial when building systems in a team. There is no need to guess where an object comes from; it is all very much in your face.

Tuesday, 26 February 2008 13:00:22 (Romance Standard Time, UTC+01:00)  #    Comments [0]  |  Trackback
 Monday, 25 February 2008

This is the sixth of a series about using Workflow Foundation to control your UI Logic in a WPF application. The full table of contents:

Recap

In the first post, the complete solution was presented. I am presenting a solution that uses workflow as the controller part in your MVC-inspired WPF application. It is inspired by the thought that you do not need complex frameworks, because WPF already gives you great power (routed eventing, resources). So no IoC is used, no event aggregator, etcetera: it's taken care of by WPF and WF, a natural fit.
The solution is very decoupled, and I feel it's a great advantage to be able to visualize your control logic.
In the previous post, we talked about decoupling through commands.

This time, we will look at how to inject a controller into a subview.

The InjectControllerAsDataTemplate activity

It's all very nice and dandy to have one controller manage its main view, but what happens if part of that main view is different and should be managed by a completely different controller?

Let's look at ModuleView in the BankTeller sample:

<UserControl x:Class="BankTellerViews.ModuleView"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
        <StackPanel Orientation="Horizontal">
        <StackPanel Orientation="Vertical">
            <ContentPresenter ContentTemplate="{DynamicResource userinfo}" />
            <ContentPresenter ContentTemplate="{DynamicResource customerlist}"/>
        </StackPanel>
        <StackPanel Orientation="Vertical">
            <ContentPresenter ContentTemplate="{DynamicResource customerinfo}" />
            <ContentPresenter ContentTemplate="{DynamicResource customersummary}" />
        </StackPanel>
    </StackPanel>
</UserControl>

You can see that ModuleView really only determines how this screen is built up; the individual pieces are left empty.

When we open up the ModuleLogic controller, we wish to inject controllers with the same names that we used here:

image

What happens exactly? Well, you selected a controller type through the convenient type browser and set a specific resource key (in this case we used a string: userinfo). The adapter is then notified to do something with it. It will create a DataTemplate in code and set it as a resource (or replace it, if it already exists).

This means that a deeply nested view can easily define a ContentPresenter, and a higher-level controller can inject a controller for it.
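A sketch of how such a DataTemplate could be built in code (the helper and its names are illustrative, not the actual adapter source):

```csharp
using System;
using System.Windows;

public static class TemplateInjector
{
    // Wrap the injected controller's adapter type in a DataTemplate built in
    // code, and publish it under the requested resource key.
    public static void Inject(FrameworkElement host, object resourceKey, Type adapterType)
    {
        DataTemplate template = new DataTemplate();
        // FrameworkElementFactory is the code-based way to describe the template's tree
        template.VisualTree = new FrameworkElementFactory(adapterType);

        // the indexer replaces an existing resource, or adds a new one
        host.Resources[resourceKey] = template;
    }
}
```

A ContentPresenter with `ContentTemplate="{DynamicResource userinfo}"` then picks the template up automatically when it appears.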

Monday, 25 February 2008 16:12:47 (Romance Standard Time, UTC+01:00)  #    Comments [0]  |  Trackback

This is the fifth of a series about using Workflow Foundation to control your UI Logic in a WPF application. The full table of contents:

  • Workflow as controller: Introducing <M,V,C> where M: ViewModel, V : WPF, C : WF
  • Part II, starting the application, and the adapter
  • Intermezzo: new sample application
  • Part III, your first view
  • Part IV, decoupling view from controller
  • Part V, marshalling commands from WPF to WF
  • Part VI, Injecting a controller in a subview / workspace
  • Part VII, IOC on the cheap: injecting and retrieving objects
  • Part VIII, Broadcasting for all to see

    Whoops, I guess I was a bit over-enthusiastic in the previous post, because I already explained the hooking mechanism in enough detail.

    It boils down to registering the adapter as a global command handler; when a command reaches it, create a CommandMessage and send that to the workflow.
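In code, that hookup could look roughly like this (the CommandMessage shape and the SendToWorkflow callback are placeholders for the real implementation):

```csharp
using System;
using System.Windows.Controls;
using System.Windows.Input;

public class CommandMessage
{
    public string CommandName { get; set; }
    public object Parameter { get; set; }
}

public class ControllerAdapter : ContentControl
{
    // hypothetical callback that delivers the message to the workflow instance
    public Action<CommandMessage> SendToWorkflow { get; set; }

    // register a CommandBinding so any routed command bubbling up from the
    // view is caught here and translated into a workflow message
    public void HookCommand(RoutedCommand command)
    {
        CommandBindings.Add(new CommandBinding(command, (sender, e) =>
        {
            SendToWorkflow(new CommandMessage
            {
                CommandName = ((RoutedCommand)e.Command).Name,
                Parameter = e.Parameter
            });
            e.Handled = true;
        }));
    }
}
```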

    Monday, 25 February 2008 15:58:49 (Romance Standard Time, UTC+01:00)  #    Comments [0]  |  Trackback

    This is the sixth of a series about using post-compilation in your solutions. You can read it as a tutorial on how to use PostSharp. I am very much new to that framework, but the power it provides could seriously change how you build your applications. While working on the EF Contrib project, I had to dive into PostSharp, and I hope to share some of the things I learned along the way.

    This post delves into using the weaver, to do some funky stuff for us!

    The full table of contents:

  • Introducing Entity Framework Contrib- Easy IPoco implementation V 0.1
  • Part II, Postsharp Core versus Postsharp Laos
  • Part III, the compound aspect
  • Part IV, the PocoInterfaceSubaspect (composition aspect)
  • Part V, hooking up the weaver
  • Part VI, the EdmScalarWeaver
    Recap

    We wish to create an attribute that can be placed on top of our ordinary POCO class and that will magically transform it into a class implementing the 3 IPoco interfaces. These are needed by the Entity Framework to do its work. We will use PostSharp to do this.
    Our previous post talked about the compound attribute and how it goes about implementing interfaces on classes for you.

    We want to put the custom attributes on our type that EF needs (the EdmScalar attributes on top of properties, and the EdmType attribute that connects your type to an EDM type). Laos does not seem to have a ready-to-use aspect that provides that functionality, so we are going to need to hook into the weaver ourselves! How exciting!
    Thankfully, we can derive from TypeLevelAspectWeaver to make life easy enough.

    In the previous post we hooked up the weaver; in this post we are actually going to do stuff.

    The Implement method

    Our weaver derives from TypeLevelAspectWeaver, and thus can override the Implement() method. I have to do some work to get to your config file, using PostSharp to get the path to the original App.config. When I have that, I load it and look at the connection string that matches the container name you supplied to the attribute. Then I use the EntityConnectionStringBuilder to create a connection string, and finally load the metadata workspace from the EDM. With the metadata in hand, I can start looking at the transformation I have to do.
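Those steps could be sketched as follows; this is an assumption-laden outline (the helper name and exact flow are mine), using the standard System.Configuration and System.Data.EntityClient APIs:

```csharp
using System.Configuration;
using System.Data.EntityClient;
using System.Data.Metadata.Edm;
using System.Reflection;

public static class MetadataLoader
{
    // configPath: the path to the original App.config (located via PostSharp);
    // containerName: the entity container name supplied to the attribute
    public static MetadataWorkspace Load(string configPath, string containerName)
    {
        // open the original App.config explicitly, since we are not running
        // inside the target application
        ExeConfigurationFileMap map = new ExeConfigurationFileMap { ExeConfigFilename = configPath };
        Configuration config = ConfigurationManager.OpenMappedExeConfiguration(map, ConfigurationUserLevel.None);

        string raw = config.ConnectionStrings.ConnectionStrings[containerName].ConnectionString;
        EntityConnectionStringBuilder builder = new EntityConnectionStringBuilder(raw);

        // the Metadata property holds the csdl/ssdl/msl locations, separated by '|'
        return new MetadataWorkspace(builder.Metadata.Split('|'), new Assembly[0]);
    }
}
```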

    Setting EDMScalarAttributes

    I recently chatted with Gael (the creator of PostSharp) and he assured me that there will be a high-level method to add attributes to code. In this version of PostSharp that is not directly possible (hence the weaver we are using), so we will do it ourselves.

    First, let's loop through all the properties defined on our supplied business entity:

            foreach (PropertyDeclaration prop in typeDef.Properties)
            {
                EdmProperty memberProperty;

                // find it as a member
                memberProperty = entityType.Members.SingleOrDefault(edmprop => edmprop.Name.Equals(prop.Name)) as EdmProperty;

                // it can easily be something else than an edm property
                if (memberProperty != null)
                {
                    // it might be a key property. I have not yet found a better way to
                    // determine if it is a key member or not. This seems wasteful.
                    prop.CustomAttributes.Add(
                        CreatePropertyAttribute(memberProperty,
                        (entityType.KeyMembers.SingleOrDefault(edmprop => edmprop.Name.Equals(prop.Name)) != null)));

                    continue;
                }
            }

    I use a bit of Linq to check whether a property is a key, and call my CreatePropertyAttribute method:

        CustomAttributeDeclaration CreatePropertyAttribute(EdmProperty edmProperty, bool IsKeyProperty)
        {
            CustomAttributeDeclaration attr = new CustomAttributeDeclaration(edmScalarPropertyAttribute);

            // nullable
            attr.NamedArguments.Add(
                new MemberValuePair(MemberKind.Property,
                    0,
                    "IsNullable",
                    new SerializedValue(
                        SerializationType.GetSerializationType(this.module.FindType(typeof(bool), BindingOptions.Default)),
                        edmProperty.Nullable)
                        ));

            // since we need to set the ordinal, take care to set this property last!
            if (IsKeyProperty)
            {
                attr.NamedArguments.Add(
                    new MemberValuePair(MemberKind.Property,
                        1,
                        "EntityKeyProperty",
                        new SerializedValue(
                            SerializationType.GetSerializationType(this.module.FindType(typeof(bool), BindingOptions.Default)),
                            true)
                            ));
            }

            return attr;
        }

    As you can see, it gets a little more complicated. We need to add a custom attribute, but to create it, we need a constructor for the attribute. I already have it cached: at the start of the method, the cached IMethod is handed to the PostSharp CustomAttributeDeclaration class. I got to the ctor like this:

                edmScalarPropertyAttribute = module.FindMethod(typeof(EdmScalarPropertyAttribute).GetConstructor(System.Type.EmptyTypes), BindingOptions.Default);
    

    We use PostSharp to find the constructor in the module.

    With the constructor, we can create a CustomAttributeDeclaration, and from there we can add NamedArguments. Note that here again we use PostSharp to find types for us. Kind of confusing, but it does provide a consistent way to do things. You could use it to call your own methods as well (!).

    I do the same for the attribute that needs to be placed on the type itself, and we are ready!

    Default values

    In the EF designer, you have the ability to specify default values for properties. I needed to mimic that functionality for this project, so I got to work. It seemed quite simple, because I could get to the fields without a problem. However, fields are initialized in the ctor of your type (thank you, Reflector), so more work was needed.

    First, I wanted to reuse this weaver and have it add IL instructions to the constructor. To do that, I implemented the ITypeLevelAdvice interface and added this line to the end of Implement():

                // make sure this class is called to weave
                this.Task.TypeLevelAdvices.Add(this);

    Implementing ITypeLevelAdvice gives us the opportunity to supply some information about what exactly we want to do:

        #region ITypeLevelAdvice Members

        public JoinPointKinds JoinPointKinds { get { return JoinPointKinds.AfterInstanceInitialization; } }

        public TypeDefDeclaration Type { get { return (TypeDefDeclaration)this.TargetElement; } }

        #endregion

        #region IAdvice Members

        public int Priority
        {
            get { return 0; }
        }

        public bool RequiresWeave(PostSharp.CodeWeaver.WeavingContext context)
        {
            return true;
        }

        #endregion

    As you can see, I want to use the AfterInstanceInitialization joinpoint. In other words, I want to be able to weave code at that moment.

    What to weave?? I know everything about my business entity, but I only know which properties need default values. So I had to come up with some basic rules about which field belongs to a certain property name:

            #region set default values. not yet emitting the instruction, but waiting for the Weave method
            foreach (FieldDefDeclaration field in typeDef.Fields)
            {
                // we have to make concessions: we do not know how to find the field with the property exactly
                EdmProperty memberProperty;

                // find it as a member
                // the rules: the field must match the ending of the propertyname. So underscore is okay
                memberProperty = entityType.Members.SingleOrDefault(edmprop => field.Name.EndsWith(edmprop.Name, StringComparison.OrdinalIgnoreCase)) as EdmProperty;

                // in case that didn't match, try the autogenerated fieldname
                if (memberProperty == null)
                {
                    memberProperty = entityType.Members.SingleOrDefault(edmprop => (field.Name.IndexOf("<" + edmprop.Name + ">") == 0)) as EdmProperty;
                }

                // if this field belongs to an edm property, we can check for its default value
                if (memberProperty != null)
                {
                    FieldsNeedingDefaultValue.Add(field, memberProperty.Default);
                }
            }
            #endregion

    I use two rules: if the field ends with the same name as the property, then the field belongs to that property. The other rule looks at the naming scheme the compiler uses when it generates automatic properties: <PropertyName>k__BackingField. Since that is how we will most likely use this whole project, I want to support that as well.
    I build up a dictionary of default values that I use at a later stage.

    The Weave method will be called when our joinpoint has been reached.

        public void Weave(PostSharp.CodeWeaver.WeavingContext context, InstructionBlock block)
        {
            foreach (FieldDefDeclaration field in FieldsNeedingDefaultValue.Keys)
            {
                object value = FieldsNeedingDefaultValue[field];

                if (value == null)
                    continue;

                // the context is the ctor because we only use the joinpoint AfterInstanceInitialization
                InstructionSequence sequence = context.Method.MethodBody.CreateInstructionSequence();
                block.AddInstructionSequence(sequence, NodePosition.Before, null);
                context.InstructionWriter.AttachInstructionSequence(sequence);
                InstructionWriter writer = context.InstructionWriter;

                if (value is int)
                {
                    writer.EmitInstruction(OpCodeNumber.Nop);
                    writer.EmitInstruction(OpCodeNumber.Ldarg_0);
                    writer.EmitInstructionInt32(OpCodeNumber.Ldc_I4, (int)value);
                    writer.EmitInstructionField(OpCodeNumber.Stfld, field);
                }
                else if (value is string)
                {
                    writer.EmitInstruction(OpCodeNumber.Nop);
                    writer.EmitInstruction(OpCodeNumber.Ldarg_0);
                    writer.EmitInstructionString(OpCodeNumber.Ldstr, (string)value);
                    writer.EmitInstructionField(OpCodeNumber.Stfld, field);
                }
                else
                {
                    // TODO: implement other value types
                    throw new NotImplementedException(String.Format("No IL default implemented for type {0}", value.GetType()));
                }

                writer.DetachInstructionSequence(true);
            }
        }

    Again, Gael has assured me that high-level functionality will be created to easily set default values. I do not like working with a big if-statement to inject different IL instructions per type, but that's it for now....

    I just use Reflector, in IL viewing mode, to see how I should initialize a certain type, and off we go.

     

    This is the end of this series. I hope you enjoyed it.

    The following things still have to be done:

    • PostSharp can now be installed without using the GAC. I think people feel more at ease just using an external assembly, so I will change the EFContrib project to support this.
    • Relationships and complex types need to be supported
    • Obviously, the other default values need to be supported.

    I'll keep you updated on how that progresses!
    I hope this series has given you some ideas on how to use postcompiling in your own project. Let it make your life easier and your code cleaner.

    Monday, 25 February 2008 15:56:01 (Romance Standard Time, UTC+01:00)  #    Comments [1]  |  Trackback
     Friday, 22 February 2008

    Scott Guthrie talks about Silverlight 2.0, and it is looking to be exactly what I hoped it would be. It seems they are really aiming to enable RIA applications, but cross-platform and cross-browser.

    The big announcement is the inclusion of built-in controls. It was a big disappointment to me that they were not included in Silverlight 1.0 or 1.1, but they are included now!

    My take on this is that it will revolutionize the way we build software. As you might know, I've been involved in creating a big RIA application with XBAP. Although it was great, we did not have a very good story on the use of the browser. I was reminded time and time again that it was weird to use the browser while the application was not cross-platform or even cross-browser. We had good reasons to go for XBAP nonetheless, but I'm looking forward to seeing what we can do with Silverlight 2.0.

    Friday, 22 February 2008 21:28:46 (Romance Standard Time, UTC+01:00)  #    Comments [1]  |  Trackback

    I sometimes do this, but don't like to do it too often. This one, though, I just want to archive here so I can always find it: Beatriz has just released a great, ready-to-use drag-and-drop library. She has also written a terrific blog post about it here, which shows the steps she took to achieve it.

    I just ran the project and it performs well, has insertion adorners and even allows dragging and dropping within the same list. Good stuff!

    Friday, 22 February 2008 13:54:19 (Romance Standard Time, UTC+01:00)  #    Comments [0]  |  Trackback

    This is the fourth of a series about using Workflow Foundation to control your UI Logic in a WPF application. The full table of contents:

  • Workflow as controller: Introducing <M,V,C> where M: ViewModel, V : WPF, C : WF
  • Part II, starting the application, and the adapter
  • Intermezzo: new sample application
  • Part III, your first view
  • Part IV, decoupling view from controller
  • Part V, marshalling commands from WPF to WF
  • Part VI, Injecting a controller in a subview / workspace
  • Part VII, IOC on the cheap: injecting and retrieving objects
  • Part VIII, Broadcasting for all to see
    Recap

    In the first post, the complete solution was presented. I am presenting a solution that uses workflow as the controller part in your MVC-inspired WPF application. It is inspired by the thought that you do not need complex frameworks, because WPF already gives you great power (routed eventing, resources). So no IoC is used, no event aggregator, etcetera: it's taken care of by WPF and WF, a natural fit.
    The solution is very decoupled, and I feel it's a great advantage to be able to visualize your control logic.
    In the previous post, I talked about the various ways to show a view, and actually already talked about the decoupling mechanism: commands.

    I'm very lucky to have received some good comments from Wekemf about tight coupling. I urge you to read those comments and maybe chime in.

    In this post I will not return to that subject, but will quickly address two important activities that help you configure your system quickly: the SetMainContent activity and the SetDataContext activity, and sending WPF commands to WF's HandleCommand.

    SetMainContent activity

    A controller adapter is a normal WPF ContentControl. Its job is to participate in the visual tree on behalf of our workflow controller class. To actually attach a view to it, we need to set its Content property.

    As shown in the previous post, you can just set one in XAML yourself, but it's more logical to let the workflow decide on the view. Of course, the best approach is totally up to you.

    I usually use the StateInitialization activity to set up a view for us. I drag in the SetMainContent activity and choose a type from the referenced assemblies.
    If it weren't for this step, the controller assembly would not need a reference to the view assembly at all. I found it very cool to be able to select a type with the type browser and just have it show up.

    The type browser is located in an assembly I have put in the externalAssemblies folder. It is a project that was not started by me. The code did not work when I got my hands on it, but I managed to fix it by using a hammer. Check out this post to learn more about this great design-time experience!

    If you have a business need to decouple even further, you would need to adjust the SetMainContent activity and, instead of sending a real Type, send a string key. You would then create some mapping functionality to map that key to the actual view.

    When the adapter gets notified by the SetMainContentMessage that it needs to set a content, it will just create the view (using reflection) and place it as its own content.
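
    As an illustration, the adapter's reaction could look something like this minimal sketch. The message type and its ViewType property are assumptions for illustration, not the actual solution code:

```csharp
// Hypothetical sketch: the adapter creates the view through reflection
// and hosts it as its own content. All names here are illustrative only.
private void OnSetMainContent(SetMainContentMessage message)
{
    // the workflow told us which view type to display
    object view = Activator.CreateInstance(message.ViewType);

    // the adapter is a ContentControl, so the view simply becomes our content
    this.Content = view;
}
```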

    SetDataContext activity

    I do not like MVP at all, where the presenter talks back to the view directly (using an interface or something). I feel it's way too 'pushy' and way too much work. I believe in databinding (especially WPF bindings; I think Microsoft got it right this time). Your view should just bind to your domain objects. In many cases, it's better to create a wrapper for the domain objects, so you have the opportunity to supply some shortcut properties or view-specific stuff: you might have a list of products, and you want the view to display the sum of the prices. That is a great opportunity for the viewmodel to expose a 'Sum' property that the view can simply bind to.
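
    A minimal sketch of such a wrapper might look like this (the Product class and its Price property are assumptions for illustration):

```csharp
// Hypothetical viewmodel wrapper: the view binds to Sum instead of
// computing the total itself.
public class ProductListViewModel
{
    private readonly IList<Product> products;

    public ProductListViewModel(IList<Product> products)
    {
        this.products = products;
    }

    public IList<Product> Products
    {
        get { return products; }
    }

    // shortcut property: the view simply binds to this
    public decimal Sum
    {
        get
        {
            decimal total = 0;
            foreach (Product product in products)
                total += product.Price;
            return total;
        }
    }
}
```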

    The object that is used as a ViewModel should live with the controller, which will be able to communicate with it.
    I usually create a public class internal to the controller, simply called ViewModel, and have the controller inject that class with domain objects.

    The SetDataContext activity is very similar to the SetMainContent activity, in that it lets the adapter know it has to set a DataContext on itself.
    You configure the SetDataContext activity simply by choosing a field or property of your controller.

    In small sample applications, I have used the 'Invoking' event to hook up some code that actually initializes the ViewModel object.

    Sending WPF commands to the Workflow: HandleCommandActivity

    The HandleCommandActivity is really what makes using the solution so easy. I have blogged about it already extensively, and I will just summarize here:

    Workflow has a difficult communication story. You need to define your incoming and outgoing calls in an ExternalDataExchangeService. Then you have to hook up events in your workflow to listen to incoming calls/events. It is not possible to listen to the same event in two different states without using the rather difficult correlation technique.

    This is not necessary for our usage. I have created the HandleCommand activity to just listen to a queue with a specific name. That name is defined by the command we are listening to. So, if you want your workflow to react when you send it the string 'workflowRules', you would just drag in the HandleCommand and configure the Command property to read 'workflowRules'. No need to set up a special event for it.

    The commandService class has a PostCommand method that you can call to put a message on the queue. That's all there is to it.

    So, when we receive a WPF command, we cast it to a RoutedUICommand. The command name is used to form a SimpleCommandMessage, which can be used as input to the PostCommand method.

        #region command sinks
        private void CmdExecuted(object sender, ExecutedRoutedEventArgs e)
        {
            string commandname = (e.Command as RoutedUICommand).Name;

            PostCommand(commandname, e.Parameter);
        }

        private void PostCommand(string commandname, object parameter)
        {
            if (implementedCommands.Contains(commandname))
            {
                commandSvc.PostCommand(new SimpleCommandMessage(instance.InstanceId, commandname, parameter));
            }
        }

        private void CmdCanExecute(object sender, CanExecuteRoutedEventArgs e)
        {
            string commandName = (e.Command as RoutedUICommand).Name;

            if (implementedCommands.Contains(commandName))
            {
                e.CanExecute = commandSvc.CanExecute(new SimpleCommandMessage(instance.InstanceId, commandName));
            }
        }

        #endregion

    As you can see, I first check whether the workflow even implements such a command. If it does not, sending it to the workflow would just be wasted expense.
    Also, check out the CmdCanExecute method. It actually makes it possible for the workflow to put rules on the HandleCommand activity that are used to figure out whether a command can be executed. For instance, if you are not authorized to do something, the command will never report CanExecute, so the button that hooks up to it will stay dimmed!

    I hope that clears up some questions. Let me know what you think!

    Friday, 22 February 2008 11:24:20 (Romance Standard Time, UTC+01:00)  #    Comments [1]  |  Trackback

    This is the fifth in a series about using postcompilation in your solutions. You can read it as a tutorial on how to use PostSharp. I am very much new to that framework, but the power it provides could seriously change how you build your applications. While working on the EF contrib project, I had to dive into PostSharp, and I hope to share some of the things I learned along the way.

    This post introduces the weaver, that will do exciting stuff for us.

    The full table of contents:

  • Introducing Entity Framework Contrib- Easy IPoco implementation V 0.1
  • Part II, Postsharp Core versus Postsharp Laos
  • Part III, the compound aspect
  • Part IV, the PocoInterfaceSubaspect (composition aspect)
  • Part V, hooking up the weaver
  • Part VI, the EdmScalarWeaver
    Recap

    We wish to create an attribute that can be placed on top of our ordinary Poco class, that will magically transform it into a class that implements the three IPoco interfaces. These are needed by the Entity Framework to do its work. We will use PostSharp to do this.
    Our previous post talked about the compound attribute and how it goes about implementing interfaces on classes for you.

    This post will look at how we are going to hook up a weaver to do more complex stuff, not directly supported by the provided Laos aspects.

    But first I want to clear up a statement I made here: 'I found it wildly confusing the first time I came across the two parts of postsharp. Laos is such a high-level abstraction, that you use it quite differently from Core. In the latter, you have to spinup your own weaver, in Laos you do not ever see a weaver. '
    It is not true that you never see a weaver when using Laos. I should have been clearer: Laos offers a great deal of functionality that you can use without going into a weaver.

    We want to put custom attributes on our type that EF needs (the EDM scalar attributes on top of properties and the EDM type attribute that connects your type to an EDM type). Laos does not seem to have a ready-to-use aspect that provides that functionality, so we are going to need to hook into the weaver ourselves! How exciting!
    Thankfully, we can derive from TypeLevelAspectWeaver to make life easy enough.

    Hooking up a weaver

    The cool thing about using the weaver is that you can put it in its own assembly and not have to reference it in the assemblies that you are postcompiling. That is quite important, because the weaver depends on PostSharp.Core and there is a different license attached to it.
    The PocoAttribute adds an aspect to the collection, like it did for the other aspects:

      1         public override void ProvideAspects(object element, LaosReflectionAspectCollection collection)
      2         {
      3             // Get the target type.
      4             Type targetType = (Type)element;
      5
      6             ....
     10             // inspect the complete class and add EDM scalar attributes to the properties
     11             collection.AddAspect(targetType, new EDMAttributesSubAspect(this.EDMContainerName, Name, NamespaceName, PathToConfigFile));
                .....
     12         }

    At line 11, the EDMAttributesSubAspect is added. This means that when Laos is ready to start working, it will check that aspect to see what it should do. Let's look at it now:

        [Serializable]
        internal class EDMAttributesSubAspect : ILaosTypeLevelAspect
        {
            #region fields and properties
            internal string EDMContainerName { get; set; }

            internal string TypeName { get; set; }
            internal string NamespaceName { get; set; }
            internal string PathToConfigFile { get; set; }
            #endregion

            #region ctor
            /// <summary>
            /// ctor
            /// </summary>
            /// <param name="EDMContainerName">the container name</param>
            /// <param name="NamespaceName">namespacename, can be null</param>
            /// <param name="TypeName">typename, can be null</param>
            public EDMAttributesSubAspect(string EDMContainerName, string TypeName, string NamespaceName, string PathToConfigFile)
            {
                this.EDMContainerName = EDMContainerName;
                this.TypeName = TypeName;
                this.NamespaceName = NamespaceName;
                this.PathToConfigFile = PathToConfigFile;
            } 
            #endregion

            #region ILaosTypeLevelAspect Members

            public void CompileTimeInitialize(Type type)
            {
            }

            public bool CompileTimeValidate(Type type)
            {
                return true;
            }

            public void RuntimeInitialize(Type type)
            {
            }

            #endregion

            #region ILaosWeavableAspect Members

            public int AspectPriority
            {
                get { return int.MinValue; }
            }

            #endregion 

        }

    You might be surprised to hear that this is the complete aspect!! Nothing that hints at what is to come.

    When such a thing happens, you might be stumped. But obviously you will immediately think to check the assemblyinfo file of the PostSharp4EF assembly:

    [assembly: PostSharp.Extensibility.ReferencingAssembliesRequirePostSharp("PocoTypeWeaver", "EntityFrameworkContrib.PostSharp4EF.Weaver")]
    [assembly: InternalsVisibleTo("EntityFrameworkContrib.PostSharp4EF.Weaver")]

    Okay, I was kidding. You wouldn't have thought of that. ;-)

    The first statement there instructs PostSharp to look for a plugin named PocoTypeWeaver to process all assemblies that reference this assembly. It is just another way of expressing requirements. I could have put that inside the attribute. But I did not.

    The plugin file can be found in the weaver assembly. It is just a normal text file whose name follows a convention: the full assembly name plus the "psplugin" extension. The contents of that file:

    <?xml version="1.0" encoding="utf-8" ?>
    <PlugIn xmlns="http://schemas.postsharp.org/1.0/configuration">
      <TaskType Name="PocoTypeWeaver" 
                Implementation="EntityFrameworkContrib.PostSharp4EF.Weaver.PocoEDMAttributesWeaverFactory, EntityFrameworkContrib.PostSharp4EF.Weaver">
      </TaskType>
    </PlugIn>

    Here you will see a task named PocoTypeWeaver and a reference to an implementation of a weaver factory. So, our PocoAttribute needs the PocoTypeWeaver, and it can get one through the use of a factory. But since your client assembly will not have a reference to this weaver assembly (which contains this mapping file), we need some way to tell it where to look. Enter the psproj file that was put inside your client assembly:

    <Project xmlns="http://schemas.postsharp.org/1.0/configuration">
    	<SearchPath Directory="../EntityFrameworkContrib.PostSharp4EF.Weaver/bin/{$Configuration}"/>
    	<SearchPath Directory="{$SearchPath}" />
    	<Tasks>
    		<AutoDetect />
    		<Compile TargetFile="{$Output}" IntermediateDirectory="{$IntermediateDirectory}"  CleanIntermediate="false" />
    	</Tasks>
    </Project>

    This psproj file is used by PostSharp to extend its search path. This means that on machines that are building the solution, you will need to supply a path where PostSharp can find the just-mentioned psplugin file.

    Please note: there are other ways to configure the search path, and possibly better ways to set up a system like this. There is a default search path, and you could also put your plugin file there.

    The weaver factory
        public class PocoEDMAttributesWeaverFactory : Task, ILaosAspectWeaverFactory
        {

            #region ILaosAspectWeaverFactory Members
            /// <summary>
            /// creates the weaver
            /// </summary>
            /// <param name="aspect">the EDMAttributesSubAspect that instantiated this factory</param>
            /// <returns>the weaver that will do the low-level hardcore work</returns>
            public LaosAspectWeaver CreateAspectWeaver(PostSharp.Laos.ILaosAspect aspect)
            {
                if (aspect is EDMAttributesSubAspect)
                {
                    EDMAttributesSubAspect edmAttributesAspect = (EDMAttributesSubAspect)aspect;
                    return new PocoEDMAttributesWeaver(edmAttributesAspect.EDMContainerName);
                }

                return null;
            }

            #endregion
        }

    Nothing special there. Using a factory allows you to supply different weavers for different aspects.

    So, to recap: we attached an aspect that did not do much itself. We used the assemblyinfo file to tell PostSharp it should always use a certain task when compiling assemblies that reference our attribute assembly. We used the psproj file to make PostSharp search in the right spot, and we used a psplugin file to map the task name to an actual factory.
    The factory creates our weaver, and we will discuss that in the next post!

    Friday, 22 February 2008 10:41:17 (Romance Standard Time, UTC+01:00)  #    Comments [2]  |  Trackback
     Thursday, 21 February 2008

    I got a mail yesterday from a German student asking about the future of workflow and my thoughts on it. I will share the thread. It was written in a hurry, so take it for what it is. Leave a comment to give him another view point.

    Read from bottom to top.

    -----

    my reply:

    What you are describing does indeed sound like a typical WF application, and it is absolutely suitable for that.

    Custom activities: don't be afraid. Just create one that is a wrapper around your huge OLE API. Creating an activity is little more than deriving from Activity and overriding the Execute method.
    Put some properties on there and off you go.
    Or create multiple activities that do different things to the OLE object.
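
    To sketch how little is involved, here is a hypothetical minimal custom activity. The activity name, property, and OLE call are made up for illustration:

```csharp
// Hypothetical minimal WF custom activity wrapping one OLE API call.
public class SendMailActivity : Activity
{
    // a property the workflow designer can configure
    public string Recipient { get; set; }

    protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
    {
        // call into your OLE API here, e.g. oleApi.SendMail(Recipient);
        return ActivityExecutionStatus.Closed;
    }
}
```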

    It sounds to me like you want to re-host the workflow designer. That is certainly doable, and there is a project from someone you can download that actually did that. However, it was in need of more debugging. I don't have the url here. Sorry.

    What WF is not, is a magical system that requires no development. It is really meant to be a foundation, which a developer uses and builds upon to create a system that really suits the client's wishes. So that means configuring it, creating external data exchange services and building custom activities. Only then will you create a system that your client can use in the way you described. You need to mold it to behave like you want.

    In our case, it was definitely the developers that created the workflows. Best we could hope for was that business analysts could understand it (and they did). However, I've always felt it was possible to create a system that they could use directly.

    Success!
    Ruurd

    -----Original Message-----
    From: Sven
    Sent: woensdag 20 februari 2008 20:56
    To: me
    Subject: Re: Some questions about the future of WF

    Hi Ruurd,

    thank you very much for your in-depth statement! I had not expected this detailed level ;-)

    Actually as a part of my project I have to evaluate if WWF fits into an existing CRM Application.
    It should be possible for solution partners (customizing the application for their customers) without in-depth programming knowledge for example to "wire together" some custom activities to visually build for example the processing of an incoming mail, a little workflow for some little approval process (like you press a button inside the application on an address form, the workflow gets some field from the current record, decides based on the field which e-mail model to use, sends the email and finally writes some information to the same record, like "e-mail XY sent") or things like that... (sounds like a classic
    Flow-WF)
    But there could be use for some "state machines", too. Like there is a WF-Service running and dispatching incoming mails to different employees...

    Is WWF suitable for this ? These things could be done today in the application by coding some huge VB-Scripts, there is a huge OLE-API in the application...

    What I missed is a "CustomOLEActivity" to call whatever function in an application with OLE-API (there are a lot on the market) and to simply return some values...
    (the ExternalDataExchange/CommunicationActivity with wca.exe-Tool-way looks like being very complicated - at least if you have to build a CustomActivity for a huge OLE-API, or have I missed something out ?)

    On my "first look" the designer looked a bit complicated (even for people with some advanced knowledge, i have to target not the computer dummies, but also not the programmers on the other hand, some level "between", lets say "System Administrators"), but perhaps you can give me from your own experience some hints in which direction I have to go for this...
    (who is editing the workflows in your big project?)

    Implementing everything "from scratch" looks like an even bigger effort... (would be the other choice...)

    Thanks a lot for your help and guidance !

    Bye
    Sven

    my reply:
    >
    > Basically, I see quite a few problems surrounding WF. It is very
    > shielded, the designer is not very good still and there is no good
    > update strategy (updating long running persisted workflows to new
    > versions). I think that last issue is one of the biggest problems it
    > has, although it hasn't gotten much publicity.
    >
    > However, as a platform, it does what it should do very well. They are
    > going to use it as the biztalk workflow engine and are already using
    > it as the human workflow engine for sharepoint.
    >
    > I feel we are moving toward an industry that needs to mature (the IT
    > development industry I mean). It is looking for DSL's and other ways
    > to make developing software a more manageable and predictable process.
    > Workflow has a definite place in that eco-system, where you can
    > visualize the flow of your program. This means you have an artifact
    > that will actually help a developer communicate with a business analyst or a client.
    >
    > To be concrete:
    >
    > So, why do I think developers have been slow to take it up: a
    > difficult programming model and some serious issues that are not well understood yet.
    > It is a radically different approach to building software, and it
    > takes time for ppl to feel confident with it.
    >
    > Is there a future: I say _yes_. If you understand the problems of
    > todays WF framework, you can already build great things, and I've
    > heard about some of the stuff that Microsoft is doing on the next
    > version, which will alleviate some big problems. Since we need this
    > kind of technology to build better software, there is definitely a future for it.
    >
    > Is it already used in the industry: Well, I have used it, but I have
    > yet to hear of big projects using it. Then again, Biztalk is used
    > extensively and the WF engine is every bit as powerful. (rules engine maybe slightly less).
    > Sorry, no example possible...)
    >
    > I do not think it will disappear.
    >
    > Kind Regards,
    > Ruurd Boeke
    >
    > -----Original Message-----
    > From: Sven
    > Sent: woensdag 20 februari 2008 19:48
    > To: me
    > Subject: Some questions about the future of WF
    >
    > Hello!
    >
    > I'm a computer student from the university of applied sciences of
    > Emden, Germany.
    >
    > Actually I'm working on a project dealing with the Windows Workflow
    > Foundation.
    >
    > As it was introduced one and a half years ago, but I see not so much
    > implementations or books about it, I wonder why it has been adopted so
    > slowly by the developers.
    >
    > What do you think about this? (just some thought will be helpful for
    > me!) Is there a "future" ? Or will this stay an "Microsoft Internal" - Affair ?
    >
    > Is this already used in the industry ? Where ?
    > (If you could give me some examples from your experience this would be
    > very helpful for my work)
    >
    > Is this really a technology to build on or might it disappear slowly
    > like other "cool" stuff in the past ?
    >
    > Thank you very much in advance for any hint!
    >
    > Sincerely,
    > Sven

    Thursday, 21 February 2008 17:38:13 (Romance Standard Time, UTC+01:00)  #    Comments [2]  |  Trackback

    Just a short message to let everyone know I have built the first version of default value support for efcontrib.

    I will follow up with a bigger post on the limitations. For now, only ints and strings will be processed.

    Thursday, 21 February 2008 16:44:26 (Romance Standard Time, UTC+01:00)  #    Comments [0]  |  Trackback

    This is the fourth in a series about using postcompilation in your solutions. You can read it as a tutorial on how to use PostSharp. I am very much new to that framework, but the power it provides could seriously change how you build your applications. While working on the EF contrib project, I had to dive into PostSharp, and I hope to share some of the things I learned along the way.

    This post introduces the first real step I took: the compound aspect.

    The full table of contents:

  • Introducing Entity Framework Contrib- Easy IPoco implementation V 0.1
  • Part II, Postsharp Core versus Postsharp Laos
  • Part III, the compound aspect
  • Part IV, the PocoInterfaceSubaspect (composition aspect)
  • Part V, hooking up the weaver
  • Part VI, the EdmScalarWeaver
    Recap

    We wish to create an attribute that can be placed on top of our ordinary Poco class, that will magically transform it into a class that implements the three IPoco interfaces. These are needed by the Entity Framework to do its work. We will use PostSharp to do this.
    Our previous post talked about the composite attribute and how it allows you to combine multiple actions into one attribute. That's easier to use for the end users.

    In this post we will look into the PocoInterfacesSubAspect and how it does its job.

    The Composition aspect

    You may remember placing the PocoInterfacesSubAspect on the element like so:

                // implement the three IPOCO interfaces on the class
                collection.AddAspect(targetType, new PocoInterfacesSubAspect());

    PostSharp will instantiate our aspect during its weaving. Our aspect inherits from CompositionAspect. Let's take a step back and discuss what the CompositionAspect does.

    The CompositionAspect is an extremely powerful aspect which allows you to implement an interface on another object. So let's say you want to make an object be an IList at runtime, without dealing with it in your source: use composition to implement IList!

    We are implementing the three IPoco interfaces. Because the CompositionAspect wants one type to composite (and I did not feel like doing it three times), I created a facade interface:

        public interface IPocoFacade : IEntityWithChangeTracker, IEntityWithKey, IEntityWithRelationships
        {
        }

    Now, how does the weaver go about using this aspect to actually implement the code needed? It will ask for the public interface to inject and then also ask for an implementation object. The implementation object is the one that gets to do the dirty work. PostSharp will basically inject that implementation object into your object, and then create all your interface code to just delegate to that implementation object.

    The PocoInterfacesSubAspect

    The complete aspect looks like this:

        [Serializable]
        sealed class PocoInterfacesSubAspect : CompositionAspect
        {

            public override object CreateImplementationObject(InstanceBoundLaosEventArgs eventArgs)
            {
                return new PocoImplementation(eventArgs.Instance);
            }

            public override Type GetPublicInterface(Type containerType)
            {
                return typeof(IPocoFacade);
            }

            /// <summary>
            /// Gets weaving options.
            /// </summary>
            /// <returns>Weaving options specifying that the implementation accessor interface (<see cref="IComposed{T}"/>)
            /// should be exposed, and that the implementation of interfaces should be silently ignored if they are
            /// already implemented in the parent types.</returns>
            public override CompositionAspectOptions GetOptions()
            {
                return
                    CompositionAspectOptions.GenerateImplementationAccessor |
                    CompositionAspectOptions.IgnoreIfAlreadyImplemented;
            }
        }

     

    Clearly, we are more interested in the implementation object.

    The implementation

    Part of the implementation object looks like this:

        class PocoImplementation : IPocoFacade
        {
            private readonly object instance;

            public PocoImplementation(object instance)
            {
                this.instance = instance;
            }
        }

    You can see it implements the IPocoFacade interface. It expects, in its constructor, our business object that was decorated with the Poco attribute.

    We now just look at the code the ADO.NET team has given us to see how to implement these interfaces.

    The IEntityWithKey interface, for instance, is quite easy:

            #region key
            EntityKey _entityKey = null;

            // Define the EntityKey property for the class.
            EntityKey IEntityWithKey.EntityKey
            {
                get
                {
                    return _entityKey;
                }
                set
                {
                    // Set the EntityKey property, if it is not set.
                    // Report the change if the change tracker exists.
                    if (_changeTracker != null)
                    {
                        _changeTracker.EntityMemberChanging("-EntityKey-");
                        _entityKey = value;
                        _changeTracker.EntityMemberChanged("-EntityKey-");
                    }
                    else
                    {
                        _entityKey = value;
                    }
                }
            }
            #endregion

    Done.
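
    For completeness, the change-tracker part of the same implementation object is not much more work. A sketch in the same pattern (the `_changeTracker` field is the one the property setter above already uses):

```csharp
        // Sketch of the IEntityWithChangeTracker part of the implementation object.
        private IEntityChangeTracker _changeTracker = null;

        void IEntityWithChangeTracker.SetChangeTracker(IEntityChangeTracker changeTracker)
        {
            // the Entity Framework hands us its tracker;
            // member changes are reported to it, as shown in the EntityKey setter
            _changeTracker = changeTracker;
        }
```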

    Since I first want to know if there is interest, I have not implemented relationships and complex types yet.

    Thursday, 21 February 2008 12:25:32 (Romance Standard Time, UTC+01:00)  #    Comments [5]  |  Trackback

    This is the third in a series about using Workflow Foundation to control your UI logic in a WPF application. The full table of contents:

  • Workflow as controller: Introducing <M,V,C> where M: ViewModel, V : WPF, C : WF
  • Part II, starting the application, and the adapter
  • Intermezzo: new sample application
  • Part III, your first view
  • Part IV, decoupling view from controller
  • Part V, marshalling commands from WPF to WF
  • Part VI, Injecting a controller in a subview / workspace
  • Part VII, IOC on the cheap: injecting and retrieving objects
  • Part VIII, Broadcasting for all to see
    Recap

    In the first post, the complete solution was presented. I am presenting a solution that uses workflow as the controller part in your MVC-inspired WPF application. It is inspired by the thought that you do not need complex frameworks, because WPF already gives you great power (routed eventing, resources). So no IoC container is used, no event aggregator, etcetera: it is all taken care of by WPF and WF, a natural fit.
    The solution is very decoupled, and I feel it is a great advantage to be able to visualize your control logic.
    In the previous post, I showed a wizard style application.

    This post follows Part II, starting the application and the adapter. In that post we started our shell and explained how the adapter communicates with a workflow instance, how it can react to commands (normal RoutedUICommands from WPF controls) and react to events from the command service.

    We will now continue, by looking at a simple view.

    View responsibility

    Let's first look at how we perceive a view in the MVC paradigm.
    A view should be nothing more than the visualization of your data. The only authority it has is the authority to decide how to represent a piece of data on the screen. That means it should not contain any business logic. Be very strict about this: the responsibility of a view is the visualization of data.

    So, let's take a look at a common scenario where these lines may blur.
    Take a list of products and let's say that if we have a new product-line that has been introduced within the past month, we want to use another background color, to alert our customers to this new hot product.

    We could perhaps solve this in our binding (let's just assume that is easy), but we should not do that. That would mean the view is deciding when a product is new and hot. It should not.
    The only thing the view should do is create the two visual representations of products and use a datatemplate selector to decide which one to apply. The datatemplate selector could be injected by our controller. Another way to solve this is for the controller to put this information in the ViewModel itself: add a boolean 'new' which the view uses.
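
    A sketch of such a selector might look like this. The viewmodel type and its boolean property are assumptions; the point is that the decision data comes from the controller, not the view:

```csharp
// Hypothetical template selector: the view supplies two looks,
// the controller-populated viewmodel decides which one applies.
public class HotProductTemplateSelector : DataTemplateSelector
{
    public DataTemplate NormalTemplate { get; set; }
    public DataTemplate HotTemplate { get; set; }

    public override DataTemplate SelectTemplate(object item, DependencyObject container)
    {
        ProductViewModel product = item as ProductViewModel;
        if (product != null && product.IsNew)   // 'IsNew' was set by the controller
            return HotTemplate;
        return NormalTemplate;
    }
}
```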

    If you do not do it this way, and you embed logic inside your view, you will quickly end up with scattered logic, never knowing where something is defined. Changing rules becomes hard and your application will break at some point.
    Now, I understand, and have done many times, that sometimes you just do not have the time to do it right. But always remember that in the long run, you will get burned. Try to set up a situation where it is easy to do the right thing, by making it easy to use datatemplate selectors or use the viewmodel.

    View decoupling

    MVC advocates not letting your view have any knowledge whatsoever of the controller. It does this because tight coupling of the view to the controller will destroy maintainability and flexibility. If you couple tightly, you are unable to swap controllers or views. Most importantly, if you couple the view to the controller (by making it call specific methods on the controller), it becomes harder to maintain and refactor.

    There are certainly approaches that do couple view to controller. If you look at the very powerful Caliburn framework, you will see that the framework has 'action messages' that directly call methods on the controller. I have yet to work with it extensively, so I cannot be sure, but it feels to me there should be a very explicit layer between view and controller, which defines how the view will communicate with the controller.

    Our goal in this project is to use the tools WPF provides to communicate with the rest of the system. We do so with commands.
    A command can be seen as a message that is passed up (and down) the visual tree. Since our adapter lives just above the view and is part of the visual tree, it has the opportunity to react to the command.

    When building a view, you should also explicitly define all the interactions that view expects to have with the outside world. Do that in a static class like so:

        public static class ImportantWizardInteractions
        {
            public static readonly RoutedUICommand Next;
            public static readonly RoutedUICommand Back;

            public static readonly RoutedUICommand GotoClientScreen;
            public static readonly RoutedUICommand GotoAdresScreen;
            public static readonly RoutedUICommand GotoRoleScreen;
            public static readonly RoutedUICommand GotoCarScreen;

            public static readonly RoutedUICommand Save;
            public static readonly RoutedUICommand SaveYes;
            public static readonly RoutedUICommand SaveNo;


            static ImportantWizardInteractions()
            {
                Next = new RoutedUICommand("Next", "Next", typeof(ImportantWizardInteractions));
                Back = new RoutedUICommand("Back", "Back", typeof(ImportantWizardInteractions));

                GotoClientScreen = new RoutedUICommand("GotoClientScreen", "GotoClientScreen", typeof(ImportantWizardInteractions));
                GotoAdresScreen = new RoutedUICommand("GotoAdresScreen", "GotoAdresScreen", typeof(ImportantWizardInteractions));
                GotoRoleScreen = new RoutedUICommand("GotoRoleScreen", "GotoRoleScreen", typeof(ImportantWizardInteractions));
                GotoCarScreen = new RoutedUICommand("GotoCarScreen", "GotoCarScreen", typeof(ImportantWizardInteractions));

                Save = new RoutedUICommand("Save", "Save", typeof(ImportantWizardInteractions));
                SaveYes = new RoutedUICommand("SaveYes", "SaveYes", typeof(ImportantWizardInteractions));
                SaveNo = new RoutedUICommand("SaveNo", "SaveNo", typeof(ImportantWizardInteractions));


            }
        }

    By being explicit about your interactions like this, you will be able to unit test more easily as well.

    Use it in your view like this:

    <Button Command="{x:Static local:ImportantWizardInteractions.GotoClientScreen}">Client</Button>

    A command is great for buttons and other stuff, but how do you, for instance, communicate that a customer was selected in a listview?

    1. Bind the listview's selected item to a SelectedCustomer property on the viewmodel; when a customer is selected, that property changes on the viewmodel, and the controller can pick that up.
    2. More explicitly: use the SelectionChanged event and let the codebehind of your view act as a translation layer to the outside world:

              private void ListBox_SelectionChanged(object sender, SelectionChangedEventArgs e)
              {
                  // send a command
                  CustomerQueueInteractions.SelectNewCustomer.Execute(e.AddedItems, this);
              }

    You can call me on that; it's not a very elegant solution. I'd rather be able to do away with the codebehind of a view entirely. But using the codebehind is actually fine: it is part of the view, and it should not be allowed to do anything other than translate view-specific events into commands.
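    The first option, binding the selection to a viewmodel property, can be sketched in plain C#: the view binds SelectedItem to SelectedCustomer, and the controller subscribes to PropertyChanged and reacts. The names below are illustrative, not from the project:

```csharp
using System;
using System.ComponentModel;

// Minimal sketch: a viewmodel that raises PropertyChanged when the
// selection changes, so the controller can observe it without the view
// ever calling the controller directly.
public class CustomerQueueViewModel : INotifyPropertyChanged
{
    private object selectedCustomer;

    public event PropertyChangedEventHandler PropertyChanged;

    public object SelectedCustomer
    {
        get { return selectedCustomer; }
        set
        {
            selectedCustomer = value;
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs("SelectedCustomer"));
        }
    }
}
```

    The controller simply does `vm.PropertyChanged += ...` and inspects `SelectedCustomer`; the view stays completely ignorant of who is listening.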

    So, how to show a view

    Well, building a view is nothing more than deriving from UserControl and doing your thing: using commands and going wild on the visuals (try to animate everything, your client will love it).

    Now it depends on how you want to show it.

    1. Let's say you're building a project where you don't care about fancy composition and pluggable modules, and you just want your shell to show your view. The shell might have the following code:
      <c:GenericWorkflowAdapter WorkflowController="{x:Type logic:AControllerForYourView}">
          <v:YourView/>
      </c:GenericWorkflowAdapter>

      I am assuming you do want a controller around your view.

      Here a controller is instantiated and its content is set to your view. Easy.
    2. Let's say we want our controller to choose which view it uses. That seems to me the nicest way to go about it. We will again put a controller in the visual tree, but not set a view yet:

       <c:GenericWorkflowAdapter WorkflowController="{x:Type logic:ImportantClientWizard}" />

      Then, in the workflow, we might use some fancy logic to determine which view we will show (perhaps looking at the role of the user). To actually set a view, we will use the SetMainContentActivity. Drag that to the canvas and select a type.

      image

      Selecting a type is easy, because of the typebrowser I included:
      image 

    3. Yet another way, suitable for 'subviews', is to define a contentpresenter on some view anywhere:

       <ContentPresenter ContentTemplate="{DynamicResource CurrentWizardScreen}" />


    And use the InjectViewAsDataTemplate activity in a controller to place a contenttemplate in the resources section with the same resourcekey.
    image

     

    I'll follow up with another take on decoupling the view from the controller, by looking at the SetDataContext activity and talking a bit more about the viewmodel.

    Thursday, 21 February 2008 11:57:02 (Romance Standard Time, UTC+01:00)  #    Comments [4]  |  Trackback
     Wednesday, 20 February 2008

    This is the third of a series about how to go about using postcompilation in your solutions. You can read it as a tutorial on how to use PostSharp. I am very much new to that framework, but the power it provides could seriously change how you build your applications. While working on the EF contrib project, I had to dive into PostSharp, and I hope to share some of the things I learned along the way.

    This post introduces the first real step I took: the compound aspect.

    The full table of contents:

  • Introducing Entity Framework Contrib- Easy IPoco implementation V 0.1
  • Part II, Postsharp Core versus Postsharp Laos
  • Part III, the compound aspect
  • Part IV, the PocoInterfaceSubaspect
  • Part V, hooking up the weaver
  • Part VI, the EdmScalarWeaver
  • Recap

    We wish to create an attribute that can be placed on top of our ordinary Poco class, that will magically transform it into a class that implements the 3 IPoco interfaces. These are needed by the Entity Framework to do its work. We will use PostSharp to do this.
    Post 1 introduced the project and post 2 introduced PostSharp.

    The Compound aspect

    I have created an attribute class that can be placed on top of other classes, like so:

        /// <summary>
        /// <para>
        /// Attribute that can be used to decorate normal POCO classes with. It is used to start off a post compilation phase
        /// that will modify the IL of the class. After this phase, the class will implement:
        /// <list type="">
        /// <item>INotifyPropertyChanged</item>
        /// <item>IEntityWithChangeTracker</item>
        /// <item>IEntityWithKey</item>
        /// <item>IEntityWithRelationships</item>
        /// </list>
        /// </para>
        /// <para>
        /// It will also place EdmScalar attributes on your properties.
        /// </para>
        /// <para>This results in a type that is completely ready for consumption by the EntityFramework</para>
        /// </summary>
        [AttributeUsage(AttributeTargets.Class, AllowMultiple = false, Inherited = false)]
        [MulticastAttributeUsage(MulticastTargets.Class, AllowMultiple = false)]
        public sealed class PocoAttribute : CompoundAspect
        {
    ... implementation
        }

    As you can see it inherits from the compound aspect class, from the PostSharp.Laos assembly.

    The compound aspect basically allows you to define other aspects to do the job for you. This is very nice when you want to do more things, but only want to use one aspect.

    By overriding the ProvideAspects method, we can add our other aspects:

      1         public override void ProvideAspects(object element, LaosReflectionAspectCollection collection)
      2         {
      3             // Get the target type.
      4             Type targetType = (Type)element;
      5
      6             // implement the INotifyPropertyChanged interface on the class
      7             collection.AddAspect(targetType, new AddNotifyPropertyChangedInterfaceSubAspect());
      8             // implement the three IPOCO interfaces on the class
      9             collection.AddAspect(targetType, new PocoInterfacesSubAspect());
    10             // inspect the complete class and add EDM scalar attributes to the properties
    11             collection.AddAspect(targetType, new EDMAttributesSubAspect(this.EDMContainerName, Name, NamespaceName, PathToConfigFile));
    12
    13             // iterate the properties
    14             foreach (PropertyInfo property in targetType.UnderlyingSystemType.GetProperties())
    15             {
    16                 if (property.DeclaringType == targetType && property.CanWrite)
    17                 {
    18                     MethodInfo method = property.GetSetMethod();
    19
    20                     if (!method.IsStatic)
    21                     {
    22
    23                         // throw notifypropertychanged events
    24                         collection.AddAspect(method, new OnPropertySetNotifyPropertyChangedSubAspect(property.Name, this.AspectPriority));
    25
    26                         // TODO: possibly refactor to only include edm properties
    27                         // call the changetracker
    28                         collection.AddAspect(method, new OnPropertySetChangeTrackSubAspect(property.Name, this.AspectPriority));
    29                     }
    30                 }
    31             }
    32         }

    As you can see, on line 4 I cast the element parameter to a type. That is the type we have adorned with our attribute (your 'Person' Poco class, for instance). Then, I specify that I want 3 other aspects to work on this type!

    • The AddNotifyPropertyChangedInterfaceSubAspect will implement the INotifyPropertyChanged interface. (Note: this aspect is added just for simplicity; it has nothing to do with IPoco, so I might remove it.)
    • The PocoInterfacesSubAspect will implement the 3 interfaces (see the following post)
    • The EDMAttributesSubAspect will put EdmScalar attributes on top of our properties, needed by EF to do its job

    After that, I loop through the properties of our targetType and add aspects to their setters. These will inject calls into the setters to raise the PropertyChanged event and to let the EF changetracker know that the property was changed.

    That's all the work this compound aspect does. It just provides aspects to the correct codeblocks. The real work is done inside of these aspects and will be explained in following posts.

    Wednesday, 20 February 2008 13:34:36 (Romance Standard Time, UTC+01:00)  #    Comments [5]  |  Trackback

    Scott Guthrie announces the .NET 3.5 client product roadmap here.

    Highlights are improved bootstrapping of 3.5 for your client applications and improved cold startup times.

    But the real news for the WPF addicts: the dropshadow and blur bitmap effects will now be hardware accelerated! That is a big thing. These effects are practically unusable at the moment because they render in software, but once hardware accelerated, you will be able to do some great stuff. He also hints at a new effects API and data virtualization support.

    On top of that, he announces that there is a real DataGrid control coming, and my personal favorite: a Calendar/Datepicker control.

    I can easily live without a datagrid: grids are so easy to create with the listview that I don't see a reason to build a new control. However, the lack of an official datepicker is inexcusable for enterprise applications. I'm happy to see they are working on it!

    Wednesday, 20 February 2008 12:16:33 (Romance Standard Time, UTC+01:00)  #    Comments [0]  |  Trackback

    This is an intermezzo from the MVC with WF series. I have added a new sample to the project, which I hope demonstrates the flexibility of using WF.

    The rest of this series can be found here:

  • Workflow as controller: Introducing <M,V,C> where M: ViewModel, V : WPF, C : WF
  • Part II, starting the application, and the adapter
  • Intermezzo: new sample application
  • Part III, your first view
  • Part IV, decoupling view from controller
  • Part V, marshalling commands from WPF to WF
  • Part VI, Injecting a controller in a subview / workspace
  • Part VII, IOC on the cheap: injecting and retrieving objects
  • Part VIII, Broadcasting for all to see

    It is a simple 4 screen 'wizard' where logic determines that it should skip one screen. Also, when the save button is hit, a popup will show that asks if you are sure. If you are, you will be sent to the first screen, otherwise you will return to your last screen. It has buttons on the left that determine where you can go, as well as 'next' and 'previous' buttons.
    All of this was done with a minimum of code and a maximum of dragging and dropping activities. The whole point is that when you get a new feature request ("We have a new screen that sits in between the client and adres screen!!!"), WF makes it dead simple to add it.

    I have uploaded the executable here, just in case you don't feel like opening up the project and building yourself.
    The application looks like this:

    image

    And when you reach the 'Car' screen, it will look like this:

    image

    Hitting the Save button here:

    image

    A few things to notice about this sample:

    • There are 2 controllers doing their job here:
      • The usersettings controller, with a view on top. It allows you to check a checkbox. Doing so makes you an administrator. Notice how, when you do so, you are able to browse to the 'Role' screen. You see, if you are not an administrator, you are not allowed to enter the role screen.
      • The 'ImportantWizardController', which handles the mainview. It shows a few buttons (Client, Adres, Role and Car) on the left, which will allow you to go to the screens you have already passed. It also shows a previous and next screen button. Finally, it defines a contentpresenter where our subviews will be injected.
    • The buttons react immediately. Go to the Car screen, and then check your checkbox to make yourself Administrator. This means you have the right to visit the Role screen, and it immediately pops up.
    • Almost no codebehind. The only codebehind is on the ImportantWizardControl to 'load data' (actually returning an empty client, but you get the drift).
      It felt really cool to build this plumbing without coding.

    Let's look at the steps to produce this application:

    1. I added a Controller project (type workflow), a Domain project (with a few very simple classes), a shell project that will be used to start us up and a view project which holds the views we are going to use:
      image
    2. I created a ClientService and a UserService class which will be classes used by our workflows:
          [Serializable]
          public class UserService
          {
              public bool IsAdministrator { get; set; }

          }
          [Serializable]
          public class ClientService
          {
              public Client CurrentClient { get; set; }

          }
    3. Then the shell was used to inject our main view and also inject a global userservice class:

        1 <Window x:Class="EditLogicShell.Window1"
        2     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        3     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        4     xmlns:logic="clr-namespace:EditLogicControllers;assembly=EditLogicControllers"      
        5     xmlns:c="clr-namespace:ControllersAdapters;assembly=ControllersAdapters"
        6     Title="Window1" Height="300" Width="300">
        7     <Window.Resources>
        8         <logic:UserService x:Key="globalUserService" />
        9     </Window.Resources>
       10     <StackPanel>
       11         <Border BorderThickness="1" BorderBrush="Black" Background="Beige">
       12             <c:GenericWorkflowAdapter WorkflowController="{x:Type logic:ManageUserSettingsController}" />
       13         </Border>
       14         
       15         <c:GenericWorkflowAdapter WorkflowController="{x:Type logic:ImportantClientWizard}" />
       16     </StackPanel>
       17 </Window>

      Note line 8, where we place a UserService instance in the resources section
      On Line 12 we start our UserSettings controller
      On Line 15, we start our main wizard.

    4. Let's not go into the usersettings controller; it's just too simple. It injects a view and sets the datacontext to the userservice class, which it retrieves using the 'RetrieveObjectFromResources' activity.

    5. I then asked our designer (yup, that was me too... could you tell??) to design our individual views. Everywhere the designer knew he had to interact with the system, a command was created in a static class. That class turned out to be like this:

          public static class ImportantWizardInteractions
          {
              public static readonly RoutedUICommand Next;
              public static readonly RoutedUICommand Back;

              public static readonly RoutedUICommand GotoClientScreen;
              public static readonly RoutedUICommand GotoAdresScreen;
              public static readonly RoutedUICommand GotoRoleScreen;
              public static readonly RoutedUICommand GotoCarScreen;

              public static readonly RoutedUICommand Save;
              public static readonly RoutedUICommand SaveYes;
              public static readonly RoutedUICommand SaveNo;


              static ImportantWizardInteractions()
              {
                  Next = new RoutedUICommand("Next", "Next", typeof(ImportantWizardInteractions));
                  Back = new RoutedUICommand("Back", "Back", typeof(ImportantWizardInteractions));

                  GotoClientScreen = new RoutedUICommand("GotoClientScreen", "GotoClientScreen", typeof(ImportantWizardInteractions));
                  GotoAdresScreen = new RoutedUICommand("GotoAdresScreen", "GotoAdresScreen", typeof(ImportantWizardInteractions));
                  GotoRoleScreen = new RoutedUICommand("GotoRoleScreen", "GotoRoleScreen", typeof(ImportantWizardInteractions));
                  GotoCarScreen = new RoutedUICommand("GotoCarScreen", "GotoCarScreen", typeof(ImportantWizardInteractions));

                  Save = new RoutedUICommand("Save", "Save", typeof(ImportantWizardInteractions));
                  SaveYes = new RoutedUICommand("SaveYes", "SaveYes", typeof(ImportantWizardInteractions));
                  SaveNo = new RoutedUICommand("SaveNo", "SaveNo", typeof(ImportantWizardInteractions));


              }
          }
      And on individual screens, the commands were used like this:
                  <Border Background="Beige" BorderThickness="1" BorderBrush="Black" DockPanel.Dock="Left" Width="120" >
                      <ListView>
                          <Label FontWeight="Bold">Previous screens</Label>
                          <Button Command="{x:Static local:ImportantWizardInteractions.GotoClientScreen}">Client</Button>
                          <Button Command="{x:Static local:ImportantWizardInteractions.GotoAdresScreen}">Adres</Button>
                          <Button Command="{x:Static local:ImportantWizardInteractions.GotoRoleScreen}">Role</Button>
                          <Button Command="{x:Static local:ImportantWizardInteractions.GotoCarScreen}">Car</Button>
                      </ListView>
                  </Border>
      (This is the list of buttons that are shown on the left hand side of the screen)
    6. The views were passed to the developer (guess who) and the following state machine was created:
      image 

      Obviously a thing of beauty.

      1. The state initialization will retrieve the userservice, load data, set our maincontent to the mainView and set our next state to clientDetails

      2. The clientdetail has an initialization as well: it will set the datacontext to our customer and inject the ClientView as a datatemplate. Then it waits for only one command: Next. If that is triggered, it will simply move to the AdresDetails State.
        When moving out of a state, the state finalization is triggered, which will remove the view from the resources.
        Note how great it is to never have to think about that cleanup code again; it is always executed when moving out of a state.

      3. The adresDetails state has a few more commands it will listen to. When moving 'Next', a piece of logic is executed:
        image
        There is a declarative rule in the IF/ELSE that goes a little something like this: this.GlobalUserService.IsAdministrator == True
        That rule is automatically put in the rules repository and can be used by others. It determines if the next screen will be the Role screen or the CarDetails screen.

      4. Role is simple.

      5. CarDetails also reacts to the Save command. When it gets triggered, it will inject our popup into the resources section, and move on to the save state.

      6. The Save state will react to 'SaveYes' and 'SaveNo'. It will remove the popup from the resources, and go to another state.

    7. The views are dead simple, just binding to properties. However, the ImportantWizardMainView does require our attention. It has this definition for our subview:

                  <!-- our subview, uses name: CurrentWizardScreen -->
                  <ContentPresenter 
                  Content="{Binding RelativeSource={RelativeSource FindAncestor, 
                  AncestorType={x:Type local:ImportantWizardMainView}, AncestorLevel=1}, Path=DataContext}" 
                  ContentTemplate="{DynamicResource CurrentWizardScreen}" />


      Apparently, when using a ContentTemplate, the datacontext is not inherited, so I have to make the presenter react to the changing datacontext of our main screen. When the DataContext of ImportantWizardMainView changes, the Content of our presenter changes to match it. (Leave a comment if you know a simpler way to do this.)

    8. Also interesting is that I used a Grid on that view, with two children that overlay each other. The other child is our popup screen:

              <!-- our popup lives on top of that -->
              <ContentPresenter ContentTemplate="{DynamicResource PopupScreen}" />

      When we set a datatemplate with the key PopupScreen, it will be shown on top of our regular screen. I like it!

    I have added a new activity, InjectViewAsDataTemplate. We already had the InjectControllerAsDataTemplate, but there are times you don't want a whole controller.

    I've replaced the original project file with the most recent. It can be found here.
    If you are interested in seeing more about this subject, please leave a short comment!


    Wednesday, 20 February 2008 11:55:27 (Romance Standard Time, UTC+01:00)  #    Comments [18]  |  Trackback
     Tuesday, 19 February 2008

    This is the second of a series about how to go about using postcompilation in your solutions. You can read it as a tutorial on how to use PostSharp. I am very much new to that framework, but the power it provides could seriously change how you build your applications. While working on the EF contrib project, I had to dive into PostSharp, and I hope to share some of the things I learned along the way.

    This post quickly introduces PostSharp, before we move on to the real stuff!

    The full table of contents:

  • Introducing Entity Framework Contrib- Easy IPoco implementation V 0.1
  • Part II, Postsharp Core versus Postsharp Laos
  • Part III, the compound aspect
  • Part IV, the PocoInterfaceSubaspect
  • Part V, hooking up the weaver
  • Part VI, the EdmScalarWeaver
  • Recap

    PostSharp

    The PostSharp home introduces PostSharp as follows:

    PostSharp is a tool that can reduce the number of lines of code and improve its logical decoupling. Therefore its helps you delivering higher stability, cleaner design, and cheaper source code maintenance

    And best of all, PostSharp is free and open source. Yes, even for commercial use

    Basically, it allows you to use attributes on top of code to indicate that after the normal Visual Studio compilation, PostSharp should do 'something' to the code. That 'something' could be anything you want. The result is a compiled assembly that does more than you would expect from the source code. This is a good thing when you have 'code noise': code that might be important, but distracts from the real work.

    Code noise could be your logging mechanism, or your transaction mechanism. In my case, I did not want to implement the IPoco interfaces that EntityFramework needs, in order to make my business objects work with EntityFramework. I want my business objects to represent a person, car or whatever, and not have to deal with the data access logic at all.

    The simplest example you can think of, is shown on the frontpage:

    public class SimplestTraceAttribute : OnMethodBoundaryAspect
    {
        public override void OnEntry(MethodExecutionEventArgs eventArgs)
        {
            Trace.TraceInformation("Entering {0}.", eventArgs.Method);
            Trace.Indent();
        }

        public override void OnExit(MethodExecutionEventArgs eventArgs)
        {
            Trace.Unindent();
            Trace.TraceInformation("Leaving {0}.", eventArgs.Method);
        }
    }

    By adding a [SimplestTrace] attribute on top of your code, you get instant tracing information, without actually seeing it in your code. The fun thing about PostSharp is that this code is actually in your assembly after post-compilation, as opposed to other AOP frameworks, which do it at runtime.

    Laos versus Core
    PostSharp Core offers a full representation of your code, a bit like reflection, but it can be hard to work with. That is why PostSharp Laos was created: a 'plugin' on top of the core functionality that abstracts away most of the hard stuff and leaves you with ready-to-implement aspects.

    I found it wildly confusing the first time I came across the two parts of PostSharp. Laos is such a high-level abstraction that you use it quite differently from Core. With Core, you have to spin up your own weaver; in Laos you never see a weaver.
    (A weaver is a class that actually injects IL into your assembly.)

    When you use Laos, hooks exist to use its own weaver. That weaver knows how to deal with Laos aspects, so you can use the Laos abstractions without any knowledge of IL or weaving.

    The example shown uses the OnMethodBoundaryAspect from Laos. The Laos weaver injects the necessary IL into every method that matches your trace criteria, calling the OnEntry and OnExit methods you defined. There are quite a few aspects ready to inherit from; I urge you to look at the documentation to find out which.

    If you were to implement that functionality using the Core library, you would have to inject all the IL yourself. It would, however, give you the opportunity to inject the Trace calls directly into the methods, instead of the easier calls to your aspect's methods.

    One very interesting aspect that Laos offers, is the CompositionAspect, which allows you to set a specific interface to implement and give an implementation object that is called for every defined method on the interface. I use it for the three IPoco interfaces.

    In short: Laos is a very high-level abstraction that will get you very far. In some cases you need to take it a little further, and you will need Core.

     

    In the next couple of posts, I will show both the Laos aspects and the Core aspect, how they were applied and how they do their job.

    Tuesday, 19 February 2008 12:28:26 (Romance Standard Time, UTC+01:00)  #    Comments [1]  |  Trackback

    This is the second of a series about using Workflow Foundation to control your UI Logic in a WPF application. The full table of contents:

  • Workflow as controller: Introducing <M,V,C> where M: ViewModel, V : WPF, C : WF
  • Part II, starting the application, and the adapter
  • Intermezzo: new sample application
  • Part III, your first view
  • Part IV, decoupling view from controller
  • Part V, marshalling commands from WPF to WF
  • Part VI, Injecting a controller in a subview / workspace
  • Part VII, IOC on the cheap: injecting and retrieving objects
  • Part VIII, Broadcasting for all to see

    I thought it best to just put out that TOC, to force myself to actually write these short posts ;-)

    Recap

    In the previous post, the complete solution was presented. I am presenting a solution that uses workflow as the controller part in your MVC-inspired WPF application. It is inspired by the idea that you do not need complex frameworks, because WPF already gives you great power (routed eventing, resources). So, no IOC container is used, no event aggregator, etcetera: it's all taken care of by WPF and WF, a natural fit.
    The solution is very decoupled, and I feel it's a great advantage to be able to visualize your control logic.

    Starting the application / Shell

    The term 'Shell' is used to indicate a startable 'host' for your application. In WPF, that is probably your App.xaml view. In there, you point to a startupUri of a window. We do not need anything different for our application, but we do need to start the workflow runtime.

    I have chosen not to build a generic application start method, because I am still thinking about threading. For now, I use the ManualWorkflowSchedulerService to let the workflow instances do their thing. Normally the workflow runtime uses a worker thread to execute workflow instances in the background. That means that when you send a command to the workflow, it runs on a background thread. That sounds great, but it will give you some pain when changing data that is bound to the UI thread. For this first version, I did not want that pain, so I used the ManualWorkflowSchedulerService: the workflow instance will do nothing until we explicitly donate our (UI) thread to it.

    Starting the runtime is simple:

      1         public App()
      2         {
      3             // start a workflow runtime
      4             workflowRuntime = new WorkflowRuntime();
      5
      6             ManualWorkflowSchedulerService manualSvc = new ManualWorkflowSchedulerService(false);
      7             workflowRuntime.AddService(manualSvc);
      8
      9             ExternalDataExchangeService dataSvc = new ExternalDataExchangeService();
    10             workflowRuntime.AddService(dataSvc);
    11             dataSvc.AddService(new CommandService(workflowRuntime));    // add our generic communication service
    12
    13
    14
    15             workflowRuntime.StartRuntime();
    16             workflowRuntime.WorkflowTerminated += new EventHandler<WorkflowTerminatedEventArgs>(workflowRuntime_WorkflowTerminated);
    17             workflowRuntime.WorkflowAborted += new EventHandler<WorkflowEventArgs>(workflowRuntime_WorkflowAborted);
    18             workflowRuntime.WorkflowCompleted += new EventHandler<WorkflowCompletedEventArgs>(workflowRuntime_WorkflowCompleted);
    19
    20
    21             ControllersAdapters.WorkflowRuntimeHolder.SetCurrentRuntime(workflowRuntime);
    22
    23             this.Exit += new ExitEventHandler(App_Exit);
    24         }

    At lines 6 and 7, the ManualWorkflowSchedulerService is indeed added to the runtime.
    At line 11, our own communication class (CommandService) is added to the runtime. You can think of the runtime as a global object container: whenever we want to use that CommandService singleton, we can just ask the runtime for it.
    Line 15 starts the runtime, and lines 16 through 18 hook up handlers to certain events. We'll cover them next.
    Line 21 sets this runtime on a static property for the controller adapters to fetch. A quick and dirty solution.
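    The holder need not be more than a static property; a minimal sketch (member names other than SetCurrentRuntime are my assumption):

        public static class WorkflowRuntimeHolder
        {
            public static WorkflowRuntime Current { get; private set; }

            public static void SetCurrentRuntime(WorkflowRuntime runtime)
            {
                Current = runtime;
            }
        }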

    The events that we subscribe to are handled as follows:

        void workflowRuntime_WorkflowCompleted(object sender, WorkflowCompletedEventArgs e)
        {
            ICommandService cmdsvc = workflowRuntime.GetService(typeof(ICommandService)) as ICommandService;

            cmdsvc.SendMessage(new InstanceWasRemovedMessage(e.WorkflowInstance.InstanceId));
        }

        void workflowRuntime_WorkflowAborted(object sender, WorkflowEventArgs e)
        {
            ICommandService cmdsvc = workflowRuntime.GetService(typeof(ICommandService)) as ICommandService;

            cmdsvc.SendMessage(new InstanceWasRemovedMessage(e.WorkflowInstance.InstanceId));
        }

        void workflowRuntime_WorkflowTerminated(object sender, WorkflowTerminatedEventArgs e)
        {
            ICommandService cmdsvc = workflowRuntime.GetService(typeof(ICommandService)) as ICommandService;

            cmdsvc.SendMessage(new InstanceWasRemovedMessage(e.WorkflowInstance.InstanceId));
        }

        void App_Exit(object sender, ExitEventArgs e)
        {
            workflowRuntime.StopRuntime();
        }

    As you can see, I fetch the command service from the runtime and ask it to send a message. The command service will 'broadcast' this message to all living controller adapters. When a workflow is finished, either by termination or just because its process completed, we need to let the adapter know, so that it can unsubscribe from the command service's events.

    The adapter

    The GenericWorkflowAdapter is a WPF control that handles the communication between WPF and WF. We will see pieces of it in the upcoming posts, but we'll need to go into a little more detail here.

        /// <summary>
        /// This is a WPF type that can be placed anywhere in your UI tree. It can be configured with a workflow type.
        /// When it is, it will instantiate the Workflow.
        /// This adapter will then be able to pick up WPF commands (RoutedUICommands) and send them to the workflow, as well
        /// as listen to events coming from the runtime, the command service and the workflow instance
        /// </summary>
        public class GenericWorkflowAdapter : ContentControl, IWeakEventListener
        {
               ...
        }

    As you can see, it is a ContentControl. The workflow controller is able to place an arbitrary view as its content.
    It has one dependency property, WorkflowControllerProperty (of type Type), which fires the SetWorkflowController method when it is set.
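    A dependency property with a change callback is the standard WPF way to do that; a sketch of how the property could be declared (the exact wiring is my assumption):

        public static readonly DependencyProperty WorkflowControllerProperty =
            DependencyProperty.Register(
                "WorkflowController",
                typeof(Type),
                typeof(GenericWorkflowAdapter),
                new PropertyMetadata(null,
                    (d, e) => ((GenericWorkflowAdapter)d).SetWorkflowController((Type)e.NewValue)));

        public Type WorkflowController
        {
            get { return (Type)GetValue(WorkflowControllerProperty); }
            set { SetValue(WorkflowControllerProperty, value); }
        }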

      1         private void SetWorkflowController(Type type)
      2         {
      3             // actually start the controller!
      4             instance = runtime.CreateWorkflow(type);
      5             instance.Start();
      6
      7             // allow it to do it's thing
      8             threadSvc.RunWorkflow(instance.InstanceId);
      9
    10             Debug.WriteLine(String.Format("Adapter has started workflow instance {0}, of type {1}", instance.InstanceId, type.ToString()));
    11
    12
    13             // we will filter commands to only manage commands that we have defined in our workflow
    14             // so we have to walk recursively through all activities
    15             IEnumerable<System.Workflow.ComponentModel.Activity> flattenedActivities =
    16                 (instance.GetWorkflowDefinition() as System.Workflow.ComponentModel.CompositeActivity).EnabledActivities.
    17                 SelectRecursiveSimple(activity => (activity is System.Workflow.ComponentModel.CompositeActivity) ?
    18                     ((System.Workflow.ComponentModel.CompositeActivity)activity).EnabledActivities :
    19                     new System.Collections.ObjectModel.ReadOnlyCollection<System.Workflow.ComponentModel.Activity>(new List<System.Workflow.ComponentModel.Activity>()))
    20             ;
    21
    22             // let's get the handlecommands
    23             var commands = flattenedActivities.Where(act => act is HandleCommand).Select(act => ((HandleCommand)act).CommandName)
    24             ;
    25
    26             implementedCommands = new ReadOnlyCollection<string>(commands.ToList());
    27
    28             SetupCommandSinks();
    29         }

    As you can see, this method goes to the workflow runtime and asks it to spin up a workflow instance. Then it donates its thread to actually 'run' the instance. The workflow instance probably has initialization code attached to it; that code gets run at this point.

    At line 15, I use LINQ to walk through every activity that is defined in the workflow and look at the HandleCommand activities. These are activities that wait for a command and act upon it. I need to know which commands this workflow might respond to, so I create a read-only collection of them. Later, we will only let the adapter pass commands that are actually implemented by the workflow!
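    SelectRecursiveSimple is not a BCL method; an extension along these lines would do the job (my own sketch of its probable shape):

        public static IEnumerable<T> SelectRecursiveSimple<T>(
            this IEnumerable<T> source, Func<T, IEnumerable<T>> childSelector)
        {
            foreach (T item in source)
            {
                yield return item;              // yield the node itself...
                foreach (T child in childSelector(item).SelectRecursiveSimple(childSelector))
                    yield return child;         // ...then all of its descendants
            }
        }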

    At line 28, there is a call to setup the command sinks:

            private void SetupCommandSinks()
            {
                // set up command sinks
                CommandManager.AddExecutedHandler(this, CmdExecuted);
                CommandManager.AddCanExecuteHandler(this, CmdCanExecute);
            }

    Here you see the simple code that registers this ContentControl to handle RoutedUICommands from WPF. As you can probably guess, when a command reaches these handlers, it is filtered against the 'implementedCommands' collection we defined earlier, and if it is implemented AND currently enabled, the command is posted to the workflow.
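    The handlers themselves can stay small; a hedged sketch of the filtering (implementedCommands, instance and threadSvc come from the code above, RaiseCommand is a hypothetical helper on the command service):

        private void CmdCanExecute(object sender, CanExecuteRoutedEventArgs e)
        {
            RoutedUICommand cmd = e.Command as RoutedUICommand;
            if (cmd != null && implementedCommands.Contains(cmd.Name))
            {
                e.CanExecute = true;    // the real code would also evaluate the workflow rules
                e.Handled = true;
            }
        }

        private void CmdExecuted(object sender, ExecutedRoutedEventArgs e)
        {
            RoutedUICommand cmd = e.Command as RoutedUICommand;
            if (cmd != null && implementedCommands.Contains(cmd.Name))
            {
                commandService.RaiseCommand(instance.InstanceId, cmd.Name);  // hypothetical helper
                threadSvc.RunWorkflow(instance.InstanceId);                  // donate the UI thread
                e.Handled = true;
            }
        }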

    I have also set up two events, LostFocus and GotFocus, to send commands to the workflow. If the workflow chooses to do so, it can handle these. I use them to remove and add options to the menu shell.

    The last thing to cover is the ReceiveWeakEvent method. This adapter registers itself at the command service, and the command service subscribes the adapter to a few events. It uses weak events to do so, so that the lifespan of this adapter is not tied to the command service (which will live forever).
    There is a whole host of messages that can be sent in the system, and the ReceiveWeakEvent method implements different behavior for each of them. It looks at the arguments that were passed and checks for a specific type.

    (I might refactor that, to actually put the logic into the messages).

    That's it for now, in the next post we will actually get our hands dirty and put together our first application!

    Tuesday, 19 February 2008 11:36:14 (Romance Standard Time, UTC+01:00)  #    Comments [0]  |  Trackback
     Monday, 18 February 2008

    In this post I'd like to introduce version 0.1 of the first Entity Framework contribution project: automatically implementing the IPoco interfaces.
    The project is aimed at helping you build your domain layer in a more persistence-ignorant way than is possible at this moment.

    [official codeplex location of the project is here]

    The full table of contents:

    The Problem: baseclass needed

    Microsoft is on the brink of releasing the Entity Framework. It is at beta 3 at this moment. If you are reading this blog, you are probably familiar with it, but let's do a quick summary:
    The Entity Framework is a framework that maps between a database and your domain objects. Its grand vision is to easily allow you to (with a funky design experience) create (multiple) conceptual models that know how to talk to the database. Although it is more than an OR-mapper, most people like to position it as such anyway.
    EF is an abstraction layer on top of your datastore and will allow you to work with business objects that actually make sense from an object-oriented perspective, instead of making you work with datarows, tables and sets.

    One part of the criticism that the Entity Framework gets at this moment is the lack of persistence ignorance. This means that, when you use the Entity Framework, you have to create business entities that are aware of the Entity Framework (they need to derive from an Entity Framework base class).
    This goes against too many principles to mention, and the ADO.NET team has gotten quite a few comments about it (other, more mature frameworks like NHibernate do not force you into this). Rightfully so!
    In the end, Daniel Simmons blogged about the criticism here: Persistence Ignorance: OK, I think I get it now.

    The suggested Solution: implement interfaces

    In order to take away the need to implement a base-class, the EF-team created a few interfaces that need to be implemented. That is as far as they can go in the first release.

    So, you can implement 3 interfaces on your business objects, and no baseclass is needed. 
    Although much better, I feel I should not have to spend time on, or burden my domain layer with, code to facilitate data access. My domain layer should be able to focus on one thing: solving the business problems of the client.
    By introducing other code to my domain layer, developers will be distracted.

    Billy McCafferty posts about DDD (Domain Driven Design) and EF here. He concludes:

    In short, and at the risk of being laconic, I feel that the ADO.NET Entity Framework does for data communications what the ASP.NET page life cycle did for the presentation layer.  In trying to introduce simplification and increased productivity, it's actually going to result in higher complexity and decreased maintainability in the long run.  I appreciate what Microsoft is trying to do, and absolutely love some of their other ideas, but, for now, I'm going to pass on the ADO.NET Entity Framework.

    Billy McCafferty

    He is quite right!!

    EF-Contrib: Easing the implementation of these interfaces

    The 3 interfaces we are talking about are:

    • IEntityWithChangeTracker
    • IEntityWithKey
    • IEntityWithRelationships

    Implementing these interfaces is sometimes called "IPoco": Poco stands for Plain Old CLR (or C#) Object, and the I in IPoco means that you can still use your Poco object but have to implement these interfaces (so, not Poco at all... but still!).
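    To see why help is welcome, this is roughly what implementing them by hand looks like (a trimmed sketch with one property; the change-tracking calls in every setter are the real burden):

        public class Person : IEntityWithChangeTracker, IEntityWithKey, IEntityWithRelationships
        {
            private IEntityChangeTracker changeTracker;
            private RelationshipManager relationships;
            private string firstname;

            public string Firstname
            {
                get { return firstname; }
                set
                {
                    // every mapped setter has to report to the change tracker
                    if (changeTracker != null)
                        changeTracker.EntityMemberChanging("Firstname");
                    firstname = value;
                    if (changeTracker != null)
                        changeTracker.EntityMemberChanged("Firstname");
                }
            }

            void IEntityWithChangeTracker.SetChangeTracker(IEntityChangeTracker changeTracker)
            {
                this.changeTracker = changeTracker;
            }

            public EntityKey EntityKey { get; set; }

            RelationshipManager IEntityWithRelationships.RelationshipManager
            {
                get
                {
                    if (relationships == null)
                        relationships = RelationshipManager.Create(this);
                    return relationships;
                }
            }
        }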

    The currently checked-in project (find it here) uses PostSharp to actually change the IL code of your assembly and implement these interfaces. That means that you can build a domain layer with a class like this:

        [Poco("ConnectionName")]
        public class Person
        {
            public int PersonID { get; set; }
            public string Firstname { get; set; }
            public string Lastname { get; set; }
        }

    After compilation, the class will actually look a bit different on disk:

        public class Person : IEntityWithChangeTracker, IEntityWithKey, IEntityWithRelationships
        {
    ...
        }

    So you can use this Person class, like you would use the classes that EF generates.

    It is important to understand that there is very little runtime performance cost involved. The code transformation is done once, at compile time. At runtime, there is no AOP magic or anything like it involved.

    This approach is used by several other OR-Mappers and is very common in the Java world.

    Is this Persistence Ignorance?

    Obviously, it's not. Hopefully, in version 2.0 of the Entity Framework, full ignorance is achieved. However, if you want to use EF at your datalayer today, this approach will let you focus on the important stuff, instead of data access code.

    Imagine changing your conceptual model. When implementing IPoco yourself, you will have to take care to change all kinds of attributes on top of your properties. This will quickly become a burden.

    How does it work?
    • You will need to download and install Postsharp on all the machines that will build your application (developer machines and teambuild machine(s)).
    • Your domain layer will have to reference the EntityFrameworkContrib.PostSharp4EF assembly, and the PostSharp.Laos and PostSharp.Public assemblies. By referencing these, Postsharp will know to do a post-compilation phase on your assemblies.
    • You will need to supply a 'psproj' file in your assembly, to let our attribute know where it should look to actually do the implementation. This allows me to separate the implementation assembly from what you need at runtime!
    • You have already created your edmx file, which EF will dissect into the individual .csdl, .msl and .ssdl files and place them in your bin/debug folder.
    • The project for now assumes a connection string to be present in your app.config
    • You can create your own simple business object.
    • That connection string is needed during the postcompilation phase to get to the individual mapping files, so use the attribute [Poco("")] to let us know you need to change this class.
    • The interfaces are implemented and the setters of your properties are modified to actually do changetracking
    • Actually, at this moment: INotifyPropertyChanged is implemented as well (let me know if you actually want this).

    So, let's first look at the psproj file you need. In the Test-project, there is one already:

    <Project xmlns="http://schemas.postsharp.org/1.0/configuration">
    	<SearchPath Directory="../EntityFrameworkContrib.PostSharp4EF.Weaver/bin/{$Configuration}"/>
    	<SearchPath Directory="{$SearchPath}" />
    	<Tasks>
    		<AutoDetect />
    		<Compile TargetFile="{$Output}" IntermediateDirectory="{$IntermediateDirectory}"  CleanIntermediate="false" />
    	</Tasks>
    </Project>

    The referenced assembly EntityFrameworkContrib.PostSharp4EF only defines the Poco attribute; it does not contain the actual code-weaving. Had we placed the code-weaving in the same assembly as the Poco attribute, you would have a much larger assembly to reference and you could get into licensing problems. By separating them, you only need to reference a tiny assembly.

    The weaving assembly should not be distributed with your final product!

    However, during the build, PostSharp does need to find the weaving assembly. Therefore, you need to create a psproj file that extends its normal search path to also include the weaving dll.
    Take care in naming the file: it should be named "projectname.psproj".

    When the project is more mature, you might find it best to actually just place the weaving assembly into one of the default searchpaths for postsharp to find, and you will not need this psproj file.

    Now, let's look at our attribute:
    In its constructor, it takes the name of the EDM container, which should match your connection string. I have also added a few properties: Name, NamespaceName, PathToConfigFile. I'll get back to these in a later post. In the future, others will be added.

    During the weaving, I have to do quite a bit of work to actually get to the correct mapping files. So, I try to load your app.config and extract the file paths from it. The test project has the following app.config:

    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
      <connectionStrings>
        <add name="OneSimpleTypeConnection" connectionString="metadata=.\bin\debug\OneSimpleType\OneSimpleType.csdl|.\bin\debug\OneSimpleType\OneSimpleType.ssdl|.\bin\debug\OneSimpleType\OneSimpleType.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=VISTAX64\SQLEXPRESS;Initial Catalog=EntityFrameworkTest;Integrated Security=True;MultipleActiveResultSets=True&quot;" providerName="System.Data.EntityClient" />
      </connectionStrings>
    </configuration>

    So, after loading that app.config, I use the supplied ConnectionContainer to get that connection string, and then use some simple regex work to get the paths to the mapping files. Then I try to load these to create a MetadataWorkspace.
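    The regex part is straightforward; something like this would pull the three file paths out of the connection string above (my own sketch, not the project's actual code):

        // grab the metadata=... section of the EntityClient connection string
        string metadata = Regex.Match(connectionString, @"metadata=([^;]+)").Groups[1].Value;

        // the .csdl, .ssdl and .msl paths are separated by pipes
        string[] mappingFiles = metadata.Split('|');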

    When I finally have a MetadataWorkspace, things get easier: I can iterate the properties of our original class and find each property in the MetadataWorkspace. Then I create the correct EdmScalar attributes on top of those.

    Implementing the interfaces is done by PostSharp: it looks at an interface and routes calls to its members to a class I provide.

    The result

    Let's look through Reflector at what the end result looks like. I won't show the method bodies, to keep things short and sweet.

      1 [EdmEntityType(Name="Person", NamespaceName="EntityFrameworkContrib.PostSharp4EF.Tests.OneSimpleType")]
      2 public class Person : INotifyPropertyChanged, IComposed<INotifyPropertyChanged>, IProtectedInterface<IFirePropertyChanged>, IPocoFacade, IComposed<IPocoFacade>
      3 {
      4     // Fields
      5     private IPocoFacade ~EntityFrameworkContrib.PostSharp4EF.IPocoFacade;
      6     private readonly InstanceCredentials ~instanceCredentials;
      7     private INotifyPropertyChanged ~System.ComponentModel.INotifyPropertyChanged;
      8     [CompilerGenerated]
      9     private string <Firstname>k__BackingField;
    10     [CompilerGenerated]
    11     private string <Lastname>k__BackingField;
    12     [CompilerGenerated]
    13     private int <PersonID>k__BackingField;
    14
    15     // Methods
    16     static Person();
    17     public Person();
    18     void INotifyPropertyChanged.add_PropertyChanged(PropertyChangedEventHandler value);
    19     EntityKey IEntityWithKey.get_EntityKey();
    20     RelationshipManager IEntityWithRelationships.get_RelationshipManager();
    21     void INotifyPropertyChanged.remove_PropertyChanged(PropertyChangedEventHandler value);
    22     void IEntityWithKey.set_EntityKey(EntityKey value);
    23     void IEntityWithChangeTracker.SetChangeTracker(IEntityChangeTracker changeTracker);
    24     protected InstanceCredentials GetInstanceCredentials();
    25     [DebuggerNonUserCode]
    26     IPocoFacade IComposed<IPocoFacade>.GetImplementation(InstanceCredentials credentials);
    27     [DebuggerNonUserCode]
    28     INotifyPropertyChanged IComposed<INotifyPropertyChanged>.GetImplementation(InstanceCredentials credentials);
    29     [DebuggerNonUserCode]
    30     IFirePropertyChanged IProtectedInterface<IFirePropertyChanged>.GetInterface(InstanceCredentials credentials);
    31
    32     // Properties
    33     [EdmScalarProperty(IsNullable=true)]
    34     public string Firstname { [CompilerGenerated] get; [CompilerGenerated] set; }
    35     [EdmScalarProperty(IsNullable=false)]
    36     public string Lastname { [CompilerGenerated] get; [CompilerGenerated] set; }
    37     [EdmScalarProperty(IsNullable=false, EntityKeyProperty=true)]
    38     public int PersonID { [CompilerGenerated] get; [CompilerGenerated] set; }
    39 }

    Line 1 implements the needed EntityType attribute for EDM to work.
    Line 2 shows that INotifyPropertyChanged and IPocoFacade are implemented. The facade interface just hides the 3 IPoco interfaces, so that's them! PostSharp adds the IComposed interfaces as well.
    Line 26 shows a call to the GetImplementation method of that interface. This way, a class I have added is returned where the actual work of the interface is done.
    Line 33, 35 and 38 show the EDMScalarProperties being set.

    What it does not do at this moment

    I do not set default values for fields, and I haven't spent any time on complex types and relations.

    I first want to gauge community interest before spending more time on this project. So let me know if you would use this approach if it would be complete. I'm quite sure these things aren't too hard to accomplish, but they will take some time.

    The Future

    I'd like the EntityFramework Contrib project to provide easy tools to use EF in an enterprise system. I'm mostly interested in client/server SOA solutions. Other projects that might help in that aspect:

    • A custom changetracker that can be used on the client. This way the client will not have to reference Entity Framework at all.
    • Better serialization possibilities. Note that I do not automatically place DataContract attributes on top of the properties. I think it was a mistake for the ADO.NET team to implement their codegen to do this (although I understand why).
      When I serialize an EF entity at this moment, I see all kinds of references to EF in the xml. I do not like that, and would like a beautiful, clean xml representation of my business objects. (I don't want to be forced to use DTOs.)
    • Serializing original values. I can imagine a representation of the value with an xml attribute that shows what the original value was.

     

    Feel free to contact me, or leave a comment here or at the projects home to let me know if you are interested!

    kick it on DotNetKicks.com

    Monday, 18 February 2008 14:50:21 (Romance Standard Time, UTC+01:00)  #    Comments [46]  |  Trackback

    In this post and a few upcoming posts, I would like to present a solution I have built that uses Workflow Foundation as the controller for your WPF applications. I wish I could call it a framework and think of a great name for it, but it does not aim to solve all your UI-building problems in one go. It does, however, offer a very easy way to build a loosely coupled application driven by WF, and it could serve as a base for your own solution.

    Table of contents

    This is the first of a series about using Workflow Foundation to control your UI Logic in a WPF application. The full table of contents:

    Inspiration

    Like I mentioned in a blog post here, Josh Smith writes about using MVC in a WPF application where he does not use a funky IOC-container to help him build a MVC architecture, but uses the WPF framework itself to accomplish most of it.
    This resonated with me, because I had just left a project where the combination of WPF and CAB did not make good on all its promises. The team sometimes felt the combination was overly complex.

    Also Jeremy Miller writes about implementing all of the different aspects of CAB yourself. While doing so, he reasons (my interpretation) that it's best to build the simplest solution that is a precise fit for your problem, instead of using all kinds of big-time frameworks that abstract away so much, that you start to feel constrained.

    A great little post by Rob Teixeira concludes that most frameworks are way too complex to really use.

    I have had a bit of experience using WF on the server side, but have always thought WF would be an excellent fit for the UI as well. When building complex UIs, I would like nothing better than to be able to invite a business analyst to sit next to me and just show him what will happen when a button is pushed.
    I have had a team build a large UI for a LOB application. Although at first glance it looked very simple, there is always much more going on than you expect. Having a visual representation of the flow of actions in your program is a good thing.

    This project aims to provide the most straightforward plumbing possible to get the job done. It tries to be explicit and make it easy for you (the developer) to do the right thing. I hope the use of WF gives your application some sort of DSL feel.

    So, what does this mean

    First, what does the solution not do:

    • It is explicitly not an IOC based solution (but perhaps you don't need that)
    • It is not a complete eventing mechanism (although controllers are able to communicate just fine)
    • It is not a finished solution (I might have called it a framework then!)

    What it is, is this:

    • It is a suggestion for how you could very easily use workflows as a controller
    • It combines some fun tricks I've learned, that will facilitate us here
    • It uses the native power of WPF, so there are no new concepts to learn just because you have a ShinyNewFramework: if you understand WPF, you understand how to hook things up
    • It uses the native power of WF to create your controller logic. This translates into a very descriptive use case with easy handoff between developers, and opens up the possibility of letting your business analysts create the first draft themselves! WF always feels like a cheap-ass DSL to me.
    • It is one adapter class, a couple of activities and a command service. Very easy to understand and adjust to fit your own needs
    • It facilitates loose coupling to the extent where your views and your controllers do not need references to each other
    • It is message based
    • Excellent testability, because of loose coupling and messages.

     

    Show us the goods

    I have uploaded the goods zipped here.
    It needs .net framework 3.5.

    In the previous post, I explained how you could combine the controller and WPF in one project. It seems that this does not work as well as it should: sometimes I get build errors that aren't really there. It's fine to have logic and views separate for the real sample, but it means the shell now consists of two projects as well, which may seem a bit of overkill.

    I only unit tested one small view, to show how you can go about testing bindings and testing the controller separately. I use TypeMock for this, so you might need to unload that project. (I'm considering TypeMock, but it is pretty expensive for a one-man shop.)

    What's in it

    The real stuff is very small.

    • project ControllersAdapters, with only one file. It is a ContentControl, which acts as an adapter to your controllers [8 kb dll]
    • project WorkflowCommunications, which has the service that we can use to translate in/out of the controller and 6 custom activities, that do specific things [27 kb dll]

    That is all you need.
    I have loosely implemented the BankTeller application from CAB, or rather from the SmartClientContrib 1.1 WPF for CAB. I did not look too closely at their implementation details; I just copied the xaml and the domain model and built a part of it myself, to discover what was needed to build a real application.

    image

    The sample consists of a Shell, Domain, Logic, Views and Test project. It demonstrates how one could go about building such an application. I will follow up with a more detailed look at it. Suffice it to say, implementing it was a breeze.

    The thing with the BankTeller application is, that the logic is too simple. So it mostly demonstrates hooking up views and datacontexts.

    Just to give you a quick glance at what logic in a workflow looks like:

    image

    (Here you see what will happen when a new customer is selected in the listview. It checks whether the customer is not null, and then sets a customer info view and a customer summary view. If it was null, the views are removed from the visual tree.)

     

    Go into more detail, please

    Well, I will follow up with more posts, if there is an interest in it. This post has dragged on long enough, so I will keep it very short for now.

    The concepts are:

    1. Use WPF resources as an excellent container for objects. Resource lookups work hierarchically, so this is actually pretty powerful on its own. There are two activities, Inject- and RetrieveObjectFromResource, that will put an arbitrary object into the resource section of the adapter, or retrieve it. This could be a service or something else.
    2. Use WF as an event aggregator. All workflows are registered with the runtime, and all adapters subscribe (with weak events) to the workflow. So it's easy to send messages around.
    3. Use WPF commands to communicate from the View to the Controller. Commands go upstream. I have made it easy for a controller to handle a command (just drag a HandleCommand to the screen). I've also made it possible to use rules to determine if the command 'CanExecute'. So you could define a command 'AcceptCustomer' and bind it to a button; the Controller will determine if the command can be executed. (When the customer queue is empty in the sample, the button to accept a customer is disabled automatically.)
    4. Use WPF DataTemplates to let the Controller inject UI. The View can sprinkle ContentPresenters around (with ContentTemplates bound to DynamicResources). The controller will choose what piece of UI to inject as the resource. (Cool stuff!)
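    On the View side, the DataTemplate-injection concept might look like this (a sketch; the resource key and binding are my own examples):

        <!-- reserve a spot; the controller later injects a DataTemplate
             into the resources under the key 'CustomerDetailArea' -->
        <ContentPresenter Content="{Binding SelectedCustomer}"
                          ContentTemplate="{DynamicResource CustomerDetailArea}" />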

    The most important class is the GenericWorkflowAdapter that can be placed into the UI like this:

    <c:GenericWorkflowAdapter WorkflowController="{x:Type l:ShellLogic}" />

    Here we tell it to use the workflow controller ShellLogic as its 'boss'.
    The adapter hooks into the RoutedUI commands coming from WPF, and when a command comes along that the workflow wants to react to, it sends it to the workflow. The workflow will react to it.

    Then there is the CommandService, which defines the communication between the workflow and the runtime. The adapters use it to send messages to their workflow. The workflow uses it to communicate to the adapters.

    There are custom activities to do specific UI things: setting a controller in the UI, or an object in the resources, setting the DataContext of a view to your ViewModel, and actually setting the Content of the adapter to its View.

    More details will follow.

    Trixxxxxx

    In order to pull this off, a few things were hacked:

    • I created a much easier way to register commands on a workflow. Just drag a HandleCommand into an EventDriven activity, set its CommandName (the string it will react to) and you're off. The normal WF paradigm says you have to create an interface and possibly even implement correlation. Not productive for what we are trying to achieve.
    • Getting the workflow to communicate back to the adapter causes a clone to be made of the message. Since we don't want that, I implement ICloneable to return 'this'. Works well, but you have been warned.
    • At one point I use a delegate that is passed to the workflow, which lets it get data from the command service on the fly.
    • In order to use the custom activities, I needed to let the user (you) select types (what view you wish to inject, what controller you want to instantiate). I've had to jump through hoops to get that working. See this blog post.

    What is next

    I've had great fun implementing this. After a few refactorings, it turned out to be extremely simple. I'm interested in seeing what you think. If there is some interest from the community, it could easily be taken to the next level. However, at this point it was just a nice experiment for me. Let me know what you think of the idea!!

    kick it on DotNetKicks.com

    Monday, 18 February 2008 12:19:33 (Romance Standard Time, UTC+01:00)  #    Comments [2]  |  Trackback
     Thursday, 14 February 2008

    Just ran into a little bug in Visual Studio/msbuild and could not find any answers in the forums, so I thought I'd put it up here for reference:

    When you want to use both workflow classes and wpf classes in one project, you will run into some strange behaviour. Let's do it together.
    If you want to skip the newbie stuff, jump to step 11 and see the bug.

    1. create a WPF project.
    2. unload the project and choose to edit the project file
    3. somewhere in the beginning of the file, you will find the following line:
      <ProjectTypeGuids>{60dc8134-eba5-43b8-bcc9-bb4bc16c2548};{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}</ProjectTypeGuids>
      That is how Visual Studio identifies this as a WPF application project; when you add an item, you will be able to choose a WPF item.
    4. You wish to be able to compile WF items, so add the correct import for the WF tasks to the bottom of the file:
      <Import Project="$(MSBuildExtensionsPath)\Microsoft\Windows Workflow Foundation\v3.5\Workflow.Targets" />
    5. Note that WPF in the past needed the WinFX import, but with framework 3.5 you don't need that anymore!
    6. At this point you are able to copy a workflow or activity into your project and compile, but you also want to be able to add WF items to your project, so scroll to the top of the file again.
    7. Add the GUID that identifies a WF project ({14822709-B5A1-4724-98CA-57A101D1B079};) to your ProjectTypeGuids tag. The complete tag should be on one line (!) and look like this:

      <ProjectTypeGuids>{14822709-B5A1-4724-98CA-57A101D1B079};{60dc8134-eba5-43b8-bcc9-bb4bc16c2548};{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}</ProjectTypeGuids>
    8. Save the file and reload the project.
    9. Before adding the first workflow to the project, add references to System.Workflow.Runtime, System.Workflow.Activities and System.Workflow.ComponentModel.
    10. Add your first glorious workflow to the project.
    11. Be prepared to be disappointed: the project will not compile. Mine gave this error:

      Error    1    Error reading resource file 'j:\Users\Ruurd\Documents\Visual Studio 2008\Projects\WF_and_WPF_combined\WpfApplication1\obj\Debug\WpfApplication1.obj.Debug.WpfApplication1.g.resources' -- 'The system cannot find the file specified. '    J:\Users\Ruurd\Documents\Visual Studio 2008\Projects\WF_and_WPF_combined\WpfApplication1
    12. Notice the weird path of the resource file. It is looking for something with dots instead of path separators. Strange.
      In Windows Explorer, go to the obj\Debug folder and create a copy of the .g.resources file with that weird name. I wanted to automate it though, so go to the properties of your project and use this as your pre-build script:
      IF EXIST "$(ProjectDir)obj\$(ConfigurationName)\$(TargetName).g.resources" (copy /-Y "$(ProjectDir)obj\$(ConfigurationName)\$(TargetName).g.resources" "$(ProjectDir)obj\$(ConfigurationName)\$(TargetName).obj.$(ConfigurationName).$(TargetName).g.resources") ELSE (echo "placeholder" > "$(ProjectDir)obj\$(ConfigurationName)\$(TargetName).obj.$(ConfigurationName).$(TargetName).g.resources")

      This checks whether you already have a g.resources file and, if so, copies it. Otherwise it will generate a placeholder file with the correct name. At least the project will build without problems.

    I have not tested it a lot yet. It seems to me that when the placeholder is created, there could be resources that cannot be found. During some quick and dirty tests, I've not had any problems yet and everything works just fine.

    Hope this helps someone out there.

    Update: weird stuff. I have this running just fine in a couple of projects, but I have one project that gives an exception during a rebuild (not a build) in the CompileWorkflowTask. In other 'combined' projects, I can happily build and rebuild using the steps above.

    This is probably caused by WPF renaming the project file to tmp_proj while the CompileWorkflowTask validates its parameters like so:

            if ((string.Compare(this.ProjectExtension, ".csproj", StringComparison.OrdinalIgnoreCase) != 0) && (string.Compare(this.ProjectExtension, ".vbproj", StringComparison.OrdinalIgnoreCase) != 0))
            {
                base.Log.LogErrorFromResources("UnsupportedProjectType", new object[0]);
                return false;
            }

    The logging statement is the one giving the pain.

     

    Thursday, 14 February 2008 13:44:34 (Romance Standard Time, UTC+01:00)  #    Comments [9]  |  Trackback
     Wednesday, 13 February 2008

    I'm working on a sweet project at the moment using both WPF and WF. One of my custom activities has a property of type Type, and it would be cool for the user of the activity to be able to use the designer to select a type, just like what happens in the WF designer when you choose a type. However, no type picker popped up.

    So I went googling and found that Daniel Cazzulino also ran into this problem and created a fantastic little project to harness the power of the real WF type browser. He writes about it in this blog post and later moved the project to CodeProject. You can find the article and his download code here.

    However, as you can read in the comments, something was broken. Looking through the code, small as it is, made me not want to waste time on understanding the System.ComponentModel namespace in that much detail at this point ;-) (although, when working with WF, you will soon need to customize property pickers, so I will have to look into it someday soon).
    Daniel himself points to the patterns & practices Enterprise Library: it offers the same functionality. I downloaded their source code, and I'm quite sure they just took Daniel's code and improved upon it a bit. However, with all the Entlib references, the project felt a bit heavy.

    What I have done is rip out all the references to Entlib that I do not care about, used a few files from Daniel's original solution and worked around a few shortcomings. Nothing fancy, I just hacked at it until it worked.
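    For reference, hooking the picker up to an activity property looks roughly like this (a sketch: TypeBrowserEditor is a placeholder name for the UITypeEditor class in the download, and InjectViewActivity is made up):

```csharp
using System;
using System.ComponentModel;
using System.Drawing.Design;
using System.Workflow.ComponentModel;

// Sketch: a custom activity exposing a Type-valued property that the
// designer should edit with the type browser. 'TypeBrowserEditor' is a
// placeholder; substitute the editor class from the downloadable project.
public class InjectViewActivity : Activity
{
    public static readonly DependencyProperty ViewTypeProperty =
        DependencyProperty.Register("ViewType", typeof(Type), typeof(InjectViewActivity));

    // the Editor attribute is what makes the designer pop up the picker
    [Editor(typeof(TypeBrowserEditor), typeof(UITypeEditor))]
    public Type ViewType
    {
        get { return (Type)GetValue(ViewTypeProperty); }
        set { SetValue(ViewTypeProperty, value); }
    }
}
```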


    Now, since I have used some code (without a license) by Daniel and code by the Entlib group, I'm not sure if I can publish a derivative without getting into trouble. However, I've read their license, and I think it's okay.

    You can download the project here. Don't ask for changes, because I'm not interested in spending more time on it. All credits go to Daniel.

    (Also, find out how to create your own type filters in his post.)

    Have fun with it. Leave a comment if you find it useful.

    Wednesday, 13 February 2008 13:30:24 (Romance Standard Time, UTC+01:00)  #    Comments [9]  |  Trackback

    Grigori Melnik just blogged about the release of the first CTP of the Unity Framework. It can be downloaded from CodePlex here.

    I think I mentioned Unity before. The Unity framework is built on top of ObjectBuilder2. ObjectBuilder2 seems a much better factory system than ObjectBuilder (infamous for its bloated misuse in CAB). I have been following along with some samples on how to use ObjectBuilder2 and I quite like it. Obviously it is nowhere near as powerful as Windsor, Spring.Net or StructureMap yet, but I understand Microsoft needs its home-grown factory system, and ObjectBuilder2 seems easy enough to use.

    The Unity framework uses ObjectBuilder2 to provide an IoC container. It can be used like so:

    UnityContainer container = new UnityContainer()
        .Register<ILogger, TraceLogger>()
        .Register<ISomething, Something>()
        .Register<ISomethingElse, SomethingElse>();

    container.Get<ISomething>();

    I'll certainly be looking into it!

    Wednesday, 13 February 2008 10:40:19 (Romance Standard Time, UTC+01:00)  #    Comments [0]  |  Trackback
     Tuesday, 12 February 2008

    It frequently happens that you wish to enrich a value type with more data, most often the properties in your domain model. In this post I will present one way you could achieve that. But first, let's look at why you would want such a thing, then look at how you would normally go about it and finally look at another approach.

    Why metadata about a property

    Let's say you have a domain object 'Person' like so:

        public class Person
        {
            public string Name { get; set; }
            public DateTime Birthday { get; set; }
        }

    When working with this object, you will set the property 'Name' and the property 'Birthday', but while setting these, you have no clue about what is really allowed to go in there. Maybe Name cannot accept numerics and, in our domain, any Birthday before 1-1-1990 is not allowed. These validation rules are unknown until you actually validate the object, using your preferred mechanism. At that point, there might be an error collection that states which properties are set to disallowed values. Maybe validation will occur on each change to the data.

    That's not really a problem for your business logic, but your UI might want to know about these validation rules beforehand! Maybe it could adjust to show a different textbox for 'Name', one which will not let you enter a number, for instance.
    Yes, sure, you could have put that specialized textbox in there yourself, but that means you will have to adjust your UI views when business rules change. I would much rather just place a generic control, bind it to my property and let it decide for itself how best to present the data:

    <MetaDataEditor Content="{Binding Path=Name}" />  <!-- this shows an alphanumerical textbox -->

    So, why metadata: it allows you to optimize beforehand and simplify your UI.

    Freaky
    Oh, or maybe you want to give a 'friendly name' to the property. That friendly name could be presented in the UI as the label or used in error messages.
    How do you go about this normally

    (and I say 'normally' not in a bad way, it might still be the best way, depending on your situation).

    You would define some interface:

        interface IMetadataProvider
        {
            Metadata GiveMetadataForProperty(string propertyName);
        }

    and implement that on Person. Now, when you want your metadata object, you just get to the object on which the property is defined, and ask for it. Easy.
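    Implemented on Person, that typically ends up looking something like this (a sketch; the FriendlyName member of Metadata is made up for illustration):

```csharp
public class Person : IMetadataProvider
{
    public string Name { get; set; }
    public DateTime Birthday { get; set; }

    public Metadata GiveMetadataForProperty(string propertyName)
    {
        // every property needs an entry here, and renaming a property
        // silently breaks the matching string
        switch (propertyName)
        {
            case "Name":
                return new Metadata { FriendlyName = "Full name" };
            case "Birthday":
                return new Metadata { FriendlyName = "Date of birth" };
            default:
                return null;
        }
    }
}
```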

    MetaData<T>

    What I do not like about an interface is the fact that you have separated the metadata from the property itself. There now is a method in my domain object that takes a string (ouch) and probably goes through a long list of 'if(propertyName == "???")' or switch/case statements until it gets to the correct name and then creates or returns a metadata object. That's a great deal of hooking up you need to do, and when passing strings, your domain model just became less easy to refactor.
    When my magical MetaDataEditor needs the metadata, it will have to somehow find it by traversing from the bound property to its parent and casting that to IMetadataProvider.

    Oh, bad boy, don't even think about using reflection to magically connect the passed string to a metadataobject!

    That is why I am experimenting with a little class I like to call MetaData<T>.

        public class MetaData<T>
        {
            public T InnerValue { get; set; }
            .... goodness inside
        }

    This way, you define a property on your business object like so:

    private MetaData<string> name = new MetaData<string>();

    public MetaData<string> Name
    {
        get
        {
            return name;
        }
        set
        {
            name.InnerValue = value.InnerValue; 
        }
    }

    This way, you can rest assured the original metadata object is never discarded; only the inner value is changed.

    We could work with the property like so (p is an instance of Person):
    p.Name = new MetaData<string> { InnerValue = "foo" };

    That's not very friendly, but we can also say:
    p.Name.InnerValue = "foo";

    Better. We can do one better though, by creating a few implicit operators on the metadataobject:

    public static implicit operator T(MetaData<T> source)
    {
        return source.InnerValue;
    }

    public static implicit operator MetaData<T>(T source)
    {
        IMetaData md = source as IMetaData;
        if (md != null)
        {
            IConvertible convertible = md.InnerValue as IConvertible;
            if (convertible != null)
            {
                T converted = (T)convertible.ToType(typeof(T), null);
                return new MetaData<T> { InnerValue = converted };
            }

            throw new NotSupportedException();

        }
        else
        {
            return new MetaData<T> { InnerValue = source };
        }
    }

    This means, we can now use the property as follows:

    p.Name = "foo";

    The string "foo" can be translated to a MetaData<T> and that is going into the property-setter. Then, the setter will take the innervalue of that foo-metadataobject and use it to set it's own innervalue.

    string importantname = p.Name; will work too.

    One important note though: this might be misleading to your developers. They cannot do p.Name.ToCharArray(), because p.Name really is not a string.
    Or p.Car.Color: you would have to do p.Car.InnerValue.Color.

    Now, whether you go with the implicit operators or not, you especially want to ease data binding. For that, a TypeConverter can be used.

    A typeconverter can be attached to a class by the use of an attribute, like so: [TypeConverter( typeof(MetadataForDatabindingConverter))]

    The converter needs to inherit from TypeConverter. Let's implement one.

      1 public class MetadataForDatabindingConverter : TypeConverter
      2 {
      3     /// <summary>
      4     /// keep the real type of the metadata innervalue. Since we need it when we convert back to our metadata
      5     /// object.
      6     /// </summary>
      7     private Type databindingRealType;
      8
      9     public override bool CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
    10     {
    11         if (sourceType.Equals(typeof(string)))
    12             return true;
    13         else
    14             return false;
    15     }
    16
    17     public override bool CanConvertTo(ITypeDescriptorContext context, Type destinationType)
    18     {
    19         if (destinationType.Equals(typeof(string)))
    20             return true;
    21         else
    22             return false;
    23     }
    24
    25     public override object ConvertFrom(ITypeDescriptorContext context, System.Globalization.CultureInfo culture, object value)
    26     {
    27         // ugly but necessary: when databinding to a ui control, it's likely that you want to go to a string representation
    28         // when the ui control (textbox?) was changed, the incoming value is a string. I have not yet found a way to
    29         // find out what the target really wants (for instance an MD<int>).
    30         // This hack works fine
    31
    32
    33         Type realtype = value.GetType();
    34         if (databindingRealType != null)
    35             realtype = databindingRealType;
    36
    37 
    39         // or just do a switch on the type and create the correct type
    40         //if(value is string)
    41         //    return (IMetaData) new MD<String> { InnerValue = (string)value };
    42
    43         // because I don't feel like implementing the above statements.. bla!
    44         Type d1 = typeof(MetaData<>);
    45         Type constructed = d1.MakeGenericType(new Type[] { realtype });
    46         IMetaData instance = (IMetaData)Activator.CreateInstance(constructed);
    47
    48         TypeConverter converter = TypeDescriptor.GetConverter(realtype);
    49
    50
    51         instance.InnerValue = converter.ConvertFrom(context, culture, value);
    52
    53         return instance;
    54     }
    55
    56     public override object ConvertTo(ITypeDescriptorContext context, System.Globalization.CultureInfo culture, object value, Type destinationType)
    57     {
    58         // so, this is a metadata object, and we are going to be converting the innervalue to a string (for textboxes etc).
    59
    60         IMetaData md = value as IMetaData;
    61         if (md != null)
    62         {
    63             databindingRealType = md.InnerType;
    64
    65             TypeConverter converter = TypeDescriptor.GetConverter(databindingRealType);
    66             if (converter.CanConvertTo(destinationType))
    67             {
    68                 return converter.ConvertTo(context, culture, md.InnerValue, destinationType);
    69             }
    70         }
    71
    72         throw new NotSupportedException(String.Format("Conversion of {0} to type {1} is not possible.",value.ToString(), destinationType.Name.ToString() ) );
    73     }
    74
    75     public MetadataForDatabindingConverter()
    76     {
    77
    78     }
    79
    80 }
     

    A TypeConverter should override the CanConvertFrom/To methods to indicate whether it is able to convert between certain types. Our UI will present data as strings (textbox), so I have opted to only convert to and from strings.

    Line 7 keeps a variable where the 'real type' will be put. The databinding engine does not give much information during the call to convert. So if we have a MetaData<int> and bind that to a textbox, ConvertFrom only gives us information about the value being set (the string "1234"). How do we know we have to convert to MetaData<int> instead of MetaData<string>?
    Well, it turns out we do know at an earlier conversion that always happens: converting our metadata to the string that will be put into the textbox. I cache the 'real type' at that moment. (I'm not happy with that solution; if you know how to get rid of the obvious code smell there, leave a comment!)

    The ConvertTo method is easy enough not to need a description here, but in ConvertFrom we do have to jump through some hoops. I've opted to create a MetaData<T> with reflection, using the correct type for T, and then convert the passed-in string to the correct type.

    This works brilliantly and it allows you to bind like this:

    <TextBox Text="{Binding Path=Name}" />

    (When it is time to navigate through a property, you will have to go through InnerValue again though, like {Binding Path=Car.InnerValue.Color} .)

    Microsoft's latest approach: IDataErrorInfo

    The WPF team has invented dependency properties as another way to do something similar (for different reasons though). You obviously do not want to use dependency properties in your domain model.

    The IDataErrorInfo interface, for which WPF recently introduced binding support, has the following definition:

            #region IDataErrorInfo Members

            public string Error
            {
                get { throw new NotImplementedException(); }
            }

            public string this[string columnName]
            {
                get { throw new NotImplementedException(); }
            }

            #endregion

    I hate that.

    Especially the name of the argument 'columnName'. Drives me mad. My business object is a person or a car, not a database table. F*ck off.

    Besides, what if you want to do something useful with an index on your object? Madness, I tell you!

    Conclusion

    It's annoying that you have to go through the InnerValue property to get to the real value you are interested in. Implicit operators make this a lot more transparent, but possibly confusing. Besides, there is a tiny performance loss here.

    However, you can do great stuff with such a setup. Let the metadata objects implement INotifyPropertyChanged as well and use them in pipelines (more on this later), query validation rules from them without having to think about getting to the object that holds the property, and more.
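    As a sketch of that last suggestion, a stripped-down MetaData<T> that raises change notifications whenever its InnerValue changes could look like this (the implicit operators and other goodness are left out):

```csharp
using System.ComponentModel;

// Sketch: the metadata wrapper itself notifies on changes, so bound UI
// updates without the owning domain object having to do anything.
public class MetaData<T> : INotifyPropertyChanged
{
    private T innerValue;

    public T InnerValue
    {
        get { return innerValue; }
        set
        {
            innerValue = value;
            OnPropertyChanged("InnerValue");
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string propertyName)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}
```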

    What do you think?

    Tuesday, 12 February 2008 20:08:58 (Romance Standard Time, UTC+01:00)  #    Comments [2]  |  Trackback
     Saturday, 09 February 2008
    Update 2: I also posted this on the forums and the team finally found what was causing this. They explain:

    There have been several reports where intellisense has completely stopped working for all projects after installing a version of the Windows SDK or MSDN. We have been able to track down the source of this problem. This seems to only affect installs of the SDK/MSDN post the installation of Visual Studio. One registry value has been incorrectly reset after these installs causing this failure. This issue has been handed off to setup team for a future fix.

    In the meantime, if you encounter this issue, it can be fixed using regedit. First, determine if you are seeing this same issue by opening regedit and looking at the key:

    HKEY_CLASSES_ROOT\CLSID\{73B7DC00-F498-4ABD-AB79-D07AFD52F395}\InProcServer32

    If (Default) is empty you are hitting this issue. To correct the problem, change the value of (Default) to point to the location of TextMgrP.dll on your system (C:\Program Files\Common Files\Microsoft Shared\MSEnv\TextMgrP.dll in my case with a C: OS drive and accepting all the defaults). Restart Visual Studio and intellisense should be working again. Thanks to everyone who submitted reports of this issue and gave us the additional details needed to track it down quickly.
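    If you prefer not to edit the value by hand, the same fix can be captured in a .reg file; adjust the path to wherever TextMgrP.dll lives on your system:

```
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\CLSID\{73B7DC00-F498-4ABD-AB79-D07AFD52F395}\InProcServer32]
@="C:\\Program Files\\Common Files\\Microsoft Shared\\MSEnv\\TextMgrP.dll"
```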

    Update: If you are running a 64-bit version of Windows, you will need to make sure you are running the correct regedit version (%systemroot%\syswow64\regedit - see http://support.microsoft.com/kb/305097) and you will need to locate the correct path to TextMgrP.dll (such as C:\Program Files (x86)\Common Files\Microsoft Shared\MSEnv\TextMgrP.dll)

     

    Update: a repair of visual studio eventually fixed this. Lesson learned: a repair does not damage already installed hotfixes and addins, so you do not have to fear losing anything.

    I lost my xaml intellisense and I really miss it. I installed SDK 6.1, but I do not know if that was the cause. I was using ReSharper, so I didn't notice. However, when I uninstalled ReSharper, intellisense did not come back! It could have been the SDK. It could have been ReSharper. Or something else altogether. I just don't know.

    Lots of googling did not really help (2008 is different from 2005), but did point to the importance of two files in your xml/schemas folder:

    - XamlPresentation2006.xsd
    - xaml2006.xsd

    However, VS 2008 does not work with xsd files anymore to supply intellisense. I was under the impression that a XAML parser service was built. I do not have those files, and on another computer (where intellisense works fine), I did not find them either.

    I copied them from a VS 2005 install and opened a xaml file in the xml editor. No schemas were defined (nor are they on the healthy computer), but when I pointed to the just-copied files, intellisense partially works again. It does not see user controls and such. This is not the way it is supposed to work in 2008!

    Funny thing though: I now have this intellisense in the xml editor, but not in the source code editor or the designer. In the healthy install, it's the other way around.

    Let me know if you have a solution!

    Saturday, 09 February 2008 23:06:22 (Romance Standard Time, UTC+01:00)  #    Comments [5]  |  Trackback

    Dax Pandhi of Reuxables is offering a lite version of their commercial theme: Paper. You only get the compiled dll, but still, it's a bargain ;-)

    I have not used any commercial themes yet, and as I am not working for a client at this point, I probably won't at this moment ;-) I also do not know yet whether I like the theme. I am going to use it and just see!

    Are there other commercial theme packs around? I would like a pack that makes my applications look like this application ;-)

    Saturday, 09 February 2008 21:05:07 (Romance Standard Time, UTC+01:00)  #    Comments [2]  |  Trackback

    If you are working with clients that do not see the use of automated testing (be it unit tests of code blocks or specific UI testing), you are in for a hard time. Maybe you should walk away, but let's face it: you will probably give in and try to do your best.
    I have even had heated discussions with developers who do not see the use of it, certainly when there are monkey-testers to do it.

    Testing the user interface is incredibly hard to do. When your testers are brave, they might use a test tool (robot) that simulates clicks and reads out information. However, these scripts have to be updated when the software changes, and that is costly.

    WPF has great support for UI Automation, which allows other programs to interact with your application's UI from the outside. It does this by naming the elements of your UI and offering 'strategies' to interact with them (press this button).
    It's not an easy framework, but workable.
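    For reference, naming an element for UI Automation is just an attached property in XAML; an automation client can then find the element by that id (the id "save" here is only an example):

```xml
<!-- AutomationProperties.AutomationId makes the button findable from the
     outside; "save" is the name a test script would look up. -->
<Button AutomationProperties.AutomationId="save" Content="Save" />
```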

    Project White seems to be an abstraction layer on top of the UI Automation stack, released to the public domain by ThoughtWorks today. It's aimed at simplifying your scripts and presenting a uniform API for both WinForms and WPF technologies.

    I'm looking forward to discovering its API. This looks nice:

     

    Application application = Application.Launch("foo.exe");
    Window window = application.GetWindow("bar", InitializeOption.NoCache);
    Button button = window.Get<Button>("save");
    button.Click();

    Do you use UI automation to test your application?

    Saturday, 09 February 2008 20:54:33 (Romance Standard Time, UTC+01:00)  #    Comments [0]  |  Trackback

    Maybe old news, but Microsoft put up a new site here which looks like a very nice collection of all the tools that you can get for Visual Studio.

    Resource Refactoring Tool

    Saturday, 09 February 2008 18:05:29 (Romance Standard Time, UTC+01:00)  #    Comments [0]  |  Trackback
     Monday, 04 February 2008

    I'm a strange man: I seem to be equally interested in EntityFramework and in WPF. They are such different beasts, and still I take great pleasure in using them both! That's possibly because I view them as enablers of the kind of projects I like to do. Weird.

    Anywho, it's been a long time since I blogged about WPF. And even longer since I blogged about unit testing WPF. The simple trick in this post is probably widely used already in the community: I haven't paid any attention ;-)

    In this post, I explained how to set up a trace listener to listen for binding errors. In the months that followed, this proved to be less than convenient! In WPF views, even when everything is set up great, there might be binding errors that you wish to accept. For instance: a view binds to an instance of type Foo, and that is later substituted by an instance of type Bar. Bar has the same properties as Foo, except one. The binding engine just clears the bound label, and you are fine with it (yes, sure.. it smells a bit, but you get the example).
    Using the trace listener, you have less control over the process.

    It is much better to have total control over the binding objects in a view. With some exceedingly simple methods, you can get to them and query their status:

    First, let's start with an enumeration over all the visuals in your view:

    public IEnumerable<Visual> EnumVisual(Visual visual)
    {
       for (int i = 0; i < VisualTreeHelper.GetChildrenCount(visual); i++)
       {
          Visual childVisual = (Visual)VisualTreeHelper.GetChild(visual, i);
          yield return childVisual;
       }
    }

    Then, we use some cleverness by dr. WPF that enumerates all the bindings found on a visual:

    private IEnumerable<BindingExpression> EnumerateBindings(DependencyObject target)
    {
       // a ContentControl's content does not show up in the local values,
       // so recurse into it and yield what we find there
       if (target is ContentControl && ((ContentControl)target).Content is DependencyObject)
       {
          foreach (BindingExpression contentBinding in EnumerateBindings((DependencyObject)((ContentControl)target).Content))
          {
             yield return contentBinding;
          }
       }

       LocalValueEnumerator lve = target.GetLocalValueEnumerator();

       while (lve.MoveNext())
       {
          LocalValueEntry entry = lve.Current;

          if (BindingOperations.IsDataBound(target, entry.Property))
          {
             yield return entry.Value as BindingExpression;
          }
       }
    }

    It uses GetLocalValueEnumerator, a largely unknown method that gets all the locally set properties on a DependencyObject.
    I first check whether the target is a ContentControl; if it is, I go through its content as well.

    Now, let's see all the bindings:

    private IEnumerable<BindingExpression> GetFlattenedBindings(Visual root)
    {
       foreach (Visual child in EnumVisual(root))
       {
          foreach (BindingExpression childBinding in GetFlattenedBindings(child))
          {
             yield return childBinding;
          }

          foreach (BindingExpression binding in EnumerateBindings(child))
          {
             yield return binding;
          }
       }
    }

    Use it in your unit test to get all the bindings that are available in the visual tree. Test specific bindings or just fail if one breaks.

    foreach(BindingExpression b in GetFlattenedBindings(this))
    {
       Debug.WriteLine(b.ParentBinding.Path + "=" + b.Status + " on item:" + b.DataItem.ToString() );
    }

    Of course, it's cool to turn the helper methods into extension methods.

    Beware that I have not tested this code extensively, but you get the drift.

     

    Monday, 04 February 2008 23:31:58 (Romance Standard Time, UTC+01:00)  #    Comments [2]  |  Trackback