Sunday 6 June 2010

Building S#arp Architecture fails with a NAnt error

 

I am having a look at S#arp Architecture.  I downloaded the latest build (1.5.2) and started the build by running ClickToBuild.bat, which instantly failed with:

NAnt 0.86 (Build 0.86.2898.0; beta1; 08/12/2007)
Copyright (C) 2001-2007 Gerry Shaw
http://nant.sourceforge.net

BUILD FAILED

Failed to initialize the 'Microsoft .NET Framework 2.0' (net-2.0) target framework.

    Property evaluation failed.
Expression: ${path::combine(sdkInstallRoot, 'bin')}
                            ^^^^^^^^^^^^^^

        Property 'sdkInstallRoot' has not been set.

For more information regarding the cause of the build failure, run the build again in debug mode.

Try 'nant -help' for more information

Not sure why, but it might be an issue with the version of NAnt bundled with S#arp.

Not really having a clue how to fix this, I installed the .NET 3.5 SDK and tried again; no joy.

Then I installed the .NET 2.0 SDK. This got past the above error, but then I got:

NAnt 0.86 (Build 0.86.2898.0; beta1; 08/12/2007)
Copyright (C) 2001-2007 Gerry Shaw
http://nant.sourceforge.net

Buildfile: ***************
Target framework: Microsoft .NET Framework 2.0
Target(s) specified: build

[property] Target framework changed to "Microsoft .NET Framework 3.5".

BUILD FAILED

INTERNAL ERROR

System.NullReferenceException: Object reference not set to an instance of an object.
   at NAnt.Core.FrameworkInfo.get_Version()
   at NAnt.Core.Project.UpdateTargetFrameworkProperties()
   at NAnt.Core.Tasks.PropertyTask.ExecuteTask()
   at NAnt.Core.Task.Execute()
   at NAnt.Core.Project.InitializeProjectDocument(XmlDocument doc)
   at NAnt.Core.Project.Execute()
   at NAnt.Core.Project.Run()

At this point, I abandoned open source software and went back to webforms :0

Only joking. Google found me this, which pointed me in the right direction: editing the {S#arp root folder}\tools\nant.exe.config file.

At this point I’m not sure if you actually need either the 2.0 or the 3.5 SDK. 

The post wasn’t an exact match (I’m on x64 with VS 2010), but by comparing the Windows Registry with the NAnt config file I managed to get S#arp built.

Here is what eventually worked for me (YMMV); essentially the NAnt config file has to point to a valid registry key for an installed SDK:

<readregistry
    property="sdkInstallRoot"
    key="SOFTWARE\Microsoft\Microsoft SDKs\Windows\v7.0A\WinSDK-NetFx35Tools\InstallationFolder"
    hive="LocalMachine"
    failonerror="false" />

 

Happy S#arping!

Thursday 2 July 2009

Pre-tested commit in TeamCity

I've been meaning for a while to have a look at pre-tested commit functionality in TeamCity* and today I got a chance to try it out.

It's a pretty simple idea. When you are ready to commit your changes to source control, you have TeamCity check that your commit is not going to break the build. Having someone else on your team commit bugs, failing tests, or, even worse, compilation problems to the repository is a serious time waster. A problem committed is a problem doubled...

So how is this different from just checking it yourself by compiling locally, running tests and committing? Well, there are several advantages to getting TC to do it.

Firstly, TC runs the checks remotely. It takes all the code you were going to commit, sends it to the build grid, and runs your tests there. If the tests fail, the changes are not committed, and no one else has to suffer! Already this week I have broken the build a couple of times by accidentally committing only some subdirectories of changes (I use Subversion for source control and TortoiseSVN as my client tool). This is avoided as TC bundles up all your working copy changes for you, so you don't forget to commit some code.

Secondly, TC runs the checks remotely. Yes, I am repeating myself! Once you have asked TC to pre-test a commit, you can continue making changes while the CPU-intensive stuff happens on another box. The local changes you make after the pre-test commit do not interfere with the pre-test, and you can carry on developing. If your pre-test works, you get told and your change set gets committed to the repository. If your pre-test fails, you get told, and you can fix any problems and try again. The point is, it's fire and forget, but with no friendly fire.

Here is how I set it up. As you'd expect from JetBrains it's very easy.

1. Go to your TC project site. Select My Tools and Settings and download the add-in for Visual Studio. Install it.

2. Restart VS (probably)

3. You should now have a TeamCity menu item.

4. Click Login; set your server details and the login/password you use to connect to TC.

5. If you don't have an svn.exe, you need to get one! (If, like me, you are using TortoiseSVN, you need to do this.)

   A. I used CollabNet's binaries. Download and install this version. Yes, you have to register; no, that doesn't matter.

CollabNet Subversion Command-Line Client v1.6.3 (for Windows)

6. Click the Remote Run command in VS (Remote Run and pre-tested commit seem to be interchangeable terms)

7. Click the subversion tab

8. Set the path to svn.exe to C:\Program Files\CollabNet Subversion Client\svn.exe (the default CollabNet installation path)

9. Set 'Use Subversion local copy authentication' if not already set

10. Set 'Detect subversion working copy automatically' if not already set

11. Go to the changes tab and select the build configuration you want to use for pre-tested commit. We have a commit checking config which compiles the trunk solution and runs our fast TDD tests.

12. Type in a commit comment and press Run. This took about a minute the first time I tried it; after that it's faster!

* other continuous integration tools are available.

Wednesday 15 April 2009

NHibernate tips

Here is a collection of things I keep forgetting with NH.

Getting any old client for testing:

var client = session.CreateQuery("from Client")
                    .SetMaxResults(1)
                    .UniqueResult();

Thursday 11 December 2008

Winforms event memory leaks; Form Zombies that just won’t die!

I have a fair few events in the business layer which are listened to by my GUI, which is a winforms app.  I have been tripped up by a nasty memory leak problem caused by GUI objects listening to events in the business layer.

Unless developers are really disciplined and remember to unhook event handlers (with the -= operator, in the closing event or wherever), you will get the following scenario:

You close a form because the user is finished with it.  The rest of the GUI can no longer reference your form or control.  You would now expect the form to be cleaned up by the garbage collector.   But!! Because the form is listening to a longer-lived business layer object, that business object, for as long as it stays in scope, will in turn keep your form in scope.  You have ended up with GUI objects being held in memory by references from the business layer!

This doesn’t appear to make sense: how can an observable keep an observer hanging around when the observer wants to leave? (Daddy!! Daddy!! Look what I can do Daddy!!)  The simplest way of explaining what is happening: when an event is wired up, the observed needs to know how to get hold of the observer (to tell it about the event).  In order to do this, the observed holds a reference to the observer.  From here it's not hard to see that the observed can now keep the observer in scope via that reference.
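The mechanics can be sketched in a few lines of plain C# (class names are made up for illustration): the publisher's event is backed by a delegate field, and every subscribed delegate carries a strong reference back to its target.

```csharp
using System;

// Minimal sketch: the publisher's delegate field is what holds the
// strong reference back to the subscriber.
class Publisher
{
    public event EventHandler Something;

    public Delegate[] Listeners =>
        Something?.GetInvocationList() ?? Array.Empty<Delegate>();
}

class Subscriber
{
    public void Handle(object sender, EventArgs e) { }
}

class Demo
{
    static void Main()
    {
        var pub = new Publisher();
        var sub = new Subscriber();
        pub.Something += sub.Handle;

        // The publisher now references the subscriber via the delegate's
        // Target, so the subscriber can't be garbage collected while subscribed.
        Console.WriteLine(ReferenceEquals(pub.Listeners[0].Target, sub)); // True

        pub.Something -= sub.Handle;
        Console.WriteLine(pub.Listeners.Length); // 0
    }
}
```

Swap `Subscriber` for a form and `Publisher` for a long-lived business object and you have the zombie scenario exactly.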

In a large winforms app and a long user session, you can end up with loads of forms hanging around eavesdropping on all sorts of conversations that are nothing to do with them.  This is, first of all, a nuisance, as winforms is quite memory-hungry anyway.  More importantly, it can lead to bizarre effects such as message boxes appearing twice for one logical business ‘event’.

So how to go about solving this?  Well, if you only have a couple of forms and a couple of business objects or events, you can just read the code to ensure that events are being unsubscribed from when they should be.  If you only have a couple of events, chances are you aren’t reading this post, as you haven’t uncovered this problem yet!

“But but! I’ve got simply millions of events!  My app is an event driven masterpiece of enterprisey stuff!”.  Well, in this scenario, I don’t like typing more than the next man, so I decided to use reflection to track down the zombie forms.  I thought I would piggyback on my normal GUI unit binding tests and check that unsubscribing was happening when it should.  It should be a simple matter of binding my GUI objects, then telling them to unbind, or closing them, or whatever is relevant for the subject under test.  Then I could use some sexy reflection to confirm that, indeed, the events in the business layer no longer refer to the form.

It turns out this is not as easy as you might assume!  I set my sights on getting a reference to the MulticastDelegate underlying the event in the business layer.  Once I have this reference, I can enumerate the invocation list checking for references to my form.

First of all I tried .GetType().GetEvents().  This returns an array of EventInfo objects, and I won’t detain you further with how useful I found that class to be.

So after a bit of Visual Studio debug reflection exploration, and getting nowhere fast, I turned to Google, where I was assured that my MulticastDelegate was a field.  I was sure I had looked in the GetFields() collection, which was an empty array.  I then had an RTFM moment and realised there was an overload of GetFields.  A bit of boolean logic later and I had a reference to a private field in the class that was nothing to do with my event.  Damn you, internet!  Will you ever live up to your potential?

I then set up a little test rig which convinced me I was going down the right path.  I was easily able to get a reference to the MulticastDelegate using exactly the same code that appeared to do nothing for my real code base.  I systematically introduced all the things that made my real code scenario different.  It happened to be the first thing I tried: inheritance.  Although the misleading GetEvents() returns the events from a base class, GetFields() doesn’t return inherited fields.  In fact, all you need to do is get to the base class’s type and you are away.  Here’s the code to get to a base class’s MulticastDelegate:

using System;
using System.Reflection;
using NUnit.Framework;

namespace Yarp
{
    public class BaseClassWithEvent
    {
        public event EventHandler OhWhereCanIBe;
    }

    public class SubClass : BaseClassWithEvent
    {}

    [TestFixture]
    public class SubClassTester
    {
        private SubClass c1;

        public SubClassTester()
        {
            c1 = new SubClass();
            //Subscribe so the event has an invocation list
            this.c1.OhWhereCanIBe += new EventHandler(c1_OhWhereCanIBe);
        }

        private void c1_OhWhereCanIBe(object sender, EventArgs e)
        {
            throw new NotImplementedException();
        }

        [Test]
        public void TestFindingTheInvocationListOfAnEventInABaseClass()
        {
            //Get the subclass type
            var c1Type = c1.GetType();

            //Prove we can't access the backing field via the subclass type
            var subclassFields = c1Type.GetFields(BindingFlags.NonPublic | BindingFlags.Instance);
            Assert.IsEmpty(subclassFields);

            //It is accessible through the base type
            var baseFields =
                c1Type.BaseType.GetFields(BindingFlags.NonPublic | BindingFlags.Instance);
            Assert.IsNotEmpty(baseFields);

            MulticastDelegate multicastDelegate = baseFields[0].GetValue(c1) as MulticastDelegate;
            Assert.IsNotNull(multicastDelegate);

            foreach (var listener in multicastDelegate.GetInvocationList())
            {
                //Do what you've got to do to ensure the listener is not a zombie,
                //e.g. check listener.Target against forms that should be dead
            }
        }
    }
}
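Wrapping that base-type hop into a helper that walks the whole inheritance chain keeps the tests tidy. This is a sketch (the helper name is my own invention), relying on the fact that a field-like C# event compiles to a private backing field with the same name as the event:

```csharp
using System;
using System.Reflection;

// Walk the inheritance chain looking for the private backing field that a
// field-like C# event compiles to (the field shares the event's name).
static class EventReflection
{
    public static MulticastDelegate GetEventDelegate(object target, string eventName)
    {
        for (var type = target.GetType(); type != null; type = type.BaseType)
        {
            var field = type.GetField(eventName, BindingFlags.NonPublic | BindingFlags.Instance);
            if (field != null)
                return field.GetValue(target) as MulticastDelegate;
        }
        return null;
    }
}

class BaseWithEvent
{
    public event EventHandler Changed;
}

class Sub : BaseWithEvent { }

class Demo
{
    static void Main()
    {
        var sub = new Sub();
        sub.Changed += (s, e) => { };

        // The helper finds the delegate even though it lives on the base type
        var del = EventReflection.GetEventDelegate(sub, "Changed");
        Console.WriteLine(del.GetInvocationList().Length); // 1
    }
}
```

Note this depends on a compiler implementation detail (the backing field's name), so it belongs in test code, not production code.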

Thursday 6 November 2008

Useful Links

Windsor container

BitterCoder Tutorials

(series incomplete but pretty comprehensive)

NHibernate

Summer of NHibernate

Fluent NHibernate : using conventions to avoid writing XML

http://ayende.com/Blog/archive/2008/12/11/fluent-nhibernate.aspx

CAB Links

Intro to CAB (Rich Newman)

Rhino

http://ayende.com/hibernating-rhinos.aspx - Videos

NHib - session management in rich client

Tuesday 14 October 2008

Trying to get code snippets working is a joy with blogger

/// <summary>
/// IoC Ctor
/// </summary>
/// <param name="PoisonMessageReceiver">A receiver of private messages which can't be processed</param>
/// <param name="PrivateEntityRepository">A service to use when requesting private entities from the replicated DB</param>
/// <param name="HmsServiceFacade">A facade for interfacing with the HMS service</param>
/// <param name="PrivateEntitiesToHmsEntitiesMapper">A mapping class matching HMS entities with private entities</param>
public RxMessageReceiver(IPoisonMessageHandler PoisonMessageReceiver,
                       IPrivateEntityRepository PrivateEntityRepository,
                       IHmsServiceFacade HmsServiceFacade,
                       IPrivateEntitiesToHmsEntitiesMapper PrivateEntitiesToHmsEntitiesMapper)
{
  poisonMessageReceiver = PoisonMessageReceiver;
  privateEntityRepository = PrivateEntityRepository;
  hmsServiceFacade = HmsServiceFacade;
  privateEntitiesToHmsEntitiesMapper = PrivateEntitiesToHmsEntitiesMapper;
}

I take it back. That wasn’t so hard, was it! I went with this in the end; it’s an extension for Live Writer which takes code copied onto the clipboard from Visual Studio.

Tuesday 7 October 2008

Strongly typed dataset: IDENTITY columns, RowState and AcceptReject

I have recently hit a problem with strongly typed datasets: an inability to reconcile child rows’ RowState with cascading AcceptReject rules on a relationship. I want AcceptReject rules to cascade so that when I remove a tranche of data from multiple tables I can remove it all with 2 commands:

ultimateMasterRow.Delete(); 
ultimateMasterRow.AcceptChanges(); 

However, if you set up a test rig as I describe below, you will quickly see that you can't just set all relationships' AcceptReject rules to cascade to get the desired functionality.

I have 2 tables in my dataset, a typical master/detail setup. The master and the detail tables are based on SQL tables with IDENTITY columns. The dataset is configured with a relationship with:

update rule - cascade

delete rule - cascade

accept/reject rule - cascade

I have configured both table adapters with:

AcceptChangesDuringUpdate = false; //don’t let adapter automatically call AcceptChanges

I populate the 2 tables in the dataset with 1 record each. The RowState of each record is:

Master record - Added

Detail record - Added

So far so good. Now to synch with the database.

I call Update only on the master record’s table adapter. The RowState of each record is now:

Master record - Modified

Detail record - Modified

The detail record’s foreign key has been updated with the actual IDENTITY value from the database as expected (the update rule cascades), but the detail row has now lost its correct RowState, which should still be Added. I’m prepared to accept that the master row’s RowState can be Modified, as it has just received the IDENTITY value back from SQL, but no way should the detail record be Modified. In fact, if you go on to try to call Update on the detail table adapter, you will most likely receive an optimistic concurrency exception. If you put a SQL Profiler trace on, you will see that the table adapter is attempting an UPDATE statement in the database rather than an INSERT, as it is under the incorrect impression that the RowState correctly reflects the synchronisation state of the row!
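This is why the cascade bites: the adapter chooses INSERT or UPDATE purely from RowState. The transitions can be seen with a plain DataTable (a minimal sketch, no database needed):

```csharp
using System;
using System.Data;

// RowState mechanics the table adapter relies on: Added means INSERT,
// Modified means UPDATE, and any edit after AcceptChanges (or after a
// cascaded key update) flips the row to Modified.
class Demo
{
    static void Main()
    {
        var table = new DataTable("Master");
        table.Columns.Add("Id", typeof(int));

        var row = table.Rows.Add(1);
        Console.WriteLine(row.RowState);   // Added -> adapter would INSERT

        table.AcceptChanges();
        Console.WriteLine(row.RowState);   // Unchanged -> adapter would skip it

        row["Id"] = 2;
        Console.WriteLine(row.RowState);   // Modified -> adapter would UPDATE
    }
}
```

In the master/detail scenario above, the cascaded foreign key write is just such an edit, which is how the still-uninserted detail row ends up Modified.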

There is an easy solution, however: configure all your relationships with AcceptRejectRule = None.

“Hey! How do I get the awesome functionality of a 2-command removal of all related data from the dataset?”, you ask. Well, all you have to do when you are getting ready to remove your data tranche is… temporarily turn the AcceptReject rule back to cascade to do your delete/accept!

//Briefly switch any non-cascading relationships to cascade, then revert

List<DataRelation> relationsToRevertToNone = new List<DataRelation>();
StronglyTypedDataSet ds = //Get DataSet here

foreach (DataRelation rel in ds.Relations)
{
    if (rel.ChildKeyConstraint.AcceptRejectRule == AcceptRejectRule.None)
    {
        relationsToRevertToNone.Add(rel);
        rel.ChildKeyConstraint.AcceptRejectRule = AcceptRejectRule.Cascade;
    }
}

UltimateParentRow.Delete();
UltimateParentRow.AcceptChanges(); //now cascades to all related rows

//back out the changes to the rule so inserts will work in future
foreach (var relToRevert in relationsToRevertToNone)
{
    relToRevert.ChildKeyConstraint.AcceptRejectRule = AcceptRejectRule.None;
}
 

Not quite 2 lines to get the functionality, and it would be better if the dataset did this for you, but, well, you can get there in the end!