MSTest and the DeploymentItem Attribute

I was writing unit tests today with MSTest and came across what I thought was a straightforward problem. My test needed to load an XML document from the file system. Easy, right?

Wrong.

Here is the part of the code that was causing the problem – two lines that I’ve written in applications a hundred times before without any issue.

XmlDocument testXmlDocument = new XmlDocument();
testXmlDocument.Load(@".\TestXml\SWHNewFormat.xml");

But when the test ran, it threw the following error:

System.IO.DirectoryNotFoundException: Could not find a part of the path 'D:\[Path to test project]\bin\Debug\TestXml\SWHNewFormat.xml'.

It turns out that when you run your tests they are deployed to a folder (the path of which is configurable). Any files you want deployed with the tests (for example, XML files which your tests may need to read) have to be flagged up on the test method (or test class) with the DeploymentItem attribute.

Finding that out was the easy part. Actually getting the file to deploy to the folder was another matter.

Google. Rinse. Repeat.

Firstly, in Visual Studio I had to set the “Copy to Output Directory” property to “Copy always” on the files I wanted to deploy with my tests. Once that was set I then needed to add the DeploymentItem attribute.

The attribute accepts two parameters. The first is the path to the folder that contains the file(s) you want to deploy. The second is optional: the name of the folder that will be created in the deployment folder to hold the copied files.
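
In its general form it looks something like this (the folder names here are placeholders):

// first parameter: path relative to the build output folder
// second parameter (optional): folder created inside the test deployment folder
[DeploymentItem(@".\FolderContainingFiles", "TargetFolderName")]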

But with both parameters set on the attribute, still my file did not appear in the deploy folder.

Gotcha!

The thing that caught me out was that the path in the first parameter is relative to the bin\[CONFIGURATION NAME] folder of the test project, so in my case the path I specified in the attribute was

  • @".\XmlMangerTests\TestXml"

This translated to the file path 

  • "D:\[Path to test project]\bin\Debug\XmlMangerTests\TestXml"

The second parameter is the folder name within the test run folder for the copied files. Again, in my case I set

  • “TestXml”

which translated to the folder path

  • D:\[Path to Solution]\TestResults\Deploy_CompName 2012-10-25 11_10_16\Out\TestXml
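
Putting all of that together, the attribute on my test method ended up looking something like this (the method name and test body are illustrative):

[TestMethod]
[DeploymentItem(@".\XmlMangerTests\TestXml", "TestXml")]
public void NewFormatXmlCanBeLoaded()
{
    // the deployed file can now be loaded relative to the deployment folder
    XmlDocument testXmlDocument = new XmlDocument();
    testXmlDocument.Load(@".\TestXml\SWHNewFormat.xml");
    Assert.IsNotNull(testXmlDocument.DocumentElement);
}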

Once this was set (and, if I’m honest, after correcting the odd typo or two!) the test ran as expected. Of course it failed, but that’s a whole other post.

Using GhostScriptSharp to create pdf thumbnails

It sounds like a simple task: creating thumbnails of uploaded files for display in a repeating list. The project I am working on uses Kentico CMS, which provides the majority of this functionality out of the box. However, it did not create thumbnails for pdf files. The interim solution was to display the default pdf file icon. This worked for the first few weeks, after which it became clear that the majority of the uploaded documents were pdfs. This made for a very uninteresting screen, so the decision was made to find a component or service to create pdf thumbnails.

After a little research I decided on GhostscriptSharp, a C# wrapper for the GhostScript library. Installed, up and running, and a simple solution implemented beautifully on localhost within a couple of hours. And so to the staging server… a very different beast. This did not take hours, or sadly even days, but a couple of weeks (albeit of on-and-off development) to get working.
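
For context, the thumbnail-generation code itself amounted to little more than the sketch below. The method name and parameter order are recalled from GhostscriptSharp’s documentation and signatures vary between versions and forks, so treat this as an assumption and check it against the copy you install:

// A sketch: render page 1 of a pdf to a jpeg thumbnail.
// Paths are illustrative; verify the exact signature in your version of GhostscriptSharp.
GhostscriptSharp.GhostscriptWrapper.GeneratePageThumbnail(
    @"C:\files\document.pdf",   // source pdf
    @"C:\files\document.jpg",   // thumbnail to write
    1,                          // page number to render
    100,                        // thumbnail width
    100);                       // thumbnail height
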
I am now going to outline the major time-sinks I encountered in getting GhostscriptSharp working on a remote staging server.

Firstly, you do not need to install GhostScript on the server on which your code is running. This was not 100% clear from the documentation, and I was also hampered by the suggestion in several other blog posts that an install was indeed a prerequisite for getting GhostscriptSharp up and running. For me, this was simply not the case. Your application only requires the correct version (I’ll come to this later!) of the GhostScript dll in its bin folder.

The next problem I encountered was the following error message:

Message: An attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B)
Stack Trace: at GhostscriptSharp.GhostscriptWrapper.CreateAPIInstance(IntPtr& pinstance, IntPtr caller_handle)

My working solution was developed on a 64-bit machine. I was deploying to a 32-bit machine. There are two versions of GhostScript: a 64-bit and a 32-bit version (can you see where I’m going here?).
So, if you see this error message with your GhostScript application, check you are using the correct version of the dll.
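
If you are unsure which flavour your process is running as, a quick check along these lines can save some head-scratching (Environment.Is64BitProcess needs .NET 4.0 or later; the dll names are the standard GhostScript ones):

// Report which GhostScript dll this process will need in its bin folder.
string requiredDll = Environment.Is64BitProcess ? "gsdll64.dll" : "gsdll32.dll";
Console.WriteLine("This process requires " + requiredDll);
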
Correct dll in the bin folder. Show me the pdf thumbnails.
No?
Another error message was now being displayed by my application. This one said:

Unable to load DLL 'gsdll32.dll': The specified module could not be found.

This error message was caused by insufficient permissions on the bin folder. To overcome this I had to grant read & write permissions for the application pool identity on the /bin folder to enable it to access the dll.


Another error message bites the dust; surely now… thumbnails? No. Though now my application was up and running, so no more ASP.NET error messages, just application error messages. This one said:

Ghostscript 100 error

As it so often does, StackOverflow came to the rescue. As well as granting read and write permissions on the bin folder, I also had to grant read and write permissions on the /files folder (the folder my thumbnails were being written to). This allowed the GhostScript dll to write thumbnails to the specified location.
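
On a related note, it is also worth ruling out the simpler failure mode of the output folder not existing at all; a defensive check like this (paths illustrative) costs very little:

// Make sure the thumbnail output folder exists before GhostScript writes to it.
// (This does not fix missing NTFS permissions, but it rules out a missing folder.)
string thumbnailPath = @"C:\inetpub\wwwroot\files\document.jpg"; // illustrative
string outputFolder = System.IO.Path.GetDirectoryName(thumbnailPath);
if (!System.IO.Directory.Exists(outputFolder))
{
    System.IO.Directory.CreateDirectory(outputFolder);
}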

And with that final permissions change, the application sprang into life and several hundred thumbnails were written to disk. Ah, the taste of victory.


In summary, writing the GhostscriptSharp code was a really simple process. Getting it running on a remote server was rather more involved, but I would definitely use it again on other projects with the configuration knowledge I now have in the bank.

CMS Design Patterns… anyone?

During my years as a developer I have worked with a lot of content management systems. Hell, I’ve even helped develop a couple. Over the last 18 months I have been working extensively with:

  • EpiServer
  • Sitefinity
  • Kentico
  • Umbraco

Having worked with all of these systems, I come across the same design problems again and again, never quite sure which solution is the correct one. What I am looking for is an authoritative tome on CMS design patterns. Scratch that. I’m looking for any tome on CMS design patterns. So far a quick Google and a sift through the dusty back shelves of Amazon have turned up nothing.
So if anyone knows of any books on CMS design patterns, be sure to let me know.

Roy Osherove TDD Masterclass

Roy Osherove is giving a hands-on TDD Masterclass in the UK, September 21-25. Roy is the author of “The Art of Unit Testing” (http://www.artofunittesting.com/), a leading TDD and unit testing book; he maintains a blog at http://iserializable.com (which, among other things, has critiqued tests written by Microsoft for ASP.NET MVC – check out the testreviews category) and has recently been on the Scott Hanselman podcast (http://bit.ly/psgYO), where he educated Scott on best practices in unit testing techniques. For a further insight into Roy’s style, be sure to also check out his talk at the recent Norwegian Developers Conference (http://bit.ly/NuJVa).

Full Details here: http://bbits.co.uk/tddmasterclass

bbits are holding a raffle for a free ticket for the event. To be eligible to win the ticket (worth £2395!) you MUST paste this text, including all links, into your blog and email Ian@bbits.co.uk with the url to the blog entry. The draw will be made on September 1st and the winner informed by email and on bbits.co.uk/blog.

Encode column values as XML in SQL Server 2005

The problem:

I needed to select rows from a database which corresponded to a note on a case. Each row was then written as XML to a file. Some of the columns in the row contained user-entered information, which could include special characters ('<' and '>', for example) that would create badly formed XML. I needed a way to encode the column values when they were selected so that special characters were replaced.

The solution:

With a little help from the fellas over at StackOverflow.com I implemented a user-defined function which encoded the original column value. Performance wasn’t a big concern as the query only ever returns one row, hence using a function:

ALTER FUNCTION [pega].[encodeTextForXml]
(
    @string_to_encode varchar(1000)
)
RETURNS VARCHAR(MAX)
AS
BEGIN

    declare @x xml
    declare @encStr varchar(8000)

    -- wrap the value in a dummy element; the xml type escapes special characters
    set @x = '<a/>'
    set @x.modify('insert text{sql:variable("@string_to_encode")} as first into (/a)[1]')

    -- read the (now encoded) text back out as a string
    set @encStr = CAST(@x.query('/a/text()') as varchar(8000))

    RETURN @encStr

END

If I had the luxury of not being penned in by existing code, this would have been solved differently, as suggested in the StackOverflow post; however, it works well and does what it says on the tin!

Continual learning & getting involved

The Dean’s speech at my graduation ceremony professed that this would not be the end of my learning, but in fact the beginning. Learning is a lifelong process. It was an interesting speech and one that has rung very true in my career as a developer. If you are not continually learning then you will quickly become stale as a developer, not only in terms of the code you are writing, but the tools and processes you are using to write it.

Since starting as a junior developer I have had a bit of an inferiority complex when it comes to other developers. So after seeing a post by Jeremy Miller on CodeBetter I was inspired to do more than the odd hour of book reading that I had previously been kidding myself was enough to improve as a developer.

As part of my quest to become a better developer I have done the following:

  • Begun blogging – if you’re reading this, you’ll know all about it
  • Started an out-of-work-hours side project
  • Attended community developer days (see below)
  • Begun contributing to StackOverflow
  • Subscribed to a number of podcasts other than just Hanselminutes
  • Begun investigating MS certification – a longer-term goal

Community events

One of the most productive things I have done recently to learn more is attend DDD South West, a free community developer event. It was an excellent event, not only in terms of the content of the presentations, but also the organisation of the day. Combine this with the fact that it was all free – even the delicious food – and it’s difficult to understand why there were any empty seats – ok, it was on a Saturday!

The sessions I attended were:

  • Embracing a new world – dynamic languages and .Net – Ben Hall
  • Get Going With jQuery – George Adamson
  • Real-world MVC architecture – Steve Sanderson
  • What’s New In C# 4? – Guy Smith-Ferrier

Here is the full agenda, which gives a brief overview of all the sessions that ran on the day, as well as source code and slides.

During lunch there were a series of grok talks. Grok talks are 10-minute micro-presentations on a particular subject. Two of the ones that stuck with me were:

  • 10 tips for speeding up SQL Server by Jon Reade (SqlServerClub.com)
  • A talk on the future of developing .NET for mobile devices (by a developer whose name escapes me)

I have been inspired to get more involved in my local community developer events. There are lots of good ones in the Bristol / Bath area on whose mailing lists I have lurked for the past few years. It’s time to de-lurk and get involved. Here are a few of the groups I plan to infiltrate over the coming months:

  • BathCamp
  • The .NET Developer Network
  • Underscore

SQL Server – Get the most recent item from a table using a sub-select

Recently I needed to create a kinda complex SQL query to get the most recent note added to a case in an internal application at work. This is something I come across infrequently enough to forget every time I want to do it, so I thought I would post it as it may be of some use to others (and to me, next time I need it!).

The requirement:

We need to get the latest case note entry made by a user, not by the system. The data required is the date, time, user ID, subject and content fields.

The database structure:

Case Table

  Column name      Type
  Id (PK)          int

Note table

  Column name      Type
  Id (PK)          int
  CreateDateTime   datetime
  CaseId (FK)      varchar(50)
  CreateOperator   varchar(50)
  NoteTitle        varchar(100)
  NoteContent      text
  Service          varchar(50)

Service Table

  Column name      Type
  Id (PK)          int
  CaseId (FK)      varchar(50)
  Status           varchar(100)

The solution:
It appears fairly straightforward at first glance. Just join across the three tables using primary keys and foreign keys. The complication comes in that we only want to see the most recently added note on a case.

So, I used a sub-select to get the most recently created note ( MAX(CreateDateTime) ) and compared it to the CreateDateTime in the outer select statement. It’s probably easier to just look at the SQL:

DECLARE @Service varchar(20)
SET @Service = 'MyServiceName'

SELECT attach.Service, attach.CreateDateTime, attach.pxLinkedRefFrom, work.Id, attach.NoteTitle, attach.NoteContent, idx.Status
FROM myCaseTable work
JOIN myNoteTable attach ON work.Id = attach.CaseId
LEFT OUTER JOIN myServiceTable idx ON idx.CaseId = attach.CaseId
WHERE attach.CreateDateTime =
(
    -- sub-select: the most recent note on this case not created by the system
    SELECT MAX(CreateDateTime) FROM myNoteTable attach2
    WHERE attach2.CaseId = attach.CaseId
    AND attach2.CreateOperator <> 'System'
    AND attach2.Service = @Service
)

Any feedback, especially more performant ways of achieving the same task, welcome.

Microsoft Enterprise Library Rediscovered

Brilliant.

I have just accidentally rediscovered Microsoft Enterprise Library, a set of common “building blocks” for .NET applications. I used version 3.1 on an old project back in 2007 and it was a real time saver.

I have just downloaded v4.1 and it looks like it covers lots more. Will post more thoughts when I have had more time to figure it all out.

Unit testing is dead. Long live peer reviews.

Just a quick update on the unit testing trial we were conducting… Management canned it. Due to the iterative development approach we follow, it would have taken about three months to get some hard and fast figures, and that wasn’t the sort of timescale they were looking at. Despite this I am still valiantly plugging away, trying to prove the case for unit testing, and I hope that the decision will be reviewed in the coming months.

One of the more interesting measures that has been introduced to ensure quality on the project is peer review. Following each completed piece of development work, a senior developer is assigned to review the code we have produced. Peer review, as I understand it, is a process where code is pored over, design decisions are challenged and suggestions for improvement are debated. At my organisation it has been implemented as a checklist of things that should be done, with the focus not on quality but on getting ticks in a box. Very frustrating.

A tweak to the unit testing methodology

Just a quick update on a change of methodology from my earlier post.

We have decided to scrap the control team. It was deemed too expensive in terms of man hours as we are only trialling the concept.

So my team is now the “unit testing” team. Any regression defects that arise from this development stage are to be tracked back to individual pieces of work. This will hopefully show how many defects can be attributed to code with unit tests as opposed to code without. As a proper unit test should catch the majority of defects before system testing, we will also be counting the number of defects our team tester finds in our code.

We will meticulously record the time spent on development and on writing unit tests, hopefully giving us some idea of the time overhead that comes with writing a test.