
Developing Successful Software Projects (S.T.A.M.P)

A large proportion of software projects are still failing, costing billions each year, so the need to engineer solutions correctly has always been of great importance. Yet there doesn’t seem to be a silver-bullet approach to developing software that will always end in success.  If I’m honest, I don’t think there can be: with the constant changes in technologies, skills, tools, platforms and devices, we often find ourselves as developers in a world that is changing faster than we can keep up.  Imagine trying to build a bridge over a river that was always widening or narrowing, and finding that your materials or tools kept changing as the bridge was being constructed.  It’s a very difficult engineering discipline to be involved in, but also an exciting and challenging one.

When I first started in the profession of software engineering I loved coding, but I didn’t understand how to develop a successful project.  I’d get a vague requirement from a customer and jump straight into the code to implement the solution as I saw it.  Fifteen years later, after working on multiple projects, some failures, some successes, it got me thinking.  What have I learnt about writing successful projects?  In fact, what makes a successful project?  I then started to think: if I was starting a new project, knowing what I know now about why projects failed in the past, what basic principles do I feel must be present for a project to stand a good chance of succeeding?

Based on my experience I came up with these 5 basic principles.

My basic ingredients for a software project


Now you may not agree, and with different experience comes different viewpoints, but that is great, as I hope this post will lead on to further discussions and ideas about what the ingredients for a successful software project are.


Stakeholder Driven

I’ve seen a lot of projects where the stakeholder will come to the developer(s) with a requirement. The development team will then formulate their own understanding of that requirement or feel they can offer a better return on the original requirement by driving the development with little input from the original stakeholder.

We have all seen the classic comic strip:




The driving force behind a project should always be the stakeholder (whether that’s the customer, business, or any entity that requires a solution to the problem) guided closely by the development team.

Agile teaches us to use User Stories (a short statement which captures the ‘who’, ‘what’ and ‘why’) to help the stakeholders convey the functional business requirements to the development team.

This is great, as the User Stories allow the system to be broken down into smaller chunks and encourage the developers / analysts to work with the stakeholder to refine and detail the requirements.  Communication is improved and the stakeholder will have more confidence that they will receive a solution that more closely matches their initial requirements.

If we consider SCRUM, requirements are broken down into sprints (normally a period of one month or less, each resulting in a ‘Done’ and usable product) which implement part of the final solution.  Stakeholders should ideally be involved in reviewing the results of a sprint to ensure the requirements are being realised as they envisioned, and to recommend changes or introduce new considerations due to a changing business or environment.  If a software project is not driven by the stakeholder in this way, the developers are likely to misinterpret the requirements, or the business is likely to change during the development cycle so that the end product no longer completely meets its needs.

A great advantage I’ve always seen from using the SCRUM approach is that the requirements carrying greater risk or a greater ROI are handled first.  This can result in a usable product which meets the stakeholder’s needs earlier, and as such the entire budget may not be required if the product is ‘good enough’.  Additionally, the customer is more likely to have a working product if the budget runs out before the end of the development.

Testable Code

One of the key by-products of going down the Agile route is the refactoring of code.  Unlike the old waterfall models, Agile actively encourages change.

As Agile software professionals, we are encouraged to develop sprints of work which can be reviewed by the stakeholders.  This is great as it allows testing in parallel with development, reducing the overall time for the project life cycle.  However, this will almost certainly result in the refactoring of existing code as customers detail and finalise their requirements, or as the solution is refactored to adjust to ever-changing business needs.

Now, as developers, I suspect we have all been in that situation where the refactoring of one area of code has resulted in another area breaking.  However, by writing testable code and applying unit tests, for example, we can feel confident that we will be notified of any breakages caused by the changes we make.  As a result we deliver higher-quality software which is easier to maintain, and the time spent in QA and testing is reduced.  By writing testable code we promote good software practices such as low coupling, high cohesion and code re-use, and hopefully employ ideas such as unit testing and dependency injection.
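To make this concrete, here is a minimal NUnit-style sketch of a unit test guarding a refactor.  The PriceCalculator class is a hypothetical example invented for illustration, not code from a real project:

```csharp
using System;
using NUnit.Framework;

// Hypothetical class under test, invented for this example.
public class PriceCalculator
{
    public decimal ApplyDiscount(decimal price, decimal percent)
    {
        if (percent < 0 || percent > 100)
            throw new ArgumentOutOfRangeException("percent");
        return price - (price * percent / 100m);
    }
}

[TestFixture]
public class PriceCalculatorTests
{
    // If a later refactor breaks the discount logic, this test fails
    // immediately rather than the bug surfacing much later in QA.
    [Test]
    public void ApplyDiscount_TenPercent_ReducesPriceByTenPercent()
    {
        var calculator = new PriceCalculator();
        Assert.AreEqual(90m, calculator.ApplyDiscount(100m, 10m));
    }
}
```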


Accept Change 

Often overlooked, but in my mind a key ingredient to delivering a successful software project is the ability to design a solution which can adapt to change, whether this relates to changes in the requirements, design or even the environment within which the solution is deployed.

A common mistake is to receive a set of requirements and dive straight into coding a solution without entertaining the idea that the requirements, or even the business needs, may change, leading to the refactoring of code.  Now this is where the experience of the developer comes into its own.  Developing a solution that fulfils the principles of low coupling and high cohesion, and can be extended and modified in an easy programmatic manner, takes real skill.

If we don’t accept that the software is likely to need to change with future enhancements, then we must accept that maintenance and complexity costs are going to increase in order to implement future modifications and enhancements, leading to longer development times and lower-quality software, not to mention lower customer satisfaction.

Looking back, every company I have worked for has been involved in the development of at least one long-running project, and most of those have suffered from ‘software rot’.  Many of the projects were designed with a single implementation in mind, with little thought of future refactoring and change requests.  The result was normally quick hacks to existing code rather than expensive design changes.  With this disregard for good design came messy, inefficient and unmaintainable code, i.e. ‘software rot’.  Eventually the solution no longer meets the needs of the business.  This can lead to a breakdown in the morale of the development team, as essentially they will be working on and maintaining a system which they are not proud of and are desperate to re-write.  Something which is not always possible due to the expense and time involved.

Measurable and Transparent Development

How can we determine if a project is on track? After all, if there are obstacles in the way or deadlines are slipping, then these need to be addressed as soon as possible.

Frameworks such as SCRUM go some way to solving this issue.  A project backlog is created as a single source of everything that is required to implement a workable solution.  Items are selected from this backlog and arranged into sprints of work (no more than a month), each resulting in a ‘Done’, usable and potentially releasable product.  During these sprints there are short morning meetings to check progress and remove any potential obstacles.

Taking this approach to software development allows all stakeholders to have a clear picture of what is required, and progress is clearly visible and measurable.  It will also improve communication and confidence among the stakeholders, as well as the morale of the development team, by returning a regular sense of achievement and progress to the development cycle.



Planning

And finally, when it comes to producing quality software, on time and to budget, there is no substitute for good planning.  After all, as a friend once told me:

‘If you were going to scale Everest you would plan first to give you the best chance of success, so why should writing software be any different?  Don’t jump straight into the code without planning first; it will not end in success.’

Again, good planning comes with experience.  For example, take the common scenario of the initial requirements being finalised and entered into the project backlog.  At this stage your manager may appear and say, ‘Right, I need to price this project, so how long is it going to take to develop?’  Developers hate this, as at this stage there is insufficient detail to estimate each requirement accurately, and more often than not they are held accountable for the estimates given before the project commences.  A common and difficult situation.

However, there is a solution, sometimes known as Planning Poker.  The basic premise is that we as developers stop trying to estimate requirements as time-scales but rather as degrees of complexity compared to a known entity.  In other words, using a past piece of work as a reference, and knowing how long it took to implement, we can estimate how complex we feel new requirements are relative to that original piece of work.

The poker element of this approach helps with the accuracy of the estimates; after all, more heads are better than one.  So take the following made-up complexity scale: 2, 4, 8, 16, 32, 50, 100.  With a requirement on the table, all developers give an estimate from the scale.  One developer might say ‘2’, one ‘50’, while the others settle on ‘8’.  This is great, as all developers have different strengths and experience and might pick up on things that others overlook.  The point is, the reasons for the ‘2’ and the ‘50’ would be discussed before entering into a new round of estimates.  Hopefully with each round of discussion, points are raised which lead to more similar estimates by the developers, until an accurate idea of the complexity of a requirement is known.

Good planning should also mean good design.  With the surge in the number of platforms and devices on which software must operate, never has this been more evident.  What environment will the application be deployed into?  What is the topology?  Are there performance requirements?  What are the potential obstacles?  Will the application need to be scalable?  Cloud or internal hosting?  All these questions need to be asked, planned for and built into the design if the solution is to be truly successful.

Planning is always going to be my main ingredient for a successful software project, as from experience, whether using the traditional waterfall approach or the newer Agile approach, the successful projects I have been involved with have always been planned properly.  Remember the well-known phrase:

‘Failing to Plan is planning to fail’


Creating an MVC Application using Unit of Work, the Repository Pattern and Ninject

Part 1 – Summary of the Solution (Click Here)

Part 2 – The Solution

In this post I’ll examine, in more detail, the components that make up the architectural solution described in my previous post, up to the API / controller level.  If you want a copy of the complete code then don’t hesitate to contact me via the About section of my blog and I’ll send you a copy.

Ok, so let’s start at the bottom with the database.  Well, there is no database, not yet anyway.  In my example I have used the Code-First approach, which allows me to describe my model in code, from which Entity Framework will construct the database.  Knowing this, let’s have a look at my model project;

The Solution

I am using the common Blog Model.  Keeping it simple, a Blog can contain many Posts and a Post can contain many Comments.

For this tutorial it is not especially important to know what properties these objects contain, but rather that they will reflect tables within our database and act as container objects to pass information around our system.  However, here is the code for the Post class just so you have an idea of how it is constructed:

The Post Model Object

Remember that Entity Framework has the job of converting C# model classes to the database, so it’s wise to give it as much information as possible to lend it a helping hand.  I’ve demonstrated this by adding an annotation to the Title property and creating references back to parent objects, e.g. BlogId, Blog etc., which can be used by the Fluent API (not shown in this post) to enforce relationships between the data objects.  I have also created the Comments collection as virtual.  This ensures that Entity Framework will not initialise the Comments collection until the data is specifically requested, improving performance.
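As the original screenshot isn’t reproduced here, below is a sketch of how such a Post class might look.  The specific annotation ([StringLength]) and the Id / Content properties are assumptions for illustration; only Title, BlogId, Blog and the virtual Comments collection are described in the text:

```csharp
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

public class Post
{
    public int Id { get; set; }

    // Annotation giving EF a hint about the column (exact attribute assumed)
    [StringLength(200)]
    public string Title { get; set; }

    public string Content { get; set; }

    // References back to the parent Blog, used to enforce the relationship
    public int BlogId { get; set; }
    public virtual Blog Blog { get; set; }

    // virtual enables lazy loading: Comments are only fetched when accessed
    public virtual ICollection<Comment> Comments { get; set; }
}
```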

Once we have our model classes we need to implement the repository layer to provide consistent data access.  To do this I created a new project (to contain my interfaces) which contains an IRepository interface;

IRepository Class

This provides us with a generic interface which will accept an object from our model project and describes the data access actions that will be available to our application, e.g. retrieving records, updating records, adding a new record etc.  By implementing this interface we have a standard set of actions that can be performed on our database model (consistency).  If a model object needs to extend this set of actions, then we just create another repository interface which implements IRepository and includes the extra method signatures.  For example, check out IPostRepository in this example, which adds an extra method to return the number of comments associated with a Post;

IPostRepository Class
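Since the original screenshots aren’t reproduced here, this is a minimal sketch of what the two interfaces might look like; the exact member list is an assumption:

```csharp
using System.Linq;

// Generic repository contract describing the standard data access actions
public interface IRepository<T> where T : class
{
    IQueryable<T> GetAll();
    T FindById(int id);
    void Add(T entity);
    void Update(T entity);
    void Remove(T entity);
}

// Extended contract for Posts, adding the comment-count query described above
public interface IPostRepository : IRepository<Post>
{
    int GetCommentCount(int postId);
}
```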

Following on from this we need to create the concrete implementations of the repository classes.  This is done within another project which is essentially our data layer.  I have created a class named EFRepository which implements IRepository.  This will be our base data access class and every object in our model will implement the functionality within this class as a minimum;

EFRepository Class
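A sketch of how such an EFRepository might be implemented.  The exact wiring is an assumption; here the DbContext is handed in by the Unit of Work described later in the post, so all repositories share one context:

```csharp
using System.Data.Entity;
using System.Linq;

// Base EF-backed repository; every model object gets this functionality
// as a minimum.
public class EFRepository<T> : IRepository<T> where T : class
{
    protected readonly DbContext Context;
    protected readonly DbSet<T> DbSet;

    public EFRepository(DbContext context)
    {
        Context = context;
        DbSet = context.Set<T>();
    }

    public IQueryable<T> GetAll() { return DbSet; }

    public T FindById(int id) { return DbSet.Find(id); }

    public void Add(T entity) { DbSet.Add(entity); }

    public void Update(T entity)
    {
        // Mark the entity as modified so SaveChanges writes it back
        Context.Entry(entity).State = EntityState.Modified;
    }

    public void Remove(T entity) { DbSet.Remove(entity); }
}
```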

We will also need implementations for extended repository interfaces such as the IPostRepository created earlier;

PostRepository Class
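A sketch of the extended Post repository; the Comment.PostId property used in the query is an assumption:

```csharp
using System.Data.Entity;
using System.Linq;

// Concrete Post repository, extending the base EF repository with the
// extra comment-count query declared by IPostRepository.
public class PostRepository : EFRepository<Post>, IPostRepository
{
    public PostRepository(DbContext context) : base(context) { }

    public int GetCommentCount(int postId)
    {
        return Context.Set<Comment>().Count(c => c.PostId == postId);
    }
}
```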

As in this instance we are using Entity Framework, the repositories access the database via the DbContext.  You will notice that in the EFRepository class we use an instance of this DbContext to perform operations on the database.  When developing against Entity Framework using the Code-First approach you must create a context class which inherits from DbContext, to identify the data objects which will be made available through it.  Here is an example of the context class I created for this project, named BlogDbContext;

BlogDbContext Class
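A sketch of what such a context class might look like:

```csharp
using System.Data.Entity;

// Code-First context exposing the model objects as DbSets; EF uses this
// to construct the database schema.
public class BlogDbContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }
    public DbSet<Post> Posts { get; set; }
    public DbSet<Comment> Comments { get; set; }

    // The Fluent API relationship configuration mentioned earlier (not
    // shown in this post) would go in an OnModelCreating override here.
}
```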

By now we have described our model objects and created our repositories to allow us to perform data-persistence actions on the database objects they represent.  The next step is to implement the Unit of Work pattern.  This requires that we create a Unit of Work class which provides a single point of access to our repositories and a method to persist changes to the database as a single transaction, improving efficiency and reducing concurrency problems.

So what does the Unit of Work class look like?  Well, if we examine its interface, it simply exposes the repository objects and a single method to persist any changes back to the database.  That’s it, simple.

IWebTemplateUoW Class
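A sketch of what this interface might look like; the repository property names are assumptions:

```csharp
using System;

// The Unit of Work contract: one property per repository plus a single
// Save method to persist all changes as one transaction.
public interface IWebTemplateUoW : IDisposable
{
    IRepository<Blog> Blogs { get; }
    IPostRepository Posts { get; }
    IRepository<Comment> Comments { get; }

    void Save();
}
```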

And the concrete implementation of the UnitOfWork class would look like this;

WebTemplateUoW Class
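A sketch of how the concrete class might look.  The repositories are created lazily against the one shared context, so a single SaveChanges() commits everything together:

```csharp
using System;

public class WebTemplateUoW : IWebTemplateUoW
{
    // Creating the Unit of Work creates the DbContext it wraps
    private readonly BlogDbContext _context = new BlogDbContext();

    private IRepository<Blog> _blogs;
    private IPostRepository _posts;
    private IRepository<Comment> _comments;

    public IRepository<Blog> Blogs
    {
        get { return _blogs ?? (_blogs = new EFRepository<Blog>(_context)); }
    }

    public IPostRepository Posts
    {
        get { return _posts ?? (_posts = new PostRepository(_context)); }
    }

    public IRepository<Comment> Comments
    {
        get { return _comments ?? (_comments = new EFRepository<Comment>(_context)); }
    }

    public void Save()
    {
        _context.SaveChanges();   // single commit for all pending changes
    }

    public void Dispose()
    {
        _context.Dispose();       // dispose the context once the request is done
    }
}
```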

The key things to point out here are, firstly, that when we create the Unit of Work class we also create a new instance of our DbContext.  By wrapping the DbContext within the Unit of Work we can submit our changes within a single update request and dispose of the context once the request is complete.  We also have the advantage of performing atomic transactions: for example, if one object fails to update when updating multiple objects within the database, then we can roll back all objects to their original state, reducing data inconsistency issues.

Another important point is that access to all repositories is available within this class, and that our context can also be configured from this class.  If there was suddenly a business requirement to move to another database platform which used a different data access technology, e.g. NHibernate, then we could create another Unit of Work class and substitute it for this one without breaking the logic and UI layers.  The advantages of a decoupled system!

The final stage is to use the Unit of Work in our business layer or, in the case of this post, in the controllers within our API, as I created an MVC application.

A sample controller Class using the Unit of Work

In this example I have created a quick controller class which, when instantiated, creates an instance of the Unit of Work.  This in turn creates an instance of our DbContext and our repository classes.  As a result we instantly have available all the objects from our model and the operations associated with that model via the repository interfaces, e.g. GetAll(), Remove(entity), Add(newEntity) etc., and in the case of the Post object GetCommentCount(), which was implemented via the IPostRepository.

The only bit of code in this controller you might be querying is the constructor, which accepts an instantiated Unit of Work object, in this case of type IWebTemplateUoW.  This is because the Unit of Work object is created for the controller by Ninject.  Ninject is a framework which provides dependency injection.  In other words, it allows the application to choose which concrete implementation of a component to create at run time.
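A sketch of such a controller; the controller name and action methods are hypothetical examples:

```csharp
using System.Linq;
using System.Web.Mvc;

// The Unit of Work arrives via constructor injection; Ninject supplies
// the concrete IWebTemplateUoW instance at run time.
public class PostController : Controller
{
    private readonly IWebTemplateUoW _uow;

    public PostController(IWebTemplateUoW uow)
    {
        _uow = uow;
    }

    public ActionResult Index()
    {
        // Repository operations are available straight away
        var posts = _uow.Posts.GetAll().ToList();
        return View(posts);
    }

    public ActionResult CommentCount(int id)
    {
        // The extra method added by IPostRepository
        return Content(_uow.Posts.GetCommentCount(id).ToString());
    }
}
```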

Dependency injection and Ninject is a large topic, so I won’t try to go into it in any level of detail here, but essentially I created a Ninject configuration class which tells my application that if it comes across an object of type IWebTemplateUoW, it should create an instance of WebTemplateUoW.

The Ninject Configuration Class
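A sketch of the kind of binding involved, using Ninject’s NinjectModule API.  The module name is an assumption, and in a full MVC app the binding would typically also be scoped per request:

```csharp
using Ninject.Modules;

// Ninject binding: whenever a constructor asks for IWebTemplateUoW,
// create a WebTemplateUoW.
public class WebTemplateModule : NinjectModule
{
    public override void Load()
    {
        Bind<IWebTemplateUoW>().To<WebTemplateUoW>();
    }
}
```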













Creating an MVC Application using Unit of Work, the Repository Pattern and Ninject

Part 1 – Summary of the Solution

When developing new web applications using ASP.NET MVC we always encounter the same questions: where does the business logic layer fit in?  How can we implement unit testing?  What if the database changes?  How can we make it scalable?

As a result, I set out to create a project template which could be used for future MVC projects and would address the questions above.

After much investigation the best approach I found was to develop the following framework:


Solution Component Overview

In this post I will just give an overview of the main components of the solution which is followed by a more detailed technical post of how the solution is put together.


Starting at the bottom of the diagram you will notice I have a database and sitting on top of that database I have the Entity Framework.  In the following post you will see how I use the Code-First approach to allow the Entity Framework to construct the database based on the model.

On top of the Entity Framework I am making use of the Repository Pattern to allow access to the objects that make up the model and to enforce the actions associated with these objects.  By making use of the Repository Pattern we prevent needless duplication of data access logic and enforce a standard set of actions that can be applied to the data objects, e.g. GetAll(), Find(), Add(), Delete() etc.

The Repositories serve up the model objects to the Unit of Work.  Using the Unit of Work pattern, we allow the higher-tier layers to access and modify the data objects via a single class.  The result is a single commit (operation / transaction) when writing changes made to multiple objects back to the database, allowing for a much more efficient approach and reducing concurrency issues.  Using the Unit of Work pattern also means we de-couple the higher logic and UI tiers from the database context, which allows us to substitute in alternative Unit of Work classes that may be used for unit testing or for connecting to alternative databases.

The Unit of Work object is then directly used by the API components or, if required, a business logic layer, which can be slotted between the API / UI layer and the Unit of Work classes to give a further level of separation.  In my example I have used Ninject as my IoC container to instantiate the correct Unit of Work objects, but you will see more on this in the following post.

Our UI then has access to our data-level objects and models without needing to know any of the ins and outs of the system, which will allow us to build any UI on top of our API service, whether it’s a desktop, web, phone or iPad app.

Click here to see my real world template




Step 4 – Querying the WCF Data Service

This post is Step 4 in a Quick Start tutorial on how to create WCF Data Services

In Step 3 we finished exposing our WCF service.  Browsing directly to the Service will display the entities that we are exposing via the service using the AtomPub format.

If we want the data returned in an alternative format, e.g. JSON or XML, then we just modify the request header using the ‘Accept’ attribute, so for JSON we could add the request header:

Accept: application/json

The best way to demonstrate this is to use a free application called Fiddler.

With Fiddler loaded, enter the address of the WCF service into the Composer tab and add ‘Accept: application/json’ into the Request Headers window.  Click the Execute button and you should have a JSON page returned in the Web Sessions window:

Using Fiddler to modify the Request Header

Click on the Page in the Web Sessions window to view the results.
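If you’d prefer not to use Fiddler, the same request can be issued from C# code.  This sketch uses HttpWebRequest and assumes the sample service address used later in this post:

```csharp
using System;
using System.IO;
using System.Net;

class Program
{
    static void Main()
    {
        // The same request Fiddler is composing: a GET with a JSON Accept header
        var request = (HttpWebRequest)WebRequest.Create(
            "http://localhost/WCFSparsMobile/WcfDataService1.svc/staffs");
        request.Accept = "application/json";   // ask the service for JSON

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}
```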

So far we have queried the WCF service and found out how to return the data in a specific format, but what about querying the actual data?

Let’s assume we want to return all records held within the Staff table.  As this was one of the entities exposed by the service (see the beginning of this article), we can just append the href attribute representing this entity to the service URL, e.g.

http://localhost/WCFSparsMobile/WcfDataService1.svc/staffs
NOTE: it is case sensitive

JSON returned using Fiddler

Again I am using Fiddler to display the results returned as JSON

Let’s call the same URL but return the results in the raw AtomPub format so we can analyse exactly what is being returned:

NOTE: Some browsers are automatically set up to view the raw content as an RSS feed.  You may need to disable this feature in order to view the raw response from the data service.

One important point to remember is that the Data Service is purely exposing data.  The whole concept is based on a RESTful architecture.  This is great if we want to build a client in an alternative language or on a different device, but it adds a little more complexity when it comes to modelling the relationships and navigational properties of the entities on the server side.

WCF Data Services do, however, provide hints on how to navigate around the data served to the client.  Notice the Link tags in the XML above.  The first one gives us an exact pointer to the record we are viewing.  You can try this yourself; simply modify the original URL to include the ID of the record you are after in brackets (this URL is also shown in the ID tag in the above XML).  Now try changing the ID to return different records.

Now referring back to Step 1 we created the following database structure:

MySQL Table Structure from Step 1

There is a link between Staff and StaffTypes.  This has also been exposed in the XML served up by the data service – See the second Link tag.

So by simply using the ‘href’ supplied to me in this tag I can navigate to the related StaffType record e.g.

Calling http://localhost/WCFSparsMobile/WcfDataService1.svc/staffs(1)/stafftype1

Returns the record:

Ok, now for the clever bit.  WCF Data Services are based on OData (the Open Data Protocol).  This allows us to query the data using nothing but the URI.  This is a massive subject which includes using the URI to query, filter and sort data, too much for this blog post, but I’ll show an example just to get you started.

So let’s take the scenario that we want to return the staff records where the forename is equal to ‘Dan’ (assuming there is a person called Dan in the database!)

We simply append a filter expression to the URI as follows:

http://localhost/WCFSparsMobile/WcfDataService1.svc/staffs?$filter=forename eq ‘Dan’

This is great as, you may have noticed, we are now returning, navigating and filtering data without writing any code, essentially giving the client the power to request the exact data it requires.


Step 3 – Creating the WCF Data Service

This post is Step 3 in a Quick Start tutorial on how to create WCF Data Services

Now for the fun bit, exposing our data using WCF Data Services.

1. Start by creating a new project in the solution using the template – ASP.NET Empty Web Application calling it something like WcfDataService.

2. Add a reference to the data model project we created in step 2.

3. Copy the ConnectionStrings section from the App.Config file in the data model project into the Web.config file of the WcfDataService project that we have just created.

4. Now add a new file (Add New Item) to the Data Service project of type ‘WCF Data Service’.

WCF Data Service Template

In the code view of the service we have just added (WcfDataService1.svc.cs by default) we need to make the following code modifications:

Modifying the Data Service Class

The first modification lets the service know which data we want to expose, in this case the entities representing our model created in step 2 via the Entity Framework.

The second modification relates to security.  This is quite a cool feature, as it allows us to specify which parts of our data should be available via the service and to what degree that data should be available, e.g. read-only, read & write etc.
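For reference, the modified service class might look something like the sketch below.  The context type name (companyEntities) is an assumption based on the model created in step 2:

```csharp
using System.Data.Services;
using System.Data.Services.Common;

// The type parameter names the EF context from step 2 (first modification);
// InitializeService sets the access rules (second modification).
public class WcfDataService1 : DataService<companyEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Expose all entity sets read-only; tighten per entity set as required
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion =
            DataServiceProtocolVersion.V2;
    }
}
```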

Ok, now to run our service (yep, that’s it, not much coding to get a service initially set up!)

Right click the Service in the project and select ‘View In Browser’:

If your service has been created correctly you should see an XML representation of the entity objects we created in steps 1 and 2 and exposed via the service earlier in this step.

Results from calling the WCF service

So we have a fully functional WCF service, but it doesn’t really show a lot at the moment.  The next step in this tutorial gives more of an insight into the data being served up by the service and how to manipulate it.

Step 4 – Querying the WCF Data Service


Step 2 – WCF Data Services – Entity Framework Model

This post is Step 2 in a Quick Start tutorial on how to create WCF Data Services

Using the Entity Framework is a great way to model the database and serve up our data, although if you are using MySQL as your data source like me, you will need to install MySQL Connector/NET, which will allow Entity Framework to import from a MySQL database.  The connector can be downloaded here

1. Once the connector is installed, open Visual Studio and create a new project

2. Within the Project click to Add a New Item – ADO.NET Entity Data Model and name it ‘companyModel’

3. When the Entity Data Model Wizard appears click ‘Generate from database’

4. On the Connection screen hit ‘New Connection’, and on the following Connection Properties screen you should be able to select the MySQL driver (if the connector installed successfully) and complete your credentials for the database:

Connection Properties for the MySQL Data Source

5. Select to import all the tables we created in step 1 and click Finish to generate the model.  If all goes correctly you should end up with the data structure created in step 1 imported into the Entity Framework

Entity Framework model representing the MySQL Data Structure

Follow the link for Step 3 where we expose this data using a WCF Data Service

Step 3 – The WCF Data Service


Step 1 – WCF Data Services – The MySQL Database

This post is Step 1 in a Quick Start tutorial on how to create WCF Data Services

First, just a bit of background.  I am going to create a simple database structure in MySQL that has a few tables but also enforces relationships.  As I’m not doing a MySQL tutorial I will not go into the technicalities of creating the database, but if you feel more comfortable with something else, e.g. SQL Server, then just copy the following structure in your database of choice.

We are going to model the following scenario:

In our organisation there are a number of staff, and a member of staff may care for one or more supported people

The database structure I have created to represent this scenario looks like this (Note: I have used MySQL Workbench to create the following EER model.  Once created, you can use the Database → Synchronize Model feature of Workbench to port the structure straight into your database)

The sample Data Structure that will be exposed using WCF Data Services

Once the database has been created populate the tables with some sample data e.g.

Populate with sample data

Ok, so we have a database.  Follow the link to Step 2, where we import this data into the Entity Framework

Step 2 – The Entity Framework