
Category Archives: web development

Cross Domain requests when using JQuery Autocomplete

When using the jQuery Autocomplete control with AJAX web requests, for example, it is likely that the data used to populate the autocomplete is stored on another security domain.  As a result you may have found that the suggestion list fails to appear and you start to bang your head against a table wondering why.

In order to ensure the autocomplete continues to function correctly across domains, the trick is to set the dataType of the AJAX request to "jsonp", e.g.

Sample JQuery autocomplete using an ajax call

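The original code screenshot is no longer available, so here is a minimal sketch of the idea; the #search input, the endpoint URL and the shape of the returned data are illustrative assumptions:

```javascript
// Minimal sketch: autocomplete suggestions fetched from another domain via JSONP.
// The #search input, endpoint URL and response shape are assumptions for illustration only.
$("#search").autocomplete({
    minLength: 2,
    source: function (request, response) {
        $.ajax({
            url: "https://api.example.com/suggestions", // hypothetical cross-domain endpoint
            dataType: "jsonp",                          // the key setting for the cross-domain request
            data: { term: request.term },
            success: function (data) {
                // Assumes the service returns an array of strings or { label, value } objects
                response(data);
            },
            error: function () {
                response([]); // fall back to an empty suggestion list
            }
        });
    }
});
```

Note that the remote service must actually support JSONP (i.e. wrap its response in the callback parameter jQuery appends to the request), otherwise the request will still fail.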


Developing Successful Software Projects (S.T.A.M.P)

A large proportion of software projects are still failing, costing billions each year, so the need to engineer solutions correctly has always been of great importance.  Yet there doesn't seem to be a silver-bullet approach to developing software that will always end in success.  If I'm honest, I don't think there can be: with the constant changes in technologies, skills, tools, platforms and devices, we often find ourselves as developers in a world that is changing faster than we can keep up with.  Imagine trying to build a bridge over a river that was always widening or narrowing, and finding that your materials and tools kept changing as the bridge was being constructed.  It's a very difficult engineering discipline to be involved in, but also an exciting and challenging one.

When I first started in the profession of software engineering I loved coding, but I didn't understand how to develop a successful project.  I'd get a vague requirement from a customer and jump straight into the code to implement the solution as I saw it.  Fifteen years later, after working on multiple projects, some failures and some successes, it got me thinking.  What have I learnt about writing successful projects?  In fact, what makes a successful project?  If I were starting a new project today, knowing what I know now about why projects failed in the past, what basic principles do I feel must be present for a project to stand a good chance of succeeding?

Based on my experience I came up with these 5 basic principles.

My basic ingredients for a software project


Now, you may not agree, and with different experience comes a different viewpoint, but that's great: I hope this post will lead on to further discussion and ideas about what the ingredients for a successful software project really are.

 

Stakeholder Driven

I've seen a lot of projects where the stakeholder will come to the developer(s) with a requirement.  The development team then forms its own understanding of that requirement, or feels it can offer a better return on the original requirement, and drives the development with little further input from the original stakeholder.

We have all seen the classic comic strip:

 

comic

 

The driving force behind a project should always be the stakeholder (whether that’s the customer, business, or any entity that requires a solution to the problem) guided closely by the development team.

Agile teaches us to use User Stories (a short statement which captures the 'who', 'what' and 'why', for example: 'As a blog author, I want to schedule posts so that they publish while I am away') to help the stakeholders convey the functional business requirements to the development team.

This is great, as User Stories allow the system to be broken down into smaller chunks and encourage the developers and analysts to work with the stakeholder to refine and detail the requirements.  Communication is improved and the stakeholder will have more confidence that they will receive a solution that more closely matches their initial requirements.

If we consider Scrum, work is broken down into sprints (normally a period of one month or less, each resulting in a 'Done', usable product) which implement part of the final solution.  Stakeholders should ideally be involved in reviewing the results of a sprint in order to ensure the requirements are being realised as they envisioned, and to recommend changes or introduce new considerations arising from a changing business or environment.  If a software project is not driven by the stakeholder in this way, the developers are likely to misinterpret the requirements, or the business is likely to change during the development cycle so that the end product no longer completely meets its needs.

A great advantage I've always seen from the Scrum approach is that the requirements carrying the greatest risk or the greatest ROI are handled first.  This can result in a usable product which meets the stakeholder's needs earlier, so the entire budget may not be required if the product is 'good enough'; equally, the customer is more likely to have a working product if the budget runs out before the end of the development.

Testable Code

One of the key by-products of going down the Agile route is the refactoring of code.  Unlike the old waterfall models, Agile actively encourages change.

As Agile software professionals, we are encouraged to develop in sprints of work which can be reviewed by the stakeholders.  This is great as it allows testing in parallel with development, reducing the overall time for the project life cycle.  However, this will almost certainly result in the refactoring of existing code as customers detail and finalise their requirements, or as the solution is reworked to adjust to ever-changing business needs.

Now, as developers, I suspect we have all been in that situation where refactoring one area of code has resulted in another area breaking.  However, by writing testable code and applying unit tests, we can feel confident that we will be notified of any breakages caused by the changes we make.  As a result we deliver higher quality software which is easier to maintain, and the time spent in QA and testing is reduced.  Writing testable code also promotes good software practices such as low coupling, high cohesion and code re-use, and encourages ideas such as unit testing and dependency injection.
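As a rough illustration (the classes and test below are my own, not from the original post, and assume NUnit), constructor injection keeps a class decoupled from its dependency and lets a unit test substitute a fake:

```csharp
using NUnit.Framework;

// The calculator depends on an abstraction, so it can be tested in isolation.
// All names here are invented for the example.
public interface IDiscountProvider
{
    decimal GetDiscount(string customerId);
}

public class PriceCalculator
{
    private readonly IDiscountProvider _discounts;

    // The dependency is injected, keeping coupling low and the class testable.
    public PriceCalculator(IDiscountProvider discounts)
    {
        _discounts = discounts;
    }

    public decimal Total(string customerId, decimal subtotal)
    {
        return subtotal - _discounts.GetDiscount(customerId);
    }
}

[TestFixture]
public class PriceCalculatorTests
{
    // A hand-rolled fake stands in for the real discount service.
    private class FakeDiscounts : IDiscountProvider
    {
        public decimal GetDiscount(string customerId) => 5m;
    }

    [Test]
    public void Total_SubtractsDiscountFromSubtotal()
    {
        var calculator = new PriceCalculator(new FakeDiscounts());
        Assert.AreEqual(95m, calculator.Total("any-customer", 100m));
    }
}
```

If the calculation is later refactored, this test will flag any breakage immediately.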

 

Accept Change 

Often overlooked, but in my mind a key ingredient to delivering a successful software project is the ability to design a solution which can adapt to change, whether this relates to changes in the requirements, design or even the environment within which the solution is deployed.

A common mistake is to receive a set of requirements and dive straight into coding a solution without entertaining the idea that the requirements, or even the business needs, may change, leading to the refactoring of code.  This is where the experience of the developer comes into its own.  Developing a solution that exhibits low coupling and high cohesion, and can be extended and modified in a straightforward programmatic manner, takes real skill.
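As a small, purely illustrative sketch of what designing for change can look like, coding against an abstraction means a new behaviour can be added as a new class rather than by editing existing, working code (the types below are invented for the example):

```csharp
public class Report { }

// Callers depend on the abstraction, not on any particular exporter.
public interface IReportExporter
{
    void Export(Report report);
}

public class PdfExporter : IReportExporter
{
    public void Export(Report report) { /* write a PDF */ }
}

// A later requirement for CSV output becomes a new class, not a change to tested code.
public class CsvExporter : IReportExporter
{
    public void Export(Report report) { /* write a CSV file */ }
}

public class ReportService
{
    private readonly IReportExporter _exporter;

    public ReportService(IReportExporter exporter)
    {
        _exporter = exporter; // the concrete exporter is chosen by the caller or an IoC container
    }

    public void Publish(Report report) => _exporter.Export(report);
}
```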

If we don't accept that the software is likely to need to change with future enhancements, then we must accept that maintenance costs and complexity are going to increase with every future modification, leading to longer development times, lower quality software and, not least, lower customer satisfaction.

Looking back, every company I have worked for has been involved in the development of at least one long-running project, and most of those have suffered from 'software rot'.  Many of the projects were designed with a single implementation in mind, with little thought for future refactoring and change requests.  The result was normally quick hacks to existing code rather than expensive design changes.  With this disregard for good design came messy, inefficient and unmaintainable code, i.e. 'software rot'.  Eventually the solution no longer meets the needs of the business.  This can also lead to a breakdown in the morale of the development team, as essentially they are working on and maintaining a system which they are not proud of and are desperate to rewrite, something which is not always possible due to the expense and time involved.

Measurable and Transparent Development

How can we determine if a project is on track? After all if there are obstacles in the way or deadlines are slipping then these need to be addressed as soon as possible.

Frameworks such as Scrum go some way to solving this issue.  A product backlog is created as a single source of everything that is required to implement a workable solution.  Items are selected from this backlog and arranged into sprints of work (no more than a month each), every sprint resulting in a 'Done', usable and potentially releasable product.  During these sprints there are short morning meetings to check progress and remove any potential obstacles.

Taking this approach to software development allows all stakeholders to have a clear picture of what is required, and progress is clearly visible and measurable.  It also improves communication and confidence among the stakeholders, as well as the morale of the development team, by returning a regular sense of achievement and progress to the development cycle.

 

Planning

And finally, when it comes to producing quality software on time and to budget, there is no substitute for good planning.  After all, as a friend once told me:

'If you were going to scale Everest you would plan first to give yourself the best chance of success, so why should writing software be any different?  Don't jump straight into the code without planning first; it will not end in success.'

Again, good planning comes with experience.  For example, take the common scenario of the initial requirements being finalised and entered into the product backlog.  At this stage your manager may appear and say, 'Right, I need to price this project, so how long is it going to take to develop?'  Developers hate this, as at this stage there is insufficient detail to estimate each requirement accurately, and more often than not they are held accountable for the estimates given before the project commences.  A common and difficult situation.

However, there is a solution.  Sometimes known as Planning Poker, the basic premise is that we as developers stop trying to estimate requirements as time-scales and instead estimate them as degrees of complexity compared to a known entity.  In other words, using a past piece of work as a reference, and knowing how long it took to implement, we estimate how complex we feel each new requirement is relative to that original piece of work.

The poker element of this approach helps with the accuracy of the estimates; after all, more heads are better than one.  Take the following made-up complexity scale: 2, 4, 8, 16, 32, 50, 100.  With a requirement on the table, all developers give an estimate from the scale.  One developer might say '2', one '50', while the others settle on '8'.  This is great, as the developers all have different strengths and experience and might pick up on things that others overlook.  The point is that the reasons for the '2' and the '50' are discussed before entering a new round of estimates.  Hopefully, with each round of discussion, points are raised which lead to more similar estimates from the developers, until an accurate idea of the complexity of a requirement is reached.

Good planning should also mean good design.  With the surge in the number of platforms and devices on which software must operate, never has this been more evident.  What environment will the application be deployed into?  What is the topology?  Are there performance requirements?  What are the potential obstacles?  Will the application need to be scalable?  Cloud or internal hosting?  All these questions need to be asked, planned for and built into the design if the solution is to be truly successful.

Planning is always going to be my main ingredient for a successful software project because, in my experience, whether using the traditional waterfall approach or the newer Agile approach, the successful projects I have been involved with have always been planned properly.  Remember the well-known phrase:

‘Failing to Plan is planning to fail’

 

Creating an MVC Application using Unit of Work, Repository Pattern and Ninject

Part 1 – Summary of the Solution

Part 2 – The Solution

In this post I'll examine in more detail the components that make up the architectural solution described in my previous post, up to the API / controller level.  If you want a copy of the complete code then don't hesitate to contact me via the About section of my blog and I'll send you a copy.

OK, so let's start at the bottom with the database.  Well, there is no database, not yet anyway.  In my example I have used the Code-First approach, which allows me to describe my model in code, from which Entity Framework will construct the database.  Knowing this, let's have a look at my model project;

The Solution

I am using the common Blog Model.  Keeping it simple, a Blog can contain many Posts and a Post can contain many Comments.

For this tutorial it is not especially important to know what properties these objects contain, but rather that they will reflect tables within our database and act as container objects to pass information around our system.  However, here is the code for the Post class just so you have an idea of how it is constructed:

The Post Model Object
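The screenshot of the class is missing, but based on the description above and below (an annotation on Title, BlogId / Blog references back to the parent, and a virtual Comments collection) it would look something like this; the exact properties and annotation are assumptions, and Blog and Comment are the other model classes in the project:

```csharp
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

public class Post
{
    public int PostId { get; set; }

    [StringLength(200)]   // an example annotation giving EF extra information about the column
    public string Title { get; set; }

    public string Content { get; set; }

    // References back to the parent Blog, which the Fluent API can use to enforce the relationship
    public int BlogId { get; set; }
    public virtual Blog Blog { get; set; }

    // Declared virtual so Entity Framework lazy-loads the comments only when they are requested
    public virtual ICollection<Comment> Comments { get; set; }
}
```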

Remember that Entity Framework has the job of converting the C# model classes into a database, so it's wise to give it as much information as possible to lend it a helping hand.  I've demonstrated this by adding an annotation to the Title property and creating references back to parent objects, e.g. BlogId, Blog etc., which can be used by the Fluent API (not shown in this post) to enforce relationships between the data objects.  I have also declared the Comments collection as virtual.  This ensures that Entity Framework will not initialise the Comments collection until the data is specifically requested (lazy loading), improving performance.

Once we have our model classes we need to implement the repository layer to provide consistent data access.  To do this I created a new project (to contain my interfaces) which contains an IRepository interface;

IRepository Class
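The interface itself was shown as an image; a sketch of a typical shape for it (the exact method names are my assumption) is:

```csharp
using System;
using System.Collections.Generic;
using System.Linq.Expressions;

// The generic contract: the standard set of data access actions available for any model object.
public interface IRepository<T> where T : class
{
    IEnumerable<T> GetAll();
    T GetById(int id);
    IEnumerable<T> Find(Expression<Func<T, bool>> predicate);
    void Add(T entity);
    void Update(T entity);
    void Remove(T entity);
}
```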

This provides us with a generic interface which will accept an object from our model project and describes the data access actions that will be available to our application, e.g. retrieving records, updating records, adding a new record etc.  By implementing this interface we have a standard set of actions that can be performed on our database model (consistency).  If a model object needs to extend this set of actions, we simply create another repository interface which implements IRepository and adds the extra method signatures.  For example, IPostRepository below adds an extra method to return the number of comments associated with a Post;

IPostRepository Class
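Again as a sketch, the extended interface simply inherits the generic one and adds the extra query:

```csharp
// Post-specific repository: everything IRepository<Post> offers, plus the comment count query.
public interface IPostRepository : IRepository<Post>
{
    int GetCommentCount(int postId);
}
```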

Following on from this we need to create the concrete implementations of the repository classes.  This is done within another project which is essentially our data layer.  I have created a class named EFRepository which implements IRepository.  This will be our base data access class and every object in our model will implement the functionality within this class as a minimum;

EFRepository Class
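A sketch of such a base class, assuming EF6 and assuming the DbContext is handed to the repository so that every repository created by the Unit of Work shares a single context:

```csharp
using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using System.Linq.Expressions;

// Base Entity Framework repository: the minimum data access functionality for every model object.
public class EFRepository<T> : IRepository<T> where T : class
{
    protected readonly DbContext Context;
    protected readonly DbSet<T> DbSet;

    public EFRepository(DbContext context)
    {
        Context = context;
        DbSet = context.Set<T>();
    }

    public IEnumerable<T> GetAll() => DbSet.ToList();

    public T GetById(int id) => DbSet.Find(id);

    public IEnumerable<T> Find(Expression<Func<T, bool>> predicate) => DbSet.Where(predicate).ToList();

    public void Add(T entity) => DbSet.Add(entity);

    public void Update(T entity) => Context.Entry(entity).State = EntityState.Modified;

    public void Remove(T entity) => DbSet.Remove(entity);
}
```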

We will also need implementations for extended repository interfaces such as the IPostRepository created earlier;

PostRepository Class
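A corresponding sketch of the concrete Post repository (assuming Comment carries a PostId foreign key, as in the model sketched earlier):

```csharp
using System.Data.Entity;
using System.Linq;

// Concrete Post repository: inherits the standard behaviour and adds the comment count query.
public class PostRepository : EFRepository<Post>, IPostRepository
{
    public PostRepository(DbContext context) : base(context)
    {
    }

    public int GetCommentCount(int postId)
    {
        // Counts the comments that belong to the given post.
        return Context.Set<Comment>().Count(c => c.PostId == postId);
    }
}
```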

As we are using Entity Framework in this instance, the repositories will access the database via the DbContext.  You will notice that the EFRepository class uses an instance of this DbContext in order to perform operations on the database.  When developing against Entity Framework using the Code-First approach you must create a context class which derives from DbContext and identifies the data objects that will be made available through it.  Here is an example of the context class I created for this project, named BlogDBContext;

BlogDbContext Class
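A sketch of that context class; the DbSet properties tell Entity Framework which model classes become tables:

```csharp
using System.Data.Entity;

// Code-First context: exposes the model classes that EF should persist.
public class BlogDBContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }
    public DbSet<Post> Posts { get; set; }
    public DbSet<Comment> Comments { get; set; }
}
```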

By now we have described our model objects and created our repositories, allowing us to perform data persistence actions on the database objects they represent.  The next step is to implement the Unit of Work pattern.  This requires a Unit of Work class which provides a single point of access to our repositories and a method to persist changes to the database as a single transaction, improving efficiency and reducing concurrency problems.

So what does the Unit of Work class look like?  Well, if we examine its interface, it simply exposes the repository objects and a single method to persist any changes back to the database.  That's it.  Simple.

IWebTemplateUoW Class
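As a sketch (the property and method names are my assumption), the interface might look like this:

```csharp
using System;

// The unit of work contract: one property per repository plus a single method
// to persist all pending changes as one transaction.
public interface IWebTemplateUoW : IDisposable
{
    IRepository<Blog> Blogs { get; }
    IPostRepository Posts { get; }
    IRepository<Comment> Comments { get; }

    void Commit();
}
```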

And the concrete implementation of the UnitOfWork class would look like this;

WebTemplateUoW Class
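And a matching sketch of the concrete class, which creates the DbContext and shares it across the repositories:

```csharp
// Concrete unit of work: owns the DbContext, shares it with every repository,
// and saves all tracked changes in a single call.
public class WebTemplateUoW : IWebTemplateUoW
{
    private readonly BlogDBContext _context;

    public WebTemplateUoW()
    {
        _context = new BlogDBContext();
        Blogs = new EFRepository<Blog>(_context);
        Posts = new PostRepository(_context);
        Comments = new EFRepository<Comment>(_context);
    }

    public IRepository<Blog> Blogs { get; private set; }
    public IPostRepository Posts { get; private set; }
    public IRepository<Comment> Comments { get; private set; }

    public void Commit()
    {
        // Every change tracked by the shared context is written back as one transaction.
        _context.SaveChanges();
    }

    public void Dispose()
    {
        _context.Dispose();
    }
}
```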

The key things to point out here are, firstly, that when we create the Unit of Work class we also create a new instance of our DbContext.  By wrapping the DbContext within the Unit of Work we can submit our changes in a single update request and dispose of the context once the request is complete.  We also gain atomic transactions: for example, if one object fails to update while updating multiple objects in the database, we can roll back all objects to their original state, reducing data inconsistency issues.

Secondly, access to all repositories is available through this class, and our context can also be configured from it.  If there were suddenly a business requirement to move to another database platform which used a different data access technology, e.g. NHibernate, then we could create another Unit of Work class and substitute it for this one without breaking the logic and UI layers.  The advantages of a decoupled system!

The final stage is to consume the Unit of Work in our business layer, or, in the case of this post, in the controllers of our API, as I created an MVC application.

A sample controller Class using the Unit of Work
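A sketch of such a controller; the controller name and actions are invented for illustration, but the constructor accepting an IWebTemplateUoW is the important part:

```csharp
using System.Web.Mvc;

public class PostsController : Controller
{
    private readonly IWebTemplateUoW _unitOfWork;

    // Ninject supplies the concrete WebTemplateUoW here at run time.
    public PostsController(IWebTemplateUoW unitOfWork)
    {
        _unitOfWork = unitOfWork;
    }

    public ActionResult Index()
    {
        // All model operations are reached through the single unit of work.
        var posts = _unitOfWork.Posts.GetAll();
        return View(posts);
    }

    public ActionResult CommentCount(int id)
    {
        // Uses the extra method added by IPostRepository.
        return Json(_unitOfWork.Posts.GetCommentCount(id), JsonRequestBehavior.AllowGet);
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            _unitOfWork.Dispose();
        }
        base.Dispose(disposing);
    }
}
```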

In this example I have created a quick controller class which, when instantiated, creates an instance of the Unit of Work.  This in turn creates an instance of our DbContext and our repository classes.  As a result we instantly have available all the objects from our model and the operations associated with them via the repository interfaces, e.g. GetAll(), Remove(entity), Add(newEntity) etc., and, in the case of the Post object, GetCommentCount(), which was introduced via IPostRepository.

The only bit of code in this controller you might be questioning is the constructor, which accepts an already instantiated Unit of Work object, in this case of type IWebTemplateUoW.  This is because the Unit of Work object is created for the controller using Ninject.  Ninject is a dependency injection framework; in other words, it allows the application to choose which concrete implementation of a component to create at run time.

Dependency injection and Ninject are a large topic, so I won't try to go into any level of detail here, but essentially I created a Ninject configuration class which tells my application that whenever it comes across an object of type IWebTemplateUoW, it should create an instance of WebTemplateUoW.

The Ninject Configuration Class
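A minimal sketch of that configuration, written as a Ninject module (the module name is mine):

```csharp
using Ninject.Modules;

// Tells Ninject which concrete type to supply whenever an IWebTemplateUoW is requested.
public class WebTemplateModule : NinjectModule
{
    public override void Load()
    {
        Bind<IWebTemplateUoW>().To<WebTemplateUoW>();
    }
}
```

In an ASP.NET MVC project a binding like this typically lives in the bootstrapper added by the Ninject MVC integration package, which also takes over controller creation so that the constructor parameter above is resolved automatically.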


Creating an MVC Application using Unit of Work, Repository Pattern and Ninject

Part 1 – Summary of the Solution

When developing new web applications using ASP.NET MVC we always encounter the same questions: where does the business logic layer fit in?  How can we implement unit testing?  What if the database changes?  How can we make it scalable?

As a result, I set out to create a project template which could be used for future MVC projects and would address the questions above.

After much investigation the best approach I found was to develop the following framework:

 

Solution Component Overview

In this post I will just give an overview of the main components of the solution, which is followed by a more detailed technical post on how the solution is put together.

 

Starting at the bottom of the diagram you will notice I have a database and sitting on top of that database I have the Entity Framework.  In the following post you will see how I use the Code-First approach to allow the Entity Framework to construct the database based on the model.

On top of the Entity Framework I am making use of the Repository Pattern to provide access to the objects that make up the model and to enforce the actions associated with those objects.  By making use of the Repository Pattern we prevent needless duplication of data access logic and enforce a standard set of actions that can be applied to the data objects, e.g. GetAll(), Find(), Add(), Delete() etc.

The repositories serve up the model objects to the Unit of Work.  Using the Unit of Work pattern we allow the higher tiers to access and modify the data objects via a single class.  The result is a single commit (operation / transaction) when writing changes made to multiple objects back to the database, which is much more efficient and reduces concurrency issues.  Using the Unit of Work pattern also means we decouple the higher logic and UI tiers from the database context, which allows us to substitute in alternative Unit of Work classes for unit testing or for connecting to alternative databases.

The Unit of Work object is then used directly by the API components or, if required, by a business logic layer which can be slotted between the API / UI layer and the Unit of Work classes to give a further level of separation.  In my example I have used Ninject as my IoC container to instantiate the correct Unit of Work objects, but you will see more on this in the following post.

Our UI then has access to our data-level objects and models without needing to know any of the ins and outs of the system, which allows us to build any UI on top of our API service, whether it's a desktop, web, phone or iPad app.

Click here to see my real world template


Navigation Menu using CSS and JQuery

This is a smart, professional navigation menu for the top of your web site.  The menu itself is styled using CSS, and I use jQuery to add a nice fade-in / fade-out effect to the menu items.  It's also very quick and easy to implement.

I have broken up the tutorial into the following lessons:

Step 1 – Positioning the Navigation Menu

Step 2 – Creating and Positioning the Top Level Menu Items

Step 3 – Adding the Drop Down Menus

Step 4 – Animating the Drop Down Menus using JQuery

 

Navigation Menu using CSS and JQuery – Step 1

Step 1 – Positioning the Navigation Menu

Ok so let’s assume we are starting from scratch and have a blank web page:

Blank Web Page Template
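The screenshot of the starting page is missing; a minimal main.html along these lines will do (assuming the stylesheet is named style.css, as used in the next step):

```html
<!DOCTYPE html>
<html>
<head>
    <title>Navigation Menu Demo</title>
    <link rel="stylesheet" type="text/css" href="style.css" />
</head>
<body>
    <!-- page content will go here -->
</body>
</html>
```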

I am now going to extend my main.html page with two new Divs: 'content' and 'top_menu_bar'.

Content DIVs
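In other words, the body of main.html now contains something like this:

```html
<body>
    <!-- placeholder for the navigation menu -->
    <div id="top_menu_bar"></div>

    <!-- the main page content -->
    <div id="content"></div>
</body>
```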

 

I am also going to add the following styles to our style.css file for these 2 Divs:

Content DIVs styles
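The exact values in the original stylesheet are not recoverable from the screenshot, but the styles would be along these lines (the widths and border colours are my assumptions):

```css
/* Temporary borders simply show where the two divs sit on the page */
#top_menu_bar {
    width: 960px;        /* assumed width for the demo */
    height: 60px;
    margin: 0 auto;      /* auto left/right margins keep the bar centred */
    border: 1px solid red;
}

#content {
    width: 960px;
    min-height: 400px;
    margin: 0 auto;
    border: 1px solid blue;
}
```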

The border properties are not required but merely demonstrate where our Divs are sitting on the web page.  The Content Div will hold the page content, while the top_menu_bar Div is going to be the placeholder for our menu.  If you browse to main.html you will see something like this:

Using Borders to show the Positioning of the Content DIVs

Try resizing the browser window.  Notice that the menu remains positioned in the middle of the window because we are using auto margins.

The final stage of this step is to remove our temporary borders and apply a gradient background to our 'top_menu_bar' Div.  Go online and type 'gradient generator' into Google; there are lots out there, but essentially you want to create a gradient image that is 1 pixel wide and 60 pixels in height, e.g.

Sample Gradient Image

Save the Gradient image to your site folder and revise the Style sheet as below:

Revised Content Style
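Roughly, the revision removes the borders and repeats the 1-pixel-wide gradient image horizontally across the menu bar (the image file name is my assumption):

```css
#top_menu_bar {
    width: 960px;
    height: 60px;
    margin: 0 auto;
    background: url("menu_gradient.png") repeat-x;  /* the 1px x 60px gradient saved to the site folder */
}

#content {
    width: 960px;
    min-height: 400px;
    margin: 0 auto;
}
```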

Your page will now look like this:

Navigation Bar


Navigation Menu using CSS and JQuery – Step 2

Step 2 – Creating and Positioning the Top Level Menu Items

Let's start by adding to our page another DIV, which will contain our navigation bar items, and a list of menu items:

Parent Menu Items
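The markup screenshot is missing; the structure described is a wrapping DIV (named navigationBar, as referenced in the styles that follow) containing an unordered list of menu items.  The item labels here are placeholders:

```html
<div id="top_menu_bar">
    <div id="navigationBar">
        <ul>
            <li><a href="#">Home</a></li>
            <li><a href="#">Products</a></li>
            <li><a href="#">Services</a></li>
            <li><a href="#">About</a></li>
            <li><a href="#">Contact</a></li>
        </ul>
    </div>
</div>
```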

If you browse to the page now, before we have added any styles, you will see the following (not very stylish):

Un-Styled Menu Items

Therefore, let's modify the style sheet and add the following styles in order to position and style our top-level menu items:

Top Level Menu Item Styles
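A sketch of styles consistent with the notes below (the * reset, the z-index on navigationBar and the text shadow); the padding, colours and font are my assumptions:

```css
/* Reset default browser margins and padding so the bar sits flush with the top of the page */
* {
    margin: 0;
    padding: 0;
}

#navigationBar {
    position: relative;
    z-index: 100;                      /* keeps the menu and its drop-downs above other elements */
}

#navigationBar ul {
    list-style: none;
}

#navigationBar ul li {
    float: left;                       /* lays the top-level items out horizontally */
}

#navigationBar ul li a {
    display: block;
    padding: 20px 25px;                /* assumed spacing to centre the text in the 60px bar */
    color: #ffffff;
    font-family: Arial, sans-serif;
    text-decoration: none;
    text-shadow: 1px 1px 2px #000000;  /* newer CSS property; older browsers simply ignore it */
}
```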

There are a few styles worth noting here:

Firstly, the * style, shown towards the top, ensures that we reset any default spacing applied by the browser.  If you have done much web development you will have noticed that pages display slightly differently depending on the browser you are using.  By adding this style I am ensuring the body content will always be positioned flush with the top of the page.

I have also applied a z-index property to the navigationBar DIV to ensure that the menu and its drop down menus will display on top of any other page element.

I have added some text shadow to the menu items.  This is a newer CSS property, so it will not be available in all browsers; it will simply not be applied if the browser does not support text-shadow.

I like to apply new effects and styles where possible, but the golden rule is not to implement a style which would cause the element to look out of place if the browser does not support it and the style cannot be applied.

Ok great, so now if you browse to the page we should see our navigation bar taking shape:

Top Level Menu Items