Sunday, December 20, 2009

2010 the year of flexible packages

Lots of people make predictions but I'd like to make one that I haven't seen around.

For quite a few years there has been Package + Middleware, but people have still been running it as middleware + package. So the middleware folks do CI and the package folks... well, the package folks don't. This is a generalisation, and a few, very few, people are doing full CI with a package infrastructure, but it's still two worlds.

Here I'm talking about SaaS or traditional packages, it really doesn't matter which. What matters is that I'm beginning to see more and more people actually concerned about how to do things like CI and TDD in a package environment. A few things would help with this:
  1. Package vendors to ship their unit tests
  2. Standard VM building tools integrated into package suites
  3. End-to-end deployment tools
  4. OSS support for packages (i.e. Hudson, ant, etc)
The point is that SaaS and packages need to start having the development rigour that custom development projects have, and they should have it more explicitly as they have already developed the end-user functions.
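As a sketch of what that rigour could look like, here is a hypothetical CI step that a Hudson job might invoke. The paths and the idea of a vendor-shipped test runner are my assumptions for illustration, not any vendor's actual layout.

```shell
#!/bin/sh
# Hypothetical Hudson build step for a package project.
# PACKAGE_HOME and the tests/unit, tests/sit directories are invented
# for illustration; no vendor I know of actually ships this layout.

PACKAGE_HOME="${PACKAGE_HOME:-/opt/vendor/package}"
MISSING=0

# A full job would rebuild a clean environment, deploy the latest
# customisations, then run the vendor's unit tests and our SIT suite.
for suite in "$PACKAGE_HOME/tests/unit" "$PACKAGE_HOME/tests/sit"; do
    if [ -d "$suite" ]; then
        echo "running $suite"
        # a vendor-shipped runner would be invoked here,
        # e.g. "$suite/run-all.sh"
    else
        echo "missing: $suite (vendor ships no tests)" >&2
        MISSING=$((MISSING + 1))
    fi
done

echo "$MISSING suite(s) missing"
```

If vendors shipped their tests, the "missing" branch would never fire and this sort of step could gate every deployment.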

So it's a little question: why don't vendors ship their Unit and SIT tests?

2010 is going to be a year when hopefully a bit more technical professionalism comes into package delivery and maybe the business professionalism will rub off on the middleware chaps.


Tuesday, December 15, 2009

Supported doesn't mean you can get the same support

When you are down and dirty with a delivery project it becomes clear that there is a difference between a vendor saying something is supported and the actual ability of that vendor to support you when you have a problem.

You often see this when a vendor supports "interoperability" with solutions from another vendor, even if they have a competing product. Sometimes this support is genuine and other times it's what I'd call "sales supported": in other words, you'll be able to get it running in the sales evaluation process, but if you have a really nasty operational problem the level of support is going to go off a cliff.

So here are a few "theoretical" examples.

Let's say Vendor A produces a product which is some sort of middleware thing, and while they have their own rules engine product they also support products from a number of different vendors through "standard APIs". The demos work fine in sales and you choose an engine from Vendor B, but in operation you start getting a really tricky problem.

Vendor A says "its a rules engine problem, speak to Vendor B"
Vendor B says "its an integration problem, speak to Vendor A"

The problem is that you then waste a huge amount of time proving whose problem it actually is.

The next example would be where Vendor A produces a COTS package and says it runs on 20 different operating system environments. You have some older kit hanging around and it's on the supported list, so you go for it. Again the demo works fine and the first release is fine; you then try to upgrade and it goes horribly wrong.

Vendor A says "errr the support person for that is on holiday, he'll be back in a week"

The point here is that "supported" technically != "supported" operationally.

When evaluating products you need to be aware that there are preferred products the vendor might not tell you about because they are worried about losing the sale, so they stand there saying "yes it will work" with a smile on their face.

Technically this is known as "grin fucking", as the smile is there to cover the fact that they know in reality it's going to go horribly wrong.

So when looking at product-to-product integrations and the level of operational support that you can get, start asking questions:

  1. What proportion of the support staff are dedicated to this product-to-product interaction?
  2. How many people is that globally?
  3. How many people is that locally?
  4. Will you sign off on our operational SLAs around this supported product, including penalties?
The objective here is to get to the stage where you have something that is as supported in operation as it is through the sales cycle.

Do not believe supported product lists; they are there to make you buy the product. What you want to know is what it takes to operate the product.

My preferred approach is to ask the following questions:
  1. What OS do you develop on?
  2. What products do you use to develop your product?
  3. What are the standard integrations that the product developers use as a normal part of their day job?
These are the things that will really work and which have the largest number and highest quality of developers ready to fix your operational problem. The fewer gaps you have between the people who code your product and the runtime in operation, the better.

The best support you will ever get is when the product development team can come straight in because your environment matches theirs. This means your future requirements are more likely to match their thinking and your current problems are easier for them to recreate.

So again, don't believe supported product lists and find out what the product developers are using.


Wednesday, December 09, 2009

Why the hell do vendors not use version control properly

This is an appeal to everyone out there who is writing a product that is meant for an enterprise customer (you know, the people with budgets), and it applies to companies big and small.

When doing a project you normally have a lot of things to control
  • Word docs
  • Powerpoints
  • Visio
  • Requirements
  • Code
  • Test Cases
  • Data models
  • etc
The point is that it's a lot of different things. What makes it easier to control things is a version control system. What makes it easier still is if we can use one version control system to manage all of these elements. What this means is that your tool needs to:
  • Store things in lots of different files, every object or important piece of information should have its own file
  • Delegate version control to an external tool

These are two very, very simple principles that I'm fed up seeing people screw up. Having one massive XML file that your tool "helpfully" manages is just rubbish. Not only does this mean that people can't share sections of it, it also means that structurally I can't set up my version control so one area has all of its requirements, docs and interface specifications managed together. This is a very bad thing and it means your product isn't helping my programme.

The worst thing about this is that it's much easier for a tool vendor to just say "version control is done by another tool; if you want a free one then subversion is great, otherwise we integrate with most of the major providers". This elementary, dumb decision of single files is a very, very bad design decision and completely ignores how enterprises need to manage and adopt technologies.

I really don't understand people who pitch up with a tool that fails this basic test:
"see that individual important element in your tool, can I manage that as an individual element and do a release of that without dragging other crap in"
If I'm creating a Use Case I want to version at the Use Case level, not at the repository level. If I'm creating at the Process level I want to version at the individual process level, not at the repository level. If I'm modelling a database I'd like to be able to version at the entity level not the whole schema level.
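A plain sketch of what element-level files make possible. Every name and file extension here is invented for illustration; the point is the shape, not the format.

```shell
# Hypothetical tool output: one file per element, so an external
# version control tool can manage each use case or entity on its own.
REPO="$(mktemp -d)"
mkdir -p "$REPO/usecases" "$REPO/datamodel"

# One use case, one file.
cat > "$REPO/usecases/place-order.usecase" <<'EOF'
name: Place Order
actor: Customer
EOF

# One entity, one file.
cat > "$REPO/datamodel/customer.entity" <<'EOF'
entity: Customer
key: customer_id
EOF

# Each element can now be versioned and released individually, e.g.:
#   svn add usecases/place-order.usecase
#   svn commit -m "revised Place Order" usecases/place-order.usecase
ls "$REPO/usecases" "$REPO/datamodel"
```

Contrast that with one monolithic repository XML file, where every commit drags the whole model along and two people can't work on separate use cases without merging blobs.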

Code works like this; it would be mental to have a code versioning system that treated all the files as one clump. Yet this is what lots of packages and other tools do.

It's 2009, folks, and it appears people are still building tools that can't match the capabilities of RCS, let alone CVS or, heaven forbid, something from this millennium. So a quick summary:
  1. Expose your things as files at the lowest level of sensible granularity, this is normally the blobs or icons in your tool
  2. Delegate versioning to an external product that does this stuff better. If you feel you have to include it then ship a copy of subversion.
That is all. It's that simple.
