Sunday, December 20, 2009

2010 the year of flexible packages

Lots of people make predictions but I'd like to make one that I haven't seen around.

For quite a few years there has been Package + Middleware, but people have still been running it as middleware + package. So the middleware folks do CI and the package folks... well, the package folks don't. This is a generalisation, and a few, very few, people are doing full CI with a package infrastructure, but it's still two worlds.

Here I'm talking about SaaS or traditional packages, it really doesn't matter. What matters is that I'm beginning to see more and more people actually concerned about how to do things like CI and TDD in a package environment. Now a few things would help with this:
  1. Package vendors to ship their unit tests
  2. Standard VM building tools integrated into package suites
  3. End-to-end deployment tools
  4. OSS support for packages (e.g. Hudson, Ant, etc.)
The point is that SaaS and packages need to start having the development rigour that custom development projects have, and they should have it more explicitly, given that the end-user functions have already been developed for them.

So it's a simple question: why don't vendors ship their unit and SIT tests?

2010 is hopefully going to be the year when a bit more technical professionalism comes into package delivery, and maybe the business professionalism will rub off on the middleware chaps.


Tuesday, December 15, 2009

Supported doesn't mean you can get the same support

When you are down and dirty on a delivery project it becomes clear that there is a difference between a vendor saying something is supported and the actual ability of that vendor to support you when you have a problem.

You often see this when a vendor supports "interoperability" with solutions from another vendor, even if they have a competing product. Sometimes this support is genuine and other times it's what I'd call "sales supported": in other words you'll be able to get it running in the sales evaluation process, but if you have a really nasty operational problem the level of support is going to go off a cliff.

So, a few "theoretical" examples.

Let's say Vendor A produces some sort of middleware product, and while they have their own rules engine product they also support engines from a number of different vendors through "standard APIs". The demos work fine in sales and you choose an engine from Vendor B, but in operation you start getting a really tricky problem.

Vendor A says "its a rules engine problem, speak to Vendor B"
Vendor B says "its an integration problem, speak to Vendor A"

The problem is that you then waste a huge amount of time proving whose problem it actually is.

The next example is where Vendor A produces a COTS package and says it runs on 20 different operating system environments. You have some older kit hanging around and it's on the supported list, so you go for it. Again the demo works fine and the first release is fine; you then try to upgrade and it goes horribly wrong.

Vendor A says "errr the support person for that is on holiday, he'll be back in a week"

The point here is that "supported" technically != "supported" operationally.

When evaluating products you need to be aware that there are preferred products that the vendor might not tell you about, as they are worried about losing the sale, so they stand there saying "yes it will work" with a smile on their face.

Technically this is known as "grin fucking", as the smile is there to cover the fact that they know in reality it's going to go horribly wrong.

So when looking at product-to-product integrations and the level of operational support that you can get, start asking questions:

  1. What proportion of the support staff are dedicated to this product-to-product interaction?
  2. How many people is that globally?
  3. How many people is that locally?
  4. Will you sign off on our operational SLAs around this supported product, including penalties?
The objective here is to get to the stage where you have something that is as supported in operation as it is through the sales cycle.

Do not believe supported product lists; they are there to make you buy the product. What you want to know is what it takes to operate the product.

My preferred approach is to ask the following questions:
  1. What OS do you develop on?
  2. What products do you use to develop your product?
  3. What are the standard integrations that the product developers use as a normal part of their day job?
These are the things that will really work, and which have the largest number and highest quality of developers ready to fix your operational problem. The fewer gaps there are between the people who code the product and your runtime in operation, the better.

The best support you will ever get is when the product development team can come straight in because your environment matches theirs. This means your future requirements are more likely to match their thinking and your current problems are easier for them to recreate.

So again, don't believe supported product lists and find out what the product developers are using.


Wednesday, December 09, 2009

Why the hell do vendors not use version control properly

This is an appeal to everyone out there who is writing a product that is meant for an enterprise customer (you know, the people with budgets), and it applies to companies big and small.

When doing a project you normally have a lot of things to control:
  • Word docs
  • Powerpoints
  • Visio
  • Requirements
  • Code
  • Test Cases
  • Data models
  • etc
The point is that it's a lot of different things. What makes it easier to control them is a version control system, and what makes it easier still is if we can use one version control system to manage all these elements. What this means is that your tool needs to:
  • Store things in lots of different files, every object or important piece of information should have its own file
  • Delegate version control to an external tool

There are two very, very simple principles here that I'm fed up with seeing people screw up. Having one massive XML file that your tool "helpfully" manages is just rubbish. Not only does this mean that people can't share sections of it, it also means that structurally I can't set up my version control so that one area has all of its requirements, docs and interface specifications managed together. This is a very bad thing and it means your product isn't helping my programme.

The worst thing about this is that it would be much easier for a tool vendor to just say "version control is done by another tool; if you want a free one then Subversion is great, otherwise we integrate with most of the major providers". The elementary dumb decision of single files is just a very, very bad design decision and completely ignores how enterprises need to manage and adopt technologies.

I really don't understand people who pitch up with a tool that fails this basic test:
"see that individual important element in your tool, can I manage that as an individual element and do a release of that without dragging other crap in"
If I'm creating a Use Case I want to version at the Use Case level, not at the repository level. If I'm creating at the Process level I want to version at the individual process level, not at the repository level. If I'm modelling a database I'd like to be able to version at the entity level not the whole schema level.

Code works like this; it would be mental to have a code versioning system that treated every file as one clump. Yet this is what lots of packages and other tools do.

It's 2009, folks, and it appears people are still building tools that can't match the capabilities of RCS, let alone CVS or, heaven forbid, something from this millennium. So a quick summary:
  1. Expose your things as files at the lowest level of sensible granularity, this is normally the blobs or icons in your tool
  2. Delegate versioning to an external product that does this stuff better. If you feel you have to include it then ship a copy of subversion.
That is all. It's that simple.
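As a sketch of how little the file-per-element principle asks of a tool vendor, here is an illustrative Python fragment (the `<repository>` format and `id` attribute are invented for the example) that explodes a monolithic repository file into one file per object, ready for any external version control system:

```python
import os
import xml.etree.ElementTree as ET

def explode_repository(repo_xml, out_dir):
    """Split a monolithic repository document into one file per
    top-level object (use case, process, entity...) so each can be
    versioned, diffed and released individually."""
    os.makedirs(out_dir, exist_ok=True)
    root = ET.fromstring(repo_xml)
    written = []
    for obj in root:  # each child is one object/blob/icon in the tool
        path = os.path.join(out_dir, f"{obj.tag}-{obj.get('id')}.xml")
        ET.ElementTree(obj).write(path)
        written.append(path)
    return written
```

With files at this granularity, Subversion (or anything else) can give you per-use-case history for free.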


Monday, October 26, 2009

Requirements Landfill - the challenge of central services

Enterprise Services of any type from Business Services like procurement through to technical infrastructure services such as Identity management are all at risk from a threat of pollution. I call this threat the requirements landfill.

One of the key principles of SOA in operation has been the ability to have services which are used across the enterprise. These single services represent a single point of truth and a single functional boundary that ensures enterprise consistency; their goal is to shift away from fragmented systems towards services which represent the single way of accomplishing a technical or business objective.

This is all well and good, but I've been seeing something more and more over the past few years. It's that these enterprise-class services, particularly when successful, suffer from being the easy place to add new requirements rather than always being the right place to add those requirements. The basic problem is simple.

These services sit at the junction between multiple elements, some are nice new service programmes, some are hacky service programmes, some are old legacy systems and others are just systems owned by people who don't have much budget or are rather grumpy.

So what happens is that you get a big and complex requirement. About 50% of this requirement really does belong in the central service, but the other 50% is a combination of elements:
  1. 20% requirements that should be done in the originating systems
  2. 20% requirements that are around data storage and access that should be in another system
  3. 10% requirements related to business process that are completely beyond the scope of the central service

Now the problem is that implementing this new set of requirements properly therefore requires a much broader set of changes. Initially the discussions might start out talking about doing it the right way, but then someone will say the immortal line:

It would be easier if we just put all of this in the enterprise service

While the enterprise service team will scream at the prospect, this will make perfect sense to everyone else, as they won't be doing the work. Unless there is a set of concrete governance processes in place, the normal "democratic" decision process will result in the compromise decision being made. The first time this gets done it's seen as a compromise that just had to be made. From the second time onwards it's just replicating the same approach, as that is what we normally do. Rapidly, therefore, the enterprise service goes from being a single clearly defined entity with a defined business purpose to being the requirements landfill for all those requirements that people are either too lazy or too scared to try to do properly. This leads to the service getting ever slower to react to change and eventually being seen as a failure, due to its transformation from a business service into a traditional old-style application.

The point here is that governance and escalation are essential to maintain a clear set of enterprise wide services and to ensure that requirements are not simply dumped into areas due to them having budget, talented people or just because they are at a central point.

Most of the time when I see requirements landfills it's because IT owns and manages the services, and thus they have no direct link to a specific business owner. This means that business people don't care about keeping them clean, as it isn't their problem. One of the first steps in a solution, therefore, is to ensure that your enterprise services are clearly aligned to the business areas where they matter, and thus that those areas have an in-built desire to keep the services aligned to their area and not blurred across technical and organisational boundaries.

Favourites for requirements landfills are also centrally provisioned IT "enabling" solutions such as ESBs which almost stand up to the business and say "dump crap here and then point the finger of blame here when it goes wrong".

So have you got a decent policy to prevent requirements dumping? Do you have a refactoring and recycling programme to take dumped requirements out of the landfill and put them back where they should be? Or are you just hoping that you'll be able to convince people that it's not your fault that you allowed the illegal dumping?


Tuesday, October 20, 2009

Data Accuracy isn't always important

Now while Amex really should have understood the term minimum, there are examples where it really isn't an issue if someone gets it wrong in displaying information to you. Sometimes this indicates that a prediction has been incorrect, or that "approximately" is good enough for the scenario.
The current temperature on the Sydney Morning Herald site is a good example of this: it is listed as one degree higher than the maximum for the day. Does this matter? Well no, and for two reasons. Firstly, a weather forecast is accepted as being approximate information; weather is a chaotic system and so by definition can't be predicted exactly. Secondly, the max number is only a prediction, and the current temperature is indicating that it was an incorrect prediction. So by having an incorrect piece of information we actually have more information, as it reinforces the concept that weather forecasts cannot be 100% accurate.
Now when the next day rolls around, then looking back you should clearly be recording the actual maximum achieved rather than the prediction. This is because the information has gone from being a record of a prediction to a record of fact. The only question, therefore, is at what point you should update the maximum: do you change it dynamically, or on a daily basis when reporting historical information? For the Sydney Morning Herald site the answer is simple: changing the daily maximum as it increases during the day would defeat the purpose of the "max" figure, which is what the paper predicted at the start of the day. It's a free news story if the day goes well beyond the prediction: "Sydney weather was bonza today with max temperatures 5 degrees higher than expected".
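The display rule being argued for here fits in a few lines; this is just a sketch of the logic, with invented names:

```python
def displayed_max(predicted_max, observed_temps, day_over):
    """While the day is in progress keep showing the morning's forecast
    maximum, even if the current temperature has already beaten it; once
    the day is over the record becomes fact, so show the observed max."""
    if day_over and observed_temps:
        return max(observed_temps)
    return predicted_max
```

So during the day the "max" column can happily sit below the current temperature, and only the next morning does it get corrected to fact.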

So the point is that when you look at data, do think about what level of accuracy is important. If reporting a bank balance then spot on is the only option; if reporting the number of customers who bought cheese with wine as a percentage of overall cheese buyers then you can probably get away with 1 decimal place or less. This sort of view applies even more when looking at forecasting and other predictive data sets, where the effort of increasing accuracy by 1% might be pointless due to the extra time it takes.
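A sketch of that judgement in code, with hypothetical helper names: the balance is exact to the penny (`Decimal`, never float, for money), while the cheese-with-wine ratio is deliberately rounded:

```python
from decimal import Decimal

def report_balance(balance):
    """A bank balance: spot on is the only option, so keep it as an
    exact Decimal and render every penny."""
    return f"{Decimal(balance):.2f}"

def report_ratio(part, whole, places=1):
    """An approximate statistic: one decimal place (or less) is plenty,
    and chasing more precision adds effort for no extra insight."""
    return f"{100 * part / whole:.{places}f}%"
```

The interesting decision is not in either function; it is in choosing which one a given figure deserves.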

Data Accuracy isn't an absolute, be clear about what matters to you.


Tuesday, October 13, 2009

When you know it isn't a cloud

Following up on the previous post I thought I'd do the REALLY obvious ones that indicate it isn't a cloud. James' list wasn't nearly cynical enough in light of the things that are claimed to be a cloud.
So here goes:
  1. If it's just a single website with no backups, storing stuff to disk, then it's just a standard WEBSITE, not a cloud (hello Sidekick)
  2. If it's a physical box that is meant to help you manage a virtual environment... then it's not a cloud (hello CloudBurst appliance)
  3. Seriously, if you are just a single website doing stuff with a database you aren't a cloud (hello Soonr)
  4. No seriously, if it's about buying a physical box it isn't a cloud (like HP's spin that they are just cloud enabling... nice weasel room)

And I could go on. The point is that cloud is an infrastructure thing; it is IaaS in the "aaS" hierarchy. PaaS can have a go at being cloud, but SaaS is "just" something that might be deployed to a cloud. Having a website (SaaS solution) that runs on Amazon doesn't make that SaaS solution "a cloud", it makes it a SaaS solution hosted on a cloud.

The hardware point is that capital expenditure is exactly what a cloud isn't about, and physicality is exactly what a cloud isn't about. You want virtual compute and storage that you pay for as a utility. This is the economic model of cloud.

So, in the words of Kryten: I know that strictly speaking I've only identified two things, but they were such big things I thought I'd say them twice.


How you know a phone is rubbish

My ever-perceptive wife Heather spotted something the other day about mobile phone adverts. They basically come in two flavours:
  1. iPhone ads
  2. The rest
The difference between them is simple. Here are the Apple ones, first from when it launched

and a "there's an app for that" one

Now for the competition: Samsung


and (with the Omnia)

and Google's Android (via HTC)

See the difference? The Mrs did very quickly: in one set of adverts you see someone using the phone; in the others you appear to be watching a film trailer that stars a phone. Her point was that the reason she likes her iPhone is what it is like to use, and she won't switch to these other phones because you have no idea how they work, just that they are in an advert with as much depth as a perfume ad.

Now I'm no advertising genius, but I can't help thinking that this means one of two things:
  1. The other phones are a pain to use
  2. The advertising agencies doing the other adverts are rubbish
Now it could of course be both of these things, but my betting is on the first one. What appears odd, therefore, is that in the time since the iPhone was released no-one has come close or had the balls to actually advertise people using their phone.

The Windows 6.5 advert that is on TV right now has a bloke holding a phone while people in FOAM COSTUMES with Windows icons on them look sad and then become happy because he has a new Windows 6.5 phone... I mean, how bad is that phone to use if you won't even show screenshots, just foam icons?

There are lots of reasons why the iPhone is doing well but top of the list is usability. Interestingly other mobile phone companies keep trying to compete on things like "better camera" or "better windows integration"(!) rather than on the feature that people actually want: USABILITY.

So how do you know a phone is rubbish? If an advert does everything in its power NOT to show you someone using it.


Monday, October 12, 2009

Not a cloud? Then what is it?

Redmonk are one of those smaller analyst companies who make up for a lack of numbers with a refreshing depth and honesty. James Governor's latest, and I assume light-hearted, view, "15 Ways to Tell Its Not Cloud Computing", however does a bit of a disservice to the debate around clouds. Mostly right, but with a few glaring pieces I felt I had to disagree with.
  1. If you peel back the label and its says “Grid” or “OGSA” underneath… its not a cloud.
    1. Fair enough if it's about people selling last year's technology with this year's sticker, but.....
    2. If it's a question of doing a deep dive and finding that underneath there is a "Grid", but that you don't care about it, then I don't think this discounts it.
  2. If you need to send a 40 page requirements document to the vendor then… it is not cloud.
    1. I'll go with this one... with the caveat of governments can turn anything into 40 pages ;)
  3. If you can’t buy it on your personal credit card… it is not a cloud
    1. Nope, I can't accept this. If I'm a Fortune 500 company and I'm buying millions of dollars a month in dynamic capacity then I want a professional invoicing and billing approach. When governments build their own clouds they won't be billing to credit cards, and for most companies this is an irrelevance.
  4. If they are trying to sell you hardware… its not a cloud.
    1. Absolutely with it
  5. If there is no API… its not a cloud.
    1. This is really to enable 3rd party tools integration and its a good thing. Fair enough
  6. If you need to rearchitect your systems for it… Its not a cloud.
    1. Very, very wrong, for a simple reason: shifting boxes into the cloud and doing the same thing you've done before is easy. Having a software application that can actually dynamically scale up and down and handle scalable data stores is harder.
    2. To take best advantage of the cloud you need systems that can scale down and up very quickly. LOTS of systems today do not get the full value out of the cloud (as opposed to just virtual infrastructure) and will require re-architecting to take advantage of it.
  7. If it takes more than ten minutes to provision… its not a cloud.
    1. Depends what we call provisioning. I've got 5TB of data to process that needs pre-loading into the database image; does this count as provisioning, as it's going to take more than 10 minutes?
    2. If it means 10 minutes to get a new compute instance for an existing system then fair enough but that isn't the same as provisioning a whole system in the cloud.
  8. If you can’t deprovision in less than ten minutes… its not a cloud.
    1. As an IT manager once told me "I can turn any system off in 5 seconds if I have to"... "just kick out the UPS and pull the plugs"
    2. Fair enough point though in that it would at least be managed in a cloud.
  9. If you know where the machines are… its not a cloud.
    1. Really? So Amazon don't have a cloud as I know that some of my instances are in the EU?
    2. If you mean "don't know exactly physically where a given compute instance is" then fair enough, but most companies don't even have a clue where their SAP systems are physically running.
    3. Also against this one is the government cloud and security requirements. I need to know that a given instance is running in a secure environment in a specific country. This doesn't stop it being a cloud it just means that my non-functional requirements have geographical specifications in them.
  10. If there is a consultant in the room… its not a cloud.
    1. Cheap gag. You could add "if a vendor says it is... it is not a cloud"
  11. If you need to specify the number of machines you want upfront… its not a cloud.
    1. Fair enough
  12. If it only runs one operating system… its not a cloud.
    1. Why does this matter? Why can't I have a Linux cloud or a Windows cloud? Why is OS independence critical to a cloud?
  13. If you can’t connect to it from your own machine… its not a cloud.
    1. Non functionals (e.g. Government) might specify this. It depends what connection means. I could connect to the provisioning element without being able to connect to the running instance.
  14. If you need to install software to use it… its not a cloud.
    1. Server side or client side? If it's the latter then I'd disagree; how will you use something like Amazon without installing a browser or the tools to construct an AMI?
    2. If it's the former.... I take it that it isn't the former
  15. If you own all the hardware… its not a cloud.
    1. Or you own the cloud and are selling it. This would mean that a mega-corp couldn't turn its current infrastructure into a cloud, and I don't see why they can't.
  16. If it takes 20 slides to explain…. its not a cloud
    1. Fair enough again. As long as this is the concepts rather than a code review!

So pretty much I agree with 50% and disagree with the remainder. The point is that the definition of cloud is still arbitrary and there are some fixed opinions. Utility pricing is clearly a given, but credit cards aren't (IMO) required.

One big way to tell it's not a cloud is of course if you can see the flashing lights.


Tuesday, October 06, 2009

Alternative Engineering

Dan Creswell pointed me towards an interesting blog on Cargo Cults and Computing on the same day I looked at this video on YouTube...

Then yesterday I was talking with someone who is helping dig out a project that has been driven into the ground by some people with very firm beliefs about how things should be done. These were people who had taken pieces from agile, from waterfall, from TOGAF, from lots and lots of different places, and combined them into their own approach.

This is alternative engineering in practice. The approaches were often contradictory: they had a waterfall process but didn't do full requirements up-front, as the development would be agile. The architecture wasn't complete and certainly didn't include the principles or non-functionals, but did include the hardware infrastructure that the solution should be deployed on.

When challenged on this, it was described as taking "best practices" from lots of areas and combining them into a methodology which "best suited the company". This wasn't, however, like the start of a RUP project where you decide which RUP artefacts are required; it was just a complete and utter hack by people who wanted to do certain things, mainly the easy or interesting things, without doing the difficult or boring bits.

Around the industry I see Alternative Engineering practised a lot. Sometimes it evolves and is tested, made robust and clearly documented, and becomes Engineering (Scrum for instance); in the vast majority of cases it remains vague, with people referring to a very limited set of successes and ignoring a huge number of failures (step forwards XP).

The point is that Software Engineering does move on. The Spiral killed the Waterfall, and at that stage Waterfall became alternative engineering: just as the belief that "nice smells" would ward off the plague became demonstrably false, so Waterfall, for the vast majority of software development programmes, became nonsense.

Iterative development then evolved from the Spiral and became undoubtedly the best way to deliver software. Agile added some dressing around the edges, some good, some bad, but the basic principle was still iterative development.

Alternative Engineering is what most IT organisations appear to practise. They don't go and look at what has been proven to work, and they certainly don't accurately measure their own processes and ensure that they are working effectively. Their ability to learn is practically zero, but their ability to have faith that what they are doing is the right way knows no bounds.

Cargo Cults are at least trying to copy people smarter than them; they are clearly practising alternative medicine, but they are doing it by pretending to be proper doctors. Most people in IT are nothing more than quacks who don't, can't and won't prove they are working in better ways, and who hold on faith that their unique "blended" view and methodology, which they are the only people on planet earth using, is clearly the best way.

Sometimes these organisations can be broken out of their stupor and made to work in new ways which actually work. It's then always impressive watching the level of religious fervour that can develop as they turn on their previous ways and drive wholesale change. Unfortunately they then often apply the new way in places where it should not be used (Agile on package implementations, for instance). The Alternative Engineering practice then kicks in again and the cycle repeats.

The point is that if you aren't actually measuring your performance and understanding how a given requirement will be implemented, and how over time you will reduce the time for that implementation, then you too are practising Alternative Engineering. If you aren't looking robustly at your development and delivery processes and verifying that they work, and that they are proven to work, then you are practising Alternative Engineering.

Alternative Engineering is bunk, and it's part of the Art v Engineering problem which also underpins the Cargo Cult phenomenon. There is a vanishingly small number of people who can practise software as an Art. In my entire career, and of everyone I've ever met, it's probably no more than 1% of people who can do that properly. The rest need to be doing proper Engineering based on the measured implementation of processes that are proven to work.

Surely in an industry that has above average scepticism for "alternative" therapies we should apply the same rigour to our day jobs?

Updated: Found a quote from Fred Brooks that just sums it all up
“I know of no field of engineering where people do less study of each other’s work, where they do less study of precedents, where they do less study of classical models. I think that that is a very dangerous thing”

The man is a genius.

Monday, October 05, 2009

American Express - Which number is greater?

Sometimes you just sit there and wonder how the code managed to get to a certain end point... take my current Amex online bill...

Now it could be the surprise that it's the lowest monthly bill I've had since I got the card, but I don't quite think that "surprise" is something that should be built into a financial system. But somehow the amount that I have to pay (the last bill) is now less than the minimum payment, which includes stuff for which the bill hasn't been sent yet.

Part of this is because Amex have two billing dates: the first is the date that they want you to pay by, the second is when the next bill comes out. If you pay before the latter then everything is fine, but they'd prefer you to do it before the former.
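The bug is easy to picture. Here is a sketch (the rate and floor are invented; Amex's real rules are unknown to me) of a minimum-payment calculation that only looks at the billed balance, and so can never exceed what is actually due:

```python
def minimum_payment(billed_balance, rate=0.05, floor=25.0):
    """Derive the minimum payment from the billed balance only.
    Including not-yet-billed transactions is how a 'minimum' ends up
    larger than the amount due, as on the statement above. Capping the
    payment at the billed balance keeps the obvious invariant:
    minimum payment <= amount actually owed."""
    if billed_balance <= 0:
        return 0.0
    return min(billed_balance, max(floor, rate * billed_balance))
```

The fix isn't clever arithmetic; it's deciding which balance the batch systems are allowed to feed into the figure shown online.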

This is clearly a historical thing with Amex, and it clearly reflects back into their core operational systems, which are almost certainly batch oriented. What this also means is that, like many companies out there, Amex haven't really adapted their systems or processes for the web; they've just lobbed the paper processes online, which delivers oddities such as this that aren't possible in a paper-only world.

When people put systems online they often seem to forget that the interaction model for online working is significantly different to offline working. If you want customers to engage more with your online solution and move away from the more manual and higher-cost channels, then it really isn't good enough to shift crap processes onto the web; you should be looking at how customers will be interacting with your company in real time and therefore what new processes and opportunities this can bring.

I did feel like phoning up the call centre and asking "which minimum payment should I make, the one that says minimum payment or the one with the lowest value?", but I decided my life was too short to waste time on that.

Saturday, October 03, 2009

In praise of vendor shipped AMIs and virtual machines

Roman Stanek is one of those guys who consistently gets things right, and his point that AMIs and Virtual Machines are not SaaS is absolutely 100% spot on. These aren't SaaS solutions, they are PaaS solutions, and they do indeed leave a huge amount of work for the developer to do. Part of the problem is of course that the name Software as a Service is wrong; it's really Service as a Service that you are buying.

So that said, I do disagree with him that shipping AMIs is just the same as shipping DVDs, for one very, very big reason:

AMIs and Virtual Machines must run

This for me is a big leap forward from vendors, as while it still means that I have to build my application, I don't have to spend days or even weeks trying to install software that quite clearly has never been tested from the DVD or downloads on the vendor's site. Some wonderful examples I've had in the past include:
  • A software & hardware manufacturer whose software worked on competitors operating systems but not their own
  • A software vendor whose instructions for connecting their two products had never been tested... I know this as it did not work at all
  • Spending 2 weeks installing from DVDs and downloads from a vendor and eventually having them on-site trying to do it for themselves, failing and then "getting back to us" a week later with an installer than actually worked
  • A single software vendor whose two products that were required to work together required two differently patched versions of the Java runtime
  • A vendor whose installation DVD was missing some core jar files, which they denied at first, but the old "Class not found" exception was a bit of a giveaway
So I'd like to praise AMIs and vendor delivered virtual machines for the basic progress that at least now it is spectacularly hard for a vendor to ship a non-working image. It might be an ugly image with lots of hacky patches required, but at least they've had to do all that dirty work and not you.

So don't con yourself that you are doing SaaS; you aren't, you are doing PaaS at best. But do rejoice that vendors are at last forced to prove that the software runs before shipping it to you. A vendor-delivered AMI or virtual machine that doesn't run really would set a new low bar in the sort of quality vendors expect customers to put up with.

So I say hail this new move away from DVDs and towards images, because personally I'm sick and tired of debugging their installers.

Thursday, October 01, 2009

Why do games need an operating system?

Using my iPhone and looking at a PS3 console in Selfridges made me think about what the future could hold for games. While there are companies looking at doing server side games and a VDI solution to the end device I just don't think that matches against Moore's Law. But equally the model of downloading and installing loads of games onto a single device with multiple different anti-piracy tools doesn't seem to make sense either.

Given that "bare-metal" concepts have been around for a while now (BEA had their bare-metal version of the application server), wouldn't it make sense for games to be a full single virtual image? So you don't boot the OS then start the game, you just boot the image, which contains only the game and whatever minimal drivers it needs.

Now some people will point out "what about the drivers?" and there is a slight point there. But would it really be so hard to define an image where you select your graphics cards et al and it constructs a fully-optimised image just for you? Upgrade your kit and then you just upgrade the image.

Now others will bleat "but how will I pirate it if it's a custom environment?" and to be honest I don't care; that's your problem.

What are the advantages? Well from a piracy perspective it clearly reduces the potential if the game is running on a custom micro-kernel and is tied against a specific license that is created when you create the image. From a performance perspective it can be tuned to the 9s as there is nothing else running. From an end-user perspective it means using your PC like a console, or indeed your console like a console, and selecting the game to play at boot up.

Once you start thinking like this it becomes a question of which other applications don't really require an OS, just isolation via VMs. Windows 7 is doing this with its XP mode, which really opens up the question of when you don't need Windows for certain applications.

Virtualisation is a server technology that is going to have a massive impact on the desktop.

Thursday, September 24, 2009

REST Blueprints and Reference Architectures

Okay, so the REST-* stuff appears to have rapidly descended into pointless diatribe, which is a shame. One of the questions is what it should be instead (starting with REST-TX and REST-Messaging wasn't a great idea), and after a few internal and external discussions it's come down to a few points
  1. What is "best practice"
  2. What is the standard way to document the interactions available & required
  3. How do we add new MIME types
Quite a few of the technical basics have been done, but before we start worrying about a "standard" way of defining reliability in a REST world (yes, GET is idempotent.... what about POST?) we should at least agree on what good looks like.

Back in the day Miko created the "SOA Blueprint" work around Web Services, an attempt to define a concrete definition of "good". Unfortunately it died in OASIS (mainly due to lack of vendor engagement), but I think the principles would be well applied here.

The other piece that is good (IMO) is the SOA Reference Model. Roy Fielding's paper pretty much defines the equivalent reference model for REST, but what it doesn't have is a reference architecture. Saying "The internet is the reference architecture" doesn't really help much, as that is like saying that a mountain is a reference architecture for a pyramid.

Now one of the elements here is that there appear to be some parts of the REST community who feel that enterprise computing must all "jump" to REST and the internet, or that those who don't are irrelevant to REST. This isn't very constructive, as the vast majority of people employed in IT are employed in just those areas. B2B and M2M communications, with a decent dose of integration, are the standard problems for most people, not how to integrate with Yahoo & Amazon or build an internet-facing website.

For the enterprise we have to sacrifice a few cows that hopefully aren't sacred, but that I've heard bandied around
  1. You can't just say "its discoverable" - if I'm relying on you to ship $500m of products for me then I don't want you messing around with the interface without telling me
  2. You can't just say "late validation" - I don't want you making a "best guess" at what I meant and me doing the same, I want to know that you are shipping the right thing to the right place
  3. You can't just say "it's in the documentation" - I need something to test that you are keeping to your side of the bargain, I don't want just English words telling me stuff, I want formal definitions... contracts.
  4. You can't just say "look at this URI" - we are embarking on a 5 month project to build something new, you haven't done your stuff yet, you don't have a URI yet and I need to Mock up your side and you need to Mock mine while we develop towards the release date. Iterative is good but we still need to have a formal clue as to what we are doing
  5. You can't say 'that isn't REST' if you don't have something objective to measure it against
So what I'd suggest is that rather than having the REST-* piece looking at the technical standards we should really be focusing on the basics mentioned above. We should use Roy's paper as the Reference Model from which an enterprise Reference Architecture can be created and agree on a standard documentation approach for the technical part of that reference architecture.

In other words
  1. REST Reference Model - Roy's paper - Done
  2. REST Reference Architecture - TBD and not HTTP centric
  3. REST Blueprints - Building the RA into a concrete example with agreed documentation approaches (including project specific MIME types)
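Points 1, 3 and 4 in the list of sacred cows above all come down to having something machine-checkable: a formal contract both sides can test against, and can mock before the other party even has a URI. As a rough illustration (the resource shape, field names and values here are entirely invented), such a contract check could be as small as:

```python
# Hypothetical sketch of a "formal contract" check for a REST resource.
# The shipment payload and its fields are made up for illustration.

SHIPMENT_CONTRACT = {
    "order_id": str,
    "destination": str,
    "value_usd": float,
}

def conforms(payload: dict, contract: dict) -> bool:
    """True if payload has exactly the agreed fields with the agreed types."""
    if set(payload) != set(contract):
        return False
    return all(isinstance(payload[k], t) for k, t in contract.items())

# A mocked response from the partner we haven't integrated with yet:
mock_response = {"order_id": "A-42", "destination": "Rotterdam", "value_usd": 500.0}
assert conforms(mock_response, SHIPMENT_CONTRACT)

# A "best guess" payload with a renamed field fails early, not in production:
assert not conforms({"orderId": "A-42", "destination": "x", "value_usd": 1.0},
                    SHIPMENT_CONTRACT)
```

The point isn't the dozen lines of Python; it's that both parties can run the same check against mocks during the 5-month build and against the live service afterwards, which is exactly what "it's discoverable" and "late validation" don't give you.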
Right now, burn me at the stake as a heretic.

Technorati Tags: ,

Monday, September 21, 2009

Theory v Practice - the opposite view

There is an age old saying
In theory, practice and theory are the same thing, in practice they aren't

This is true 90% of the time, but in engineering it isn't always the case. I was speaking to someone a day or so ago about interviews; they were nervous as the job they were applying for required a specific programming skill and they had only done "a bit" of it.

What I told this poor young fool was that as they had talent (and they do) this lack of experience was just a minor element. Could they learn more in the week before the interview? I asked. "Sure" came the reply.

Well there you go. And if they ask questions about threading and deadlocks, can you answer them?

"Well I know the theory but not the syntax"

And it was here that I imparted the knowledge... It's actually the theory that counts, not the syntax. To this end I'll tell two tales.

My first job interview was for a start-up company. They had some interesting bits around Eiffel and were trying to create a meta-language on top of Eiffel that enabled multiple different GUIs and databases from a single code base. Part of this would require me to know C. I was asked

"Do you know C?"

"Sure" I said.

"You'll have to take a coding test next week to check" they said

This gave me 7 days to learn C, a language I'd never coded in before. By the end of that week I was coding with pointers to functions which took pointers to arrays of functions as arguments. The reason was that I understood the theory and could quickly apply it to the new syntax.

I got the job..... but they went bust 6 months later owing me 2 months' wages, so it wasn't the best story.

Now for another story, a good friend wanted to shift out of his current IT job which didn't do coding into a coding job. He had a bunch of theory and brains but no experience. I boldly said that I could coach him through a C++ interview in a couple of weeks. For 2 weeks we talked about classes, STL, friends and lots of other things.

He got to the interview, chatted for 30 minutes about computing in general and was asked the killer question

"So you know C++"

To which he quickly replied "Yes".... and the interview was over. He got the job and was pretty bloody good at it, despite the level of bluffing (although the single word "Yes" isn't the strongest bluff in the world).

The point is that if you understand the theory of programming languages and computing then individual languages are just sets of syntax that implement that theory in a specific context. Unfortunately in IT very few people understand the theory and are therefore condemned to badly implement software in the manner of an orangutan who doesn't understand English but has a dictionary of English words to point at.

Lots of times Theory is less important than practice, but in IT if you don't know the theory then the odds are you'll be rubbish at the practice.

Technorati Tags: ,

Wednesday, September 16, 2009

REST-* can you please grow up

Well, didn't Mark Little just throw in a grenade today around REST-* by daring to suggest that maybe, just maybe, there needs to be a bit more clarity on how to use REST effectively.

As he said, "The REST-* effort might end up documenting what already exists", which indicates that part of the challenge is that lots of people don't really know what REST is and certainly struggle as they look to build higher-class systems and interoperate between organisations.

Part of this is of course about up-front contracts and the early v late validation questions. But part of it also appears to be pure snobbery and a desire to retain arcane knowledge which goes back to that "Art v Engineering" debate.

A few choice quotes from twitter

"Dear REST-*. Get a fucking clue. Atom and AtomPub already do messaging. No new specification needed, that's just bullshit busy work." - Jim Webber

"REST might lack clear guidelines, but something called REST-* with a bunch of vendors is hardly going to help!"
- Jim again

"and if they think REST lacks guidelines for messaging/security/reliability/etc.., they're not looking hard enough" - Mark Baker

Now part of Mark Little's point appears to be that we need more clarity around what good should look like in the REST world, and that this needs to be easier to access than it currently is. I've seen some things described as REST that were truly horrific and I've seen other bits in REST that were a superb use of that approach. The problem that all of them had was in learning about Atom and AtomPub, how to use them, how to use MIME types and of course the balance between up-front contracts and late evaluation.

Would it really be such a bad thing to have an effort that got people together and had them agree on the best practices and then have the vendors support developers in delivering against that practice?

The answer of course is only yes if you want to remain "1337" with your arcane skills where you can abuse people for their lack of knowledge of AtomPub and decry their use of POST where quite clearly a DELETE should have been used.

If REST really is the sword that can cut through the Gordian knot of enterprise integration then what is the problem with documenting how it does this and having a few standards that make it much clearer how people should be developing? Most importantly so people (SAP and Oracle for instance) can create REST interfaces in a standardised way that can be simply consumed by other vendors' solutions. It can decide whether WADL is required and whether Atom and AtomPub really cover all of the enterprise scenarios, or at least all of the ones that count (i.e. let's not have a REST-TX to match the abomination of WS-TX).

This shouldn't be an effort like WS-*; its first stage should be to do what Mark Little suggested and just document what is already there, in a consistent and agreed manner which vendors, developers and enterprises can accept as the starting point, and that starting point would be clearly documented under some form of "standards" process.

Would that be a bad thing?

Update: Just found out that one of the two things that they want to do is REST-TX... its like two blind men fighting.

Technorati Tags: ,

Business Utilities are about levers not CPUs

"As a Service" is a moniker tagged onto a huge number of approaches. Often it demonstrates a complete marketing and intelligence fail, and regularly it just means a different sort of licensing model.

"As a Service" tends to mean utility pricing, and the best "As a Service" offers have worked out both what their service is and what its utility is. Salesforce.com have a CRM/Sales Support service (or set of services) and the utility is "people". It's a pretty basic utility and not connected to the value, but this makes sense in this area as we are talking about a commodity play and hence a simple utility works.

Amazon with their Infrastructure as a Service/Cloud offer have worked out that they are selling compute, storage and bandwidth. Obvious eh? Well not really, as some others appear to confuse or mesh the three items together, which doesn't really drive the sort of conservation behaviour you'd want.

The point about most of these utilities though is that they are really IT utilities. SFDC measures the number of people who are allowed to "log on" to the system. Amazon measure the raw compute pieces. If you are providing the base services this is great. But what if you are trying to build these pieces for business people and they don't want to know about GB, Gbps, RAM or CPU/hrs? Then it's about understanding the real business utility.

As an example let's take retail supply chain forecasting, a nice and complex area which can take a huge amount of CPU power and where you have a large bunch of variables
  1. The length of time taken to do the forecast
  2. The potential accuracy of the forecast
  3. The amount of different data feeds used to create the forecast
  4. The granularity of the forecast (e.g. Beer or Carlsberg, Stella, Bud, etc)
  5. Number of times per day to run it
Now each of these has an impact on the cost which can be estimated (not precisely as this is a chaotic system). You can picture a (very) simplified dashboard

So in this case a very rubbish forecast (one that doesn't even take historical information into account) costs less than its impact. In other words you spend $28 to lose $85,000 as a result of the inaccuracy. As you tweak the variables, the price and accuracy vary, enabling you to determine the right point for your forecasts.

The person may choose to run an "inaccurate but cheap" forecast every hour to help with tactical decisions, run an "accurate and a bit expensive" forecast every day to help with tactical planning, and run a weekly "pretty damned accurate but very expensive" forecast.

The point here is that the business utilities may eventually map down to storage, bandwidth, CPU/hrs but you are putting them within the context of the business users and hiding away those underlying IT utilities.
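To make that mapping concrete, here is a toy sketch (all prices and the cost model itself are made up for illustration) of how business levers such as granularity, data feeds and run frequency might map down onto an underlying IT utility like CPU-hours:

```python
# Illustrative only: the cost model and every number in it are invented,
# to show the idea of exposing business levers rather than CPU-hours.

def forecast_cost(granularity_levels: int, data_feeds: int, runs_per_day: int,
                  cpu_hour_price: float = 0.10) -> float:
    """Map business levers onto a hidden IT utility (CPU-hours) and price it."""
    cpu_hours_per_run = granularity_levels * data_feeds * 2  # hypothetical model
    return cpu_hours_per_run * runs_per_day * cpu_hour_price

# "Inaccurate but cheap", run hourly for tactical decisions:
cheap = forecast_cost(granularity_levels=1, data_feeds=1, runs_per_day=24)

# "Accurate and a bit expensive", run once a day:
accurate = forecast_cost(granularity_levels=5, data_feeds=8, runs_per_day=1)
```

The business user only ever sees the three levers and the resulting price; the CPU-hours stay behind the dashboard, exactly as the kWh stay behind the oven dial.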

Put it this way: when you set the oven to "200 C" you are choosing a business utility. Behind the scenes the power company is mapping this to a technical utility (kWh), but your decision is based on your current business demand. With the rise of smart metering you'll begin to be able to see the direct impact of your business utility decision on cost.

This is far from a simple area but it is where IT will need to get to in order to clearly place the controls into the hands of the business.

They've paid for the car, shouldn't we let them drive?

Wednesday, September 02, 2009

Why I like Open Source documentation

I've got someone creating a structured Semantic Wiki for me at the moment and we are using Semantic Forms. One of the things we needed to do was pre-populate the fields, which means calling #forminput with the query string set... The documentation said

query_string is the set of values that you want passed in through the query string to the form. It should look like a typical URL query string; an example would be "namespace=User&User[Is_employee]=yes".
Now this is accurate but misses out a couple of important bits.

  1. The Namespace doesn't actually matter unless you are using namespaces (we aren't)
  2. The second "User" doesn't refer to the form name or to the namespace it refers to the template name
  3. The underscore is only valid if you actually put it in the field name yourself (i.e. unlike other bits in MediaWiki, where "Fred Jones == Fred_Jones", that isn't true here)
So after a bit of randomly focused hacking I found the solution.... and what did I do. I updated the documentation to add
The format of a query string differs from the form_name in that it uses the template name. As an example if you have a "Person" template (Template:Person) and a Person Form (Form:Person_Form) for entry then it is the names from the Template that matter. An example to populate the Home Telephone field would therefore be: {{#forminput:PersonForm||Add Person|Person_Form[Home Telephone]=555-6666}} N.B. The FORM uses underscores while the field uses spaces.
Now this could be written better I agree, but the point is that the next poor bugger through will now have a better starting place than we did. Adding examples is something that is particularly useful in much documentation and something that is often missing. I regularly find myself Googling for an example after failing to understand something that the person writing the documentation clearly felt was beneath them to explain.

For commercial software you'd clearly like to see a bit more of an editorial process to make sure it's not stupid advice like "Install this Malware", but it's an area where more companies could benefit from improvements in customer service and self-help by enabling people to extend their current documentation in ways that better fit how end-users see their technologies.

Thursday, August 20, 2009

What conference calls tell us about REST

I've just got off a conference call, the topic isn't important. What is important is that at the end of the call lots of other people started joining. Why was this? Well they were joining the next call that the meeting organiser had.

This got me thinking about REST and resource identifiers and why, if you are doing REST, it's really important to understand what the right resource is. With conference calls there are basically two choices
  1. Have a unique conference number by person, this person therefore can just hand it out to people and they can dial in at anytime for a meeting
  2. Have a unique conference number by meeting. So when you want a meeting you have to arrange it and get a unique ID
Now the first one basically means that you always have a meeting ID, but it of course has a major problem: your meetings can all begin melding into one.

The second is what you should be doing, as it means that the meeting is the resource and the participants join the meeting. Someone can still be the chair if required, but it's the meeting that is the discrete entity.

The point here is that when doing REST you need to think about the implications of your resource hierarchy selections and not tie them to the first thing that you think makes sense.
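A toy sketch of the second design, where the meeting itself is the resource and gets its own identifier (the class and URI scheme are invented purely for illustration):

```python
import itertools

# Hypothetical sketch: each meeting is its own resource with a unique URI,
# rather than reusing the organiser's personal conference number.

class ConferenceService:
    def __init__(self):
        self._ids = itertools.count(1)
        self.meetings = {}  # meeting URI -> set of participants

    def create_meeting(self, chair: str) -> str:
        meeting_uri = f"/meetings/{next(self._ids)}"  # unique per meeting
        self.meetings[meeting_uri] = {chair}
        return meeting_uri

    def join(self, meeting_uri: str, participant: str) -> None:
        self.meetings[meeting_uri].add(participant)

svc = ConferenceService()
standup = svc.create_meeting("alice")
review = svc.create_meeting("alice")  # same chair, distinct resource
svc.join(standup, "bob")
assert standup != review              # the meetings can't meld into one
```

Had the chair been the resource (design one), "alice" would map to a single URI and both meetings would collide, which is exactly the late-joiners problem on the call.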

Technorati Tags: ,

Thursday, August 06, 2009

Start on the road to SaaS with SPUD

Lots of companies are leaping onto SaaS in the manner of a drowning man grabbing onto a tiger shark in the vain hope that all will be fine. The problem is that they haven't really thought about what they want and what it takes to properly adopt SaaS, so what they do is take the same old approach to package adoption
  1. Work out what you want to do
  2. Buy a package/SaaS solution to do some of it
  3. Customise the package/SaaS solution to a point where its specific to your business
  4. Act surprised when it goes over budget, over time and turns out to be a nightmare to maintain
SaaS can actually be worse on the 4th point than traditional packages: at least if you do the truly dumb thing of changing the database schema under Oracle or SAP you have the choice of not upgrading, something that just isn't possible with SaaS, where all of your customisations and specific developments have to be upgraded when the SaaS vendor says you do.

The right way to adopt SaaS is really the right way to adopt packages. It's not rocket science and comes down to a very simple point:
You've chosen to use a package as you don't see this area of your business as differentiating
If you do think it's differentiating then why on earth are you picking something that all of your competitors can buy? If you think it's "10%" differentiating then you are probably kidding yourself; in reality it's 100% non-differentiating, but there may be something you could build on top of the package that adds something. In other words you've got something extra that differentiates, not something different.

Anyway so once you've faced the reality what is the obvious conclusion?
Package implementation is about changing the business to fit the package not the other way around
So you need to focus on fitting the business to the best practice that you've purchased from the package vendor and delivering a Standardised Package (SP) as quickly as possible. Not surprisingly, companies that do this tend to find their package implementations hit time and budget much more often, and they tend to deliver greater operational efficiencies as the process challenges locally held truths that turn out to be just historical relics. The point here is that in reality you are undertaking a business programme that is just enabled by IT, and the package vendor is providing the business processes.

Now the next question is of course how do you pay for this sort of solution? Well, it's now not about differentiation so it's really about cost to serve, which means that what you want to do is pay based on the business utility that is being provided. Implementation aside, this means that the running of the solution should scale up and down in line with that utility. The utility could be number of staff, number of transactions, value of transactions or anything else that represents what the business is actually paying for. This means that the package needs to be utility delivered (UD).

Hence the best way to adopt SaaS is really to look at it as just one delivery mechanism. The core is really two key decisions, the first of which is the business decision
  • Standardised Package (SP)
The second is the economic decision
  • Utility Delivered (UD)
Hence the best way to adopt SaaS is to ignore SaaS and concentrate on what you are trying to achieve as a business, which is SPUD. SaaS is the UD part of SPUD but to make it successful you need to do the SP bit as well.

Technorati Tags: ,

Friday, July 03, 2009

Vendor Managed Infrastructure - Are clouds just a VMI solution?

Tweeting with Neil Ward-Dutton I had a thought about what he has written on public v private clouds, and it made me think that the only real difference between them is in who manages and who pays. This might sound like a big thing, but taking a leaf out of the retailers' book it doesn't need to be that large.

Vendor Managed Inventory is simply where a supplier takes over the management of a product's inventory and ensures that it meets the buyer's SLAs (availability, price, etc). The advantage of this for the buyer is that they don't need to worry about ordering, they just need to track against the SLA.

What is a cloud proposition if not that? Furthermore, if we take this to its logical conclusion then even private clouds could be delivered to the same economic model as public ones. Maybe not with quite the same leverage, but why couldn't IBM, HP or whomever supply you with hardware and software infrastructures against an SLA that you define, and be responsible for ensuring that the capacity and pricing of the infrastructure meets the SLA? What is security and separation but part of an SLA?

My point is that "Private Cloud" really tends to mean "Still hugging your own tin" and that the real impact of cloud is in the economic model of procurement (the switch from CapEx to OpEx) and in the scaling of infrastructure independently of the current direct demand (i.e. you don't pay for Amazon to buy more hardware, that is part of their calculation to meet the SLAs).

So in 5 years will there really be Private Clouds that have a CapEx model, or will people be demanding that the H/W vendor provision capacity in a private environment with a specific SLA. In other words will VMI be applied to infrastructure in the same way as a supermarket applies it to apples?

Personally I think it will, and that this makes strong financial sense for both businesses and suppliers as it changes the relationship and enables hardware vendors to undertake hardware refresh directly (after all, if their PowerPoints are to be believed, you'll always save money this way), and the business will have a defined capacity model.

Don't believe me? Well an awful lot of companies are already doing just this around storage. Getting "private" pieces of a great big SAN and paying a utility price for it.

This to me means that the current sales pitches of end user purchases of "cloud" infrastructures are just a temporary marketing led blip and that the future is VMI for everything.

Technorati Tags: ,

Thursday, June 25, 2009

When successful systems go bad - boiling frogs with Technical Debt

One of the challenges I often see in companies is when successful systems go bad. These aren't the systems that were delivered 3 times over time and 5 times the budget, these are the systems that many years ago delivered real benefits for the business and delivered in a reasonable time and budget.

The problem is that all those years ago the team in question was focused absolutely on getting the system live and successful, and like teams often do they cut corners. This started the technical debt of the system, which began to act as a drag on future projects from day 1. Thanks however to the talent of that original team and the abject failure of systems elsewhere, the success was rightly lauded and the system held up as a shining jewel.

Roll forwards a couple of years and the situation has evolved. Those smart developers are now managers and some of them have left the company altogether. The team profile has changed and the odds are the talent pool has decreased. The absence of those pieces the first team missed out (metrics, unit tests, documentation) is starting to be felt, but the current team would struggle to justify the cost of putting them in, and the rate of progress, while slowing, still delivers in a reasonable time frame. The level of debt is increasing, some of those short cuts in the first build are becoming evident, and the newer short cuts to keep things on track are getting ever more desperate. Cut/Paste/Modify is probably becoming a normal strategy at this stage, but people are building successful careers, quite rightly, based on their previous success.

Roll forwards five or more years and the situation has evolved again. The managers are becoming directors, more disconnected from the actual technology but still aware of what it represented to them all those years ago. The people working on it now have little connection to the original vision and are in full-on duct-tape mode. The pace of change has slowed to a crawl, the cost of change has gone through the roof and the ability to actually innovate has all but disappeared. The problem is that this has happened slowly and over an extended period of time, so people are still thinking that they have a wonderfully flexible system.

It's pretty much like boiling a frog: no-one has noticed that the pace has slowed and that the major cost of all projects is the technical debt of the system. Some retrofitting efforts have no doubt been made, and these are lauded as being important, but the fundamental challenge is that the code base is now hugely fragmented from a management perspective while maintaining points of critical instability. Testing and releases take an age because changing one thing breaks five others, all of which are then just patched.

I'm not talking here about back-end transactional systems in which change is an irregular thing (who cares if the COBOL is hard to maintain if last year you only modified 5 lines); I'm talking about dynamic systems that are meant to be flexible and agile and have suffered greatly under 5 or more years of continual development of variable quality.

The challenge is that senior IT managers in these types of companies are often emotionally wedded to the system and can create elaborate arguments based on their perception of the system when it was built and their desire to see what was a great system continue to be a focus for the business. Arguments that off-the-shelf components now offer a better and more flexible approach, or that starting again would be cheaper than next year's development budget to add 5 features, will just fall on deaf ears.

But it's something that people in IT must do as a normal part of their business: step back and realise that 5 or 10 years is a huge period of time in IT, and that there will have been significant changes in the business and the IT market during that time which will change your previous assumptions. The right answer might be to continue on the current path and invest to a level that removes the technical debt; it might be to do a full rebuild in a new technology, or even in the same technology, just based on the learnings from the last 5 years; or it might be that what was differentiating before has now become a commodity and you can replace it with a standardised package (and make the business change from differentiating to standardising, with its additional cost benefits).

So have an honest look at those successful systems and ask yourself the question "Is the frog boiling?" Be honest, think about the alternatives, and above all think about what the right economic model would be for that system. This last bit is important: differentiating systems are about top-line growth, about capital investment (CapEx) and looking at the business ROI. Standardised systems are about cost saving in IT and the business, and are measured by the cost to serve (OpEx).

Here is the check list to see if you have a boiling frog
  1. Yearly development spend is higher than the initial development estimate
  2. Many people in IT have a significant emotional attachment, but don't code
  3. "differentiation" and "flexibility" are the watch words
  4. The code quality metrics are in the scary territory (or aren't collected)
  5. There have been several large "refresh" projects through the code base
  6. The competition seems to be getting new features to market quicker
  7. The pace of change on other systems is limited by the pace of change of the boiling frog
  8. Each generation of buzzwords is applied to the solution, but no major refactoring has been done
I'm sure there are others, but here is another simple one.

If the business gave you the development budget for the next two years and asked whether you could rebuild the site with a couple of friends for that amount, would you
  1. Say "god no, it's way too complicated and high quality"
  2. Say "No way, it would cost a bit more than that"
  3. Say "Ummm I think so"
  4. Say "definitely, when do I start?"
  5. Bite their hand off at the elbow and give them an order form
If you are scoring 3 or more then you've probably got a boiling frog.

Technorati Tags: ,

Tuesday, June 23, 2009

Religion as a Service

Okay a few weeks ago I had a brain-wave for a new business. What do you really need for a business to take off in the "as a Service" space?
  1. It needs to be 80%+ commodity
  2. You need a large customer base
  3. You need the end customer to add their own differentiation
Above all I wanted a business that would be a much higher margin one than simply a SaaS business, which meant including the actual business process work as well.

Hence the idea for "Religion as a Service". Focusing initially on the Abrahamic religions (where, let's face it, around 80% of the rules and processes are the same), this would expand to enable a more active selection of deity (or alien race) and a greater degree of customisation around the rules of your newly created faith.

Religion as a Service goes far beyond simply providing you with a customised version of your "one single book that has all the truth in it" to actually providing you with a modern industrialised solution to your religion's needs. This includes
  1. A multi-language call centre to handle donations
  2. A set of "white-labelled" preachers who can be branded to your own faith (for cheaper religions this can be mutualised if the demand isn't there yet)
  3. An "avoid damnation" generic TV show including presenter with amazing hair that can be tailored to your religious target market
  4. Believers on Demand - a set of white labelled individuals from your core demographic to create the impression of an already thriving faith
  5. iPhone versions of your sacred text
  6. Place of Worship in a box - a high value service that converts buildings into a faith centre, in a similar manner to the successful "Irish pub in a box" model
  7. PR support to help you claim discrimination and subjugation
All of this is backed by a robust IT and BPO solution that handles training, indoctrination, HR, payroll, finance and accounts.

Because "Religion as a Service" isn't limiting itself to a single religious ideal, it will be able to offer dramatically cheaper costs than establishing your own religion from scratch. For people looking to establish a new evangelical Christian sect, for instance, you can be up and running in a matter of hours, proclaiming yourself to be the 2nd coming of Christ without all of the massive overheads that this normally entails.

People with more demanding needs, for instance wanting to create a full church for the Flying Spaghetti Monster, are still able to leverage the underlying resources and staff of RaaS but will need to invest more in the customisation of the central "core text". As this is simply a reference volume, and our staff are supported by the very latest holy knowledge management tools, even the most esoteric demands can be met and scaled. This means that your local cargo cult can rapidly be turned into a worldwide phenomenon.

Religion as a Service's ability to leverage knowledge and resources across multiple faith channels means that it can offer new religions a much more efficient manner of scaling their followers and help a sect turn into a profitable business much more effectively.

Now all I need is a VC to fund it and I'm away.


Wednesday, June 17, 2009

The Clint Eastwood school of change

Change is hard, change against people who don't want to change is extremely hard bordering on impossible. In any change programme therefore you need to be clear about what you are up against and what success looks like.

This is where Clint Eastwood can help. Sure Chuck Norris can run around the world and punch himself in the back of the head but Clint Eastwood has many more lessons on delivering change in difficult circumstances and presenting different approaches.

This isn't for the collaborative occasions when you need to work with people, Clint isn't very good at that. This is for the occasions where you need to overcome people who are actively working against you and the objectives of change.

So what does Clint teach us? First off he teaches us that there are different types of people who we need to overcome.

There are people in authority who are abusing their position to make sure the change doesn't happen and are instead driving their own competing change agenda (Pale Rider).

There are people within a given group who are working against you (Gran Torino).

There are people who operate in a different management structure and are using that independence to try and prevent change (Unforgiven).

And sometimes there are people who need to be run down and shot because they are never going to be part of the new world (several films).

The point here is how does Clint deal with these challenges? The answer is that there are some obvious similarities and a few key differences.

Firstly the similarities.

Clint never gets angry, at least not at the people who are targeted as the blockers to change. This is a really important lesson. If people are blocking change you really can't just shout and scream at them; it doesn't work and just makes you look impotent.

Clint is focused. He doesn't have a huge number of approaches to deal with the situation, and he never runs into it without significant planning and thought. This means that when Clint decides to take action he has already determined the outcome he wants, and then makes sure it happens.

Clint is talented. Clint makes sure he is the big guy in any engagement; this doesn't mean shouting his head off, it means knowing exactly what he needs to do and making sure he has the skills and resources to deliver against it. Whatever the other guy is doing, Clint has the skills to adapt to the situation and make clear that he is in control ("Do you feel lucky? Well do ya, punk?"). The point is that Clint backs his ability against his competition, and this is a core way he ensures the right outcome.

Clint is clear and concise. Why say 100 words when a stare or a grunt will do? Don't waffle, don't prevaricate; just make it absolutely clear what you want and how you will achieve it.

Now the differences

There really aren't that many; it's really about how the similarities are applied differently.

The point here is that these key Clint lessons are how anyone should be looking at difficult change where you have a group of people who are adamantly set out against you.

Know your facts, know what they know, know what you want, and concentrate on achieving that.

The lesson of Gran Torino is that losing a battle can also be winning the war. I've had a number of experiences where you pull people into a "Road Runner" moment, i.e. you draw them to a point where they think they have won, then as the dust clears they realise they have stepped off the cliff and are about to fall.

The lesson of the "Dollars" films is that sometimes you just have to face it that people won't come on the journey and need to be sidelined or exited. Don't try and get them on board, it's pointless; just work out the easiest way to eliminate them.

The lessons of Unforgiven are that people in groups often act in an uncoordinated way if you attack them individually. Don't go for the overall group, work out individual weaknesses and use that to drive the group apart.

The lessons of Pale Rider are that coordinating those that do support change behind you (also known as the Magnificent Seven approach) and using your focus and talent can overcome the larger but over-confident blockers.

Change programmes come in lots of guises but often you'll find yourself come to a moment where you realise that there is a group of people who just won't be coming on the journey with you. Then you've got to ask yourself the question

What would Clint do?


Tuesday, June 16, 2009

Why would a cloud appliance be physical?

IBM often lead in technology areas, and with the history of LPAR on the mainframe they've got a background in virtualisation that most competitors would envy. So clearly with cloud they are going to go after it. Sometimes they'll do what they did with SOA and tag an (IMO) dog of a product with the new buzzword (MQSI = Advanced ESB, I'm looking at you) and other times they will actually do something right.

Now a product that can handle the deployment and management of instances sounds like a good idea. IBM have created just such a product, which basically acts as a dispenser and manager for WebSphere Hypervisor Edition images. The WebSphere CloudBurst Appliance will deploy, reclaim, monitor and manage. Very nice for people who have large WebSphere estates.

And this is what the product looks like

Yes, I did say "looks like", because IBM have built this cloud manager into a physical box. Now, appliances for things that need dedicated hardware acceleration I understand, but why on earth is something that is about managing virtual machines, something that might be doing bugger all for large periods of time, not itself a virtual image?

Given that the manager is unlikely to be a major CPU hog, it seems like an ideal thing to be lobbed into the cloud itself (yes, I know it's not really a cloud, but let's go with the marketing people for now; they've made a bigger mistake here IMO). If it was in the cloud you could add redundancy much more easily, and of course it wouldn't require its own dedicated rackspace and power.

Like I said, I can understand why you might like a virtual machine to do what the CloudBurst appliance does, but I have no idea why you would want a dedicated physical machine for a low-CPU task. As IBM expand this technology into DB2 and other WebSphere elements you could end up with 20 "CloudBurst" appliances managing and deploying to a single private cloud. How much better for these to be cloud appliances in the truest sense and be virtualised within the cloud infrastructure itself.

A physical box to deploy virtual images makes no sense at all.


Friday, June 12, 2009

Single Canonical Form? Only Suicidal Dinosaurs need apply

Now I said a while ago that Single Canonical Form wasn't for SOA. Well, now I've been doing some SaaS projects and I've realised, with traditional modesty, that not only am I right, but that people who are still pushing it as an approach can be described as suicidal dinosaurs.

If SaaS is anywhere in your future, and it will be unless you are a secure military establishment (and even then it might be), then GIVE UP NOW on the idea that you can mandate data standards in applications and create a single great big view that represents everything. It isn't going to happen; you are now back in the wild old world of multiple different systems, each with their own specific exchange formats, and...

you have to cope with it

Moaning about it, or trying to push back on it because it doesn't meet your "corporate data model", isn't going to get you very far; in fact it's liable to get you heading out the door.

Single Canonical Form is for people who believe in the great-big-database-in-the-sky theory of architecture; it didn't work for SOA but some people still tried to force it in. Now the love child of SOA and the cloud is coming to destroy it completely. The only sensible policy is to look at an "active" MDM strategy and a brokerage approach to communication between systems, ideally based around a federated data strategy that leaves information in its source systems but provides references between them.
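To make the federated idea concrete, here is a minimal sketch of a cross-reference registry: the actual records stay in their source systems, and the registry only maps a master identifier to the local key each system uses. All the class, system and key names below are hypothetical illustrations, not any particular MDM product's API.

```python
class CrossReferenceRegistry:
    """Maps a master entity ID to per-system local keys, leaving the
    actual data in the source systems (federation, not consolidation)."""

    def __init__(self):
        # master_id -> {system_name: local_key}
        self._xref = {}

    def register(self, master_id, system, local_key):
        """Record that this entity is known in `system` under `local_key`."""
        self._xref.setdefault(master_id, {})[system] = local_key

    def resolve(self, master_id, system):
        """Return the local key for this entity in a given system,
        or None if that system has never seen it."""
        return self._xref.get(master_id, {}).get(system)


# Example: one customer known under different keys in a SaaS CRM
# and an on-premise billing package (both names made up).
registry = CrossReferenceRegistry()
registry.register("CUST-001", "crm", "0013000000AbCdE")
registry.register("CUST-001", "billing", "4711")

print(registry.resolve("CUST-001", "billing"))  # -> 4711
```

A broker sitting between systems would use such a registry to translate identifiers on the fly, rather than forcing every application to adopt one canonical record format.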

The brokerage challenge and active MDM challenge will only grow as SaaS becomes more common. There is no ability to inflict, and I choose the word advisedly, your Single Canonical Form on SaaS providers so you have to actually take a sensible solution oriented approach to data consistency and visibility.

Single Canonical Form is a dinosaur-like approach, but it was one which some people still managed to get away with proposing as a way to create a career. Well, SaaS is the meteorite heading towards their Earth, and the choice is evolve or die.


Monday, June 01, 2009

The challenge of "build for the web" to SaaS economic models

Build for the web is a strong meme. Most of the time people focus on the technical and end-user elements of this. What I haven't seen people talk about much is the impact it has on the current financial models of SaaS vendors. SaaS can give great benefits for companies by enabling an OpEx rather than CapEx model, but today's solutions assume that a person is using them. Working on the system I'm looking at right now, it's clear that this model is okay in most cases in 2009 but won't be for long.

Think about a world where you are looking to combine a SaaS CRM provider (user based licensing) with a Logistics solution (shipment based) and an analytics solution (user based) and create a new unified sales/marketing and shipping solution.

All fine. But now let's say you get clever and realise that what you are really interested in from the CRM solution is tracking every interaction you have with customers. Your internal people aren't interested in the various options the CRM solution has; they just want to log what is going on, while using the analytics to give them visibility of what they should be upselling and what the current globally successful products are.

Looking forwards as SaaS vendors shift towards different utility models which address their financial demands (e.g. what is a "user" if a remote service or other SaaS solution is accessing it?) the complexity of information sharing is liable to increase, both in terms of licensing ("you shall not cache or store information from XaaS outside of the system except for your own individual use, it shall not be loaded into another application where users other than yourself can access it") and in terms of efficient financial management.

The transactional model, shipments, however remains valid and is very difficult to get around. Sure, you can be more efficient on the bundling, but you are always going to pay for the actual transaction. This presents both a challenge and an opportunity for user-based licensing solutions: they can move towards a more transactional approach, e.g. accounts/prospects/campaigns/etc. for CRM, and start providing a business with a more direct measure of the actual value they get from an IT system.

The SaaS revolution has a long way to run and its economic shift is only just starting.


SaaS and the Cloud a development challenge

A while back I blogged on how to do ERP with a middleware solution. The point was to leave the package untouched while adding your customisations in an environment that was better suited to the challenge. It made upgrades easier and also would help to reduce your development timescales.

Well, the world is moving on and now I'm looking at a challenge of SaaS + cloud: a standardised package delivered as SaaS, where we need to extend it using a cloud solution to enable it to scale to meet some pretty hefty peaks. So how to meet this challenge? Well, first off let's be clear: I'm not talking about doing Apex in Salesforce. I'm talking about treating the SaaS solution in the same way I'd treat an ERP, namely as the service engine that provides the transactional functionality. The cloud part would do the piece on top, the user interaction, pulling in some other web-delivered elements in a late-integration piece.

Model-wise it's the same.

We've got a bunch of back-end services with interfaces (Web Services mainly) and we need to build on top.

The first new challenge is bandwidth and latency. "Build for the web" is all very well, but there is nothing quite like a gigabit ethernet connection and six feet of separation. Here we have the information flowing over the web (an unreliable network, performance-wise) and we need to provide a responsive service to the end user. So clearly we need to invest a bit in our internet bandwidth. Using something like Amazon clearly helps, as they've done a lot of that already, but you do need to keep it in mind, and it becomes more important to make sure the solution isn't "chatty", as those latency hops really add up.

The next piece, of course, is that we need to cache some information. "Use REST" I hear people cry; the trouble is that even if I used REST (or rather, if the SaaS provider did) I'd still have to override the HTTP cache headers and build some caching in myself. Why? Well, the SaaS solution has a per-transaction model in certain sections and a per-user model in others. This means I need to limit users until the point at which they REALLY have to be known to the SaaS solution, and I need to cache transactions in a way that reduces the money going to the SaaS vendor. So here the caching is 20% performance and 80% economic. It's an interesting challenge in either REST or WS-*, as you are going against the policy that the service provider would set.
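To illustrate caching for economic rather than freshness reasons, here is a minimal Python sketch of a proxy choosing its own TTL regardless of what the upstream Cache-Control header asks for. The function name and the 3600-second "economic TTL" are made-up examples; a real proxy would also have to weigh staleness risk against the per-transaction cost.

```python
import re

def local_ttl(cache_control, economic_ttl):
    """Return the TTL (seconds) our proxy will actually use: at least
    the economically motivated TTL, even if the upstream header says
    no-store or a short max-age."""
    match = re.search(r"max-age=(\d+)", cache_control or "")
    upstream = int(match.group(1)) if match else 0
    return max(upstream, economic_ttl)

print(local_ttl("no-store", 3600))       # -> 3600 (ignore the vendor's policy)
print(local_ttl("max-age=60", 3600))     # -> 3600 (stretch a short TTL)
print(local_ttl("max-age=86400", 3600))  # -> 86400 (a longer upstream TTL is fine)
```

This is exactly the "going against the policy that the service provider would set" point above: the cache behaviour is driven by the licensing contract, not by HTTP semantics.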

So the objective here is to build proxy services on the cloud side which handle these elements. These proxies are going to be reasonably dumb, maybe with a basic rules engine to control pieces, but are there to make sure that the "web" doesn't get in the way of performance. These proxies will however have a database that enables searching across sub-queries as well as the matching of exact queries (e.g. caching "find me all letters between A and Z" should enable "find me all letters between M and U" to be answered as well, without a SaaS hit). I'm not sure yet whether we will go for a full database or do some basic pieces à la Amazon.
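The sub-query idea can be sketched for the simple one-dimensional case using the letters example above: a sub-range query is served from any cached super-range that covers it, so only genuinely new ranges cost a SaaS transaction. The fetch function here is a hypothetical stand-in for the paid SaaS call, and a real proxy would also need invalidation and range merging.

```python
class RangeQueryCache:
    """Answers range queries from cached super-ranges where possible,
    only calling the (chargeable) fetch function for uncovered ranges."""

    def __init__(self, fetch_range):
        self._fetch = fetch_range   # expensive per-transaction SaaS call
        self._cached = []           # list of (lo, hi, items) tuples

    def query(self, lo, hi):
        # Serve from any cached range that entirely covers [lo, hi].
        for c_lo, c_hi, items in self._cached:
            if c_lo <= lo and hi <= c_hi:
                return [x for x in items if lo <= x <= hi]
        # Otherwise pay for one SaaS transaction and remember the result.
        items = self._fetch(lo, hi)
        self._cached.append((lo, hi, items))
        return items


calls = []
def fetch_letters(lo, hi):
    """Stand-in for the SaaS service: returns letters in [lo, hi]."""
    calls.append((lo, hi))
    return [chr(c) for c in range(ord(lo), ord(hi) + 1)]

cache = RangeQueryCache(fetch_letters)
cache.query("A", "Z")            # one chargeable SaaS hit
m_to_u = cache.query("M", "U")   # answered from cache, no extra hit
print(len(calls), m_to_u[0], m_to_u[-1])  # -> 1 M U
```

Whether this sits over a full database or something simpler, the economics are the same: every sub-query answered locally is a transaction you don't pay the vendor for.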

"Build for the web" is a mantra that many people are supporting these days, and there are good reasons for making your services available via that channel. But combining solutions and still delivering high performance is a significant challenge, particularly when economic contracts can render basic REST approaches redundant.

So when looking to build for the web think about the structure of your application and in particular the impact of latency and bandwidth on its performance as you look to consume other web applications. If you do have a financial contract in place with a SaaS vendor be very clear what you are paying for.
