Friday, December 31, 2010

Why MDM and SOA are shifting out of IT

Now I've said previously why MDM is required for successful SOA, but there is another important shift around MDM and SOA happening at the moment. It explains both why MDM and SOA haven't historically gone together and, crucially, why the new trends are liable to help bring them together next year.

Why SOA and MDM didn't go together
The first question is why, historically, MDM and SOA projects tended not to go together. Sure, organisations would "do" SOA programmes and would "do" MDM programmes, but rarely would the two be tightly joined. There are, I think, three reasons for this.

1 - Different bits of IT
The first challenge is that MDM and SOA were normally done by different bits of IT with different mentalities. MDM was done by the "data" folks, who worried about data warehouses and saw data as the most important thing in the world. SOA was done by the enterprise integration and architecture folks, who worried about operational processes and integration. The MDM folks tended to have a post-transactional view of the world where things were "true" or "false", while the SOA folks tended to view things as "in process" or "completed".

These different communities in IT have very different backgrounds and focuses. ETL, data warehouses and big databases rule on the MDM/Data side, while fast transaction throughput is the key for the SOA folks.

2 - Different bits of vendors
The next challenge is that the vendors have exactly the same split in their view of the world. Look at IBM, for instance: SOA is in the WebSphere brand while MDM is in the InfoSphere brand, two very different parts of IBM Software with independent management structures and teams. Sure, the idea is that they should co-operate and work together, but we all know that different divisions in companies like to do things differently. This also means that you often get different sales people for the different pieces of software. Oracle, for instance, put MDM in the Applications area while SOA is in the Middleware space; if you want to use their MDM products with their middleware products (for instance using the MDM PIP), you have to deal with two different sales people to get what you want.

3 - They've been IT projects
The final reason is that MDM and SOA have traditionally been internal IT programmes owned and managed by IT and largely invisible to the business. This means that the two cultural elements above are then hard-baked into the solution which ensures that the SOA and MDM programmes are kept distinct.

Why it's changing
So why is it changing? Well, the first reason is that people have finally started realising that SOA is a business thing and needs to be viewed from a business architecture perspective. This has been a long time coming, but finally it appears people are thinking about the challenges of business services, service value and the business architecture. At the same time MDM is shifting as well: it is moving away from the post-transactional into an operational space, which means it needs to consider operational processes and performance in a way it hasn't before. This means that the MDM/Data folks now not only have to talk to the SOA folks, they have to... shock horror... actually become MDM/SOA folks.

MDM is also shifting in that business people want to do more active information management, both in terms of newer analytical tools and in terms of including that mastered, high-quality information in core operational processes. The business realises that the old "data landfill" approaches, which were poor before, are massively hindering the analytical models and have a direct impact on the efficiency of the operational processes. By taking control and recognising that information needs mastering, the business is now actively taking over those processes, and thus MDM is moving out of the background into being a key part of the corporate information strategy.

So that is why MDM and SOA are shifting out of IT: being within IT made them technically-centric programmes that often failed to deliver the promised value. By enabling the business to more actively own its IT estate, through Business SOA, and its core information, through MDM, it suddenly ceases to be a question of competing IT cultures and becomes a simple question of how to present a consistent operational and analytical view of the business.

SOA and MDM are made to be together.... that fixes the enterprise culture... but will it fix the vendors?


Monday, December 20, 2010

When clouds really will be cloudy

People are talking about clouds and SaaS as the future, and I really believe that they are; in fact I'd say they are the present reality for leading companies. However, one of the questions is always "where does this go?". Now there is one world view that says "everything on the cloud and delivered via HTML 5". This is an interesting view, but it misses a couple of key questions:
  1. When does Moore's Law go away?
  2. When is it really a cloud?
The first point is that I'm sitting here with an iPad, iPhone, MacBook Pro and AppleTV (I am a fanboi), with miles more processing power at my disposal than the commercial systems and websites I put live late in the last century. Clouds talk about dynamic deployment and portability... but normally within a specific data centre environment. When we think about services being consumed and co-ordinated, and assume that this is being done over the internet, two questions arise.
  1. What decides where a service is deployed?
  2. Why can't it be deployed to my phone?
What is the point of these questions? Well, my son and I can play Need for Speed: Undercover with one of us "hosting" the game on the iPhone or iPad. This is therefore an example of a piece of software being delivered "as a Service" from one mobile device to another. Sure it's a specific use case, but it's a very real one to scale up.

Why wouldn't the "Rich" interface still be deployed to the device but now as a client service? Why wouldn't the information cache and some clever software that proactively populates the cache be deployed to the local device?

Now folks like RightScale already do deployment and management across multiple cloud platforms, so why wouldn't this be extended to ever more powerful mobile devices, laptops and other devices? Why couldn't my operating system be deployed as part of the cloud rather than just being a consumer, with elements such as latency determining the most effective deployment for each service in a network? Think about all those Apple iPhone apps running in the background on millions of devices... who needs more capacity than that, and what latency problems are there when the app is actually spread across a few devices in the local area?
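One way the "what decides where a service is deployed?" question could play out is a simple latency-and-capacity chooser. This is just a sketch; the node names, latency figures and capacity threshold below are purely illustrative, not any real scheduler's API.

```python
# Sketch: pick where to deploy a service based on measured latency and
# spare capacity. All names and numbers here are invented for illustration.
from dataclasses import dataclass


@dataclass
class Node:
    name: str
    latency_ms: float   # round-trip latency from the consumer
    free_cpu: float     # spare capacity, 0.0 - 1.0


def choose_deployment(nodes, min_free_cpu=0.25):
    """Prefer the lowest-latency node that still has spare capacity."""
    candidates = [n for n in nodes if n.free_cpu >= min_free_cpu]
    if not candidates:
        raise RuntimeError("no node has enough spare capacity")
    return min(candidates, key=lambda n: n.latency_ms)


nodes = [
    Node("data-centre", latency_ms=80.0, free_cpu=0.9),
    Node("laptop",      latency_ms=5.0,  free_cpu=0.4),
    Node("phone",       latency_ms=2.0,  free_cpu=0.1),  # nearest, but too busy
]
print(choose_deployment(nodes).name)  # the laptop wins on latency
```

The point of the sketch is simply that once devices are first-class deployment targets, placement becomes a policy decision over latency and capacity rather than "whatever rack the data centre gives you".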

Now there are challenges to this, but there are also big advantages: your data centres become cheap because you don't need them anymore, you just deploy to your clients' devices.

This clearly isn't a solution for 2011, but it is something I firmly believe will happen, and it's driven by the power of devices. Sure HTML 5 is cool, sure Amazon AWS is neat and sure SaaS is wonderful... but the day that clouds really become cloudy is when no-one can point at the great big data centre that it ultimately all connects to.


Monday, December 13, 2010

MDM isn't a hub - it's just simpler to implement it that way

One of the things about MDM that people often get wrong is the idea that MDM provides a central information hub around a given entity and its relationships.

It doesn't.

MDM provides 3 core facilities
  1. Cross-referencing of core entities between systems
  2. Standardisation around the critical "matching" attributes
  3. Synchronisation of attributes modified within multiple systems
Only one of these, the x-ref, really requires some form of centralisation; the rest can be handled via governance processes and integration processes without requiring a central system. You can implement match and merge within the integration layer or the end application and then propagate those changes along.
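As a rough sketch of what that centralised x-ref amounts to: a two-way mapping between a master id and the local ids each system uses for the same real-world entity. The system names and ids below are invented for illustration.

```python
# Sketch: the cross-reference is just a mapping from a master id to the
# local id each system uses for the same entity, and back again.
class CrossReference:
    def __init__(self):
        self._by_master = {}   # master_id -> {system: local_id}
        self._by_local = {}    # (system, local_id) -> master_id

    def link(self, master_id, system, local_id):
        self._by_master.setdefault(master_id, {})[system] = local_id
        self._by_local[(system, local_id)] = master_id

    def local_id(self, master_id, system):
        """Which id does this system use for the master entity?"""
        return self._by_master[master_id][system]

    def master_id(self, system, local_id):
        """Which master entity does this system's record belong to?"""
        return self._by_local[(system, local_id)]


xref = CrossReference()
xref.link("CUST-001", "CRM", "c-9931")
xref.link("CUST-001", "Billing", "B/44812")
```

Everything else - standardisation and synchronisation - can live in governance and integration processes around this table rather than in a central system.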

However, in the world of Simple IT and "doing one thing well", it's liable to be much more effective to have an MDM solution that manages this integration, and which is designed to do that integration, than to build it all yourself.

MDM doesn't require a central solution; you'll just probably find it simpler, for the good reasons above, to implement it that way.


Friday, December 10, 2010

Stop blaming Oracle for Java

With Apache joining Doug Lea in walking out on the JCP the talk has been all about how this is Oracle's fault.

I disagree; the stagnation of Java and its issues very much started under Sun, as the JavaSE 6 debacle showed. The mistake that Oracle have actually made is leaving the same mentality and people in charge of Java, rather than looking to refresh the leadership and focus it more on the Java market than on an internal view of what that market should be.

So I don't blame Oracle for this debacle, in the same way that I don't blame Oracle for putting JAX-WS into JavaSE or for the massive amount of time that JavaSE 7 has taken. The reality is that Java lost its direction and started chasing "Joe Sixpack", and while Sun paid lip-service to Open Source they actually meant "their" open source when it came to Java, rather than opening up to Apache.

As someone who championed, and still champions, Java as an environment, it has been sad to see how intellectually stunted Java has become in the last 5 years and how myopic its leadership has been. That leadership appears to have made it through the acquisition pretty much unscathed, and the attitudes have, if anything, become more hardline and more myopic under the protection of a larger parent company.

Java needs new leadership. The current fiasco, and the comments on the votes, show that the current Java leadership in Oracle has the same problems of consensus building and intellectual direction as it had 5 years ago. Oracle has some fantastic intellectuals and some great leaders who can build consensus in the Java community, but the bravest thing for them to do now would be to open up the door and appoint a leadership team from outside, potentially one that includes real representation from the major players and industry.

Oracle aren't the problem; they've just inherited the problem child and let the bad behaviour continue.


Monday, November 15, 2010

Simple IT - a proposal

I've been thinking more and more about why simplicity is hard, and I've come up with a few key things that Simple IT needs to be and how you judge Simple IT. Part of this links back to the SOA anti-patterns, and it comes down to a few key questions:
  1. Can your IT estate be described as a series of discrete elements?
  2. Can each of these elements be easily maintained within its business context?
  3. Can each of these elements be simply described?
This comes down to that old principle of "one thing well". In IT this doesn't mean low-level services; it can be very high-level services. For instance, putting in an HR system which does everything the business wants in a vanilla package solution would meet all of these requirements: HR is a single discrete element, it's delivered via a single package, and this matches how the business wants to manage that area.

So the building blocks of a simple IT strategy are not all of the same size; they are of the size that makes sense within the context of the business architecture.

In a simple IT approach the focus is always on the ongoing evolution of the IT estate in line with business strategy, not on a single project delivery. The way IT is set up tends to suffer from the Ivory Tower of Implementation Optimism v long-term viability problem, which drives against simplicity and towards complexity.

So the focus of simple IT is to value:
  1. Long-term evolution over short-term expediency
  2. Architectural clarity over coding efficiency
  3. Business strategy over IT strategy
Simple IT also aims to:
  1. Drive IT costs in line with their business value
  2. Drive IT technology selection in line with the business value
  3. Create clear upgrade boundaries between different business value areas
  4. Manage IT based on the different business value areas
The point here is that simple IT is not actually about making a single project faster; it's about making the second project, and its support, faster and more efficient. This means having control and direction within which the right approaches can be used: Agile where it's about value creation and reaction, package where it's about standardisation, and SaaS where it's about utility.

This is about having the business architecture, having the heatmap, and then aligning IT clearly into those areas.

Simple IT is hard, it requires control, it requires vision and it requires focus.


Friday, October 29, 2010

Windows 7 - more proof of why the iPhone is better

A while back I posted about why phone adverts tell you that the phone is rubbish.

Well, let's compare:

Windows 7 Mobile

Apple FaceTime

The difference remains. Apple have the confidence to show you people actually using their phone; Microsoft don't show people actually using the phone, instead showing other stuff to pretend that their phone is cool.

If people can't even fake an advert to make a phone look useable, what does that say about the devices themselves?


Sunday, October 17, 2010

MDM made easy - why people make Master Data Management hard

Okay, so I've spent a good part of the last 10+ years on various MDM (Master Data Management) programmes, and it comes down to a very simple reason why lots of MDM programmes fail...
People screw up MDM programmes by forgetting what MDM actually is.
The first bit is that people look at the various different MDM packages and then really miss the point. Whether it's Oracle UCM, Informatica, SAP MDM, IBM MDM, Initiate, TIBCO or anything else, people look at it and go...
Right so this is what we do, what else can we fit into it
Now this is a stupid way to behave, but it's what most people do: they use the MDM piece as the starting point. The reality is that MDM is only about two things:
  1. The cross references of the core entity between all the systems in the IT estate
  2. The data quality and governance around the core entity to uplift its business value
And that really is it. The more attributes you add beyond those required for these two elements, the more expensive, more complex and less effective your MDM solution will be.
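A minimal sketch of the second point - data quality around the core entity - is standardising only the critical "matching" attributes before comparing records, so that nothing else needs to enter the MDM solution. The records, field names and rules below are invented for illustration.

```python
# Sketch: standardise only the attributes used for matching, then compare.
# Extra attributes ("fax", "rep") never enter the match at all.
import re


def standardise(record):
    """Normalise the critical matching attributes of a party record."""
    return {
        "name": re.sub(r"\s+", " ", record["name"]).strip().lower(),
        "postcode": record["postcode"].replace(" ", "").upper(),
    }


def matches(a, b):
    return standardise(a) == standardise(b)


crm = {"name": "Acme  Ltd ", "postcode": "sw1a 1aa", "fax": "n/a"}
erp = {"name": "acme ltd",   "postcode": "SW1A1AA",  "rep": "JS"}
print(matches(crm, erp))  # True
```

The more attributes you drag into `standardise`, the more rules you have to govern, which is exactly how MDM solutions get expensive.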

The other core part of this is that there really are only a very limited number of bits to MDM; it's about three things:
  1. Things - assets, accounts, parts, products, etc
  2. Parties - individuals, organisations, customers, suppliers, etc
  3. Locations - postal addresses, email addresses, web addresses, physical addresses, geo-locations, etc
That really is it from an entity perspective. Then you've got the relationships between those entities (and remember, a relationship is just the keys to do the match), which means:
  1. Parties to Things
  2. Parties to Locations
  3. Things to Locations
  4. (Parties to Things) to Locations (e.g. a person's specific account address)
There really isn't much else to it.

So all you need for MDM to succeed is the above entity model and a list of attributes required to uniquely identify a high quality version of that entity.
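The entity model above could be sketched, purely illustratively, as three types plus relationship tables that are nothing more than key pairs (the ids here are invented):

```python
# Sketch of the three-entity model: everything is a Thing, a Party or a
# Location, and relationships are just pairs of keys.
from dataclasses import dataclass


@dataclass
class Party:      # individuals, organisations, customers, suppliers...
    party_id: str


@dataclass
class Thing:      # assets, accounts, parts, products...
    thing_id: str


@dataclass
class Location:   # postal, email, web, physical, geo...
    location_id: str


# Relationships are only the keys needed to do the match.
party_thing = {("P1", "T1")}                    # Parties to Things
party_location = {("P1", "L1")}                 # Parties to Locations
thing_location = {("T1", "L2")}                 # Things to Locations
# (Parties to Things) to Locations, e.g. a person's specific account address:
party_thing_location = {(("P1", "T1"), "L3")}
```

Anything beyond this model, plus the attributes needed to uniquely identify a high-quality version of each entity, is scope creep.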

So why do people screw up?

Well, first off, it's because they do something like picking a customer mastering package and then trying to master product information or product relationships within it (e.g. Asset to Location).

Secondly because they start extending the number of attributes rather than creating a cheap ODS.

Thirdly, because they create their own versions of the relationships, in the belief that their business rules change the entity model, when really those rules are just data quality rules against the standard entity model.

So the reality is that MDM is simple: it's about data quality and matching, which means it's about the business governance of information.

And that is the real reason most MDM programmes fail to deliver: in order to get to the business governance of information and make MDM simple, you have to do an awful lot of hard work building up the trust between the business areas and IT that governance can be done in a way that benefits rather than impacts the business.

MDM isn't a technology solution; it's a simple business solution. The problem is that IT people tend to make it a complex IT solution because they can't make it a simple business solution.


Thursday, October 14, 2010

Simplicity is hard

One of the things I get to do a lot is look at IT estates that are a complete and utter mess: systems overlap in functionality, are difficult to maintain, and the links between them are more complicated than Glenn Beck's issues with reality. When doing a Business Service Architecture it becomes clear that the big issue is that IT doesn't learn the Unix lesson: do one thing and do it well. In SOA, particularly business-driven SOA, this is the whole point of services: they do one thing and they are designed to be integrated.

Having clean "services" is pointless, though, if what you have under the covers is just the same old crud with some REST or WS-* lipstick on top; you actually have to have an implementation that is clean all the way down or you are still screwed.

The BSB Specification was based around that principle of doing one thing well, and the whole point of the DSB/BSB split is to keep it simple.

This then becomes the real issue: it's actually really hard to architect and deliver simply. In the MDM space, for instance, you see MDM solutions that morph into MDM + ODS + Reference Data Management solutions. "Clean" ERP installations are destroyed by customisation, and the Java solution gets some crufty bolt-ons because "it was easier to do it there". The delivery builds the blob with lipstick on it, and suddenly we are no better off.

Why does this happen? Well, more and more I believe it's because the SIMPLE pictures that describe a business architecture are either not drawn at all or are abandoned because of their simplicity. People, architects especially, don't like putting in place the rigour and control required to deliver a simple solution; it's much easier to deliver a blob and let people cope with it in support. Simplicity isn't a valued commodity because it doesn't allow people to show off their understanding of complexity.

"Je n'ai fait celle-ci plus longue que parce que je n'ai pas eu le loisir de la faire plus courte.
--I have only made this letter rather long because I have not had time to make it shorter."
Pascal, Lettres provinciales, 16, Dec. 14, 1656. Cassell's Book of Quotations, London, 1912, p. 718.

Simplicity takes time and effort, and the end result is much more satisfying: easier to explain, easier to maintain and easier to use. Most people, however, take the easy route to complexity.


Monday, September 06, 2010

Apple TV - it's just a dongle

With the various reviews of the Apple TV device, I think that people, including Apple, have missed the point. This shouldn't be a device via which you stream and buy content; it should simply be considered a dongle, which in future can be integrated into the TV, car or whatever to stream content from a portable device. $99 is quite a bit for a dongle, but Apple have actually over-specced the Apple TV device so it will stand alone; the smart move in future will be for it to simply be an extension to TVs, in the way they have iPod connectivity today. The storage is on your PC, Mac, iPod, iPhone or iPad, and you simply stream to the various different displays that are available.

So you carry around your movies, in conjunction with your Apple cloud service, and thus it is all on demand as you travel around your life. Get in the car and the kids can watch; get home and you can use the TV. Go to a friend's house and it is all sorted.

Seriously, is it that hard to spot? Apple TV isn't a product; it's just a dongle that can be integrated into TVs on the back of the iPod, iPhone and iPad dominance. AirPlay is the real thing: Apple aren't aiming at the set-top box market or even the movie rental market, they want the last metre between you and the display.

Saturday, July 03, 2010

MDM is required for successful SOA

One of the things that I've noticed over the years about successful SOA programmes is that they all undertake a formal approach to MDM. By programme here I'm not talking about one website using REST or WS-*, I'm talking about a strategic transformation initiative that aims to move the IT estate forwards to a more business centric approach.

Why is MDM important for SOA? Well there are a couple of reasons.
  1. MDM stops you having to do Single Canonical Form
  2. MDM helps you start from a point of federation
Now there are a number of MDM approaches that enterprises take and I'll massively generalise them into three groups
  1. Digital Landfill MDM
  2. Operational MDM
  3. Federated MDM
The first is the most common and is generally, in fact pretty much exclusively, a post-transactional approach where lots and lots of systems feed down into the "MDM" solution, which is really there to provide the cross-referencing required for a data warehouse reporting solution. These tend to be database-centric and batch-oriented, and not overly useful for a real-time or operational environment.

The second is where the MDM solution has actually taken on a transactional approach: information, for instance customer details, is actually held within the operational MDM solution, and the cross-referencing information is stored and available in real time. This really is the minimum level at which an SOA environment should be operating. The MDM solution is there to ensure loose coupling between services and to make sure that a minimal reference model approach to information sharing can be handled.

The third is where there is no central MDM solution but it is still operating transactionally. This is the very, very hard, not-really-to-be-attempted-yet type of approach. Here you can think about the semantic web and those sorts of funky technologies, doing the matching and cross-referencing on an on-demand basis. I'm pretty sure this type of solution won't work very well, or indeed at all, but it's really what lots of people are trying to do today when they have an ESB, lots of services and nothing set up to do the mappings.

So basically the point is simple: a transactional, operational, real-time MDM solution is an absolute requirement for a professional SOA environment. Without it you are going to have nasty tight integrations between areas at the data layer, or some amazingly complex mappings that will fail on a regular basis. I'm not even talking here about the benefits of data quality improvements from using a decent MDM solution; I'm just talking about something that manages the cross-referencing of similar information between systems and services.

The odds are however that in doing your SOA solution you've left MDM being something that just feeds the data warehouse and looks at reporting.


Monday, June 28, 2010

Orange say that the iPhone 4 has been recalled

Okay, so I've just got off the phone from Orange about the non-delivery of my iPhone, and here is the litany of excuses:

1) They ran out of stock

So I pointed out that they'd accepted the order from me and given me a fixed delivery date (the order went in at 2am on the Thursday and was confirmed at 9:05am). It seemed rather bizarre that they'd run out of stock that quickly.

That is when it got a lot more interesting.

Then I was told that

2) "The phones have all been recalled as they've got an antenna problem and they keep crashing"

I pushed on this just to check, and it was confirmed that the reason Orange don't have any stock is because there has been a recall due to the antenna issue. The call centre drone said that the iPhone antenna issue was one thing, and also that the phones kept on crashing.

I asked why, if it was a result of a recall, they hadn't emailed me about it. The reason was that

3) Their system was so overloaded that it couldn't handle the volumes.

I pointed out that they regularly seem to spam in pretty large volumes but apparently the iPhone is much higher in volume.

There was no hint of an apology and the stock line was "7-10 days or a full refund".

So either it's straight Orange incompetence with a rubbish excuse, or there are some major-league iPhone 4 issues.

Friday, June 25, 2010

Location Centric packages - don't buy a 1990s package

One of the big problems with package solutions is that they are very database-centric. Changing the data model is basically suicide for a programme: adding a few new tables is dodgy but sometimes required, adding columns is reasonably okay, but modifying the core data model is always going to get you in hot water.

One area that I've seen consistently cause problems over the years comes down to how the package vendors have thought about physical and electronic addresses. When the packages were created there was really only one set of important addresses: physical addresses. Phone numbers were pretty much fixed to those premises, and email was a million miles from anyone's mind. This means the data models tend to treat electronic addresses as very much second-class citizens, normally as some form of child table rather than as a core entity.

The trouble is that, as packages are being updated, I'm seeing the same mistake being made again in some of the new technology models being used by vendors (AIA from Oracle appears to make the same mistake). The reality is that the model is pretty simple.

That really is it. There are two key points here:
  1. Treat all actors as a single root type (Party) then hang other types off that one
  2. Do the same for Locations
The reason for doing this is pretty obvious: these days mobile phone numbers and email addresses are much better communication tools than physical addresses. As you want to send e-statements, e-invoices and other elements like SMS delivery notices to customers, you want to be able to channel-shift customers much more simply. If a customer switches their delivery address for a book to an email address, that is fine as long as you can ship them an e-book.
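A minimal sketch of those two points, with hypothetical types, might look like this: a single Party root, and a single Location root with postal and electronic addresses as equal first-class subtypes, so channel shift is just reassigning a reference.

```python
# Sketch: one Party root type, one Location root type; postal and
# electronic addresses are equal subtypes, not a child-table afterthought.
from dataclasses import dataclass


@dataclass
class Location:
    pass


@dataclass
class PostalAddress(Location):
    lines: str
    postcode: str


@dataclass
class EmailAddress(Location):
    address: str


@dataclass
class Party:
    name: str
    delivery: Location   # any Location will do, so channel shift is trivial


customer = Party("A Reader", delivery=PostalAddress("1 High St", "SW1A 1AA"))
customer.delivery = EmailAddress("reader@example.com")  # ship the e-book instead
```

Compare that with a 1990s schema where the email address lives in a bolt-on child table hanging off the physical address: there, switching delivery channels means schema surgery rather than an assignment.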

Now I know that anyone from an OO background is going to go "well, duh!", but it does amaze me how, in package land, the database-centric mindset still dominates and people just don't seem to want to revisit the assumptions they made in the 1990s, when their hacks to put in electronic addresses seemed like a safe bet; after all, email and the internet weren't considered future strategies.

It's now well into the 21st century, and I'd really advise people buying packages to look long and hard at the data model and ask "is this a 1990s view of business or a 21st-century view?" If it's the former, be aware that you will have pain.


Monday, June 21, 2010

Tin-huggers the big problem for cloud adoption

Going through yet another of those "holy crap, that infrastructure is expensive" occasions recently, I did a quick calculation and found that we could get all of our capacity at an absolute fraction of the internal price. Think less than 1/10th of the quoted price once installation was factored in.

What stopped us shifting? Well, a little bit of compliance, which we might have overcome, but the big stopper was the tin-huggers.

Tin-huggers are people who live by the old adage "I don't understand the software, I don't understand the hardware but I can see the flashing lights" which I've commented on before.

Tin-huggers love their tin, they love the network switches, they love the CPU counts and worrying about "shared", "dedicated", "virtualised" and all of those things. They love having to manually upgrade memory and having to select storage months or years in advance. Above all of these things they love the idea that there is a corner of some data centre that they could take their tin-hugging mates into and point and say "that is my stuff".

Tin-huggers hate clouds because they don't know where the data centre is, and their tin-hugger mates would laugh at them and say "HA! Google/Amazon/Microsoft/etc own that tin, you've just got some software". This makes the tin-hugger sad, so the tin-hugger will do anything they can to avoid the cloud. This means they'll play the FUDmeister card to the max, and in this they have a real card to play...

Tin-huggers are the only ones who work in hardware infrastructure design; software people couldn't give a stuff.

This means it's all tin-huggers making the infrastructure decisions, so guess what? Cloud is out.

Tin-huggers are yet another retarding force on IT. Sometimes the software folks can get out and work with the business, but too often the tin-hugging FUDmeistering is enough to scare the business back into its box.

It's time to build a nice traditional bypass right through the tin and into the cloud, and let the tin-huggers protest from their racks as we demolish them from underneath their feet.


Sunday, June 20, 2010

REST has put enterprise IT back five years, Sun has put it back ten

Okay, I've watched REST, Clojure and the other shiny new things rise up, and for the last 9 months I've been back in the bowels of large, let's just say massive, scale enterprise IT delivery, and I've come to a conclusion.

IT is in a worse place now than it was 5 years ago. The "thinkers" in IT are picking up the shiny new tools and shiny new things and yet again continuing to miss the point of what makes enterprise IT better. There are a few key points that need to be remembered.
  1. Art v Engineering is still the big problem - 90%+ of people in IT aren't brilliant, a large proportion aren't even good
  2. Contracts really matter - without them everything becomes tightly bound no matter what people claim about "dynamism"
  3. No technology has ever, or will ever, deliver a magnitude increase in performance
  4. The hardest part of IT is integrating systems and services, not integrating people. People are good at context shifting and vagueness; good interfaces are fantastic optimisers, but even average user interfaces can be worked around by users.
The likes of Clojure and REST haven't improved this situation in the enterprise in any marked way. It's been 5+ years since REST started being hyped, and in comparison with the benefits that WS-* delivered to the enterprise in 5 years it's been zip, zero, nada. The "dynamic" languages piece that kicked off at about the same time has delivered similar benefits to large-scale enterprise computing (you know, the stuff that keeps most people in IT in a job) over the "old school" Java approach.

A few years ago I said that if you wanted a career, learn Web Services; if you want to be cool, learn REST. Since then it's become clear that some people have made careers in REST... but you know what?
  1. It's as hard to do a large-scale programme with a large integration bent as it was 5 years ago.
  2. There are fewer really good enterprise-qualified developers, as they've got "dynamic" language experience and struggle, or worse, bring the dynamic language in and everyone else struggles
  3. Vendors have been left to their own devices, which has meant less innovation, less challenge and higher costs, as the Open Source crowd waste time on pet projects that aren't going to dent enterprise budgets

In the 5 years from 1998-2003, Java and Web Services went from near zero to being everywhere; innovation was being done at the edges and then applied to the enterprise. It was a melting pot and it improved enterprise IT, thanks in part to the smart people working at the edge and the way this pushed the vendors forwards...

Well now SAP, Oracle and IBM are still heavily backing Web Services but there is a big problem....

No one is holding them to account. With all of the cool, and smart, kids off playing with REST and Clojure and the like we've got an intellectual vacuum in enterprise IT that is being "filled" by the vendors in the only way that vendors know how. Namely by increasing the amount of proprietary extensions and software and pushing their own individual agendas.

So we get the bizarre world in which Siebel's Web Service stack has a pre-OASIS specification of WS-Security, last updated in 2006 by the looks of it. We get a world where IBM is still pushing the old MQSI cart horse as an "Advanced ESB", and generally the innovation in this space has just collapsed in the last few years. Working on a programme doing integration with Web Services in 2010 feels pretty much like 2005; sure, some specifications have come out and there is some improvement, but overall it's pretty stagnant.

"Oh do REST then" I hear the snake-oil salesmen cry. Really? If you had to do an integration between 20 different programmes and over 300 systems in a heavily regulated area then you'd use REST? High value assured transactions between different vendors and providers over a "dynamic" interface?

Give me a break.

What works in these areas is what has always worked:
  1. Define your interfaces, nail them down, get them right
  2. Test like a bastard against those interfaces
You can't do complex programmes without having those firm areas; this is why major engineering programmes don't have variable interfaces for screws. Now before someone pipes up with a nice edge case where 200 people did something complex, please do that 20 times in a single organisation and give me a call.
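A minimal sketch of those two points in code, with purely illustrative names (nothing here comes from a real programme): the contract is a hand-written interface that is nailed down first, and the tests run against that contract rather than against any particular implementation.

```java
// Hypothetical example: the contract is fixed up front and versioned.
// Consumers and the test harness depend only on this interface.
public class ContractFirst {

    /** The nailed-down contract: explicit operations, explicit types. */
    public interface CustomerServiceV1 {
        String createCustomer(String name);     // returns the new customer id
        boolean customerExists(String customerId);
    }

    /** One possible implementation; it can be rewritten at will. */
    public static class InMemoryCustomerService implements CustomerServiceV1 {
        private final java.util.Set<String> ids = new java.util.HashSet<>();
        private int counter = 0;

        public String createCustomer(String name) {
            String id = "CUST-" + (++counter);
            ids.add(id);
            return id;
        }

        public boolean customerExists(String customerId) {
            return ids.contains(customerId);
        }
    }

    /** "Test like a bastard" against the contract, whatever sits behind it. */
    public static boolean contractTest(CustomerServiceV1 service) {
        String id = service.createCustomer("ACME");
        return service.customerExists(id) && !service.customerExists("CUST-none");
    }

    public static void main(String[] args) {
        System.out.println(contractTest(new InMemoryCustomerService())); // true
    }
}
```

The point of the sketch is that `contractTest` never mentions `InMemoryCustomerService`; swap in a completely different provider and the same tests still apply.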

In 2006 I asked why there was no WS-Contract, and the real reason was that it wasn't a good vendor specification (WS-TX is that) but it would have been a brilliant enterprise specification. So WS-* just had Security and Reliability, important things for the enterprise, but didn't make the next step.

And what has REST given us in the last few years? Errr please, folks... Now in my current engagement there was a great area where REST would have been very useful (Reference Data) and some where it would have been quite useful (Federated Data Network Navigation). The problem is twofold:
  1. Most people in IT appear to know bugger all about it. Now I continue to be surprised at how little people who work in IT read about what is going on, but I was really surprised at how little traction REST had.
  2. EVERYTHING is manual and there is still no standardised way to communicate between teams on WTF you are doing
Now if you had 2 then you could do 1. I did this with WS back in 2000-1 when most people thought I was just making up this WS stuff that I could run between the data centres over port 443; I had interfaces, they had tools, we got it working.

Now before RESTafarians jump up and talk about all of the wonderful WEB things they've been doing: that is great and wonderful, but it's not my world. My world is having 300 systems that vary from 20+ years old to just being built, and I need to get them all working together. Even when REST was the right "architectural" solution it was the wrong "programme" solution, as it would have driven us off a cliff. My world does have stratospherically large budgets, however... you know what, if you want to make real cash wouldn't it be a good idea to address the richest part of the IT market?

But my real ire I reserve for a company I used to really respect but which, at the same time as REST began to get a load of buzz, drove a huge amount of enterprise IT off a cliff. When Java SE 6 was released I said it wasn't enterprise grade, and indeed very rapidly the stupidity of the decision to push JAX-WS into Java SE became apparent (yes, please note I was massively against WS-* in Java SE, partly because if someone wants to be a RESTafarian why the hell should they have to have WS-* cruft in their environment?). This was also the release that added scripting to Java SE.

I'm now seeing, 4 years later, the impact of this stupidity on the enterprise. Java SE 6 is dribbling in under the application servers, but the mentality it represented, namely that Sun was more interested in "Joe Sixpack" and the cool crowd than the enterprise, really helped to ensure that the enterprise side was left to the vendors and Java began to stop being the innovative platform or language. The bits that the enterprise wanted, things like profiles and dynamic loading, were deferred into Java SE 7, which is now (by my reckoning) 2 years overdue.

Sun championed the new "cool" languages and undermined the very thing that made Java good for the enterprise: consistency. Having lots of different languages is rubbish in the enterprise; having the same basic platform from 4 different vendors is much, much better on every level. So now we have people proposing programmes with 4 or 5 different languages and it's being seen as "reasonable", and we are also seeing some great developers doing some interesting things on platforms that will never bring benefits... I can't help but wonder whether Spring or Hibernate would ever have delivered any benefit if it wasn't for the fact that they operated on the common platform... oh wait, I don't have to wonder, they wouldn't have been anywhere near as successful.

So the last 5 years have been poor ones for enterprise IT. WS-* is the only viable system-to-system integration mechanism for large scale programmes, but it's stagnating. REST has some cool stuff at the front end, and for people who can invest in only the very highest calibre individuals, but is delivering bugger all for the average enterprise environment.

Why is this a problem?

Well most of the modern world is driven by computers, computers are what makes economies tick and it's integration that matters in a networked world. The immature and techno-centric way in which IT approaches enterprise solutions is ensuring that, far from being an accelerator that works ahead of business demand, IT is all too often the dragging anchor to growth. This obsession with purist solutions that solve problems that don't exist is just an exercise in intellectual masturbation which is actively harming developed economies.

Far too much shiny, shiny, far too little getting hands dirty with pragmatic realities.

So maybe I'll come out of this funk and people will point to the wonderful things that are massive improvements over the past 5 years and how some of the biggest enterprise IT challenges in the world are being solved by people in application support thanks to these new developments.....

But I won't hold my breath


Business SOA and package delivery

I've been rather quiet this year, mainly due to being flat out on a rather large programme delivery effort. Package and Data (MDM) rather than my normal SOA work, but still very much around the Business SOA approach; more in future on how to do Business SOA for a package and have a really strong governance approach to ensuring a successful package delivery.

But I thought I'd give a brief overview here.
  1. Detail the Business Services - including the "nearby" Business Services that aren't in your scope - this tells people at a high level what you are, and aren't, doing
  2. Create a "Business Catalogue" that details the fine-grained capabilities that the programme will be delivering.
  3. Map the catalogue to the Out of the Box (OOTB) functionalities in the package
  4. Create a strong governance approach around managing changes to the catalogue
  5. Document the capabilities using use cases. This gives you the explicit definition of scope that you need the enterprise to accept. These use cases are documented based on what the package does OOTB, rather than being requirement gathering exercises
That is the high-level view and there will be more to come


Saturday, April 24, 2010

Language in requirements, design and code reviews

Recently I've been doing a bunch of reviews in documents and other artefacts with multiple different groups of people and I've noticed a few things about what works when reviewing and what doesn't. I'm not talking here about the document format or the availability of tea but about how you review documents.

First off some ground rules, what I mean by this is that if you are in a key review position then you should be setting the expectations on what you consider to be good. So before people even start creating the stuff you are going to review spend 5 minutes with them just giving them some context on what you are looking for. This might be as simple as outlining where their piece fits into the broader picture or just making sure they have the right clarity on how they should be structuring what they have been set to do. This initial piece will save you a huge amount of pain later on.

So now when you get to the actual review you should at least be talking more about the content than wasting time telling someone that they've not created it properly and have to do some major rework.

So on into the review. I'm assuming here that you don't use design/requirement/code reviews to bollock people as that would be completely counter productive. If there are big issues pull them aside 1-on-1 later and have the discussion. So that said how to get people to learn from their mistakes?

The key here is language. There are some great phrases and some bad phrases. Let's say that someone has written something down that just isn't clear; you can say

a) "This just isn't clear what you are trying to say"
b) "I'm confused by this bit, could you explain what you mean?"

Now the former says "crap work"; the second says "it's probably me, but let's just check". 9 times out of 10 they'll explain in detail what they mean and you can say the magic words

"Great, now I understand it, you might like to write out what you've just said so no-one else gets confused"

Now let's say they've got something plain wrong. You can say either

a) "That's just wrong"
b) "Umm what would the implications be if we do this?"

Then with b) you go into a discussion where you challenge them with points like "I see, but wouldn't X apply here?". This way you get to find out if it's a mistake or whether they are actually a bit thick. If you do a) you'll never get to know.

Now let's say there is an area where you realise that something you've done isn't clear and the person you are reviewing would benefit if it was clarified (for instance there is a diagram missing which would help explain their area). This is where you get to make the reviewee feel really good AND get work off your plate. The point here is to say something like

"I've just realised that I really should have created a diagram about Y by now as that would help you explain this area. I tell you what could you have a go at creating it and then we'll make sure that everyone sees it once we've got it right"

Here if you are a senior reviewer you are not only helping the person, and getting work away from yourself, you are really making the reviewee want to demonstrate that they can do a good job. That is the main aim with reviews. Catch the errors, help people improve and keep up morale. Kicking people in reviews for errors just doesn't make sense.

Pull the problems onto yourself, have the reviewee explain them and hopefully (if they aren't a muppet) they'll come to the right answer themselves, they'll think you are a great coach and they'll want to work harder for you.

The same does not apply to managers when reviewing project plans that are rubbish, they must be beaten about the head with a stick.


Tuesday, April 06, 2010


Okay, so I talked about Anti-Principles, so now I thought I'd talk about the final thing I like to list out in the principles sections of the projects I do: the non-principles. This might sound like an odd concept, but it's one that has really paid dividends for me over the years. While Principles say what you should do and Anti-Principles say what you shouldn't, the non-principles have a really powerful role.

A non-principle is something that you don't give a stuff about. You are explicitly declaring that it's not of importance or consideration when you are making a decision.

While you can evaluate something against a principle to see if it is good, or against an anti-principle to see if it is bad, the objective of the non-principles is to make clear the things that shouldn't be evaluated against at all. In Freakonomics Steven Levitt talks about "received wisdom" and how it's often wrong. I like to list out in the non-principles those pieces of received wisdom and detail why they aren't in fact relevant.

Scenario 1 - Performance isn't the issue

A while ago I worked on a system where they were looking at making changes to an existing system. A mantra I kept hearing was that performance was a big problem. People were saying that the system was slow and that any new approach would have to be much quicker. So I went hunting for the raw data. The reality was that the current process, which consisted of about 8 stages and one call to a 3rd party system, was actually pretty quick. The automated testing could run through the process in about 6 seconds, with 5 of those being taken up by the 3rd party system (which couldn't be changed); in other words the system itself was firing back pages in around 125 milliseconds, which is lightning quick.
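The arithmetic behind that finding is worth making explicit: 6 seconds end to end, minus the 5 seconds spent in the unchangeable 3rd party call, leaves 1 second spread across the 8 stages the system itself owns.

```java
// Worked check of the numbers quoted above.
public class PerfCheck {
    public static long perStageMillis(long totalMs, long thirdPartyMs, int stages) {
        return (totalMs - thirdPartyMs) / stages;
    }

    public static void main(String[] args) {
        // 6s total, 5s in the 3rd party system, 8 stages owned by us
        System.out.println(perStageMillis(6000, 5000, 8)); // 125
    }
}
```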

So in this case a core non-principle was performance optimisation. The non-principle was

"Any new approach will not consider performance optimisation as a key criteria"

This isn't an anti-principle, as clearly building performant systems is a good thing, but for our programme it was a non-principle: our SLA allowed us to respond in 3 seconds per page (excluding that pesky 3rd party call), so things that improved other core metrics (maintainability, cost of delivery, speed of delivery, etc) and sacrificed a little performance were okay.

Scenario 2 - Data Quality isn't important

The next programme was one that was looking to create a new master for product and sales information; this information was widely seen as being of very poor quality in the organisation and there was a long term ambition to make it better. The first step however was to create the "master index" that cross referenced all the information in the existing systems so unified reports could be created in the global data warehouse.

Again everyone muttered on about data quality and in this case they were spot on. The final result of the programme was to indicate some serious gaps in the financial reporting. However at this first phase I ruled out data quality as being a focus. The reason for this was that it was impossible to start accurately attacking the data quality problems until we had a unified view of the scale of the problem. This required the master index to be created. The master index was the thing that indicated that a given product from one system was the same as one sold from another and that the customer in one area was the same as the customer in another. Once we had that master index we could then start cleaning up the information from a global perspective rather than messing around at a local level and potentially creating bigger problems.

So the non-principle for phase 1 was

"Data Quality improvement will not be considered as an objective in Phase 1, pre-existing data issues will be left as is and replicated into the phase 1 system"

This non-principle actually turned out to be a major boon as not only did it reduce the work required in Phase 1 it meant that the reports that could be done at the end of phase 1 really indicated the global scale of the problem and were already able to highlight discrepancies. Had we done some clean up during the process it wouldn't have been possible to prove that it wasn't a partial clean-up that was causing the issues.

Scenario 3 - Business Change is not an issue

The final example I'll use here is a package delivery programme. The principles talked about delivering a vanilla package solution while the anti-principles talked of the pain of customisation. The non-principle outlined however a real underpinning philosophy of the programme. We knew that business change was required, hell we'd set out on the journey saying that we were going to do it. Therefore we had a key non-principle

"Existing processes will not be considered when looking at future implementation"

Now this might sound harsh and arrogant but this is exactly what made the delivery a success. The company had recognised that they were buying a package because it was a non-differentiating area and that doing the leading practice from the package was where they wanted to get to. This made the current state of the processes irrelevant for the programme and made business change a key deliverable. This didn't however mean that business change was something we should consider when looking at process design. We knew that there had to be change, the board had signed off on that change and we were damned well going to deliver that change.

This non-principle helped to get the package solution out in a very small timeframe and made sure that upgrades and future extensions would be simple. It also made sure that everyone was focusing on delivering the change and not bleating about how things were done today.


So the point here is that the non-principles are very context specific and are really about documenting the received wisdom that is wrong from the programme's perspective. The non-principles are the things that will save you time by cutting short debates and removing pointless meetings (for instance in Scenario 1 a whole stream of work was shut down because performance was downgraded in importance). Non-principles clearly state what you will ignore; they don't say what is good or bad, because you shouldn't measure against them (e.g. in Scenario 3 it turned out that one of the package processes was identical to an existing process; this was a happy coincidence and not a reason to deliberately modify the package).

So when you are looking at your programme remember to document all three types of principles

  1. The Principles - what good looks like and is measured against
  2. The Anti-Principles - what bad looks like and is measured against
  3. The non-principles - what you really couldn't give a stuff about

All three have power and value and missing one of them out will cause you pain.


Friday, February 19, 2010


Okay, so everyone knows what a principle is: it's a core concept that you are going to measure things against. I've seen projects littered with the buggers.

The problem is that there is another concept that is rarely listed, what are your anti-principles?

In the same way as Anti-Patterns give you pointers when it's all gone wrong, Anti-Principles are the things that you will actively aim to avoid during the programme.

So in an SOA programme people will fire up principles of "Loose Coupling", "Clear Interfaces" and the like but they often won't list the Anti-Principles. These are often more important than the Principles. These are the things that indicate danger and disaster.

So what are good (bad?) SOA anti-principles?

Small Return Values
This is where people have been used to batch interfaces where returns are just codes and descriptions, with reports being used to indicate problems. The sort of statement here is "just return a code and description" rather than returning a decent amount of data

Direct Calling
This anti-principle is all about where people just get a WSDL and consume it directly without any proxy or intermediary. It's programme suicide and it shouldn't be done
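As a hedged illustration of the alternative (the names and structure here are mine, not from any product): a thin local proxy gives you one place to put routing, policy and failover, so consumers never bind straight to the provider's endpoint.

```java
// Hypothetical sketch of the "no direct calling" rule: consumers talk to a
// local proxy that owns the endpoint, so the provider can move or be
// mediated without every consumer rebinding to its WSDL.
public class ProxyCalling {

    public interface PricingService {
        double priceFor(String productCode);
    }

    /** The real endpoint, which consumers must never bind to directly. */
    public static class RemotePricingService implements PricingService {
        public double priceFor(String productCode) {
            return 42.0; // stands in for a real remote call
        }
    }

    /** The intermediary: one place for routing, logging, policy, failover. */
    public static class PricingProxy implements PricingService {
        private final PricingService target;
        public PricingProxy(PricingService target) { this.target = target; }
        public double priceFor(String productCode) {
            // mediation hooks (auth, retry, version mapping) would live here
            return target.priceFor(productCode);
        }
    }

    public static void main(String[] args) {
        // Consumers see only the PricingService contract.
        PricingService service = new PricingProxy(new RemotePricingService());
        System.out.println(service.priceFor("ABC")); // 42.0
    }
}
```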

Interface Generation
This anti-principle says that you shouldn't generate your interfaces from code but should design them up front and make them explicit. So from an anti-principle perspective you are banning the "right-click generate WSDL" IDE approach to Web Service exposure.

No matter what project you are doing, you should think about your anti-principles as much as your principles. Think about what is bad practice as much as what is good practice. Make it clear, make it explicit.

Anti-Principles - making clear where people are going wrong.



Possibly my shortest post ever, but seriously:

Do remember, if you have a team, that your job is actually to delegate things to them; otherwise there is no point in having a team.


Thursday, February 04, 2010

Why contracts are more important than designs

Following on from my last post on why IT evolution is a bad thing, I'll go a stage further and say that far too much time is spent on designing the internals of services and far too little on their externals. Some approaches indeed claim that working on those sorts of contracts is exactly what you shouldn't do, as it's much better for the contract to just be "what you do now" rather than something fixed.

To my mind that viewpoint is just like the fake-Agile people who don't document because they can't be arsed, rather than because they've developed hugely high quality elements that are self-documenting. It's basically saying that everyone has to wait until the system is operable before you can say what it does. This is the equivalent of changing the requirements to fit the implementation.

Now I'm not saying that requirements don't change, and I'm not advocating waterfall. What I'm saying is that, as a proportion of the time allocated in an SOA programme, the majority of the specification and design time should be focused on the contracts and interactions between services, and the minority on the design of how those services meet those contracts. There are several reasons for this
  1. Others rely on the contracts, not the design. The cost of getting these wrong grows exponentially with the number of consumers. With the contracts in place and correct, people can develop independently, which significantly speeds up delivery times and decreases risk
  2. Testing is based around the contracts, not the design. The contract is the formal specification; it's what the design has to meet and it's this that should be used for all forms of testing
  3. The design can change but still stay within the contract - this was the point of the last post
The reality however is that IT concentrates far too much on the design and coding of internals and far too little on ensuring the external interfaces are correct for at least a given period of time. Contracts can evolve, and I use the term deliberately, but most often older contracts will still be supported as people migrate to newer versions. This means that contracts can have a significantly longer lifespan than designs.
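A minimal sketch of that longevity in code, with illustrative names and an assumed 20% tax rule: the contract and its tests stay fixed while the design behind it is thrown away and rewritten, and consumers never notice.

```java
// Sketch: the contract outlives the design. A rewritten implementation
// must still satisfy the same contract tests.
public class ContractOutlivesDesign {

    /** The long-lived contract: 20% tax, rounded down, amounts in pence. */
    public interface TaxServiceV1 {
        long taxInPence(long amountInPence);
    }

    /** The original design. */
    public static class OriginalTaxService implements TaxServiceV1 {
        public long taxInPence(long amountInPence) { return amountInPence * 20 / 100; }
    }

    /** A replacement design, years later; same contract. */
    public static class StreamlinedTaxService implements TaxServiceV1 {
        public long taxInPence(long amountInPence) { return amountInPence / 5; }
    }

    /** The contract test that every design, old or new, must pass. */
    public static boolean meetsContract(TaxServiceV1 s) {
        return s.taxInPence(1000) == 200 && s.taxInPence(5) == 1;
    }

    public static void main(String[] args) {
        System.out.println(meetsContract(new OriginalTaxService())
                && meetsContract(new StreamlinedTaxService())); // true
    }
}
```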

As people rush into design and deliberately choose approaches that let them do as little as possible to formally separate areas, enable concurrent development and provide contractual guarantees, they are just creating problems for themselves that professionals should avoid.

Contracts matter, designs are temporary.


Is IT evolution a bad thing?

One of the tenets of IT is that enabling evolution, i.e. the small incremental change of existing systems, is a good thing and that approaches which enable this are a good thing. You see it all the time when people talk about Agile and code quality, and clearly there are positive benefits to these elements.

SOA is often talked about as helping this evolutionary approach as services are easier to change. But is the reality that actually IT is hindered by this myth of evolution? Should we reject evolution and instead take up arms with the Intelligent design mob?

I say yes, and what made me think that was reading Richard Dawkins in The Greatest Show on Earth: The Evidence for Evolution, where he points out that quite simply evolution is rubbish at creating decently engineered solutions:
When we look at animals from the outside, we are overwhelmingly impressed by the elegant illusion of design. A browsing giraffe, a soaring albatross, a diving swift, a swooping falcon, a leafy sea dragon invisible among the seaweed [....] - the illusion of design makes so much intuitive sense that it becomes a positive critical effort to put critical thinking into gear and overcome the seductions of naive intuition. That's when we look at animals from the outside. When we look inside the impression is opposite. Admittedly, an impression of elegant design is conveyed by simplified diagrams in textbooks, neatly laid out and colour-coded like an engineer's blueprint. But the reality that hits you when you see an animal opened up on a dissecting table is very different [....] a haphazard mess that we actually see when we open a real chest.

This matches my experience of IT. The interfaces are clean and sensible. The design docs look okay but the code is a complete mess and the more you prod the design the more gaps you find between it and reality.

The point is that we shouldn't sell SOA from the perspective of evolution of the INSIDE at all; we should sell it as an intelligent design approach based on the outside of the service: its interfaces and its contracts. By claiming internal elements as benefits we are actually undermining the whole benefit that SOA can deliver.

In other words, the point of SOA is that the internals are always going to be a mess and we are always going to reach a point where going back to the whiteboard is a better option than the rubbish internal wiring that we currently have. This mentality would make us concentrate much more on the quality of our interfaces and contracts and much less on technical myths of evolution and dynamism, which inevitably lead into a pit of broken promises and dreams.

So I'm calling it. Any IT approach that claims it enables evolution of the internals in a dynamic and incremental way is fundamentally hokum and snake oil. All of these approaches will fail to deliver the long term benefits and will create the evolutionary mess we see in that engineering disaster, the human eye. Only by starting from a perspective of outward clarity and design, and relegating internal behaviour to the position of a temporary implementation, will we start to create IT estates that genuinely demonstrate some intelligent design.


PS. I'd like to claim some sort of award for claiming Richard Dawkins supports Intelligent Design

Monday, January 25, 2010

Define the standards FIRST

One of the bits that often surprises me, no, in fact it doesn't surprise me, it stuns me, is the amazing way that people don't define the standards they are going to use for their project, programme or SOA effort right at the start. This means the business, requirements and technical standards.

Starting with the business architecture, that means picking your approach to defining the business services. Now you could use my approach or something else, but whatever you do it needs to be consistent across the project, and across the enterprise if you are doing a broader transformation programme.

On requirements it's about structuring those requirements against the business architecture and having a consistent way of matching the requirements against the services and capabilities so you don't get duplication.

These elements are about people, processes and documentation; they really aren't hard to set up, and it's very important that you do this so your documentation is in a consistent format that flows through to delivery and operations.

The final area is the technical standards, and this is the area where there really is the least excuse. Saying "but it's REST" and claiming that everything will be dynamic is a cop-out and really just means you are lazy. So in the REST world what you need to do is
  1. Agree how you are going to publish the specifications to the resources, how will you say what a "GET" does and what a "POST" does
  2. Create some exemplar "services"/resources with the level of documentation required for people to use them
  3. Agree a process around Mocking/Proxying to enable people to test and verify their solutions without waiting for the final solution
  4. Agree the test process against the resources and how you will verify that they meet the fixed requirements of the system at that point in time
This last one is important. Some muppet tried to tell me last year that, as it was REST, the resource was correct as it was; it was in itself the specification of what it should do, and the test harnesses should dynamically discover only what the REST implementation already did. This was muppetry of the highest order, and after forcing the individual to ingest a copy of the business requirements document we agreed that the current solution didn't match the business requirements, no matter how dynamically it failed to do so.
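A sketch of what point 4 means in practice, with the HTTP layer simulated by a plain method and all names illustrative: the test encodes the agreed, written-down spec, and would fail if the implementation drifted, rather than "discovering" whatever the implementation currently does.

```java
// Hypothetical contract test for a REST resource. The expected responses
// come from the documented spec, never from inspecting the implementation.
public class RestContractTest {

    /** Stand-in for a GET on /products/{id}, backed here by a simple map. */
    public static String getProduct(java.util.Map<String, String> store, String id) {
        String body = store.get(id);
        return body != null ? "200 " + body : "404";
    }

    /** The fixed expectations agreed in the published resource spec. */
    public static boolean specTest(java.util.Map<String, String> store) {
        return getProduct(store, "p1").equals("200 {\"id\":\"p1\"}")
            && getProduct(store, "missing").equals("404");
    }

    public static void main(String[] args) {
        System.out.println(specTest(java.util.Map.of("p1", "{\"id\":\"p1\"}"))); // true
    }
}
```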

So with REST there are things that you have to do as a project and programme; they take time and experience, and you might get them wrong and need to update them. If you've chosen to go Web Services, however, and you haven't documented your standards, then to be frank you really shouldn't be working in IT.

So in the Web Service world it really is easy. First off, do you want to play safe and solid, or do you need lots of call-backs in your Web Services? If you are willing to cope without call-backs then you start off with the easy ones:
  1. WS-I Basic Profile 1.1
  2. WSDL 1.1
  3. SOAP 1.1
Now if you want call-backs it's into WSDL 2.0, and there are technical advantages to that, but you can get hit by some really gnarly XML marshalling and header clashes when going between non-WS-I compliant platforms. You could choose to define your own local version of WS-I compliance based around WSDL 2.0, but most of the time you are better off investing in some decent design and simple approaches, like having standard matched schemas for certain process elements and passing the calling service name, which can then be resolved via a registry to determine the right call-back service.

Next up you need to decide if you are going WS-* and, if so, what you want
  1. WS-Security - which version, which spec
  2. WS-RM - which version, which spec
  3. WS-TX - you're kidding, right?
For each of these elements it is really important to say which specification you are going to use as some products claim they support a specification but either support an old version or, more impressively, support a version of the standard from before it was even submitted to a standards organisation.

The other piece is agreeing on HTTP as your standard transport mechanism. Seriously, it's 2010 and it's about time that people stopped muttering "performance" and proposing messaging as an alternative. If you have real performance issues then go tailored and go binary, but 99.999% of the time this would be pointless and you are better off using HTTP/S.

You can define all of these standards before you start a programme and on the technical side there really is little excuse in the REST world and zero excuse in the WS-* world not to do this.


Saturday, January 09, 2010

Think in RPC, develop in anything

Gregg Wonderly made a good comment on the Yahoo SOA list the other day:
I think one of the still largely unrecognized issues is that developers really should be designing services as RPC interfaces, always. Then different service interface schemes, such as SOAP, HTTP (REST), Jini, etc., can more easily become a "deployment" technology introduction instead of a "foundation" technology implementation that greatly limits how and under what circumstances a service can be used. Programming language/platform IDEs make it too easy to "just use" a single technology, and then everything melds into a pile of 'technology' instead of a 'service'.

The point here is that conceptually RPC is very easy for everyone to understand and at the highest levels it provides a consistent view. Now before people shriek that "But RPC sucks" I'll go through how it will work.

First off let's take a simple three-service system where, from an "RPC" perspective, we have the following:

The Sales Service which has capabilities for "Buy Product" and "Get Forecast"

The Finance Service which has capabilities for "Report Sale" and "Make Forecast"

The Logistics Service which has capabilities for "Ship Product" and "Get Delivery Status"

There is also a customer who can "Receive Invoice"

Now we get into the conceptual design stage, where we want to start talking through how these various services work, and we use an "RPC language" to work out how things happen.

RPC into Push
When we call "Make Forecast" on the Finance Service it needs to ask the Sales Service for its Forecast and therefore does a "Get Forecast" call on the Sales Service. We need the Forecast to be updated daily.

Now when we start working through this at a systems level we see that the Finance team's mainframe solution is really old and creaky, but it handles batch processing really well. Therefore, given our requirement for a daily forecast, what we do is take a nightly batch out of the CRM solution and push it into the mainframe. Conceptually we are still doing exactly what the RPC language says, in that the data the mainframe is processing has been obtained from the Sales area, but instead of making an RPC call to get that information we have decided in implementation to do it via batch, FTP and ETL.
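The conceptual call and its batch implementation can be put side by side in a few lines. This is purely illustrative: the function names, record shapes and the sum-of-sales "forecast" are all invented for the sketch, and the real load step would be an FTP/ETL job into the mainframe.

```python
# Conceptual view: Finance calls "Get Forecast" on the Sales Service.
def get_forecast(sales_service):
    return sales_service.forecast()

# Implementation view: a nightly batch extracts the same data from the CRM
# and pushes it into the mainframe. The RPC semantics are preserved even
# though no call is ever made at run time.
def nightly_batch(crm_records):
    # Extract: pull the sales out of the CRM dump.
    sales = [r for r in crm_records if r["type"] == "sale"]
    # Transform: aggregate into a forecast figure (toy logic).
    forecast = sum(r["amount"] for r in sales)
    # Load: in reality an FTP/ETL step into the mainframe.
    return {"forecast": forecast}

print(nightly_batch([{"type": "sale", "amount": 100},
                     {"type": "refund", "amount": 20},
                     {"type": "sale", "amount": 50}]))
```

Architecturally both functions deliver the same capability; only the binding differs.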

RPC into Events
The next piece that is looked at is the sales-to-invoice process. Here the challenge is that historically there has been a real delay in getting invoices out to customers, and it needs to be tightened up much more. Previously a batch was sent at the end of each day to the logistics and finance departments and they ran their own processing. This led to problems with customers being invoiced for products that weren't shipped, and a 48-hour delay in getting invoices out.

The solution is to run an event-based system where Sales sends out an event on a new sale; this is received by both Finance and the Logistics department. The Logistics department then ships the product (Ship Product), after which it sends a "Product Shipped" event, which results in the Finance department sending the invoice.

So while we have the conceptual view in RPC speak, we have an implementation in event speak.
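The event implementation can be sketched with a toy in-process publish/subscribe mechanism. Everything here is illustrative: real systems would use a message broker, and the event names, handlers and order number are invented for the example.

```python
# Toy publish/subscribe sketch of the sales-to-invoice flow: Sales publishes
# a "new sale" event, Logistics ships and publishes "product shipped", and
# Finance invoices on that. In reality this would ride on a message broker.

subscribers = {}

def subscribe(event, handler):
    subscribers.setdefault(event, []).append(handler)

def publish(event, payload):
    for handler in subscribers.get(event, []):
        handler(payload)

invoices = []

def on_new_sale(sale):          # Logistics: Ship Product, then announce it.
    publish("product_shipped", sale)

def on_product_shipped(sale):   # Finance: send the invoice on shipment.
    invoices.append("Invoice for order %s" % sale["order"])

subscribe("new_sale", on_new_sale)
subscribe("product_shipped", on_product_shipped)

publish("new_sale", {"order": 42})
print(invoices)
```

Note that the conceptual capabilities (Ship Product, the invoicing step) are unchanged; only the wiring between them has become events.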

The final piece is buying the products and getting the delivery status of an order. The decision was made to do this via REST on a shiny new website. Products are resources (of course); you add them to a shopping basket (by POSTing the URI of the product into the basket), the basket then gets paid for and becomes an Order. The Order has a URI and you simply GET it to have the status.

So conceptually it's RPC but we've implemented it using REST.
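The REST flavour can be sketched with an in-memory stand-in for the web layer. All URIs and resource shapes here are invented for illustration; a real implementation would of course sit behind an HTTP server.

```python
# In-memory sketch of the REST implementation: products are resources, you
# POST a product URI into a basket, the paid basket becomes an Order, and a
# GET on the Order URI returns its status. URIs are illustrative.

resources = {"/products/1": {"name": "Widget"}}
basket = []

def post_to_basket(product_uri):
    """POST a product's URI into the shopping basket."""
    basket.append(product_uri)

def pay_basket():
    """Paying turns the basket into an Order resource with its own URI."""
    order_uri = "/orders/1001"
    resources[order_uri] = {"items": list(basket), "status": "PROCESSING"}
    basket.clear()
    return order_uri

def get(uri):
    """GET any resource by its URI."""
    return resources[uri]

post_to_basket("/products/1")
order = pay_basket()
print(get(order)["status"])
```

Again the conceptual capabilities ("Buy Product", "Get Delivery Status") are intact; they have simply been bound to resources and verbs.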

Conceptual v Delivery

The point here is that we can extend this approach of thinking about things in RPC terms through an architecture and people can talk to each other in this RPC language without having to worry about the specific implementation approach. By thinking of simply "Services" and "Capabilities" and mentally placing them as "Remote" calls from one service to another we can construct a consistent architectural model.

Once we've agreed on this model, that this is what we want to deliver, we are then able to design the services using the most appropriate technology approach. I'd contend that there really aren't any other conceptual models that work as consistently. A process model assumes steps, a data model assumes some sort of entity relationship, a REST model assumes it's all resources and an event model assumes it's all events. Translating between these different conceptual models is much harder than jumping from a conceptual RPC model that just assumes Services and Capabilities, with the Services "containing" the capabilities.
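The Services-containing-Capabilities model can even be written down as plain data, with the eventual binding recorded per capability. The service and capability names come from the example above; the binding labels are this sketch's own invention.

```python
# The conceptual model as data: each Service "contains" Capabilities, and
# each capability is later bound to an implementation approach. The binding
# labels are illustrative, reflecting the example in the post.

architecture = {
    "Sales":     {"Buy Product": "REST", "Get Forecast": "batch ETL"},
    "Finance":   {"Report Sale": "event", "Make Forecast": "batch ETL"},
    "Logistics": {"Ship Product": "event", "Get Delivery Status": "REST"},
}

def capabilities(service):
    """List a service's capabilities regardless of how each is delivered."""
    return sorted(architecture[service])

print(capabilities("Sales"))
```

The conversation between architects happens at the level of `capabilities("Sales")`; the binding column is a delivery decision that can change without disturbing the model.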

So the basic point is that architecture, and particularly business architecture, should always be RPC in flavour. It's conceptually easier to understand and it's the easiest method to transcribe into different implementation approaches.
