Archive for August, 2010

If Policy Admin Systems Are a Thing of the Past, What’s Next?

Friday, August 27th, 2010

Last month, I shared the reasons why I believe large policy administration systems are a thing of the past. As a follow-up to that post, I would like to offer some insight into what I think the next generation might look like.

Certainly, future solutions will leverage SOA and small, reusable components. Consider a life insurance financial transaction: a series of discrete web services, each sitting on top of a standalone component, can be orchestrated to perform the overall transaction. Typically this logic is buried in code somewhere, but by structuring it this way, the components remain decoupled.
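As a rough illustration (the service names, payloads, and charge logic here are hypothetical, not any real product's API), a withdrawal transaction might be orchestrated like this, with each standalone service receiving everything it needs as parameters:

```python
# Hypothetical sketch: a life insurance withdrawal composed from
# discrete, standalone services. Each function stands in for a web
# service call; none of them reads data it wasn't handed.

def validate_policy(policy):
    """Service 1: confirm the policy is eligible for the transaction."""
    return policy["status"] == "active"

def calculate_surrender_charge(policy, amount):
    """Service 2: compute any charge on the withdrawn amount."""
    rate = policy.get("surrender_charge_rate", 0.0)
    return round(amount * rate, 2)

def post_financial_transaction(policy, amount, charge):
    """Service 3: apply the debit and report the updated value."""
    policy["account_value"] -= amount + charge
    return {"withdrawn": amount, "charge": charge,
            "remaining_value": policy["account_value"]}

def process_withdrawal(policy, amount):
    """Orchestration layer: sequences the standalone services.
    The sequencing logic lives here, not buried inside any component."""
    if not validate_policy(policy):
        raise ValueError("policy not eligible for withdrawal")
    charge = calculate_surrender_charge(policy, amount)
    return post_financial_transaction(policy, amount, charge)
```

The point is that the orchestration function owns the sequencing; in production that role would fall to an orchestration tool rather than hand-written code, but either way it stays outside the components themselves.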

It is fairly easy for people to describe and agree on what is needed, but pretty difficult to actually get there.  So, here is a brief explanation of the steps that can be taken.

  • Build all new components to be small and able to stand alone – Every function a component performs must assume its data arrives from the outside. A component may only read data that is specific to and contained within it; all other necessary data should be received as parameters.
  • Leverage an existing component when you can – To reuse existing functionality, wall it off and integrate with it however you can (there is no one-size-fits-all approach).
  • Separate out orchestration – Using 3rd-party software for orchestration forces a discipline that keeps the components truly standalone and guarantees the isolation of orchestration logic. We use Microsoft BizTalk and IBM Process Server to do this. However, there may be times when, for performance, you need to write code so that components talk to each other directly; that code has to be isolated into its own layer.
  • Find the right level of granularity, which can be an art – I have debated this point with people over the years. At one extreme, you can make every single method its own orchestratable web service. The other extreme is the monolith most people have today. Like most things, the answer lies in the middle and is, unfortunately, case by case. We draw the line at useful components that can do their own jobs.
  • Assume data can come from anywhere – The user interface, orchestration, web services, and business process logic should not care where the data came from. The data source should be abstracted away from those layers. In addition, data translation needs to be handled “outside” your core code (either isolated in its own code or, preferably, in a 3rd-party tool).
  • Continuously eliminate redundancy – There is no way to completely eliminate redundancy, but you can certainly try. Purchasing existing components with single functional purposes is the best way to start off with less redundancy. Even so, you will still have to keep finding the places where redundancy creeps in and pull it out.
  • Use a modern “technology stack” – Wrapping your legacy system keeps some of the older technologies in play, but if you limit its functional purpose, you also limit what needs to be maintained. The newer components can leverage more of the modern technologies and gain the benefits of development efficiency as well as organizational scalability.
  • Implement as needed – Keep it simple and add only the functionality that’s really needed. It’s important to avoid the “bloat” of having every feature possible. By stringing together only the components the business needs, you avoid having to maintain things you don’t use.
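To make a couple of the steps above concrete, here is a small, hypothetical sketch (all names are illustrative): a standalone component that receives everything it needs as parameters, and a data source abstracted behind an interface so the calling layers never know where the data came from:

```python
from abc import ABC, abstractmethod

class PolicyDataSource(ABC):
    """Abstraction: callers do not care whether data comes from a
    legacy system, a modern database, or a web service."""
    @abstractmethod
    def get_policy(self, policy_id):
        ...

class InMemoryDataSource(PolicyDataSource):
    """One interchangeable implementation; a stub for illustration."""
    def __init__(self, policies):
        self._policies = policies

    def get_policy(self, policy_id):
        return self._policies[policy_id]

def compute_annual_premium(face_amount, rate_per_thousand):
    """Standalone component: everything it needs arrives as
    parameters, so it can be reused and orchestrated anywhere."""
    return face_amount / 1000 * rate_per_thousand

# Orchestration glue: fetch from the abstracted source, then pass
# plain data into the component.
source = InMemoryDataSource({"P1": {"face_amount": 250000, "rate": 1.2}})
policy = source.get_policy("P1")
premium = compute_annual_premium(policy["face_amount"], policy["rate"])
```

Swapping `InMemoryDataSource` for an implementation backed by a legacy system or a 3rd-party tool would leave the component untouched, which is the decoupling the list above is driving at.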

In the future policy administration model, companies can incrementally add functionality to their enterprise without major conversions or “big bang” projects. The business gets benefits sooner, and when priorities change, it can adapt.

If you want to learn about the next generation of Life Insurance enterprise components, see

Tom Famularo

The Game of Telephone: Why Does the Customer Rarely Get What They Want?

Thursday, August 19th, 2010

“Documents create illusions of agreement…100 different people can read the same words, but in their heads, they’re imagining 100 different things” – Jason Fried, “Rework”

This is part of my series of blogs regarding how iterative development methodologies like Agile solve practical problems that have been haunting software implementation projects for years.

Too often, too much time and money are spent, the customer is frustrated, and the wrong things are built into a software application. I’ve seen projects at many companies where more than 50% of the effort came after the development team was “code complete.” While I have many opinions on this broad topic, I am going to narrow my focus to the top reasons why the notion of gathering business requirements through traditional means is flawed, as well as some tips for getting around it.

I have to apologize up front: most of my writing here describes why this problem happens, while my tips for mitigation are pretty brief. My thinking is that this is one of those cases where recognizing the problem is a big part of the solution.

People Develop Software for People

One of the first things to remember is that we are building something abstract (software for businesses), not a physical thing that we can represent in drawings (like bridges or rocket ships). Much of the functionality in business software is open to human interpretation. Because so many people involved with building or implementing software are analytical, with an engineering mindset, they want everything to fit into nice algorithms with a set of true and false facts. Computers are, after all, just a set of 1’s and 0’s, right? People, however, are not. There are too many variables related to a person for their software “needs” to be boiled down to “the right answer.”

Software “Requirements”

The typical process that people have been implementing for years seems to make sense:

  1. A team of Business Analysts interviews a series of users, managers and other BAs to figure out what the company needs to do to run their business
  2. The BAs go back to their hotel room or some other remote location and compile all of their notes and sketch out their thoughts
  3. Those thoughts will then get turned into pages and pages of documents that contain process flows, use cases, descriptions of requirements and much more
  4. The BAs then mail the documents back to their counterparts for review, gather feedback, and incorporate it
  5. Then comes judgment day – the sign-off process. The customer signs off and the requirements are “locked down.”

NOTE:  In some cases, there is a “liaison” between the end customer and the interviewer (BA).

So, what could possibly go wrong with this process? It is “tried and true.” It all comes down to one significant flaw: everything is open to interpretation, and we run into the “human factor.” The interviewer and the interviewee(s) have different frames of reference to draw upon. All of the experiences leading up to those key interviews drive their perspectives. Typically, the interviewer is thinking about the “new system,” while the interviewee is thinking about the same system they’ve been using for the past 20 years.

We’ve all seen many discussions where you can tell two people are using the same words but mean different things. When those parties have completely different frames of reference, they hear the words and process them the way that makes sense to them. So, that’s why we put it in writing. Right? Now we can be sure that we both mean the same thing. Unfortunately, we humans can process ideas in oral communication about 5 times faster than we can when reading. The benefits of body language, facial expressions, and inflection are lost. So, in the traditional system implementation process, the BA writes down what he or she was verbalizing during meetings, and the client now understands even less. However, for many valid reasons, the client does not want to slow down the process, so he or she asks only a few questions and provides some limited feedback. The feedback is then interpreted by the original author – the BA. This process continues until an “agreement” is reached – which doesn’t mean that what is in each other’s heads is the same. As they say, “there are many ways to skin a cat.” This holds true for software implementation.

Detailed Design

My favorite part comes in next…detailed design documents are then written telling the client what is going to change within an application.  In a vendor situation, the client really doesn’t know the details of what is already in that application.  So, you then ask the client to “sign off” on the delta of something where the starting point is vague to the client and you can only hope that you have an exact interpretation of the end state.

Development Time

Developers are notorious for not being the best readers. Typically, they were at the opposite end of the spectrum from the English majors in college. Reading text written for the end user to understand runs counter to the make-up of someone who wants to read numbers, technical jargon, and instructions. So, naturally, they are interpreting what someone else has already interpreted. Rarely do they get it right. No surprises here – this is what everyone expected – which is why we have the functional test after the unit test. Note: In many cases, the developers aren’t trusted to do the work on their own, so they often have a Tech Lead in between – yet another person to translate.

The Functional Test

First, many organizations skip this step and go straight to a QA person to review what the programmer did. In those cases, there is yet another person interpreting the requirement. Those who have implemented this step have noticed some benefit. Unfortunately, most of the time, when the original BA tests the functionality, it doesn’t operate the way they thought it should.

Hint: This part actually works. The BA then communicates with the developer in small chunks, quite often sitting with them, to point out what isn’t correct. The developer then makes the changes and hands it back. They iterate until the functionality works. So, at least now, we’ve eliminated one more level of interpretation…what is developed now matches the interpretation of the BA.


From the time the customer indicated what they wanted to the time they actually see something, several months (in some cases over a year) have passed. In that time, many things could’ve happened: turnover in client personnel, reprioritization of business objectives, frustration with project delays and cost overruns, or, simply, now that they see what was built, they realize it just isn’t what they wanted.

I’ve been involved in too many cases of conflict resolution after the fact.  The development team built something different than the customer thought they were going to get.  In most cases, when I look at the requirements, I can see both “sides” of the story.  Is this anyone’s fault?  Usually, it is not.  It was a flawed process in the first place.

Unfortunately, the amount of time lost and dollars spent in this process were significantly more than they needed to be.  Most experts would agree that so many of the problems related to software projects are related to “bad requirements,” but why do so many people still follow this same process?

Why Iterative Development Works

Here are some pretty simple ways that an iterative development process can reduce the cost, the timeline, and the frustration meter. I am not going to describe the details of those processes; you can pick up any book or simply Google “Agile.”

  • Interpretation discrepancies are found much sooner in the process (before a significant amount of time is spent heading in the wrong direction)
  • Since so many requirements can be supported in different ways, the customer might even prefer the misinterpreted version that was quickly built – more likely when it comes sooner rather than later, and as a discrete change rather than combined with tons of other changes
  • Developers have more of a sense of ownership when they are closer to the customer and are more likely to be working as part of the team
  • Functionality is broken down into smaller parts (“stories”) that can be prioritized. It often turns out that not all the pieces the customer thought they needed are really needed (this is a cornerstone of Agile and worth its own topic for a later date)
  • Fewer people have to learn and internalize the requirements

If your organization is struggling with some of the things I describe above (whether with internal development or with vendors), then I encourage you to consider switching over to an iterative development methodology like Agile.

Tom Famularo