Archive for March, 2011

An Old Cliché: Less is More

Tuesday, March 8th, 2011

“Simplicity means the achievement of maximum effect with minimum means.” – Albert Einstein

In his book “Rework”, Jason Fried boasts that his company’s software has far fewer features than its competitors’.  When I first read that, I found it odd, but it got me thinking.

There are many reasons why I think larger software packages or systems are “on their way out” in favor of a combination of smaller components, calculation services, and rules engines orchestrated together.  The area I’d like to highlight here is “feature bloat” in those larger software packages.

Having been a Product Manager for various life insurance software products, and having reviewed many others, I would estimate that at least 50% of the overall functionality is not needed or used by the average customer (and up to 75% of the domain-specific functionality).  Over the past 20 years, one of the objectives for many insurers has been to consolidate legacy systems onto a single platform.  As a result, vendors were forced into supporting all different types of products and business processes.  To be considered for such consolidation projects, vendors had to be able to support traditional products, universal life, and annuities, both fixed and variable.  Then, when vendors needed to expand, they went into other markets (health, disability, LTC) or geographies (Asia-Pac, Europe, South America), and their systems grew more and more.  Policy administration alone wasn’t enough, either: companies had to expand their breadth of functionality to the front office, underwriting, claims, billing, and so on.  On top of that, over the years they’ve chased whatever the cool feature of the day was, or whatever some programmer developed over a given weekend.

To say the least, the size of these systems grew immensely as the vendors kept up in the features-and-functions race.  And that’s not even taking into consideration the vendors who were acquired, whose software was “merged into a larger enterprise”.

On the surface, it sounds like a good thing to have all of this functionality in one place.  So, why is all of this a problem?

1. The systems were initially architected for a purpose far narrower than everything they ended up accommodating, so at each step of the way more and more code was layered on top of the original foundation (much like a foundation that wasn’t big enough for the building sitting on top of it).

2. Each of the major additions usually resulted in some re-architecture or software-pattern changes that left the system with a lot of inconsistent ways of doing things.

3. In addition to all of that extra functionality that isn’t really needed, there’s always a ton of customization required to meet the customer’s real needs.  Because that customization has to navigate all of the extra code and configuration (the aforementioned feature bloat), its cost, effort, and risk are extremely high.  One could argue it’s a 50/50 call whether it’s just easier to start from scratch.

4. The end result, all of that customization (configuration or code) wrapped around all of the code in this large system, is very difficult and costly to maintain.  So you’re now stuck dealing with this problem for the life of the software.

5. Performance usually suffers, both from performing more operations than the business requires and from a larger memory footprint.  The answer is usually “add more servers”, which then blows up the infrastructure costs (both hardware and additional software licenses).

6. Tons of inconsistent code and inter-dependencies make it difficult to use lower-cost resources (off-shore or less experienced) and usually result in requiring expert developers or significant ties to the vendor.

7. The vendors have to keep up with their diverse customer group and end up spreading their investments pretty thinly.  Worse, the majority of the investment dollars go to whatever the hottest new topics are, which leaves the average customer without a good return on the recurring license/maintenance fees they’re paying the vendor…which probably doesn’t matter anyway, because the combination of the software architecture and all of the customizations makes real upgrades too difficult.

By moving to a model of smaller components and services, companies can meet their immediate business needs with a lot less code, configuration, hardware, and headaches.  It allows them to follow an evolutionary expansion plan in which they add only what they need; the sketch below gives a feel for how small such a component can be.  The net result is smaller implementations, lower-cost maintenance, and more technology choices over the long run.
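To make the “smaller components” idea concrete, here is a minimal sketch of a single-purpose calculation service.  This is my own illustration, not FAST’s design: the product names and rates are invented, and a real component would pull them from a rules engine.  The point is that one small service can do one job, price a policy, and ship with nothing else.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical rates per $1,000 of face amount; a real component would
    # source these from the insurer's actuarial rules engine.
    RATE_PER_THOUSAND = {"term": 0.45, "universal": 0.80}

    class PremiumHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # Expects JSON like {"product": "term", "face_amount": 500000}
            length = int(self.headers["Content-Length"])
            req = json.loads(self.rfile.read(length))
            premium = RATE_PER_THOUSAND[req["product"]] * req["face_amount"] / 1000.0
            body = json.dumps({"annual_premium": round(premium, 2)}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # One component, one responsibility; orchestration lives elsewhere.
        HTTPServer(("", 8080), PremiumHandler).serve_forever()

An orchestration layer or rules engine calls services like this one and composes them, and new components get added alongside only when the business actually needs them.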

Tom Famularo

FAST

Tapping the Potential of the Amazon Cloud

Friday, March 4th, 2011

The recently released “2011 US Insurance CIO Survey” from Celent provides a great snapshot of where insurance CIOs are focused in a number of areas.   While cloud computing is clearly emerging, it’s interesting that of the surveyed respondents:

  • No companies reported that cloud computing is “in wide use”
  • 38% of companies have cloud computing “in limited use”
  • 52% “will investigate/pilot in 2011”

Obviously, it’s still in the early stages, and it will take a few years for the model to develop and for cloud computing to be fully exploited.  Because the benefits are so great, I believe that companies and providers will quickly start to find new and interesting ways to tap into the vast potential of “the cloud”.

In our case, I can’t see how we could have gotten FAST off the ground without the Amazon Cloud.  In 2010, we paid Amazon between $1,500 and $2,000 each month – about $20,000 annualized.  What do we get for that?

  • 6 Windows servers.  Each has 100 GB of disk space, dual core processors, and 7.5 GB of memory
  • 2 Linux servers.  Each has 150 GB of disk space, a single processor, and 2 GB of memory

If we need a new server, it literally takes minutes.  We pay for what we need, when we need it.  We don’t have an MIS department.  We don’t have a server room.
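For a sense of what “minutes” looks like in practice, here is a minimal sketch of provisioning a server programmatically with the boto Python library, a common way to script EC2 at the time.  The region, machine image ID, and key pair name are placeholders, not our actual configuration.

    import boto.ec2

    # Connect to a region and launch one instance; the instance is typically
    # reachable within a few minutes of this call returning.
    conn = boto.ec2.connect_to_region("us-east-1")
    reservation = conn.run_instances(
        "ami-00000000",            # placeholder machine image ID
        instance_type="m1.large",  # roughly the dual-core / 7.5 GB class above
        key_name="example-keypair",
    )
    print("Launched %s" % reservation.instances[0].id)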

Doing this “the old way”, we would have spent vast sums on hardware and on building out a server room.  And we would have had to hire experts to configure and manage the servers.  Even if we had done this “on the cheap”, we would have spent $250,000 to $300,000 on hardware and staff (compared to about $20,000).  For a startup, it’s almost impossible to replicate the quality and stability of the Amazon environment at any price.  Having lived through that in the past, I know there’s a tremendous drain on management and on staff productivity as you iterate toward a “rock solid” environment.

In other words, we paid Amazon less than 10% of what it would have cost to do it ourselves. THAT’S 90% LESS for something better.

So, does this have any implication for the insurance industry more broadly?  It sure does.  While the option of putting everything “up on the cloud” would not be practical today for a large insurer, as the industry moves to a more SOA-based approach it is practical to deploy discrete web services or components on the cloud.   The potential is obvious and the benefits could be extraordinary.

John Gorman

FAST