
Data Redundancy – Why is it that it’s not acceptable in a database but okay across the enterprise?

Thursday, April 25th, 2013

STP – It’s been the “in” thing, don’t you know? Everyone is doing it.  A while back, this hot new acronym was all the rage, and for the most part still is.

For a while now, in response to the “we need it now” mentality, insurance companies have emphasized Straight-Through Processing (STP) and how it can help them adapt to the marketplace.

STP, at its core, moves data from a starting point to an end point without human intervention. In the insurance world, this may mean moving data from an e-app system to a new business system, to a policy admin system, to a commission system, to a web portal for clients and producers.

However, in the rush to implement STP and use it to reduce throughput time, eliminate human intervention, and increase ease of doing business, data often ends up stored in multiple systems along the way. STP improved in-good-order (IGO) business by reducing the possibility of human error during entry, but it exposed new issues when providing data to downstream consumers and systems. Without a post-entry process in place, data would often fall out of sync: data initially propagated through systems after user entry could produce a multitude of issues if a user in one of those systems later updated it. This essentially recreated one of the very issues STP was meant to solve.

What do I mean? Let’s quickly examine some of the pitfalls of STP when not implemented correctly.

Significant redundancy of functionality and data across traditional applications:

Traditional systems of any type that store data are typically coupled with a dedicated database (see Fig 1.0). STP often creates a copy of the data in each of these databases so that each system can access it. Thus, although the data was entered only once, it is stored in numerous places.

Potential for out-of-sync data:

An application with an expected premium is entered once through a front end and propagated throughout the systems. When the check is received, the information is updated in the policy admin system, but unless a procedure is put in place, there is no way to update the data in the producer portal or customer portal.
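To make the problem concrete, here is a minimal sketch in Java. The policy number, premium amounts, and the three per-system stores are hypothetical; the point is simply that when each system keeps its own copy, an update in one leaves the others stale.

// Hypothetical illustration of the out-of-sync problem: the expected premium
// is copied into each system's own store at entry time, so a later update in
// the policy admin system leaves the portal copies stale.
import java.util.HashMap;
import java.util.Map;

public class StaleCopyDemo {
    public static void main(String[] args) {
        // Each system keeps its own copy of the premium, keyed by policy number.
        Map<String, Double> policyAdmin    = new HashMap<>();
        Map<String, Double> producerPortal = new HashMap<>();
        Map<String, Double> customerPortal = new HashMap<>();

        // STP propagates the expected premium from the e-app to every system.
        String policy = "POL-1001";
        double expectedPremium = 1200.00;
        policyAdmin.put(policy, expectedPremium);
        producerPortal.put(policy, expectedPremium);
        customerPortal.put(policy, expectedPremium);

        // The check arrives and only the policy admin system is updated.
        policyAdmin.put(policy, 1250.00);

        // Without a sync procedure, the portals still show the old value.
        System.out.println("Policy admin:    " + policyAdmin.get(policy));    // 1250.0
        System.out.println("Producer portal: " + producerPortal.get(policy)); // 1200.0 (stale)
        System.out.println("Customer portal: " + customerPortal.get(policy)); // 1200.0 (stale)
    }
}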

Significant maintenance costs:

Time and resources. It takes both to manage syncing this data across systems. To sync correctly, procedures and methods must be put in place to update the data in each system; these prove costly, time-consuming, and extremely cumbersome in the long run.

Inefficient business processes designed around systems:

Business processes suffer because accurate data is rarely provided in real time. As a result, we have to step out of the STP flow to handle exception processing, which typically involves redundant and sometimes illogical manual workarounds.


Data should be captured AND stored only once, then exposed to any system or consumer that requires it. A real-time, component-based approach that has the ability to hook up with any existing system solves this issue.

Components (methods/events) would link together through web services and enable business processes to communicate with one another. This communication lets data reside in a single repository and permits sharing it across each system without storing additional copies.
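Here is a minimal sketch of that idea, under the assumption of a hypothetical PremiumService component. The service and repository names are illustrative, not any particular vendor's API; the point is that consumers call the component rather than keeping their own copies, so every caller sees the single repository's current value.

// Hypothetical component backed by a single system of record; downstream
// consumers call the service instead of storing their own copies.
import java.util.HashMap;
import java.util.Map;

// The interface a web service facade would expose to any consumer.
interface PremiumService {
    double getExpectedPremium(String policyNumber);
    void recordPayment(String policyNumber, double amount);
}

// One component backed by the single repository.
class PremiumComponent implements PremiumService {
    private final Map<String, Double> repository = new HashMap<>();

    public PremiumComponent() {
        repository.put("POL-1001", 1200.00); // entered once via the e-app
    }

    @Override
    public double getExpectedPremium(String policyNumber) {
        return repository.get(policyNumber);
    }

    @Override
    public void recordPayment(String policyNumber, double amount) {
        repository.put(policyNumber, amount); // one update, visible everywhere
    }
}

public class ComponentDemo {
    public static void main(String[] args) {
        PremiumService service = new PremiumComponent();

        // Producer and customer portals both read through the service.
        System.out.println("Producer portal sees: " + service.getExpectedPremium("POL-1001"));

        // The payment is recorded once against the single repository...
        service.recordPayment("POL-1001", 1250.00);

        // ...and every consumer immediately sees the updated value.
        System.out.println("Customer portal sees: " + service.getExpectedPremium("POL-1001"));
    }
}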

Much as database normalization organizes data within tables to prevent redundancy, the use of components normalizes business processes across the enterprise. Components maintain integrity through web service hooks that only need to identify the method being invoked and its input in order to provide the caller the necessary values defined as output.
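A rough sketch of such a hook follows. The method name "getPolicyStatus", the registration mechanism, and the key/value payloads are all hypothetical; it only illustrates the pattern of invoking a component by method name with an input and receiving the defined output, without the caller knowing where the data is stored.

// Hypothetical hook: identify a registered component method by name, apply
// the caller's input, and return the defined output values.
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class ServiceHookDemo {
    // Registered component methods, keyed by the name the caller invokes.
    private final Map<String, Function<Map<String, String>, Map<String, String>>> methods = new HashMap<>();

    public void register(String name, Function<Map<String, String>, Map<String, String>> method) {
        methods.put(name, method);
    }

    // The hook: method name plus input in, output values back.
    public Map<String, String> invoke(String name, Map<String, String> input) {
        return methods.get(name).apply(input);
    }

    public static void main(String[] args) {
        ServiceHookDemo hook = new ServiceHookDemo();

        // A component registers a "getPolicyStatus" method against the single repository.
        hook.register("getPolicyStatus", input -> {
            Map<String, String> output = new HashMap<>();
            output.put("policyNumber", input.get("policyNumber"));
            output.put("status", "In Force"); // looked up from the system of record
            return output;
        });

        // Any consumer only needs the method name and its input to get the output.
        Map<String, String> input = new HashMap<>();
        input.put("policyNumber", "POL-1001");
        System.out.println(hook.invoke("getPolicyStatus", input));
    }
}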