Learn how you can provide real-time status updates on insurance policies and claims, as well as personalized reports and statistics for in-field adjusters and agents.
Doing things better and faster has been a human quest since the dawn of our intellect. That quest gave us the first stone tool and the wheel, and it led us to develop supersonic jets.
The same is true for our eagerness to understand and use data instantaneously and convert it into actionable information. With access to large processing and storage resources, our ability to act on information in real time has become the new frontier of innovation.
What is Real-Time, Really?
First, let’s clarify how real real time truly is.
That depends on the context you want to use it in. Just as there is no truly ‘unstructured’ data (only data whose structure we haven’t yet understood), there is no true ‘real-time’ data either; it’s all near real-time. When we talk about real-time data or real-time systems, we are essentially talking about systems that let you work with data before actually storing it. In other words, real time denotes our ability to use data as soon as it arrives, rather than storing it first and analyzing it later. That is the primary significance of the term real-time: using data in the present rather than in the future.
Real-Time Data in Insurance
If we look at this through the lens of the insurance industry, we see a need for data to be available much faster, but as in any other industry, that need is not equal across use cases. Financial and statistical reporting, for example, are snapshots in time, so their timeliness needs aren’t necessarily real-time; they’re more like end-of-day, end-of-week, or end-of-month.
Whereas agents and adjusters in the field need a timelier turnaround – within seconds or minutes – on the status and feedback of a claim.
That’s not happening today. Here’s how you solve this dilemma:
Historically speaking, aggregate reporting use cases became the bread and butter of BI and data professionals, while the application development world tackled the more ‘real-time’ challenges. This seemingly harmless division caused solutions to become more siloed, which over time baked latency into the reporting world and aggregation limitations into the application world. It shouldn’t have to be this way.
Until recently, whenever we talked about moving data in real time, we inevitably talked about using some version of a service bus or message queue. These systems provide low latency and high availability, but their main drawbacks are extensive development lifecycles and a certain brittleness when changes must be coordinated throughout the downstream pipeline.
Not to mention, we end up changing the shape of the data multiple times along the way. But as data replication technologies become more robust, they open an opportunity to keep relational databases in sync – in a ‘real-time’ manner – without incurring huge development or maintenance costs. These technologies also let us retain the ‘shape’ of the data and accommodate changes without much hassle, and in a decoupled manner.
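To make the idea concrete, here is a minimal, hypothetical sketch of how change-data-capture-style replication works: the source database emits ordered row-level change events, and a replica applies them as-is, so the data keeps its original shape. The event format and table names are invented for illustration; real replication tools operate on the database’s transaction log.

```python
def apply_change(replica: dict, event: dict) -> None:
    """Apply one row-level change event to an in-memory replica of a table."""
    key = event["key"]
    if event["op"] in ("insert", "update"):
        # The row is copied verbatim -- no reshaping on the way in.
        replica[key] = event["row"]
    elif event["op"] == "delete":
        replica.pop(key, None)

# Replica of a hypothetical 'policies' table, keyed by policy id.
policies = {}
events = [
    {"op": "insert", "key": 1, "row": {"policy_id": 1, "status": "active"}},
    {"op": "update", "key": 1, "row": {"policy_id": 1, "status": "lapsed"}},
]
for e in events:
    apply_change(policies, e)
```

Because consumers only see a stream of self-describing events, upstream and downstream systems stay decoupled: a new column simply appears in the row payload without breaking the pipeline.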
With these mechanisms, we can maintain multiple copies of an application database: a live one for the online application and another for data solutions. This allows us to design solutions that require ‘atomic’ data without waiting for additional processing to occur. There is an architectural catch, though: application databases are highly normalized by design, and they are not easy to use for analytical purposes.
Therefore, we still need to modify and reorganize these structures to make them conducive to analytical use. But since we have a truly separate physical copy, we can query it as frequently as we need without worrying about adversely affecting application performance.
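As a small illustration of that reshaping step, the sketch below flattens two normalized tables into one analytics-friendly view using SQLite. The schema (`policy`, `claim`) and column names are hypothetical stand-ins for a real, much wider insurance schema.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE policy (policy_id INTEGER PRIMARY KEY, holder TEXT);
    CREATE TABLE claim  (claim_id INTEGER PRIMARY KEY,
                         policy_id INTEGER REFERENCES policy(policy_id),
                         status TEXT);
    INSERT INTO policy VALUES (1, 'A. Smith');
    INSERT INTO claim  VALUES (10, 1, 'open'), (11, 1, 'paid');
""")

# A denormalized view does the join once, so dashboards and reports
# can query a single flat structure instead of walking foreign keys.
con.execute("""
    CREATE VIEW claim_status_flat AS
    SELECT c.claim_id, p.holder, c.status
    FROM claim c JOIN policy p ON p.policy_id = c.policy_id
""")
rows = con.execute(
    "SELECT holder, status FROM claim_status_flat ORDER BY claim_id"
).fetchall()
```

Because the view runs against the replicated copy, heavy analytical queries never compete with the online application for resources.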
This lets us start designing micro-batch jobs and trickle-down data feeds that keep analytics-friendly structures up to date with hourly or twice-daily loads. That, in turn, enables multiple dashboard refreshes per day and mid-day financial reporting.
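A common way to build such a micro-batch job is with a high-water mark: each run picks up only the rows changed since the last run, so the analytic table is refreshed incrementally rather than fully reloaded. The field names below are hypothetical.

```python
from datetime import datetime

def run_micro_batch(source_rows, analytic_table, high_water_mark):
    """Load rows newer than the high-water mark; return the new mark."""
    new_rows = [r for r in source_rows if r["updated_at"] > high_water_mark]
    for r in new_rows:
        analytic_table[r["claim_id"]] = r  # upsert, keyed by claim id
    if new_rows:
        high_water_mark = max(r["updated_at"] for r in new_rows)
    return high_water_mark

source = [
    {"claim_id": 10, "status": "open", "updated_at": datetime(2023, 1, 1, 9)},
    {"claim_id": 10, "status": "paid", "updated_at": datetime(2023, 1, 1, 11)},
]
table, mark = {}, datetime(2023, 1, 1, 10)  # last run processed up to 10:00
mark = run_micro_batch(source, table, mark)
```

Scheduled hourly, a job like this keeps the analytic structures within an hour of the application database at a tiny fraction of the cost of a full nightly reload.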
Beyond this, we can now create solutions for real-time use cases that go past ‘atomic’ data alone: we can link it to other aggregated sources and enrich the results to provide more actionable insights.
By using these technologies and design paradigms, you can provide the status of policies and claims in ‘real time.’ You can also couple it with detailed, personalized reports and statistics for in-field adjusters and agents. How do you accomplish this? With a fraction of the infrastructure, lower development costs, and a simpler architecture.
I am sure that as we move forward with these solutions, bringing newer capabilities to light, the silos I mentioned earlier will become a thing of the past and the data will be available for all use cases in near real-time. The questions we will be asking then are “What is the time context of your use case?” and “How soon do you need it?”