In our paper, we first explore the stochastic orderings of linear risk sharing before moving on to some fun simulations. The idea of this post is to give a general overview that motivates our choices in the simulation steps.
Here, some of the most important results are summarised (this post being a summary of a summary, it skips a few steps; for details, see our article). Further, the blog only contains the propositions and lemmas, but skips the proofs.
The main results are below:
Given some random loss $X$ and an insurance scheme offering indemnity $g(X)$ against a premium $\pi$, an agent will purchase the insurance if $X-g(X)+\pi \preceq_{CX} X$ for the convex order, where $\pi$ is the premium associated with the transfer of that risk.
Def.: Convex Order
Consider two variables $X_1$ and $X_2$ such that
$$\mathbb{E}[h(X_1)]\le \mathbb{E}[h(X_2)]\quad\text{for all convex functions } h:\mathbb{R}\to\mathbb{R}$$
(provided the expectations exist). Then $X_1$ is said to be smaller than $X_2$ in the convex order, which is denoted $X_1 \preceq_{CX} X_2$.
Proposition:
If $X_1 \preceq_{CX} X_2$, then $\mathbb{E}[X_1]=\mathbb{E}[X_2]$ and $\mathrm{Var}[X_1]\le \mathrm{Var}[X_2]$.
This motivates the use of the variance in the simulations later on.
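As a quick numerical sanity check (a sketch, with an assumed Gamma loss distribution that is not from the paper), averaging $n$ i.i.d. losses preserves the expectation and reduces the variance, exactly as the proposition predicts:

```python
import numpy as np

rng = np.random.default_rng(42)
n, m = 10, 100_000                  # pool size, number of simulated scenarios
X = rng.gamma(shape=2.0, scale=1.0, size=(m, n))  # individual losses (assumed Gamma)
Y = X.mean(axis=1)                  # shared loss: everyone pays the average

# Means agree, while the variance drops by a factor n
print(X[:, 0].mean(), Y.mean())
print(X[:, 0].var(), Y.var())
```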
Def. Risk sharing scheme
Consider two random vectors $\boldsymbol{X}=(X_1,\dots,X_n)$ and $\boldsymbol{Y}=(Y_1,\dots,Y_n)$ on $\mathbb{R}_+^n$. Then $\boldsymbol{Y}$ is a risk-sharing scheme of $\boldsymbol{X}$ if $\sum_{i=1}^n Y_i=\sum_{i=1}^n X_i$ almost surely.
For example, the average is a risk sharing principle: $Y_i=\bar{X}=\frac{1}{n}\sum_{j=1}^n X_j$, for any $i$. A weighted average, based on credibility principles, too: $Y_i=\alpha X_i+(1-\alpha)\bar{X}$, where $\alpha\in[0,1]$ (we will get back to those averages in the next section). Observe also that the order statistics define a risk sharing mechanism: $Y_i=X_{(i)}$, where $X_{(1)}\le X_{(2)}\le\cdots\le X_{(n)}$. Given a permutation $\sigma$ of $\{1,2,\dots,n\}$, $Y_i=X_{\sigma(i)}$ defines a risk sharing (that we write, using vector notation, $\boldsymbol{Y}=P_\sigma\boldsymbol{X}$, where $P_\sigma$ is the permutation matrix associated with $\sigma$). This example of permutations is actually very important in the economic literature.
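The schemes above can be sketched in a few lines (the loss distribution, the credibility weight and the permutation are illustrative assumptions); each one leaves the total loss unchanged, which is the defining property of a risk sharing:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
X = rng.exponential(scale=1.0, size=n)       # individual losses

Y_avg  = np.full(n, X.mean())                # uniform average: everyone pays the mean
alpha  = 0.3                                 # illustrative credibility weight
Y_cred = alpha * X + (1 - alpha) * X.mean()  # credibility-weighted average
Y_sort = np.sort(X)                          # order statistics
Y_perm = X[rng.permutation(n)]               # random permutation of the losses

for Y in (Y_avg, Y_cred, Y_sort, Y_perm):
    assert np.isclose(Y.sum(), X.sum())      # total loss is preserved
```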
This then leads to the following proposition.
Proposition:
Consider two linear risk sharing schemes $\boldsymbol{Y}_1$ and $\boldsymbol{Y}_2$ of $\boldsymbol{X}$, such that $\boldsymbol{Y}_2=M\boldsymbol{Y}_1$, for some doubly-stochastic matrix $M$. Then
$$Y_{2,I}\preceq_{CX} Y_{1,I},$$
where $I$ is a uniform variable over $\{1,2,\dots,n\}$.
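To illustrate the proposition numerically (a sketch: the exponential losses and the particular doubly-stochastic matrix are our own assumptions), mixing the shares through a doubly-stochastic matrix keeps the totals and lowers the variance of the share of a randomly picked agent:

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 4, 50_000
# A doubly-stochastic matrix: every row and every column sums to one
M = np.full((n, n), 1 / (2 * n)) + np.eye(n) / 2
X = rng.exponential(size=(m, n))                   # shares before mixing
Y = X @ M.T                                        # shares after mixing

assert np.allclose(M.sum(axis=0), 1) and np.allclose(M.sum(axis=1), 1)
assert np.allclose(Y.sum(axis=1), X.sum(axis=1))   # still a risk sharing
print(X.var(), Y.var())                            # pooled variance drops after mixing
```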
Finally, some extensions to work with cliques. This is important, as it shows that for the networks most often encountered in real life, we will need to settle for an approximation, because finding the optimal partition is an NP-complete problem.
Def. Clique
A clique $C$ within an undirected graph $G$ is a subset of vertices such that every two distinct vertices in $C$ are adjacent. That is, the induced subgraph of $G$ by $C$ is complete. A clique cover is a partition of the vertices of the graph into cliques.
Now we can start to work on the cliques:
Assume that a network with $n$ nodes has two (distinct) cliques, with respectively $n_1$ and $n_2$ nodes, where $n_1+n_2=n$. One can consider some ex-post contributions, to cover the losses claimed by connected policyholders. If risks are homogeneous, contributions are equal within a clique. Assume that policyholders face random losses $X_1,\dots,X_n$ and consider the following risk sharing: $Y_i=\bar{X}^{(k)}$, the average of the losses within clique $k$, for every policyholder $i$ belonging to clique $k$.
In order to illustrate the proposition above, let $I$ denote a uniform variable over $\{1,2,\dots,n\}$, and assume that the losses $X_i$ are i.i.d. with variance $\sigma^2$. Then
$$\mathbb{E}[Y_I]=\mathbb{E}[X_I],$$
as expected since it is a risk sharing, while
$$\mathrm{Var}[Y_I]=\sigma^2\left(\frac{n_1}{n}\cdot\frac{1}{n_1}+\frac{n_2}{n}\cdot\frac{1}{n_2}\right)=\frac{2\sigma^2}{n},$$
since $\mathrm{Var}[Y_i]=\sigma^2/n_k$ for a policyholder $i$ in clique $k$, so that $\mathrm{Var}[Y_I]\le \mathrm{Var}[X_I]=\sigma^2$, meaning that if we randomly pick a policyholder, the variance of the loss with risk sharing is lower than the variance of the loss without risk sharing. Observe further that the smaller clique size $\min\{n_1,n_2\}$ is maximal when $n_1=n_2=n/2$, so that no policyholder is left with a variance above the average $2\sigma^2/n$, which means that the risk sharing benefit is maximal (socially maximal, for a randomly chosen representative policyholder) when the two cliques have the same size.
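The two-clique computation can be verified by simulation (the clique sizes, the Gaussian losses and $\sigma^2=4$ below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 6, 4
n, m = n1 + n2, 200_000
X = rng.normal(loc=10.0, scale=2.0, size=(m, n))   # losses, sigma^2 = 4

# Within each clique, everyone pays the clique average
Y = np.hstack([np.tile(X[:, :n1].mean(axis=1, keepdims=True), n1),
               np.tile(X[:, n1:].mean(axis=1, keepdims=True), n2)])
I = rng.integers(n, size=m)                        # a randomly picked policyholder
YI = Y[np.arange(m), I]

print(YI.var(), 2 * 2.0**2 / n)                    # both close to 2*sigma^2/n = 0.8
```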
Cliques in practice
What does that imply for any practical solution though?
Cliques have attractive properties, in that they allow the use of a linear risk sharing mechanism on the resulting subgraphs. Further, they can be used to represent a (more) homogeneous subgroup within a larger network, as for example Feng & Taylor propose (in their case, they partition the network based on a hierarchical structure). As we depart from the idea of a complete graph to begin with, partitioning the graph into cliques becomes a clique problem. Given a number of cliques $K$, the number of nodes $n$ and the number of members $n_k$ of clique $k$ (where $\sum_{k=1}^K n_k=n$), the variance of the contribution of a randomly picked policyholder is
$$\mathrm{Var}[Y_I]=\sigma^2\sum_{k=1}^K \frac{n_k}{n}\cdot\frac{1}{n_k}=\frac{K\sigma^2}{n},$$
implying that it depends on the clique sizes only through their number $K$, and that it is minimized by minimizing $K$. This in turn means that the variance decrease is:
$$\mathrm{Var}[X_I]-\mathrm{Var}[Y_I]=\sigma^2\left(1-\frac{K}{n}\right).$$
So the variance decrease is increasing in $n$, as in standard partitioning problems, but decreasing in $K$, up to the extreme case where $K=n$ (i.e. every node is its own clique, and there is no sharing at all). Even when abstracting from the fact that the set of all cliques in the graph might not contain ideally sized cliques, the problem of finding a clique cover that minimizes $K$ is the minimum clique cover problem, which was shown to be NP-complete by Karp in 1972. Hence an optimal solution will be difficult, if not practically impossible, to find. One idea to circumvent this is to apply approximate clustering algorithms and reconnect cliques from there.
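Since the exact problem is NP-complete, heuristics are the natural fallback. Below is a simple greedy clique-cover sketch in plain Python (our illustration; it is neither the paper's method nor a clustering algorithm, and it returns a valid cover, not necessarily one with minimal $K$):

```python
def greedy_clique_cover(nodes, adj):
    """Greedy clique cover: adj[u] is the set of neighbours of u (undirected).
    Repeatedly grows a clique around the best-connected remaining node."""
    remaining = set(nodes)
    cover = []
    while remaining:
        # Seed a clique at the remaining node with the most remaining neighbours
        u = max(remaining, key=lambda v: len(adj[v] & remaining))
        clique = {u}
        for v in sorted(adj[u] & remaining):
            if clique <= adj[v]:       # v is adjacent to every current member
                clique.add(v)
        cover.append(clique)
        remaining -= clique
    return cover

# Two triangles joined at node 3, plus an isolated node 6
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4, 5}, 4: {3, 5}, 5: {3, 4}, 6: set()}
print(greedy_clique_cover(adj.keys(), adj))   # e.g. [{1, 2, 3}, {4, 5}, {6}]
```

Each returned set is a clique by construction, and the sets partition the nodes, so a linear risk sharing mechanism can then be applied within each one.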