
Weekly dynamic motor insurance ratemaking with a telematics signals bonus-malus score

Published online by Cambridge University Press:  11 November 2024

Juan Sebastian Yanez*
Affiliation:
Department of Econometrics, Riskcenter-IREA, Universitat de Barcelona, Barcelona, Spain
Montserrat Guillén
Affiliation:
Department of Econometrics, Riskcenter-IREA, Universitat de Barcelona, Barcelona, Spain
Jens Perch Nielsen
Affiliation:
Bayes Business School, City, University of London, London, United Kingdom
*
Corresponding author: Juan Sebastian Yanez; Email: yanez.juansebastian@gmail.com

Abstract

We present a dynamic pay-how-you-drive pricing scheme for motor insurance using telematics signals. More specifically, our approach allows the insurer to apply penalties to a baseline premium on the occurrence of events such as hard acceleration or braking. In addition, we incorporate a bonus-malus system (BMS) adapted for telematics data, providing a credibility component based on past telematics signals to the claim frequency predictions. We purposefully consider a weekly setting for our ratemaking approach to benefit from the signal’s high-frequency rate and to encourage safe driving via dynamic premium corrections. Moreover, we provide a detailed structure that allows our model to benefit from historical records and detailed telematics data collected weekly through an onboard device. We showcase our results numerically in a case study using data from an insurance company.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of The International Actuarial Association

1. Introduction

In recent years, telematics information has given researchers and actuaries a new perspective on how motor insurance premiums can be calculated. Through vast amounts of data collected from GPS devices in either the car or the cellphone of drivers, a clearer picture of a client’s driving profile can be obtained. Ideally, an insurer could determine premiums based on driving skills and habits, leading to better risk identification, as shown by Verbelen et al. (Reference Verbelen, Antonio and Claeskens2018), where the predictive power of claim frequency models benefited from including telematics data. This reduces adverse selection for insurers and motivates the client to drive more securely, as safe driving is encouraged via premium reductions. Consequently, there is a real incentive for the insurer to implement pay-how-you-drive (PHYD) pricing schemes to take full advantage of the telematics data collected.

Although telematics information has proven to be a powerful tool to identify risks, actual dynamic pricing methods using this information in real time, such as Henckaerts and Antonio (Reference Henckaerts and Antonio2022), are scarce. Indeed, there are a number of difficulties when incorporating these data into a pricing scheme. First, information is collected as the client drives and is unknown before the coverage period is observed. Therefore, the insurer must either charge a premium after the client has driven or predict future telematics events based on present information. Second, including telematics information in models often leads to big data difficulties, since a considerable amount of data may be collected, and there may be data quality issues. Third, from a legal standpoint, the heavily regulated nature of personal vehicle insurance in some countries may render weekly dynamic schemes inapplicable. This paper provides a pricing scheme that tackles the first and second challenges. However, we do not adapt our pricing scheme according to specific regulations of any particular country. Nevertheless, in cases where dynamic telematics pricing schemes cannot be directly considered for ratemaking, we believe such approaches may still benefit the insurer in their evaluation of risk.

We focus our research on telematics signals (also known as near-misses, risky events, or near-claims in the literature). In an insurance context, a high number of braking or speeding events is clearly indicative of dangerous driving, but there are many other types of events that could also be considered high risk, such as cellphone use. Commonly, in the ratemaking literature, a telematics signal is more broadly defined as a punctual event that potentially affects claim occurrence. This paper focuses on two events: accelerations and braking events, given that the positive correlation between these events and accident risk is well documented in the literature (see, e.g., Ma et al. (Reference Ma, Zhu, Hu and Chiu2018)).

The advantage of including telematics signals in a pricing scheme is twofold. The first advantage is that these signals occur more frequently than actual claims; the ratio between signals and actual claims could easily be a hundred to one, depending on the exact definition of a telematics signal. This allows us to turn from a low-frequency setting to a more dynamic one, for example, from a yearly premium structure to a weekly one. The second advantage is that the driving profile of a client can be determined in simpler terms by focusing on only a couple of telematics signal events rather than broader telematics data. This enables the insurer to circumvent potential big data implementation difficulties. In addition, with more transparent data at hand, insurers are able to explain premium increases more easily to their clients.

In practice, an insurer determines a baseline premium based on traditional covariate information, which is charged at the end of short time intervals (e.g., weekly). Then, on top of this baseline premium, penalisations in the form of extra charges are added depending on the number of dangerous telematics signals that have occurred. In other words, high-risk driving behaviour can be identified and penalised dynamically. The issue with this pricing scheme is that if the insurer wanted to preemptively assess the risk of a driver at the start of a given interval, they could only base this on information provided by traditional risk factors since the signals (or the lack thereof) for the interval in question would not have been observed.

In this paper, we suggest a pricing scheme that allows the insurer to benefit dynamically from previous telematics signal counts to determine a more accurate premium. Indeed, by predicting the signal count to be observed in a given period based on past information, a classical pricing structure can be implemented. The idea is to combine traditional risk factors and signal predictions to charge a premium at the beginning of a covered period. Hence, our approach allows the insurer to charge premiums preemptively (as classical approaches would) while benefiting from past telematics information. Furthermore, as the period ends, the insurer acquires new telematics data in the form of mileage, signals, or broader information, such as the time spent driving at night. By having access to such variables, the insurer is able to adjust the previously charged premium based on the observed driving patterns of the client. In other words, new telematics data impact the total premium charged in two ways. First, the data enable better prediction of future signals and thus can adapt the preemptive premium for an unobserved period. Second, the data allow the insurer to adjust the previously charged premium by including observed telematics data.

In terms of the telematics signal predictions used in our pricing scheme, we choose to adapt a bonus-malus system (BMS), which has been studied alongside telematics data in the form of mileage (see Lemaire et al. (Reference Lemaire, Park and Wang2016)). We believe this is a natural implementation. The telematics signal pricing scheme and the bonus-malus approach share a similar goal in rewarding (or punishing) drivers based on the occurrence of events (or lack thereof). Furthermore, most bonus-malus scores are designed for claim counts, which, like signals, are punctual events. It is worth noting that, with a higher frequency of observations, the bonus-malus structure can be more dynamic as we can allow it to change over a shorter period (its value could change every week rather than every year).
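The weekly bonus-malus mechanism described above can be sketched in a few lines of code. The following is a minimal illustration, assuming a bounded score that drops by one after a signal-free week and jumps by a fixed penalty per observed signal; the jump size, bounds, and weekly data below are invented for illustration and are not the calibrated values of our model:

```python
def update_bms_score(score, signals, jump=2, lower=0, upper=20):
    """Weekly update of a bounded bonus-malus score: minus one after a
    signal-free week, plus `jump` per observed signal, clamped to
    [lower, upper]. Jump size and bounds are illustrative choices."""
    if signals == 0:
        score -= 1
    else:
        score += jump * signals
    return max(lower, min(upper, score))

# Eight weeks of hypothetical signal counts for one driver.
weeks = [0, 2, 0, 0, 1, 0, 0, 0]
score, history = 10, []
for n in weeks:
    score = update_bms_score(score, n)
    history.append(score)
print(history)  # → [9, 13, 12, 11, 13, 12, 11, 10]
```

Because signals arrive weekly, such a score reacts within weeks rather than at annual renewal, which is precisely the dynamic behaviour sought here.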

To summarise, this paper proposes a dynamic pricing scheme allowing the insurer to charge premiums before telematics information is observed. Moreover, as information is collected through telematics signal events, weekly premiums are updated dynamically through a bonus-malus score. For a graphical depiction of our approach, see Figure 1.

Figure 1. Flowchart of a weekly telematics signal BMS ratemaking scheme.

We present our findings as follows. In Section 2, we review the literature on telematics ratemaking, telematics signal applications, and BMS credibility models. Section 3 showcases the statistical framework of our model, focusing on the step-wise implementation we suggest. In Section 4, we consider a numerical application of our framework to a case study, where we provide results in terms of goodness-of-fit and discriminatory power, along with numerical examples of a billing procedure. Finally, we make concluding remarks and comment on possible extensions.

2. Background

The research on ratemaking modelling for motor insurance has evolved dramatically in recent years, especially in terms of risk selection, such as in Vallarino et al. (Reference Vallarino, Rabitti and Chokami2023), where Shapley effects were considered, or in terms of novel methods to incorporate data in pricing, for example, through the generation of driving and accident data (Kim et al. (Reference Kim, Kleiber and Weber2023)) and through the inclusion of open claims in predictive models (Okine (Reference Okine2023)). In addition to these examples, a growing trend of publications has focused on telematics data applications. Some of the earlier papers focused primarily on distance travelled to measure risk; for example, in Ayuso et al. (Reference Ayuso, Guillén and Pérez-Marn2014) and Ayuso et al. (Reference Ayuso, Guillen and Pérez-Marn2016a), the distance travelled before an at-fault claim is observed was analysed through a Weibull distribution. Later, Boucher et al. (Reference Boucher, Côté and Guillen2017) considered generalised additive models (GAMs), including distance travelled and exposure time as covariates measuring the risk of an accident.

Other authors have focused on broader telematics data, leading to big data applications. In this context, machine learning methods with telematics data were introduced by Wüthrich (Reference Wüthrich2017). Since then, various contributions have been made using similar methods. For example, in Baecke and Bocca (Reference Baecke and Bocca2017), data mining techniques were used to show that telematics data can improve risk selection, while in Gao and Wüthrich (Reference Gao and Wüthrich2018), speed–acceleration heat maps were developed using neural networks. Their approach was further studied in Gao et al. (Reference Gao, Meng and Wüthrich2019), where they showed the advantages of these telematics covariates over traditional rating factors.

Alongside research on telematics data as previously mentioned, signal implementations for actuarial applications have been developed. This is due to strong indications that the occurrence of certain events has an impact on accident risk. For example, in Ma et al. (Reference Ma, Zhu, Hu and Chiu2018), the authors demonstrated that sudden acceleration and braking influence the accident rate. Additionally, Quddus (Reference Quddus, Noland and Chin2002) also found that specific telematics signals, such as rapid acceleration and sharp turns, are correlated with accident risk. Also worth mentioning is the analysis by Stipancic et al. (Reference Stipancic, Miranda-Moreno and Saunier2018), where locations with a high rate of braking and acceleration proved to be indicative of higher accident rates. More recently, Huang and Meng (Reference Huang and Meng2019) concluded that driving habits (including certain critical events) impact the frequency of accidents.

Let us discuss the research developed so far in an actuarial context. In this regard, some papers have focused on the prediction of signals using other telematics and traditional covariates (see Sun et al. (Reference Sun, Bi, Guillen and Pérez-Marn2021) and Guillén et al. (Reference Guillen, Nielsen, Pérez-Marn and Elpidorou2020)). Both papers benefited from the high-frequency setting of signals when performing their analysis. Later, in Guillén et al. (Reference Guillen, Nielsen and Pérez-Marn2021), one of the first pricing schemes using signals was put forward, where penalisations were applied when dangerous events occurred. More recently, Henckaerts and Antonio (Reference Henckaerts and Antonio2022) published the first ratemaking application with telematics data that included a profit and retention analysis. In parallel to these publications, another strand of the literature has focused on credibility applications using past signal observations. But, before we discuss these papers in detail, let us briefly review relevant literature.

As part of our paper, we refer to bonus-malus credibility models popularised by Lemaire (Reference Lemaire2013). These credibility models incorporate a bonus-malus score to summarise the history of an insured and modify the premium accordingly. At the end of an insurance contract, this score decreases if the insured does not make claims and increases otherwise. These methods have proven to be flexible and have been adapted to several contexts and pricing issues. An example is given by Dizaji et al. (Reference Dizaji and Payandeh Najafabadi2023), where an adaptation of the BMS was considered for long-term health insurance. Another example is given by Xiang et al. (Reference Xiang, Neufeld, Peters, Nevat and Datta2023), where the authors adapted a BMS to a cybersecurity setting. Also, in Yanez et al. (Reference Yanez, Boucher and Pigeon2023), a BMS was adapted to a micro-level loss reserving context. Meanwhile, dependency structures have been incorporated into BMS models, such as in the paper by Oh et al. (2023), where a copula was considered to accommodate the dependence between claim frequency and severity. Likewise, in Verschuren (Reference Verschuren2021), premiums from different products were incorporated into a BMS structure through multiple scores. Furthermore, other contributions have adapted BMS models to solve complex problems in the context of Property and Casualty insurance. For example, Boucher and Inoussa (Reference Boucher and Inoussa2014) generalised the approach to better use information from policyholders who had been covered for long periods. Similarly, in Cao et al. (Reference Cao, Li, Young and Zou2023), a BMS was incorporated into an evaluation of reporting and underreporting strategies of insured clients. We can also cite Boucher (Reference Boucher2022), where a method to recreate the past history of policyholders was developed for cases where this information was missing; this application redefined the BMS within the generalised linear model (GLM) framework.

With this context in mind, let us discuss the applications of telematics signal credibility that have been researched. To our knowledge, there are currently only two published papers that incorporate signals or near-misses in a credibility context. On the one hand, Denuit et al. (Reference Denuit, Guillen and Trufin2019) designed a multivariate credibility model for telematics signals. This model provides a credibility-based mechanism that updates a random vector containing both signals and claim counts. Furthermore, the authors provided methods to combine multiple signals into a single score or consider them separately. Their numerical analysis was based on discretised mileage under risky conditions, such as night driving. On the other hand, Corradin et al. (Reference Corradin, Denuit, Detyniecki, Grari, Sammarco and Trufin2022) proposed a mixed Poisson model for signals, where one of the key contributions is allowing the model to include non-integer signals, for example, proportions or continuous measurements.

There are several differences between the present paper and these propositions. The first is that claim count credibility is treated separately and in a different context. In our context, traditional claim count-based credibility cannot be directly considered due to the short observation period of the dataset that contains telematics signals. Instead, claim count data are extracted from historical records with lengthier observation periods but without telematics observations. This is the context of Guillén et al. (Reference Guillen, Nielsen and Pérez-Marn2021), which we expand and reexamine in the present paper by providing credibility structures that better capture the individual risk of each driver in the historical records. Consequently, our claim credibility approach is ideal for insurers with access to historical records with lengthy observation periods and shorter-term telematics datasets with few to no claims. The second difference is that we design a two-step process that allows risk to be evaluated at two stages of development: the beginning and end of weekly intervals. The first evaluation shares many similarities with Denuit et al. (Reference Denuit, Guillen and Trufin2019); however, rather than considering a mixed model, we consider the framework from Boucher (Reference Boucher2022), where a bounded BMS is added as a covariate. In fact, the bounded structure showcased in this paper proves particularly relevant when adapted to our case study. However, it is worth noting that a similar bounded structure could be adapted to the mixed models presented by Denuit et al. (Reference Denuit, Guillen and Trufin2019) and Corradin et al. (Reference Corradin, Denuit, Detyniecki, Grari, Sammarco and Trufin2022). Regarding the second evaluation, at the end of the interval when signals have been observed, we refer to the pricing scheme by Guillén et al. (Reference Guillen, Nielsen and Pérez-Marn2021) but expand upon it by including the preliminary evaluations from the next unobserved week. Consequently, this paper significantly expands upon the pricing scheme of Guillén et al. (Reference Guillen, Nielsen and Pérez-Marn2021) by incorporating credibility-based predictions from historical data and by adding a bounded BMS approach to predict telematics signals.

3. Ratemaking schemes with telematics information

In vehicle insurance, most historical datasets do not include telematics variables since, in the past, pay-as-you-drive (PAYD) and PHYD products were not offered, and there was no real need to collect this information. Comparatively, datasets that do contain telematics information are often very large due to the detailed data collected. For instance, for any given driver, rather than having global information for a long observation period, dense and detailed weekly information is gathered over a shorter time span. In general, telematics datasets cover fewer drivers and shorter observation periods but are rich in detailed data. Given the low frequency of claims in P&C insurance, insurers may need to draw information from both the historical data (without telematics information) and the newer information collected from an onboard device (OBD). However, there is a discrepancy between the data points, as the historical data may be collected annually, while the telematics data may be collected over shorter intervals (daily, weekly, etc.). In this paper, we follow the approach of Guillén et al. (Reference Guillen, Nielsen and Pérez-Marn2021) to solve this issue, which has the benefit of leaving telematics count data unchanged. However, if a telematics dataset is sufficiently large in terms of the length of the observation periods, complementing it with older datasets without telematics data may become optional.

Let us showcase the statistical framework of our pricing scheme through five subsections. In the first, we provide an introductory GLM framework that is later used as a baseline to compare our results. Then, in the next two subsections, we introduce weekly telematics information into our model. Subsequently, we link the previous subsections to present our full ratemaking scheme. Finally, a Gini index application for motor insurance is considered to analyse the quality of our pricing method.

3.1. Traditional claim modelling

Let us assume that $\mathcal{D}^{(h)}$ is a historical dataset from drivers for the periods where no telematics data were collected. Also, let $Y_i^{(h)}$ be the number of observed at-fault claims of driver i during the total coverage period in weeks, denoted by $W_i^{(h)}$ , for $i=1,2,\dots,I$ (I denotes the total number of drivers). Furthermore, when a new policy is signed, other than the duration of the contract, some information becomes available to the insurer in the form of static covariates within the exposure year, such as the power of the car or the age of the driver. Let $\mathbf{X}_i$ be the vector containing each client’s information.

In a ratemaking context, claim count models are often introduced through a mean parameter, as this classical structure allows for a streamlined computation of the pure premium. In particular, we also consider a logarithmic link function to have a multiplicative effect on the risk factors, which is particularly valuable when interpreting the impact of each covariate. With these considerations in mind, let the mean parameter for claim counts be:

(1) \begin{align} \mu_{Y_i^{(h)}}=E\left[Y_i^{(h)}\right|\left.\mathbf{X}_i,W_i^{(h)}\right]= W_i^{(h)} \text{exp}\left(\mathbf{X}_i' \boldsymbol{\beta}\right), \quad \text{for } i=1,\dots,I,\end{align}

where $\boldsymbol{\beta}$ is the parameter vector that combines linearly with $\mathbf{X}_i $ . Note that in (1), unlike in more traditional definitions, the exposure is measured in weeks, not years. In addition, this model structure can be incorporated into distributions that are members of the exponential family, such as the Poisson distribution or the Negative Binomial distribution. More specifically, the latter distribution’s mass probability function is given by:

(2) \begin{align} P\left(Y^{(h)}_{i}=y|\mu_{i},\sigma\right)=\frac{\Gamma\left(y+1/\sigma\right)}{\Gamma\left(1/\sigma\right)\Gamma\left(y+1\right)}\left(\frac{\sigma \mu_{i}}{1+\sigma \mu_{i}}\right)^{y}\left(\frac{1}{1+\sigma \mu_{i}}\right)^{1/\sigma},\end{align}

where $\mu_{i}$ is the mean parameter and the standard deviation is $(\mu_{i} +\sigma \mu_{i}^2)^{0.5}$ . In terms of the parameters the model uses, $\sigma$ is homogeneous among drivers of the portfolio, while the mean parameter varies according to the covariates at hand.
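As a numerical sketch of Equations (1) and (2), the following code evaluates the Negative Binomial mass function with a log-linked mean and weekly exposure; the covariate values, coefficients, and dispersion are invented for illustration:

```python
import math

def nb_pmf(y, mu, sigma):
    """Negative Binomial mass function of Equation (2), parameterised
    by the mean `mu` and the dispersion `sigma`."""
    log_p = (math.lgamma(y + 1 / sigma)
             - math.lgamma(1 / sigma) - math.lgamma(y + 1)
             + y * math.log(sigma * mu / (1 + sigma * mu))
             - (1 / sigma) * math.log(1 + sigma * mu))
    return math.exp(log_p)

# Mean from Equation (1): exposure in weeks times exp(X'beta).
W = 52                 # one year of weekly exposure
x = [1.0, 0.3]         # intercept plus one standardised risk factor
beta = [-6.0, 0.5]     # illustrative coefficients
mu = W * math.exp(sum(xi * bi for xi, bi in zip(x, beta)))
sigma = 1.2            # illustrative dispersion

probs = [nb_pmf(y, mu, sigma) for y in range(200)]
print(mu, sum(probs))  # the masses sum to (numerically) one
```

Working on the log scale with `math.lgamma` avoids overflow in the Gamma ratios for large counts.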

Once the framework of a GLM has been defined with our notation, we introduce telematics information into the model.

3.2. Predicting claim counts with observed telematics information

In this section, we re-introduce the approach of Guillén et al. (Reference Guillen, Nielsen and Pérez-Marn2021) with notation adapted for our credibility pricing extension. In particular, we focus on predicting claim counts, assuming telematics information has already been collected. This is an important distinction, as in the traditional model showcased in Section 3.1, insurers can make predictions and charge a premium before the coverage period is observed. However, in this section, the insurer would first need to observe the telematics data (i.e., the coverage period) and then charge a premium accordingly. Given that this is one of the core issues we discuss in this paper, from here on we distinguish between models that allow insurers to charge premiums before and after telematics data are collected. On the one hand, the mean parameter for models conditional on observed telematics data for a given period is denoted by $\mu^{(+)}$ . On the other hand, the mean parameter for models conditional on predicted telematics data for a given period is denoted by $\mu^{(-)}$ .

More specifically, in this section, we put forward models for observed telematics signals through three cases. We begin by introducing an optimistic case in Subsection 3.2.1, where an insurer has collected telematics data throughout the duration of the various contracts from drivers of a traditional dataset, akin to the one showcased in Section 3.1. In this context, telematics signals are summarised as an average per kilometre driven, while kilometres serve as exposure measures. Then, in the second case (Subsection 3.2.2), we reexamine the model from Subsection 3.2.1 in a less ideal scenario, where an insurer has access to a historical dataset without telematics information. However, in contrast, they have access to a distinct, more recent dataset where drivers are monitored through an OBD. In this situation, signals and kilometres driven from the telematics dataset are transferred to the historical dataset. Finally, in our third case (Subsection 3.2.3), we reprise the previous context, where telematics information is only available in a recent dataset and not in a historical dataset. However, our goal is not limited to merging the datasets, as we also want to consider signals as count variables and not averages over kilometres as in the previous two cases. We achieve this by constructing a dataset with the same number of data points as in the more recent telematics dataset. Consequently, by allowing weekly signal counts to be considered as covariates, we can develop a weekly PHYD billing scheme with telematics data.

3.2.1. Case 1: Telematics information available in a historical dataset

Ideally, the insurer would have gathered telematics information for a long period of time, for instance, in a historical dataset $\mathcal{D}^{(h)}$ . In this context, for this observation period, let us introduce telematics information in two forms: first, the kilometres driven and, second, a signal event such as hard braking or acceleration. Accordingly, let $T_i^{(h)}$ and $M_i^{(h)}$ be, respectively, the kilometres driven and the observed number of signals in $\mathcal{D}^{(h)}$ for driver i. Then, we can compute $\overline{M}_i^{(h)}=M_i^{(h)}/T_i^{(h)}$ , the signal frequency per kilometre, which measures the risk of each driver by profiling their driving behaviour. Consequently, let the mean parameter for $Y_i^{(h)} \in \mathcal{D}^{(h)}$ be such that,

(3) \begin{align} \mu^{(+)}_{Y_i^{(h)}}&=E\left[Y_i^{(h)}\right|\left.\mathbf{X}_i,W_i^{(h)},T_i^{(h)},M_i^{(h)}\right] \nonumber \\[5pt] &= T_i^{(h)} \text{exp}\left(\mathbf{X}_i' \boldsymbol{\beta}^{(x)}+\overline{M}_i^{(h)}\beta^{(m)}\right) \nonumber\\[5pt] &=T_i^{(h)} \text{exp}\left(\mathbf{X}_i' \boldsymbol{\beta}^{(x)}+\frac{M_i^{(h)}}{T_i^{(h)}}\beta^{(m)}\right), \quad \text{for } i=1,\dots,I, \end{align}

where, similarly, $ \boldsymbol{\beta}^{(x)}$ is the parameter vector that combines linearly with $\mathbf{X}_i $ , while $ \beta^{(m)}$ is the scalar parameter that combines with the signal frequency per kilometre. The variables used for this case are summarised in Figure 2.

Figure 2. Case 1: Flowchart of variables from the historical dataset.
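Once the parameters are estimated, Equation (3) can be evaluated directly. The following is a minimal sketch in which every figure (exposure, signal count, and coefficients) is invented for illustration:

```python
import math

def mu_case1(T, M, x, beta_x, beta_m):
    """Mean claim count of Equation (3): kilometre exposure T times the
    exponential of the linear predictor, with signals entering through
    the per-kilometre frequency M/T. All inputs are illustrative."""
    eta = sum(xi * bi for xi, bi in zip(x, beta_x)) + (M / T) * beta_m
    return T * math.exp(eta)

# A driver observed over 12,000 km with 300 hard-braking signals.
mu = mu_case1(T=12000.0, M=300, x=[1.0, 0.3],
              beta_x=[-12.0, 0.4], beta_m=8.0)
print(mu)  # roughly 0.1 expected claims over the observation period
```

Note that, all else equal, a higher signal frequency per kilometre strictly increases the predicted claim count whenever the signal coefficient is positive.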

3.2.2. Case 2: Completing uncollected signal events from a historical dataset with newly acquired telematics data

Let us leave aside the ideal scenario we covered in the previous section, as the insurer may only have collected telematics data for a relatively short time. Therefore, suppose that telematics information has not been collected in $\mathcal{D}^{(h)}$ . However, in a later period, drivers from the historical dataset are monitored with an OBD, and their telematics information is collected weekly. Let $\mathcal{D}^{(t)}$ be this new telematics dataset where, for each driver, i, telematics data are collected for $W_i^{(t)}$ weeks. Moreover, let us define the following variables for this new granular dataset,

  • $Z^{(t)}_{i,j}$ : the claim counts for driver i during week j,

  • $N^{(t)}_{i,j}$ : the telematics signal counts for driver i during week j,

  • $U^{(t)}_{i,j}$ : the kilometres driven by driver i during week j,

  • $E^{(t)}_{i,j}$ : the exposure in weeks for driver i during week j (1 week in this case),

  • $K_{i,j}^{(t)}=N_{i,j}^{(t)}/U_{i,j}^{(t)}$ : the telematics signal per kilometre ratio for driver i during week j,

for $i=1,\dots,I$ and $j=1,\dots,W_i^{(t)}$ .

Then, for each driver i, let us introduce the aggregated version of the previously defined variables over $W_i^{(t)}$ weeks, such that:

\begin{align*} Y_i^{(t)}&=\sum_{j=1}^{W_i^{(t)}} Z^{(t)}_{i,j}, & M_i^{(t)}&=\sum_{j=1}^{W_i^{(t)}} N^{(t)}_{i,j}, & T_i^{(t)}&=\sum_{j=1}^{W_i^{(t)}} U^{(t)}_{i,j}, & W_i^{(t)}&=\sum_{j=1}^{W_i^{(t)}} E^{(t)}_{i,j}= \sum_{j=1}^{W_i^{(t)}} 1,\end{align*}

for $i=1,\dots,I$ .
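The weekly-to-aggregate mapping above amounts to simple sums over the observed weeks; a small sketch with hypothetical weekly records for a single driver:

```python
# Hypothetical weekly records for one driver: (claims Z, signals N, km U).
weekly = [(0, 3, 210.0), (0, 1, 180.0), (1, 4, 240.0), (0, 0, 95.0)]

Y = sum(z for z, n, u in weekly)  # total claims  Y_i^(t)
M = sum(n for z, n, u in weekly)  # total signals M_i^(t)
T = sum(u for z, n, u in weekly)  # total km      T_i^(t)
W = len(weekly)                   # exposure in weeks W_i^(t)
print(Y, M, T, W)  # → 1 8 725.0 4
```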

Furthermore, we should bear in mind that in this second case neither $T_i^{(h)}$ nor $M_i^{(h)}$ is available in $\mathcal{D}^{(h)}$ . In this context, we consider a non-parametric credibility approach through a Bühlmann–Straub framework (see Bühlmann and Straub (Reference Bühlmann and Straub1970)) to compute credibility-based counterparts of $T_i^{(h)}$ and $K_i^{(h)}$ given data from $\mathcal{D}^{(t)}$ . In particular, we extend this approach to allow for unbalanced data in terms of the observation period, given that in $\mathcal{D}^{(t)}$ each driver i is observed for $W_i^{(t)}$ weeks. Considering this, let the credibility-based kilometres driven and telematics signal per kilometre ratio for the historical dataset be given by:

\begin{align*}T_i^{*(h)}&=\zeta_i\overline{T}_i^{(t)}+ \left(1-\zeta_i\right)\overline{T}^{(t)},\\[5pt] K_i^{*(h)}&=\xi_i\widetilde{K}_i^{(t)}+ \left(1-\xi_i\right)\widetilde{K}^{(t)},\end{align*}

where,

\begin{align*}\overline{T}^{(t)}&=\frac{\sum_{i=1}^I T_{i}^{(t)}}{\sum_{i=1}^I W_i^{(t)}}, & \widetilde{K}^{(t)}&=\frac{\sum_{i=1}^I \sum_{j=1}^{W_i^{(t)}} N_{i,j}^{(t)}}{\sum_{i=1}^I \sum_{j=1}^{W_i^{(t)}} U_{i,j}^{(t)}},\\[5pt] \overline{T}_i^{(t)}&=\frac{T_{i}^{(t)}}{W_i^{(t)}}, & \widetilde{K}_i^{(t)}&=\frac{\sum_{j=1}^{W_i^{(t)}} N_{i,j}^{(t)}}{ \sum_{j=1}^{W_i^{(t)}} U_{i,j}^{(t)}}.\end{align*}

In addition, let $s^2_\zeta$ and $s^2_\xi$ be the expected values of the process variances, and let $\sigma^2_\zeta$ and $\sigma^2_\xi$ be the variances of the hypothetical means (of $\overline{T}_i^{(t)}$ and $\widetilde{K}_i^{(t)}$ , respectively). With these assumptions, let the credibility factors be given by:

\begin{align*} \zeta_i&=\frac{1}{1+\frac{s^2_\zeta}{W_i^{(t)}\sigma^2_\zeta}}, &\xi_i&=\frac{1}{1+\frac{s^2_\xi}{ T_i^{(t)}\sigma^2_\xi}},\end{align*}

where

\begin{align*} s^2_\zeta&=\frac{\sum_{i=1}^I \sum_{j=1}^{W_i^{(t)}} \left(U_{i,j}^{(t)}-\overline{T}_i^{(t)}\right)^2}{\sum_{i=1}^I \left( W_{i}^{(t)}-1\right)}, &s^2_\xi&=\frac{\sum_{i=1}^I \sum_{j=1}^{W_i^{(t)}} U_{i,j}^{(t)}\left(K_{i,j}^{(t)}-\widetilde{K}_i^{(t)}\right)^2}{\sum_{i=1}^I \left( W_{i}^{(t)}-1\right)},\\[5pt] \sigma^2_\zeta&=\frac{-(I-1)s^2_\zeta+\sum_{i=1}^I W_{i}^{(t)}\left(\overline{T}_i^{(t)}-\overline{T}^{(t)}\right)^2}{\sum_{i=1}^I W_{i}^{(t)}\left(1-\frac{W_{i}^{(t)}}{\sum_{i^*=1}^I W_{i^*}^{(t)}}\right)}, & \sigma^2_\xi&=\frac{-(I-1)s^2_\xi+\sum_{i=1}^I T_{i}^{(t)}\left(\widetilde{K}_i^{(t)}-\widetilde{K}^{(t)}\right)^2}{\sum_{i=1}^I T_{i}^{(t)}\left(1-\frac{T_{i}^{(t)}}{\sum_{i^*=1}^I T_{i^*}^{(t)}}\right)}.\end{align*}
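For the kilometre side, the standard Bühlmann–Straub estimators can be sketched as follows (the signal-ratio side is analogous, with kilometres as weights); the function name and the toy three-driver portfolio are purely illustrative:

```python
def bs_estimate(weekly_km):
    """Standard Bühlmann–Straub credibility estimate of weekly
    kilometres driven (a weight of one per observed week).
    `weekly_km` maps each driver to the list of weekly distances."""
    I = len(weekly_km)
    W = {i: len(u) for i, u in weekly_km.items()}   # weeks observed
    T = {i: sum(u) for i, u in weekly_km.items()}   # total kilometres
    tbar_i = {i: T[i] / W[i] for i in weekly_km}    # driver means
    w_tot = sum(W.values())
    tbar = sum(T.values()) / w_tot                  # portfolio mean

    # Expected process variance (within-driver weekly variation).
    s2 = (sum((u - tbar_i[i]) ** 2
              for i, us in weekly_km.items() for u in us)
          / sum(W[i] - 1 for i in weekly_km))
    # Variance of the hypothetical means (between-driver variation).
    num = -(I - 1) * s2 + sum(W[i] * (tbar_i[i] - tbar) ** 2
                              for i in weekly_km)
    den = sum(W[i] * (1 - W[i] / w_tot) for i in weekly_km)
    sigma2 = max(num / den, 1e-12)                  # keep positive

    # Credibility factors and blended per-driver estimates.
    zeta = {i: 1 / (1 + s2 / (W[i] * sigma2)) for i in weekly_km}
    return {i: zeta[i] * tbar_i[i] + (1 - zeta[i]) * tbar
            for i in weekly_km}

est = bs_estimate({1: [200.0, 220.0, 210.0],
                   2: [400.0, 380.0, 420.0, 390.0],
                   3: [100.0, 120.0]})
print({i: round(v, 1) for i, v in est.items()})
```

Each estimate lies between the driver's own weekly mean and the portfolio mean, with longer-observed drivers pulled less towards the collective.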

Assuming that on average drivers behave similarly in terms of the telematics data collected for both $\mathcal{D}^{(h)}$ and $\mathcal{D}^{(t)}$ , we can incorporate the missing telematics information into model (3). More specifically, we can re-introduce the mean parameter for $Y_i^{(h)} \in \mathcal{D}^{(h)}$ such that,

(4) \begin{align} \mu^{(+)}_{Y_i^{(h)}}&=E \left[ Y_i^{(h)}\right. \left|\mathbf{X}_i,W_i^{(h)},T_i^{*(h)},K_i^{*(h)} \right] \nonumber \\[5pt] &= T_i^{*(h)} \text{exp}\left(\mathbf{X}_i' \boldsymbol{\beta}^{(x)}+K_i^{*(h)}\beta^{(m)}\right) \quad \text{for } i=1,\dots,I, \end{align}

where, similarly, $ \boldsymbol{\beta}^{(x)}$ is the parameter vector that combines linearly with $\mathbf{X}_i $ , while $\beta^{(m)}$ is the scalar parameter that combines with the telematics signal frequency per kilometre. The variables used for this case are summarised in Figure 3.

Figure 3. Case 2: Flowchart of variables from a merged dataset.

3.2.3. Case 3: Telematics signals as count covariates in a merged dataset containing historical data and newly acquired telematics data

In the previous section, we covered case 2, where we dealt with the missing telematics data in the historical dataset by replacing the kilometres driven and signal counts per week in $\mathcal{D}^{(h)}$ with credibility-based values derived from the telematics data in $\mathcal{D}^{(t)}$ . In other words, by extrapolating the telematics data from $\mathcal{D}^{(t)}$ to $\mathcal{D}^{(h)}$ , we were able to complete the missing information in the latter dataset. In this section, in addition to obtaining a merged dataset with information from both $\mathcal{D}^{(t)}$ and $\mathcal{D}^{(h)}$ , we also want to treat signals as count variables rather than as averages over kilometres. This is achieved by ignoring the distance driven and by building a merged dataset with the same number of data points as the telematics dataset, allowing each signal observation $N^{(t)}_{i,j}$ to be used directly as a covariate.

Let us begin by considering the telematics dataset $\mathcal{D}^{(t)}$ . We should bear in mind that although we do have access to weekly claim counts $Z^{(t)}_{i,j}$ , the time frame in which data were collected for $\mathcal{D}^{(t)}$ may be much shorter, and there will likely be few or no observed claims. In practice, this limits our ability to fit a model with claim counts as a response variable. To solve this issue, let us ignore the kilometres driven and focus on $E^{(t)}_{i,j}$ , the exposure in terms of time. In addition, let us introduce $Z_{i,j}^{*(t)}$ , the (h)-equivalent weekly claim counts for the telematics dataset. Depending on the dataset at hand, this value may be approximated with the Bühlmann–Straub model we showcased in the previous section. However, if the number of observations per driver in $\mathcal{D}^{(h)}$ is low, a Bayesian approach may be more suitable. The framework of this paper is an example of this situation, having only one claim count observation per driver ( $Y_{i}^{(h)}$ ). Let us consider a Poisson–Gamma mixture, where the conditional distribution of $Y_{i}^{(h)}$ is given by:

\begin{align*} \left(Y_{i}^{(h)}|W^{(h)}_{i},\Lambda\right) &\sim Poisson \left(W^{(h)}_{i}\Lambda\right),\\[5pt] \Lambda &\sim Gamma(r,\gamma),\end{align*}

where parameters r and $\gamma$ can be approximated by maximum likelihood estimators. Note that in the context of a Poisson–Gamma mixture, the probability mass function of $Y_{i}^{(h)}$ becomes

\begin{align*} p_{Y_{i}^{(h)}}\left(y_{i}^{(h)}\left.\right|\gamma, r\right)&=\int_0^\infty\frac{\left(W_i^{(h)}\lambda\right)^{Y_{i}^{(h)}}}{Y_{i}^{(h)}!}\text{exp}\left(-W_i^{(h)}\lambda\right)\frac{\gamma^r}{\Gamma(r)}\lambda^{r-1}\text{exp}\left(-\gamma\lambda\right)d\lambda\\[5pt] &=\dots\\[5pt] &=\binom{Y_{i}^{(h)}+r-1}{Y_{i}^{(h)}}\left(\frac{W_i^{(h)}}{W_i^{(h)}+\gamma}\right)^{Y_{i}^{(h)}}\left(\frac{\gamma}{W_i^{(h)}+\gamma}\right)^{r}. \end{align*}
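
The elided steps can be verified numerically: integrating the Poisson likelihood against the Gamma density reproduces the closed-form Negative Binomial mass function above. A quick sketch, assuming SciPy is available; the parameter values used in any check are purely illustrative:

```python
import numpy as np
from math import factorial, gamma as gamma_fn
from scipy.integrate import quad
from scipy.special import binom

def nb_pmf_closed(y, W, r, g):
    """Closed-form Negative Binomial mass function of the mixture."""
    return binom(y + r - 1, y) * (W / (W + g)) ** y * (g / (W + g)) ** r

def nb_pmf_by_integration(y, W, r, g):
    """Direct integration of the Poisson likelihood against the Gamma density."""
    integrand = lambda lam: ((W * lam) ** y / factorial(y)) * np.exp(-W * lam) \
        * g ** r / gamma_fn(r) * lam ** (r - 1) * np.exp(-g * lam)
    return quad(integrand, 0.0, np.inf)[0]
```

Both routes agree to numerical precision, confirming that the mixture's marginal is the Negative Binomial displayed above.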

Additionally, let the claim count per week ratio for driver i during the historical period be $Z_{i}^{(h)}=Y_{i}^{(h)}/W_{i}^{(h)}$ . Then, let $Z_{i,j}^{*(t)}$ be the (h)-equivalent claim count per week in the telematics dataset, given by:

(5) \begin{align}Z_{i,j}^{*(t)}&=\left(\frac{W_i^{(h)}}{W_i^{(h)}+\gamma}\right)Z_{i}^{(h)}+\left(\frac{\gamma}{W_i^{(h)}+\gamma}\right)\left(\frac{r}{\gamma}\right),\end{align}

where the exposure for each observation (i,j) in $\mathcal{D}^{(t)}$ is 1 week. Meanwhile, the exposure for each observation i in $\mathcal{D}^{(h)}$ is $W_i^{(h)}$ weeks. This distinction is of particular importance when considering $Z_{i,j}^{*(t)}$ as a replacement of $Z_{i,j}^{(t)}$ since the random component of its formula is the product of $Y_{i}^{(h)}$ and $1/W_{i}^{(h)}$ (see (5)). In other words, the length of the observation period in the historical dataset disproportionately affects the variance of $Z_{i,j}^{*(t)}$ , leading to possible issues in terms of under-dispersion. In this regard, several models for under-dispersed count data have been suggested in the literature, such as the Generalized Poisson introduced by Consul and Jain (Reference Consul and Jain1973) and further developed in Consul and Famoye (Reference Consul and Famoye1992) or more recent generalisations such as the Conway–Maxwell model (Lord et al. (Reference Lord, Geedipally and Guikema2010)). However, in the context of this paper, these models are not well suited to model $Z_{i,j}^{*(t)}$ due to a second key characteristic discussed below.
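
A minimal sketch of this construction, assuming SciPy is available: fit $(r,\gamma)$ by maximum likelihood on the historical counts, then apply the credibility formula (5). The optimisation is done on the log scale to keep both parameters positive, and the starting values are moment-based; all inputs are illustrative:

```python
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize

def fit_poisson_gamma(Y, W):
    """Maximum likelihood estimates of (r, gamma) for the mixture
    Y_i | Lambda ~ Poisson(W_i * Lambda), Lambda ~ Gamma(r, gamma),
    whose marginal is the Negative Binomial mass function above."""
    Y, W = np.asarray(Y, float), np.asarray(W, float)

    def nll(log_params):
        r, g = np.exp(log_params)            # log scale keeps r, gamma > 0
        ll = (gammaln(Y + r) - gammaln(r) - gammaln(Y + 1)
              + Y * np.log(W / (W + g)) + r * np.log(g / (W + g)))
        return -ll.sum()

    # moment-based start: prior mean r/gamma matched to the observed rate
    # (assumes at least one claim in the sample)
    x0 = np.log([1.0, W.mean() / Y.mean()])
    res = minimize(nll, x0=x0, method="Nelder-Mead")
    return np.exp(res.x)

def h_equivalent_weekly_claims(Y, W, r, g):
    """(h)-equivalent weekly claim count of equation (5): a credibility
    mix of the driver's own ratio Y/W and the prior mean r/gamma."""
    Y, W = np.asarray(Y, float), np.asarray(W, float)
    cred = W / (W + g)
    return cred * (Y / W) + (1.0 - cred) * (r / g)
```

By construction, each $Z_{i,j}^{*(t)}$ is a convex combination and therefore lies between the driver's observed ratio $Y_i^{(h)}/W_i^{(h)}$ and the prior mean $r/\gamma$.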

The second issue that arises when modelling the (h)-equivalent weekly claim counts is that, unlike observable claim count variables, $Z_{i,j}^{*(t)}\in \mathbb{R}^+$ . Consequently, traditional count models such as the Poisson and Generalized Poisson cannot be considered. Moreover, although it is technically possible to fit a model with a continuous distribution, such an approach would prove cumbersome, as it would hinder our ability to include a severity component in the model. Given these limitations, we suggest a quasi-likelihood approach (introduced in Nelder and Wedderburn (Reference Nelder and Wedderburn1972) and expanded in Wedderburn (Reference Wedderburn1974)). This approach allows us to incorporate key aspects of count distributions, such as their variance function, while bypassing the inclusion of non-integer values in the computation of the likelihood.

To solve these issues, let us begin by assuming that for each observation (i,j), the exposure in weeks is not 1 week but rather $W_i^{(h)}$ weeks. In essence, let the extended exposure in weeks for observation (i,j) in $\mathcal{D}^{(t)}$ be

\begin{align*} E_{i,j}^{\blacktriangleright(t)}=W_i^{(h)}, \quad \text{for } i=1,\dots,I \text{ and } j=1,\dots,W_i^{(t)}.\end{align*}

Furthermore, by considering a repetitive behaviour over the $W_i^{(h)}$ weeks, we can assume that the extended (h)-equivalent claim counts and the extended signal counts are, respectively, given by:

\begin{align*} Z_{i,j}^{\blacktriangleright(t)}&=Z_{i,j}^{*(t)}E_{i,j}^{\blacktriangleright(t)}= \left(\frac{W_i^{(h)}}{W_i^{(h)}+\gamma}\right)Y_{i}^{(h)}+\left(\frac{\gamma}{W_i^{(h)}+\gamma}\right)\left(\frac{r}{\gamma}\right) W_{i}^{(h)},\\[5pt] N_{i,j}^{\blacktriangleright(t)}&= N_{i,j}^{(t)}E_{i,j}^{\blacktriangleright(t)}=N_{i,j}^{(t)}W_i^{(h)},\end{align*}

for $i=1,\dots,I \text{ and } j=1,\dots,W_i^{(t)}$ .

In other words, if for each observation (i,j) in $\mathcal{D}^{(t)}$ , the client would drive $W_i^{(h)}$ weeks instead of 1 week, $Z_{i,j}^{\blacktriangleright(t)}$ claims and $N_{i,j}^{\blacktriangleright(t)}$ signals would be observed. Next, let us assume that the mean parameter for $Z_{i,j}^{\blacktriangleright(t)} \in \mathcal{D}^{(t)}$ is given by:

(6) \begin{align} \mu^{(+)}_{Z_{i,j}^{\blacktriangleright(t)}}&=E \left[ Z_{i,j}^{\blacktriangleright(t)}\right. \left|\mathbf{X}_i,N_{i,j}^{\blacktriangleright(t)},E_{i,j}^{\blacktriangleright(t)}\right] \nonumber \\[5pt] &= W_i^{(h)} \text{exp}\left(\mathbf{X}_i' \boldsymbol{\beta}^{(x)}+\overline{N}_{i,j}^{\blacktriangleright(t)}\beta^{(n)}\right) \nonumber\\[5pt] &= W_i^{(h)}\text{exp}\left(\mathbf{X}_i' \boldsymbol{\beta}^{(x)}+\frac{N_{i,j}^{(t)}W_i^{(h)}}{W_i^{(h)} }\beta^{(n)}\right)\nonumber\\[5pt] &= W_i^{(h)} \text{exp}\left(\mathbf{X}_i' \boldsymbol{\beta}^{(x)}+N_{i,j}^{(t)}\beta^{(n)}\right), \quad \text{for } i=1,\dots,I \text{ and } j=1,\dots,W_i^{(t)}, \end{align}

where $ \boldsymbol{\beta}^{(x)}$ is the parameter vector that combines linearly with $\mathbf{X}_i $ , while $\beta^{(n)}$ is the scalar parameter that combines with the weekly telematics signal counts. Note that while $N_{i,j}^{(t)}$ is a count variable, in this context, it is interpreted as the average signal count over the extended exposure $W_i^{(h)}$ . The variables used for this case are summarised in Figure 4.

Figure 4. Case 3: Flowchart of variables from a merged dataset.

Based on the mean parameter in (6), two models are considered in this paper. First, the well-known quasi-Poisson (QP) model incorporates the variance function of the Poisson model, that is, $V(\mu)=\mu$ , while also allowing flexibility in terms of the choice of the dispersion parameter $\phi$ . The main advantage of this model is that it can accommodate over-dispersion and under-dispersion in a dataset. The other model is the extended quasi-Negative Binomial model (EQNB) developed by Clark and Perry (Reference Clark and Perry1989), which is an extension of the extended quasi-likelihood approach by Nelder and Pregibon (Reference Nelder and Pregibon1987). For this model, let the extended quasi-likelihood contribution of a single observation be:

\begin{align*} Q^+_\kappa(y;\mu)=-\frac{1}{2}log\left[2\pi\phi V_\kappa\left(y\right)\right]-\frac{1}{2}\phi^{-1}D_\kappa(y;\mu),\end{align*}

where $D_\kappa(y;\mu)$ is given by:

\begin{align*} D_\kappa(y;\mu)=-2\int_y^\mu \frac{y-u}{V_\kappa(u)}du.\end{align*}

The main advantage of the extended quasi-likelihood by Nelder and Pregibon (Reference Nelder and Pregibon1987) is that we can incorporate a variance function with unknown parameters. In particular, as shown in Clark and Perry (Reference Clark and Perry1989), the Negative Binomial distribution is a suitable candidate by considering:

\begin{align*} V_\kappa(\mu)=\mu+\kappa\mu^2.\end{align*}
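
For this variance function, the quasi-deviance integral has a closed form obtained by partial fractions, which makes the extended quasi-likelihood contribution easy to evaluate. A sketch (illustrative; note that zero counts make $V_\kappa(y)=0$, so in practice $y=0$ requires a small adjustment before taking the log, which we do not show here):

```python
import numpy as np

def nb_quasi_deviance(y, mu, kappa):
    """Closed form of D_kappa(y; mu) = -2 * int_y^mu (y - u) / V_kappa(u) du
    with V_kappa(u) = u + kappa * u**2, obtained by partial fractions."""
    y, mu = np.asarray(y, float), np.asarray(mu, float)
    ylog = np.where(y > 0, y * np.log(np.where(y > 0, y, 1.0) / mu), 0.0)
    return 2.0 * (ylog - (y + 1.0 / kappa)
                  * np.log((1.0 + kappa * y) / (1.0 + kappa * mu)))

def eqnb_contribution(y, mu, kappa, phi=1.0):
    """Extended quasi-likelihood contribution Q+_kappa(y; mu) for y > 0;
    at y = 0, V_kappa(y) vanishes and the log term needs an adjustment."""
    V_y = y + kappa * y ** 2
    return (-0.5 * np.log(2.0 * np.pi * phi * V_y)
            - 0.5 * nb_quasi_deviance(y, mu, kappa) / phi)
```

The closed form can be checked against direct numerical integration of $-2\int_y^\mu (y-u)/V_\kappa(u)\,du$.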

Thus far, we have showcased $Z_{i,j}^{*(t)}$ , a possible replacement for the weekly claim counts in the telematics dataset by considering a Bayesian credibility approach based on the data from a historical dataset (see (5)). In addition, we have discussed two issues when considering this new response variable. On the one hand, the size of the observation period in $\mathcal{D}^{(h)}$ may lead to under-dispersed claim count data. On the other hand, this new replacement is not an integer limiting our ability to consider traditional count distributions. To tackle these issues, we suggest a step-wise approach. First, we replace weekly claim counts with (h)-equivalent values. Second, we assume that the observation period in $\mathcal{D}^{(t)}$ is extended from 1 week to $W_i^{(h)}$ weeks. Third, we consider the QP and EQNB statistical frameworks with (6) as a mean parameter.

We acknowledge that the assumptions made so far are strong. However, they allow us to enrich $\mathcal{D}^{(t)}$ with the more complete claim count data from $\mathcal{D}^{(h)}$ . In an ideal scenario, we would rely on the first case (see 3.2.1).

After focusing on models that incorporate observed telematics data in Section 3.2, we shift our attention to Section 3.3 where we showcase models for predicted telematics data.

3.3. Predicting claim frequency with past telematics information

One of the challenges of incorporating telematics data in a pricing scheme is that, unlike traditional rating factors, these data become available as the insured drives. For this reason, in all three cases from Section 3.2, we showcased predictive models that assume this information is known. In practice, a direct application of models based on this assumption would lead to drivers paying a premium after a portion or the totality of an insurance policy has been observed. Although such a pricing structure would lead to realistic PHYD insurance applications, it would be at odds with the timeline of how risks are evaluated and rated in the industry. Put simply, the insurer generally determines a driver’s risk profile and charges a premium before the coverage period is observed. Hence, in this section, we aim to allow the insurer to charge premiums before claims (or lack thereof) are observed while also benefiting from the information provided by telematics signal counts.

First, let us assume that the insurer wants to charge a premium for a given unobserved week j. However, since no telematics information has yet been collected for week j, we are limited in the amount of information available. Specifically, we have access to static covariates from vector $\mathbf{X}_i$ and past telematics data in the form of signal events from previous weeks. Moreover, given that signals are count variables, we can take inspiration from the actuarial literature to consider a credibility structure to better incorporate this past information. Among the various methods that have been suggested, the bonus-malus score definition in a generalised linear setting suggested by Boucher (Reference Boucher2022) serves particularly well as we have introduced all our models so far through a mean parameter.

In terms of the structure of this section, we begin with Subsection 3.3.1, where we put forward a predictive model for future signal counts, in which previous signal events (or lack thereof) provide information through a bonus-malus score used as a covariate. Then, in Subsection 3.3.2, we provide a model for weekly claim counts similar to the one suggested in Subsection 3.2.3. However, rather than using signal observations, we use predictions derived from our telematics signal model.

3.3.1. Weekly telematics signal modelling through a bonus-malus score

To begin, let $\mathcal{D}^{(t)}_{i,j}$ be all the available information from $\mathcal{D}^{(t)}$ for driver i up to week j (assuming that $j>1$ ). Thus, at the end of week $j-1$ , the insurer has access to telematics signal counts from week 1 to week $j-1$ , and this information can be used recursively to predict the signal count of the unobserved week j. Similarly to a claim count model, we can introduce the signal count model using a mean parameter. Let $\nu_{i,j}$ be the expected number of signals at week j, where past information is included in the form of a BMS structure. More formally, let the expected number of signals at week j for driver i be

(7) \begin{align} \nu_{i,j}&=\text{E}\left[N^{(t)}_{i,j}|\mathbf{X}_i,E^{(t)}_{i,j}=1,\mathcal{D}^{(t)}_{i,j-1}\right] \nonumber\\[5pt] &=(1)\text{exp}{\left( \mathbf{X}_i'\boldsymbol{\gamma}^{(x)}+\gamma^{(\ell)}\ell_{i,j-1}\right)}\quad \text{for } i=1,\dots,I \text{ and } j=1,\dots,W_i^{(t)},\end{align}

where $\boldsymbol{\gamma}^{(x)}$ is the parameter vector that combines linearly with $\mathbf{X}_i$ . We also let $\ell_{i,j-1}$ be the BMS for driver i at time $j-1$ for telematics signals, while $\gamma^{(\ell)}$ is its linear parameter. Specifically, $\ell_{i,k}$ can be computed with the following formula:

(8) \begin{align} \ell_{i,k} &= \begin{cases} \displaystyle \max\left\{\min\left\{\ell_{i,k-1}-\mathbb{1}\left(N^{(t)}_{i,k}=0\right)+\psi N^{(t)}_{i,k},\ell_{max}\right\},\ell_{min}\right\}, &\text{for $k = 1, 2, \ldots$}\\[5pt] 0, &\text{for $k=0$,}\end{cases}\end{align}

where $\ell_{max}$ and $\ell_{min}$ are, respectively, the maximal and the minimal value of the BMS score. In addition, $\mathbb{1}()$ is the indicator function, while the jump parameter is given by $\psi $ . It is important to mention that BMS formulas are not unique; indeed, many forms have been suggested in the literature. We chose a particular formula for our case study, although we encourage actuaries to adopt the structure best suited to their particular dataset. For reference in this matter, we recommend the book by Lemaire (Reference Lemaire2013). In the context of this paper, the performance of our BMS credibility structure is assessed by comparing the results with a random effects Poisson–Gamma mixed model for longitudinal data, known as a multivariate Negative Binomial model (MVNB). We provide a summary of the key features of this benchmark model in the Appendix, purposely using the notation adopted so far in the paper (see Appendix A.1).
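
For concreteness, recursion (8) can be implemented in a few lines. The default values of $\psi$, $\ell_{min}$ and $\ell_{max}$ below are placeholders; in the case study these parameters are estimated by maximum likelihood:

```python
def bms_update(ell_prev, n_signals, psi, ell_min, ell_max):
    """One step of recursion (8): minus one for a signal-free week,
    plus psi per observed signal, clamped to [ell_min, ell_max]."""
    step = ell_prev - (1 if n_signals == 0 else 0) + psi * n_signals
    return max(min(step, ell_max), ell_min)

def bms_path(signals, psi=1, ell_min=-5, ell_max=10):
    """BMS trajectory ell_{i,1}, ell_{i,2}, ... for a weekly signal
    history, starting from ell_{i,0} = 0; defaults are placeholders."""
    ell, path = 0, []
    for n in signals:
        ell = bms_update(ell, n, psi, ell_min, ell_max)
        path.append(ell)
    return path
```

For example, with $\psi=1$ and bounds $[-2,5]$, the history $(0,0,2,0)$ produces the score path $(-1,-2,0,-1)$: two signal-free weeks push the score to its floor, the two signals in week 3 raise it, and the final quiet week lowers it again.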

Given that the model from this subsection uses exclusively information known before a given week is observed, we can predict unobserved signal counts. This property is put to use in the next Subsection 3.3.2, where a claim count model is put forward.

3.3.2. Including telematics signal predictions as covariates for a claim count model

Let us recall the claim count model that includes signal events as counts (see equation (6)). By placing ourselves at the start of week j, $N^{(t)}_{i,j}$ becomes unobserved, and we are unable to compute the mean parameter directly. However, we can instead use predictions of the signal counts by assuming that:

\begin{align*} \widehat{N}_{i,j}^{(t)}=\nu_{i,j}, \quad \text{for } i=1,\dots,I \text{ and } j=1,\dots,W_i^{(t)}.\end{align*}

Note that, given that the expected number of signal events uses information beyond that derived from $\mathbf{X}_i$ , we can include it as a covariate to potentially improve our predictions. Let the mean parameter of the claim count model at the start of week j be:

(9) \begin{align}\mu^{(-)}_{ Z_{i,j}^{\blacktriangleright(t)}}&=E\left[ Z_{i,j}^{\blacktriangleright(t)}\right|\left.\mathbf{X}_i,N_{i,j}^{\blacktriangleright(t)},E_{i,j}^{\blacktriangleright(t)}\right] \nonumber \\[5pt] &=E\left[ Y_i^{(h)}\right|\left.\mathbf{X}_i,N_{i,j}^{\blacktriangleright(t)}=\nu_{i,j}W_i^{(h)},E_{i,j}^{\blacktriangleright(t)}=W_i^{(h)}\right] \nonumber \\[5pt] &= W_i^{(h)} \text{exp}\left(\mathbf{X}_i' \boldsymbol{\alpha}^{(x)}+\overline{N}_{i,j}^{\blacktriangleright(t)}\alpha^{(n)}\right) \nonumber\\[5pt] &= W_i^{(h)}\text{exp}\left(\mathbf{X}_i' \boldsymbol{\alpha}^{(x)}+\frac{\nu_{i,j}W_i^{(h)}}{W_i^{(h)} }\alpha^{(n)}\right)\nonumber\\[5pt] &= W_i^{(h)} \text{exp}\left(\mathbf{X}_i' \boldsymbol{\alpha}^{(x)}+\nu_{i,j}\alpha^{(n)}\right), \quad \text{for } i=1,\dots,I \text{ and } j=1,\dots,W_i^{(t)},\end{align}

where, similarly, $ \boldsymbol{\alpha}^{(x)}$ is the parameter vector that combines linearly with $\mathbf{X}_i $ and $\alpha^{(n)}$ is the parameter that combines with $\nu_{i,j}$ . It is important to note that the mean parameter in (9) assumes that the telematics signal $N_{i,j}^{\blacktriangleright(t)}$ has been observed and is equal to its expected value $\nu_{i,j}W_i^{(h)}$ . We do not consider the joint distribution of the vector $\left(Z_{i,j}^{\blacktriangleright(t)},N_{i,j}^{\blacktriangleright(t)}\right)$ .

Before we continue, let us discuss some considerations that have to be taken into account when introducing signal predictions into a claim frequency model. First, as indicated in Guillén et al. (Reference Guillen, Nielsen, Pérez-Marn and Elpidorou2020), not all near-miss events necessarily increase the expected claim frequency. In other words, the occurrence of certain signal events is an indication of safe driving, which, in turn, would lower the cost of the premium. For example, a skilled driver may perform hard braking to avoid an accident. For such signals, a higher bonus-malus score would lead to a lower premium, whereas in typical implementations of bonus-malus scores a higher value always leads to a higher premium. Second, it is unlikely that signal events are independent of one another. Thus, a more complex structure has to be developed, especially if past information with different impacts on the risk is considered. We leave this more intricate multivariate problem for future studies.

After developing a model for claim counts at the start of a given week, we can build a pricing scheme in Subsection 3.4, allowing the insurer to charge a premium before telematics data are observed while also benefiting from it.

3.4. Pricing scheme

So far, we have produced models for claim counts for two stages of development. First, at the end of week $j-1$ , case 3 can be incorporated (see Subsection 3.2.3) as telematics data for week $j-1$ have been collected. Second, for week j, signal counts can be predicted by considering the BMS structure from Subsection 3.3.1. Then, these predictions can be used to predict claim counts for j (see Subsection 3.3.2). The goal of this Section is to use these models to design a dynamic pricing scheme that allows the insurer to use as much information as possible at any stage of development of an insurance policy.

In practical terms, we follow the structure detailed in Figure 1. We begin by predicting claim counts at the start of a given week j and charging a premium based on this prediction. Then, at the end of week j, the weekly premium is recalculated with the newly acquired information, and the discrepancy between the initial and final premiums for week j is charged along with the initial premium of week $j+1$ .

Let $P_{i,j}$ be the premium charged to driver i at the beginning of week j. Thus, for the first week (i.e., $j=1$ ), we have

\begin{align*} P_{i,1}&=\overline{C}\cdot \left(\mu^{(-)}_{i,1}+ A_{i,0}\right)\\[5pt] &=\overline{C}\cdot\mu^{(-)}_{i,1},\end{align*}

where $\overline{C}$ is the average cost of a claim, while $A_{i,j}$ is the premium adjustment from the previous week. Given that no development was previously observed when $j=1$ , we set it to 0. Then, for $j>1$ we have

\begin{align*} P_{i,j}&=\overline{C}\cdot \left(\mu^{(-)}_{i,j}+ A_{i,j-1}\right)\\[5pt] &=\overline{C}\cdot \left(\mu^{(-)}_{i,j}+\left(\mu^{(+)}_{i,j-1}-\mu^{(-)}_{i,j-1}\right)\right),\end{align*}

where the adjustment $A_{i,j}$ is the difference between the expected number of claims at time j, given information at the beginning and at the end of the week.
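
The billing recursion above can be sketched as follows, with illustrative values for the average claim cost and the weekly means; in practice, $\mu^{(-)}_{i,j}$ and $\mu^{(+)}_{i,j}$ come from the fitted models of Subsections 3.3.2 and 3.2.3:

```python
def weekly_premiums(mu_minus, mu_plus, c_bar):
    """Premium stream P_{i,j} = C * (mu^-_{i,j} + A_{i,j-1}), where the
    adjustment A_{i,j} = mu^+_{i,j} - mu^-_{i,j} and A_{i,0} = 0."""
    premiums, adj = [], 0.0
    for m_minus, m_plus in zip(mu_minus, mu_plus):
        premiums.append(c_bar * (m_minus + adj))
        adj = m_plus - m_minus      # correction billed with next week's premium
    return premiums
```

For instance, with $\overline{C}=1000$, start-of-week means $(0.010, 0.020, 0.015)$ and end-of-week means $(0.012, 0.018, 0.016)$, the premiums charged are $(10, 22, 13)$: week 2 carries a malus of 2 because week 1's claim expectation was revised upwards, while week 3 carries a bonus of 2.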

Several statistical methods can be used to measure the benefits of applying our pricing scheme. Among them, we consider a Gini index min-max strategy tailored for motor insurance ratemaking.

3.5. Gini index application for ratemaking

One of our goals is to use telematics data to identify risky behaviour and safe driving, which can be assessed in terms of discriminatory power. A method that is often used to measure a model’s capacity to distinguish risks is the Gini index. This index is defined as twice the area between the Lorenz curve and the equality line. Although it is most often used to measure inequality in terms of wealth in an economics context, researchers have adapted this measure to quantify risk discrimination for ratemaking in P&C insurance (see Frees et al. (Reference Frees, Meyers and Cummings2011), Frees et al. (Reference Frees, Meyers and Cummings2014)).

Let us determine the Lorenz curve for two weekly pricing schemes benefiting from information provided by a historical dataset. Hence, let $P_{i,j}^{(1)}$ and $P_{i,j}^{(0)}$ be the premium charged to driver i for week j, respectively, based on an alternative pricing scheme and on a baseline pricing scheme. Furthermore, let the (h)-equivalent loss for driver i, at week j, be $Q_{i,j}=\overline{C} \cdot \left( Y_i^{(h)} / W_i^{(h)} \right).$

Then, by letting the premium relativity be $R_{i,j}=\left(P_{i,j}^{(1)}/P_{i,j}^{(0)}\right)$ , the Lorenz curve can be obtained through the triplets $\mathcal{G}_k=\left(Q_k,P_k^{(0)},P_k^{(1)}\right)$ , sorted from smallest to largest according to the relativities, for $k=1,\dots,K$ , where $K=\sum_{i=1}^I W_i^{(t)}$ is the total number of driver-weeks.

The baseline premium distribution is given by:

\begin{align*} \hat{F}_{P^{(0)}}(\omega)&=\frac{\sum_{k=1}^K P_k^{(0)} \mathbb{1}(R_k\leq \omega) }{\sum_{k=1}^K P_k^{(0)}},\end{align*}

and the weekly equivalent loss distribution is given by:

\begin{align*} \hat{F}_{Q}(\omega)&=\frac{\sum_{k=1}^K Q_k \mathbb{1}(R_k\leq \omega) }{\sum_{k=1}^K Q_k},\end{align*}

where $\mathbb{1}(\cdot)$ is the indicator function. The graph $\left(\hat{F}_{P^{(0)}}(\omega),\hat{F}_{Q}(\omega)\right)$ is an ordered Lorenz curve.

The Gini index can then be calculated as follows:

\begin{align*} \widehat{Gini}=1-\sum_{k=0}^{K-1}\left(\hat{F}_{P^{(0)}}\left(R_{k+1}\right)-\hat{F}_{P^{(0)}}\left(R_{k}\right)\right)\left(\hat{F}_{Q}\left(R_{k+1}\right)+\hat{F}_{Q}\left(R_{k}\right)\right).\end{align*}

In Frees et al. (Reference Frees, Meyers and Cummings2014), the authors suggest a min-max strategy to determine the best model in terms of discriminatory power. The idea is to calculate the Gini index using each model in consideration as a baseline model along with each other competing model as an alternative. For each combination, the Gini index measures the mismatch between the baseline premium and the observed loss (in our context, the weekly equivalent loss from the historical dataset). Furthermore, given that the alternative models affect the ordering of the premiums in terms of risk, we can determine the vulnerability of each baseline model to competing classifications by computing the maximum Gini index (i.e., the highest mismatch between the baseline premium and the loss). We can then determine the pricing scheme that is least vulnerable to competing orderings by considering the baseline model with the lowest maximum Gini index. However, as noted by Wüthrich (Reference Wüthrich2023), selecting a model based on this criterion must be met with caution, as the model with the most optimal Gini index may not necessarily be the best in terms of predictive power. Notably, the Gini score is rank-based and is not calibration-sensitive.
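
The min-max comparison requires computing the ordered-Lorenz Gini index above for every (baseline, alternative) pair. A compact sketch, assuming NumPy:

```python
import numpy as np

def ordered_gini(P0, P1, Q):
    """Gini index from the ordered Lorenz curve: sort by the relativity
    R = P1/P0, then accumulate baseline premiums against losses."""
    P0, P1, Q = (np.asarray(a, float) for a in (P0, P1, Q))
    order = np.argsort(P1 / P0)
    F_p = np.concatenate([[0.0], np.cumsum(P0[order]) / P0.sum()])
    F_q = np.concatenate([[0.0], np.cumsum(Q[order]) / Q.sum()])
    # trapezoidal form of the summation given above
    return 1.0 - np.sum((F_p[1:] - F_p[:-1]) * (F_q[1:] + F_q[:-1]))
```

When the alternative premium is proportional to the baseline, the ordering carries no information and the index is zero; a positive value indicates that the alternative ordering exposes a mismatch between the baseline premiums and the losses.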

So far, we have developed a pricing scheme that incorporates current and past telematics data. Furthermore, we have showcased the Gini index min-max strategy by Frees et al. (Reference Frees, Meyers and Cummings2014) in the context of our paper. Let us now implement our pricing scheme to a case study using historical and telematics data from an insurance company and measure the quality of the models considered through several statistical measures.

4. Case study

In this section, we produce a numerical application of the models showcased in the paper. We begin by detailing the dataset we considered, defining the variables and providing general descriptive statistics (see Section 4.1). Then, in Section 4.2, we provide a goodness-of-fit analysis through various statistical measures, quantifying the quality of the models considered and the significance of the covariates at hand. We continue with Section 4.3, where billing examples for two distinct drivers are showcased, considering our pricing scheme as well as a benchmark scheme. Finally, in Section 4.4, we provide results in terms of discriminatory power for all the models considered through a Gini coefficient.

4.1. Data description

The data for our case study comprise historical and telematics information for 641 drivers. The policyholders are tracked weekly through an OBD in their vehicle for up to 149 weeks between August 2016 and July 2019, totalling 7570 weeks for the whole dataset. Moreover, while tracked, telematics data in the form of signal events of varying intensities are collected at the end of each week. However, due to the relatively small sample size in terms of the number of drivers and the length of the observation period, no claims were observed. In contrast, we also have access to historical data for each insured. Although no telematics information is collected in this latter dataset, the exposure length is much longer (up to 549 weeks), and it does contain at-fault claims.

Table 1. Variables description for the telematics and historical datasets.

Table 2. Descriptive statistics, by driver, for the telematics and historical datasets.

Let us analyse the variables from the telematics dataset, which are listed in Table 1 and summarised in the form of descriptive statistics in Table 2 (by driver) and Table 3 (by week). As displayed in the tables, two types of telematics signal events are recorded: acceleration (EAclr) and braking (EBrak) events. Using definitions similar to the ones suggested in Guillén et al. (Reference Guillen, Nielsen, Pérez-Marn and Elpidorou2020) and Guillén et al. (Reference Guillen, Nielsen and Pérez-Marn2021), three severity levels are derived from a score comprised of values from 0 to 1. Regarding the acceleration events, we set a threshold of $6\,m/s^2$ , which is based on standard values from the literature. As a reference, in Hynes and Dickey (Reference Hynes and Dickey2008), the authors suggest $5.7\,m/s^2$ as the threshold for a low peak acceleration event during rear-end impacts. We then calculate the ratio between (1) the difference between the maximal acceleration reading and the first reading above the threshold and (2) the time elapsed between these two readings. The severity of braking events is defined analogously in terms of accelerations, given that braking can be considered as deceleration (or negative acceleration). In addition to signal events, this dataset also contains other telematics information in the form of the distance driven (in metres) each week during the observation period, as well as static covariate information in the form of the engine capacity. Finally, as previously mentioned, no claims were observed in this telematics dataset for any driver.

Table 3. Descriptive statistics, by week, for the telematics and historical datasets.

We now consider data from the historical dataset, which are also listed in Table 1 and summarised in the form of descriptive statistics in Table 2. In contrast to the telematics dataset, the mean exposure (in weeks) of the historical dataset is 23 times longer than that of its counterpart. Furthermore, drivers are observed for at least a year (52 weeks), rather than 1 week, which is the minimal exposure in the telematics dataset. In total, 49 claims are observed, with claim counts per driver ranging from 0 to 6. In addition, for comparative purposes, although the car’s engine power is not explicitly recorded for this observation period, we extrapolate this information and assume it to be the same as in the telematics dataset.

Following the framework from Section 3, we adapt our model for two telematics signals: EBrak3 and EAclr3. We chose these variables because these more extreme events can be more easily interpreted as risky driving behaviour. Furthermore, aggregated values were not considered since, for these particular events (braking and acceleration), distinct severity levels may impact the risk very differently (see Guillén et al. (Reference Guillen, Nielsen and Pérez-Marn2021)). It is worth noting, however, that for braking and acceleration, severity level 1 events occur much more often than level 3 events (respectively $10.9$ and $8.4$ times more likely to occur). Hence, most weeks do not record level 3 severity events. Indeed, respectively for braking and acceleration, $95.26\%$ and $95.48\%$ of weeks in the portfolio present none of these events. As for the remaining weeks, the distribution of level 3 events is very heavy-tailed, although observations with more than 15 events are very rare. Indeed, among the 7570 weeks available, only 4 and 9 weeks have, respectively, 16 or more level 3 braking and acceleration events. These results are represented in Figure 5.

Figure 5. Histograms of EBrak3 and EAclr3 counts for weeks with at least one observed telematics signal (respectively, EBrak3 or EAclr3).

Let us now proceed to the numerical analysis, for which the data were split into training and testing datasets. The former contains 80% of the drivers (513), while the latter contains the remaining 20% (128 drivers).

4.2. Numerical results for the claim count and telematics signal count models

4.2.1. Probability distributions for the claim count and telematics signal count models

We begin by fitting models for the two telematics signal events incorporated into our ratemaking scheme (EBrak3 and EAclr3). For both of these variables, we consider the Poisson and the Negative Binomial distributions (see (2)). Furthermore, among the covariates, we include a BMS score computed using formula (8), which itself has three parameters: $\psi$ , $\ell_{max}$ and $\ell_{min}$ . The best combination of these three parameters can be found through various methods; for example, one could search for the combination that maximises the likelihood or minimises the mean squared error. In our case study, we chose the former approach. Moreover, we restrict the minimal and maximal BMS parameters to integer values. This restriction aligns with classical BMS models and allows for an easier interpretation of the score. The optimal values for these parameters are given in Table 4. Additionally, the quality of this model is assessed by comparing it with an MVNB model as a benchmark (see Appendix A.1). As for the claim count models, we fit the QP and the EQNB showcased in Section 3.2.3.
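To illustrate how a capped, integer-valued BMS score of this kind evolves week by week, the sketch below implements one plausible transition rule: the score drops one level after a signal-free week, jumps $\psi$ levels per observed signal, and is clamped to $[\ell_{min},\ell_{max}]$. Formula (8) is not reproduced in this section, so this rule and the parameter values are illustrative assumptions, not the fitted specification.

```python
# Sketch of a capped bonus-malus (BMS) score update for telematics signals.
# The transition rule and the parameters psi, l_min, l_max are hypothetical.

def bms_update(score, n_signals, psi=2, l_min=0, l_max=4):
    """Return the score at the end of the week given this week's signal count."""
    if n_signals == 0:
        score -= 1                 # reward: one level down per clean week
    else:
        score += psi * n_signals   # penalty: psi levels per observed signal
    return max(l_min, min(l_max, score))  # clamp to the integer bounds

# A driver with weekly level-3 braking counts [0, 2, 0, 0, 1, 0, 0, 0]:
scores = []
s = 0  # a new client starts at the minimal level
for n in [0, 2, 0, 0, 1, 0, 0, 0]:
    s = bms_update(s, n)
    scores.append(s)
print(scores)  # -> [0, 4, 3, 2, 4, 3, 2, 1]
```

Note how two events in a single week are already enough to cap the score at $\ell_{max}$, which is the behaviour the fitted parameters of Table 4 exhibit.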

With these considerations in mind, let us perform a goodness-of-fit analysis.

4.2.2. Goodness-of-fit analysis

The first models we work on aim to predict the future number of telematics signals using past information. We use EBrak3 and EAclr3 as response variables and consider two covariates: the engine power as the only static variable and a BMS score as a telematics covariate. Table 5 summarises the results for the Negative Binomial and the Poisson distributions. In the former, we observe that all the covariates considered have a significant effect on the occurrence of braking (with a p-value lower than 5%). Similar conclusions can be drawn for the acceleration model, particularly regarding the implementation of a BMS. However, the significance of the engine capacity varies: under the Poisson distribution, it does not reach a desirable level of significance ( $p=0.132$ ). A summary table for these results is included in the Appendix (see Table 9).

Let us now put forward the claim count models at the start of any given week (see Section 3.3.2). In this context, the expected number of signals is included as a covariate to predict the future occurrence of claims. For consistency, the distributions of these models mirror the ones used in the prediction of the signal counts. That is, the QP claim count model considers predictions from the Poisson signal model, while the EQNB claim count model includes predictions from the Negative Binomial signal model. Let us outline the results from Table 5, which contains the z-tests for the EQNB and the QP models that include EBrak3 counts. We note that both the engine capacity and the mean of the previous model are significant (with a p-value lower than 5%) across both models. Similar conclusions can be drawn when accelerations are used instead of braking (see Table 9 in the Appendix). Based on these first results, in the context of our case study, a BMS provides valuable information for the prediction of claim counts when telematics data have not yet been collected.
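The two-stage structure, a signal-count model whose prediction then enters the start-of-week claim model as a covariate, can be sketched as follows. We use simulated data and a plain log-link Poisson GLM fitted by IRLS as a stand-in for the QP/EQNB quasi-likelihood fits of the paper; all coefficients and covariate values are illustrative, not the fitted ones of Table 5.

```python
import numpy as np

rng = np.random.default_rng(1)

def poisson_irls(X, y, n_iter=50):
    """Fit a log-link Poisson GLM by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu                      # working response
        beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))
    return beta

n = 2000
engine = rng.uniform(1.0, 4.0, n)             # static covariate (engine power)
bms = rng.integers(0, 5, n).astype(float)     # hypothetical BMS score
X1 = np.column_stack([np.ones(n), engine, bms])

# Stage 1: signal-count model (Poisson stand-in for the paper's QP/NB fits)
signals = rng.poisson(np.exp(-2.0 + 0.3 * engine + 0.4 * bms))
b1 = poisson_irls(X1, signals)
nu_hat = np.exp(X1 @ b1)                      # predicted weekly signal counts

# Stage 2: start-of-week claim model, with predicted signals as a covariate
claims = rng.poisson(np.exp(-4.0 + 0.2 * engine + 0.5 * nu_hat))
X2 = np.column_stack([np.ones(n), engine, nu_hat])
b2 = poisson_irls(X2, claims)
print(np.round(b1, 2), np.round(b2, 2))
```

The key design point is that stage 2 never sees the raw signals, only their predicted means, which is what allows a premium to be charged before the week is observed.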

Table 4. Bonus-malus parameters for EBrak3 and EAclr3 events.

* Denotes distributions without traditional factors ($\mathbf{X}_i$).

Table 5. Z-tests for EBrak3 count models and claim count models using EBrak3 predictions and observations as covariates.

We can now focus on claim count models with information updated by telematics data. In this scenario, rather than predicting the signal counts, we use the actual values observed during the week (see Section 3.2.3). Yet again, we choose a distribution based on the one used in the two previous steps. Let us start by analysing Table 5, which summarises the results for the EQNB and the QP models when EBrak3 is considered as a covariate. We note that EBrak3 is statistically significant for both models, having a p-value lower than the 5% threshold. This is also the case for the engine power. Regarding models that incorporate accelerations (see Table 9), we note that the p-value for EAclr3 is higher than the 5% threshold, implying that this telematics signal is not significant for these models. This contrasts with the results from the model at the start of the week, where data from past observations were used instead. The difference between these two conclusions could be explained by the fact that the previous model uses more information (all the past weeks vs. a single week). Furthermore, in this step, we only considered one telematics covariate: the signal in question. Insurers would likely have access to much more telematics data, which could be used to improve the model at this step.

In addition to the significance tests performed, we also compute the Akaike information criterion (AIC) for the signal models. These results are showcased in Table 6, alongside the AIC of various benchmark models. Specifically, we consider models that do not include signal information while mirroring all other aspects of their counterparts (in terms of distribution and covariates used), as well as an MVNB longitudinal model. We note that, for all models, the inclusion of a BMS reduces the AIC and thus yields better models, which is consistent with our findings from the significance tests. Additionally, the benchmark credibility model yields the least desirable results in terms of the AIC. The poor performance of this model could be attributed to the higher prevalence of outliers when modelling telematics signals compared with claim counts (see Figure 5). In essence, a particularly high count of EBrak3 or EAclr3 events during a specific week (e.g., 10 observations) may indicate an increase in risk, but similarly extreme values will not necessarily be observed in the weeks to come. In contrast to the MVNB model, the suggested capped BMS model is better suited to handling such increases. This feature is showcased by the optimal BMS parameters from Table 4, where observing 2 to 3 events is enough to cap the value of the BMS.

Table 6. AIC for the telematics signal and claim count models*.

* Denotes distributions without traditional rating factors ($\mathbf{X}_i$).

In addition to the statistical measures already mentioned, we perform a 5-fold cross-validation analysis by fitting all the models considered thus far and measuring their mean squared error. For each fold, 80% of the drivers are used to fit the models, and the remaining 20% are used to validate the results. Results for EBrak3 and EAclr3 events are displayed in Tables 10 and 11 in the Appendix, respectively.
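The driver-level split matters here: folds partition drivers, not driver-weeks, so every week of a driver falls in the same fold. A minimal sketch of this procedure is given below, where the weekly counts are simulated and the training-fold mean is a placeholder for the fitted GLMs.

```python
import numpy as np

# Driver-level 5-fold cross-validation sketch. The portfolio size mirrors the
# case study (513 + 128 = 641 drivers); counts and the "model" are placeholders.
rng = np.random.default_rng(0)
n_drivers, weeks = 641, 12
counts = rng.poisson(0.05, size=(n_drivers, weeks))    # weekly event counts

order = rng.permutation(n_drivers)
folds = np.array_split(order, 5)                       # folds partition drivers
mses = []
for k, test_ids in enumerate(folds):
    train_ids = np.setdiff1d(order, test_ids)          # ~80% of the drivers
    y_pred = counts[train_ids].mean()                  # stand-in: training mean
    mses.append(np.mean((counts[test_ids] - y_pred) ** 2))
    print(f"fold {k}: {test_ids.size} test drivers, MSE = {mses[-1]:.4f}")
```

Splitting by driver rather than by week avoids leaking a driver's own history into the validation fold, which would flatter any credibility-based model.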

Let us first discuss the results from the telematics signal count models. We note that, unlike the conclusions drawn thus far, the performance of the BMS models is mixed. For EBrak3, BMS models outperform their counterparts in the first three folds, and in the fourth fold, only the full BMS Negative Binomial model outperforms its counterpart. For EAclr3, BMS models perform well only in folds 2 and 4. Nevertheless, BMS models overall perform better than the MVNB model, matching the conclusion drawn from the AIC analysis. In contrast, results for claim count predictions are much more homogeneous: for each model considered, the inclusion of either predicted or observed signals provides better results.

Overall, our BMS structure is better suited for predicting braking events than acceleration events. This implies that our approach should be selectively considered depending on the telematics signal at hand. Furthermore, when modelling claim counts, the telematics signal’s predictive power should also be considered. Specifically, in our case study, including EBrak3 events to predict claim counts is relevant overall, either as a predicted value or as an observation.

Having carefully examined the models' performance, we now analyse the discrepancy between the predictions from models at the start and at the end of any given week.

4.2.3. Simulation of claim counts

In this section, we assess the impact of predicted and observed telematics signals on the premium. To this end, we simulate the distribution of the total claim counts based on models at the start and at the end of any given period with the two models we have showcased, the EQNB and the QP. It should be noted that these models are based on the quasi-likelihood and do not have an intrinsic distribution. Hence, we produce simulated results from distributions that incorporate the fitted values of these models. Our first proposal is a Poisson model with the QP's coefficients from Table 5, essentially treating the QP as equidispersed. We also propose a Negative Binomial model that integrates the mean parameter's coefficients of the EQNB (see Table 5) and its approximation of the dispersion, so that the standard deviation in (2) becomes $(\mu+\kappa\mu^2)^{0.5}$ . Moreover, some variations of these models are considered, based on the signal used (EBrak3 or EAclr3) and the omission or inclusion of traditional factors.

This analysis is performed on the test dataset with models fitted on the entire training dataset; essentially, we simulate 100,000 iterations of a portfolio consisting of the telematics and traditional factors of the drivers in the test dataset. We purposefully consider the extended exposure (i.e., $E_{i,j}^{\blacktriangleright(t)}=W_i^{(h)}$ ) for our simulations in order to better illustrate the distribution of claim counts visually, as a weekly exposure would lead to an extremely low count of simulated claims (mostly 0 for the whole portfolio). Results from our simulation are showcased in Table 7. Additionally, simulations for EBrak3 models are illustrated in Figures 6 and 7, while simulations for EAclr3 models are available in the Appendix (see Figures 9 and 10).
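The portfolio simulation can be sketched as follows, with illustrative per-driver means and dispersion in place of the fitted values, and a smaller number of draws than the 100,000 used in the paper. The Negative Binomial is parameterised so that its variance equals $\mu+\kappa\mu^2$, matching the EQNB dispersion approximation above.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims, kappa = 20_000, 0.8
mu = rng.uniform(0.01, 0.15, size=128)   # per-driver means (illustrative)

# Poisson portfolio: a sum of independent Poissons is Poisson(mu.sum())
pois_totals = rng.poisson(mu.sum(), size=n_sims)

# Negative Binomial with mean mu and variance mu + kappa * mu^2:
# take r = 1 / kappa "successes" and p = r / (r + mu)
r = 1.0 / kappa
p = r / (r + mu)
nb_totals = rng.negative_binomial(r, p, size=(n_sims, mu.size)).sum(axis=1)

print(round(pois_totals.mean(), 2), round(nb_totals.mean(), 2))  # same mean
print(round(pois_totals.var(), 2), round(nb_totals.var(), 2))    # NB is wider
```

Both portfolios share the same expected total, but the Negative Binomial portfolio has the heavier tail, which is exactly the difference the QP-based and EQNB-based simulations of Table 7 are meant to expose.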

Table 7. Simulation of the total claim counts in the test dataset.

* Denotes distributions without traditional rating factors ($\mathbf{X}_i$).

Figure 6. Total claim counts distribution of the extended test dataset. EBrak3 Negative Binomial model with traditional factors (left) and without traditional factors (right).

Figure 7. Total claim counts distribution of the extended test dataset. EBrak3 Poisson model with traditional factors (left) and without traditional factors (right).

For models derived from EBrak3 signals, the results fall in line with what one would expect from a weekly update based on the newly acquired information, where the signals switch from predicted to observed. We observe a small but noticeable change in the distribution between models at the beginning and at the end of the weekly update. This implies that using predicted rather than observed signals does not have a disproportionate impact on the premium. In contrast, for models that include EAclr3 signals, this change is larger. This result corroborates our findings from Section 4.2.2, where the predictive power of EAclr3 was weaker. In essence, the difference between predicted and observed accelerations is more noticeable than for braking. Hence, our pricing scheme seems to perform best when handling statistically significant telematics signals that can be accurately predicted through credibility frameworks.

Following the evaluation of our pricing scheme through the simulation of a portfolio, we now look at smaller-scaled examples by analysing the impact on the premium of single drivers depending on their risk profiles.

4.3. Example of a billing process

In this section, we provide examples of a weekly pricing scheme using current and past telematics signal observations and compare the results with those of a baseline model. We begin by identifying and describing two drivers to whom a PHYD product is proposed. In terms of static variables, we consider that they share the same values; in the context of this paper, this means the same engine capacity (4000 cc in this case). Also, we assume that both are new clients for whom no prior driving history has been recorded, in terms of both past claims and past telematics records. Our example covers a billing period of 2 months (8 weeks) in which a premium is charged weekly. During this period, signal counts are tracked by an OBD in the vehicle. This is where we differentiate our two clients: the first driver will produce four near-misses (level 3 brakings or accelerations) during the span of the coverage, while the second will not produce any signals. Note that if no telematics information were used in the ratemaking, both clients would be charged the same premium, denoted by $P^{(b)}_{i}$ . In this example, by setting an average cost of $\overline{C}=3{,}000$ € and by considering an EQNB model to predict the claim frequency, we set our baseline weekly rate to $P^{(b)}_{i}=1.90$ € for $i=1,2$ and each week $j=1,\dots,8$ . This is equivalent to a $98.96$ € yearly premium.

We now include the collected telematics signal information in the premium computation through the approach from Section 3.4. That is, we compute a BMS score ( $\ell_{i,j-1}$ ) at the end of week $j-1$ using the current and past signal counts, and then use this score to predict the signal count for week $j$ (i.e., $\nu_{i,j}$ ). This allows the insurer to charge a preemptive premium before telematics data for week $j$ are observed ( $\overline{C}\cdot\mu^{(-)}_{i,j}$ ). Furthermore, given that week $j-1$ has already been observed, we can determine an updated version of its premium with telematics data ( $\overline{C}\cdot\mu^{(+)}_{i,j-1}$ ), which allows the insurer to adjust the premium charged at the beginning of week $j-1$ through a correction ( $A_{i,j-1}$ ). Thus, the premium at the start of week $j$ ( $P_{i,j}$ ) is a combination of the current week's expected cost and the correction from the previous week. In practice, other costs, such as administrative or marketing costs, would be incorporated into the final premium.
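The billing recursion can be sketched as follows. The start-of-week and end-of-week frequency functions, the BMS transition rule and all parameter values below are hypothetical placeholders for the fitted models of Section 4.2.2; only the structure (preemptive premium plus the previous week's correction) follows the scheme described above.

```python
# Sketch of the weekly billing recursion with hypothetical frequency functions.
C_BAR = 3000.0   # average claim cost, as in the example

def mu_minus(bms_score):
    """Start-of-week weekly claim frequency from the BMS score (hypothetical)."""
    return 0.0006 * (1.0 + 0.10 * bms_score)

def mu_plus(n_signals):
    """End-of-week weekly claim frequency from observed signals (hypothetical)."""
    return 0.0006 * (1.0 + 0.25 * n_signals)

def billing(weekly_signals, psi=2, l_min=0, l_max=4):
    score, correction, premiums = 0, 0.0, []
    for n in weekly_signals:
        # premium charged at the start of the week: preemptive cost plus the
        # correction A for the previous week, now that it has been observed
        premiums.append(round(C_BAR * mu_minus(score) + correction, 2))
        correction = C_BAR * (mu_plus(n) - mu_minus(score))
        score = max(l_min, min(l_max, score - 1 if n == 0 else score + psi * n))
    return premiums

print(billing([0, 2, 0, 0, 1, 0, 0, 0]))  # risky driver (four signals)
print(billing([0] * 8))                   # safe driver (no signals)
```

As in Table 8, a signal-heavy week triggers both a positive correction and a higher preemptive premium the following week, while clean weeks bring the premium back down toward the baseline.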

Let us consider the Negative Binomial models from Section 4.2.2, so that the two billing schemes can be calculated by considering either severe braking or severe acceleration as the telematics signal. Table 8 details the pricing scheme for both drivers considering EBrak3 telematics data, while the details for the EAclr3 models are given in the Appendix (see Table 12). We also provide two figures (Figure 8), one for each driver, representing the baseline premium (horizontal solid line), the EBrak3-BMS premium (dashed line), and the EAclr3-BMS premium (dotted line). The left-side Y-axis indicates the cost of the three premiums at the end of each week. In addition, we add a histogram (vertical lines) containing the number of signals observed the previous week; these values are indicated on the right-side Y-axis.

Table 8. Pricing scheme with an EBrak3 bonus-malus model for a risky driver (profile 1) and a safe driver (profile 2) in the context of a 2-month insurance policy.

Figure 8. Weekly billing process with a 2-month insurance policy for a risky driver (profile 1) and a safe driver (profile 2).

For the first driver, we notice that the occurrence of telematics signals influences the cost of the premiums paid. Indeed, the occurrence of signals impacts the BMS score, affecting the premium price the following week (see the jumps at weeks 2, 6, and 8). Furthermore, an immediate improvement in driving (i.e., not producing signals the following week) leads to the client being rewarded with a discount. For instance, at weeks 4 and 7, there is a significant reduction from the premium paid the week before. As the driver keeps reducing their BMS score by not producing signals, the premium stabilises at a value close to the baseline premium (see weeks 4 and 5). This stabilisation of the premium is better showcased by the second driver, who reaches the minimal value of the BMS at week 3 and, from then on, pays the same premium. It is worth noting that there is no significant difference between the baseline premium and the premium paid at the minimal BMS score (the difference being lower than $0.01$ €). Thus, for this case study, our BMS model is better suited for riskier drivers, who are very rapidly rewarded for improving their driving style (by not producing signals). However, there is no significant advantage for good drivers, who end up with virtually the same premium as with a traditional approach.

4.4. Discriminatory power through a Gini index

For our case study, we adapted the min-max strategy from Section 3.5, leading to the results in Tables 13, 14 and 15 (available in the Appendix). For each table, several models were considered to compute a baseline premium, and the Gini score was then obtained for each alternative score. As with our previous numerical results, we considered the Poisson/QP and the Negative Binomial/EQNB models with and without static covariates. In the context of our pricing scheme, we considered similar models and the same static covariates for the telematics signal and the start/end-of-week models. For example, the EBrak3-NB premiums from Table 15 are computed using an EQNB model with static covariates for the start and the end of each week; moreover, the end-of-week model uses predictions of EBrak3 events as covariates, which themselves originate from a Negative Binomial distribution with static covariates. We also considered benchmark premiums that do not use telematics signal data in their models. Finally, given that our paper focuses on frequency models, we set the average severity ( $\overline{C}$ ) to $1$ € for all premiums.

Table 13 compares the Gini index for benchmark models and models that incorporate EBrak3 events, while Table 14 showcases the same models with EAclr3 events replacing EBrak3 events. Finally, Table 15 compares all models from the two other tables. The results show that the inclusion of the engine capacity as a covariate has a variable effect on the Gini score; consequently, we cannot confirm that the engine capacity improves discriminatory power and risk identification for every model. We also note that, when a BMS credibility structure is included, the Negative Binomial distribution performs better than the Poisson distribution: yet again, we obtain a lower minimal Gini score from models fitted with the former while keeping the same covariates. Lastly, across all models (other than EAclr3 PO in Table 15), we observe that our scheme improves risk classification when incorporated. Moreover, the overall best models are the Negative Binomial with a telematics credibility structure and without static covariates. Overall, we conclude that the BMS structure showcased in this paper optimises the min-max strategy.
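The ordered-Lorenz Gini index underlying these comparisons (Frees et al., 2011) can be sketched as follows, with simulated premiums and losses; the baseline and alternative scores below are illustrative stand-ins for the fitted premiums.

```python
import numpy as np

def ordered_lorenz_gini(base_premium, alt_score, losses):
    """Gini index from the ordered Lorenz curve of Frees et al. (2011):
    sort by relativity, plot cumulative premium share vs. cumulative loss
    share, and take twice the area between the curve and the diagonal."""
    order = np.argsort(alt_score / base_premium)        # sort by relativity
    prem = np.concatenate([[0.0], np.cumsum(base_premium[order])])
    loss = np.concatenate([[0.0], np.cumsum(losses[order])])
    x, y = prem / prem[-1], loss / loss[-1]
    area = np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1]) / 2)  # trapezoid rule
    return 2.0 * (0.5 - area)

rng = np.random.default_rng(7)
mu = rng.uniform(0.01, 0.2, 1000)          # true claim frequencies
losses = rng.poisson(mu).astype(float)     # severity set to 1, as in Section 4.4
base = np.full(mu.size, mu.mean())         # uninformative flat baseline premium
alt = mu                                   # alternative score that "knows" mu
print(round(ordered_lorenz_gini(base, alt, losses), 3))
```

An informative alternative score pushes the Lorenz curve below the diagonal and yields a positive Gini index; the min-max strategy then retains the baseline whose worst (largest) Gini across alternatives is smallest.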

5. Conclusion

This paper presents a new ratemaking scheme for telematics signals using an iterative approach. One of the main innovations we showcase is the inclusion of a preliminary premium based on previous signal counts. This allows the insurer to maintain a more classical ratemaking structure, charging a premium before a short covered period is observed. We complement this approach by updating rates dynamically in two ways: first, by adjusting the previous premium based on the newly acquired information; and second, by incorporating a bonus-malus score that supplements traditional information to better capture a driver's risk profile. There are several advantages to our approach. First of all, we provide the insurer with two new tools to better understand fluctuations in risk. On the one hand, it is possible to track the difference between the premium paid at the beginning of a period and the one at the end. On the other hand, the bonus-malus score itself can be used as a measure of risk. For example, a driver who keeps producing dangerous telematics signals will maintain a high score, while if they drive more safely, their score will be reduced progressively. In other words, we can incorporate the same effect as a traditional bonus-malus scheme over a much shorter period. This benefits both the driver and the insurer, as risky behaviour can be tracked and corrected immediately.

To better focus our attention on the main contribution of our paper (i.e., the ratemaking scheme), several concessions and simplifications had to be made throughout the results presented. Some of them were due to data constraints. For instance, we acknowledge that there is a significant gap between the information available in the historical dataset and in the telematics dataset, particularly in terms of kilometres driven. However, this situation may not be uncommon at present, since ratemaking with traditional covariates has been performed for years, while telematics applications are much more recent. Nonetheless, we provide numerical adaptations in this regard, which would allow for a transition as larger telematics datasets become available. We also did not have access to the severity of past claims, so we considered an average cost for each claim. Although this simplification is not ideal, bonus-malus models are usually adapted for claim frequency only. Furthermore, it should be noted that both claims and signals are punctual events; however, unlike claims, signals do not have an intrinsic cost, as no loss or payment is directly produced when they occur.

Some of the other simplifications we made can be addressed by extensions in future projects. Our approach focused on one signal event at a time; however, in most scenarios, an insurer would have access to various distinct events with multiple degrees of intensity. Handling multiple events would pose various technical challenges and considerations (see Section 3.3.2), one of them being the dependence between the variables considered. For instance, acceleration and braking are clearly correlated, as one can lead to the other; as such, a dependent multivariate structure would have to be considered. Another possible extension of our research would be to survey the various forms a bonus-malus score can take, to determine the best adaptation to a signal setting, as our paper only explored one structure. Finally, in terms of the numerical methods considered, we purposely focused on parametric models defined through a mean parameter to streamline the structure of our ratemaking scheme. Naturally, at the cost of interpretability, other statistical frameworks may provide better numerical results. For example, as a replacement for linear predictors, smoothing functions could be implemented in the context of generalized additive models (GAMs). Another example is considering statistical learning techniques such as gradient boosting machines (GBMs).

Supplementary material

To view supplementary material for this article, please visit https://doi.org/10.1017/asb.2024.30.

Acknowledgements

Juan Sebastian Yanez gratefully acknowledges financial support from the Fonds de Recherche du Québec Nature et Technologie (FRQNT, grant B3X-346658). Montserrat Guillén thanks support received by the Spanish Ministry of Science AEI/10.13039/TED2021-130187B-I00 and ICREA Academia. In addition, the authors would like to thank the editor and four anonymous referees for their valuable comments on a previous version of this paper.

Competing interests

Montserrat Guillén has received funds from insurance companies, but the funding organisations had no role in the design of the study; in the collection, analysis, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results. The authors declare no other potential conflicts of interest.

A Appendix

A.1 Multivariate Negative Binomial model (MVNB)

Let us consider a mixed random effects model for telematics signals where, for any given week $j$ of any given driver $i$ in $\mathcal{D}^{(t)}$ , we have

\begin{align*} N_{i,j}^{(t)}|\mathbf{X}_{i},\theta_i,E^{(t)}_{i,j}&\sim Poisson\left(\theta_i\lambda_{i,j}\right),\\[5pt] \Theta_i &\sim Gamma\left(\alpha,\alpha\right)\end{align*}

where $ \lambda_{i,j}=E^{(t)}_{i,j}\text{exp}\left(\mathbf{X}_{i}'\boldsymbol{\beta^{(x)}}\right)$ .

In addition, let us consider the cumulative values of $ \lambda_{i,j}$ and $n_{i,j}^{(t)}$ up to and including week $j^*$ . That is, for $0<j^*\leq W_{i}^{(t)}$ , let these values respectively be

\begin{align*} \Lambda_{i,j^*}&=\sum_{j=1}^{j^*} \lambda_{i,j}= \sum_{j=1}^{j^*} E^{(t)}_{i,j}\text{exp}\left(\mathbf{X}_{i}'\boldsymbol{\beta^{(x)}}\right),\\[5pt] m^{(t)}_{i,j^*}&=\sum_{j=1}^{j^*} n^{(t)}_{i,j}.\end{align*}

We can then write the joint distribution of the observed telematics signal counts as:

\begin{align*} p(\mathbf{n}^{(t)})&= \prod_{i=1}^{I}\left\{\left(\frac{\Gamma\left(m^{(t)}_{i,\bullet}+\alpha\right)}{\Gamma\left(\alpha\right)\prod_{j=1}^{W_i^{(t)}} \Gamma\left(n^{(t)}_{i,j}+1\right)}\right)\left(\frac{\alpha}{\Lambda_{i,\bullet}+\alpha}\right)^{\alpha}\prod_{j=1}^{W_i^{(t)}}\left[\left(\frac{\lambda_{i,j}}{\Lambda_{i,\bullet}+\alpha}\right)^{n_{i,j}^{(t)}}\right]\right\},\end{align*}

where

\begin{align*} \Lambda_{i,\bullet}&=\sum_{j=1}^{W_i^{(t)}}\lambda_{i,j},& m^{(t)}_{i,\bullet}=&\sum_{j=1}^{W_i^{(t)}} n_{i,j}^{(t)}.\end{align*}

and where the vector containing all the observed telematics signal counts is given by $\textbf{n}^{(t)}=\left[n_{1,1}^{(t)},\dots,n_{I,W_I^{(t)}}^{(t)}\right]$ .

Let us consider that a driver has been observed for $j^*$ weeks. Then, at exactly the end of week $j^*$ , suppose that the insurer wants to predict the number of events during week $j^*+1$ , given the information from week 1 to week $j^*$ (defined as $\mathcal{H}_{j^*}^{(t)}$ ). The conditional distribution of $N^{(t)}_{i,j^*+1}$ given previous observations up to week $j^*$ becomes

\begin{align*} N^{(t)}_{i,j^*+1}|\mathcal{H}_{j^*}^{(t)}\sim \mathcal{NB}\left(r_{i,j^*+1},\frac{\eta_{i,j^*+1}}{\eta_{i,j^*+1}+\lambda_{i,j^*+1}}\right),\end{align*}

for $0 < j^* < W_i^{(t)}$ , where

\begin{align*} r_{i,j^*+1}&=\alpha+m_{i,j^*},\\[5pt] \eta_{i,j^*+1}&=\alpha+\Lambda_{i,j^*}. \end{align*}

Hence, the conditional probability mass function of $N^{(t)}_{i,j^*+1}|\mathcal{H}_{j^*}^{(t)}$ is given by:

\begin{align*} p\left(n^{(t)}_{i,j^*+1}|\mathcal{H}^{(t)}_{j^*}\right)&= \frac{\Gamma\left(r_{i,j^*+1}+n_{i,j^*+1}^{(t)}\right)}{\Gamma\left(n^{(t)}_{i,j^*+1}+1\right)\Gamma\left(r_{i,j^*+1}\right)}\left(\frac{\eta_{i,j^*+1}}{\eta_{i,j^*+1}+\lambda_{i,j^*+1}}\right)^{r_{i,j^*+1}}\left(\frac{\lambda_{i,j^*+1}}{\eta_{i,j^*+1}+\lambda_{i,j^*+1}}\right)^{n^{(t)}_{i,j^*+1}}.\end{align*}

Accordingly, for $0 < j^* < W_i^{(t)}$ , the conditional mean of $N^{(t)}_{i,j^*+1}|\mathcal{H}_{j^*}^{(t)}$ is given by:

\begin{align*}E\left[N^{(t)}_{i,j^*+1}\,\middle|\,\mathcal{H}^{(t)}_{j^*}\right]=\lambda_{i,j^*+1}\left(\frac{\alpha+m_{i,j^*}}{\alpha+\Lambda_{i,j^*}}\right).\end{align*}
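This conditional mean is straightforward to compute: the a priori mean of the next week is scaled by a credibility factor built from the accumulated past counts and past a priori means. The sketch below uses illustrative weekly means and an assumed value of $\alpha$.

```python
import numpy as np

def mvnb_predict(lam_next, past_counts, past_lams, alpha):
    """MVNB conditional mean for week j*+1 given the first j* weeks."""
    m = np.sum(past_counts)    # m_{i,j*}: accumulated signal counts
    lam = np.sum(past_lams)    # Lambda_{i,j*}: accumulated a priori means
    return lam_next * (alpha + m) / (alpha + lam)

# A driver with a flat a priori weekly mean of 0.10 signals over 10 weeks:
lams = np.full(10, 0.10)
print(round(mvnb_predict(0.10, np.zeros(10), lams, alpha=1.5), 4))   # -> 0.06
print(round(mvnb_predict(0.10, np.array([0, 0, 3, 0, 0, 0, 0, 0, 0, 0]),
                         lams, alpha=1.5), 4))                       # -> 0.18
```

A clean history pulls the prediction below the a priori mean (a bonus), while a single signal-heavy week pushes it well above (a malus), with no cap: this is the behaviour that makes the MVNB benchmark sensitive to the outliers discussed in Section 4.2.2.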

References

Ayuso, M., Guillén, M. and Pérez-Marín, A. M. (2014) Time and distance to first accident and driving patterns of young drivers with pay-as-you-drive insurance. Accident Analysis and Prevention, 73, 125–131.
Ayuso, M., Guillén, M. and Pérez-Marín, A. M. (2016a) Using GPS data to analyse the distance travelled to the first accident at fault in pay-as-you-drive insurance. Transportation Research Part C: Emerging Technologies, 68, 160–167.
Baecke, P. and Bocca, L. (2017) The value of vehicle telematics data in insurance risk selection processes. Decision Support Systems, 98, 69–79.
Boucher, J. P. (2022) Multiple bonus-malus scale models for insureds of different sizes. Risks, 10(8), 152.
Boucher, J. P., Côté, S. and Guillen, M. (2017) Exposure as duration and distance in telematics motor insurance using generalized additive models. Risks, 5(4), 54.
Boucher, J. P. and Inoussa, R. (2014) A posteriori ratemaking with panel data. ASTIN Bulletin: The Journal of the IAA, 44(3), 587–612.
Bühlmann, H. and Straub, E. (1970) Glaubwürdigkeit für Schadensätze. Bulletin of the Swiss Association of Actuaries, 70(1), 111–133.
Cao, J., Li, D., Young, V. R. and Zou, B. (2023) Equilibrium reporting strategy: Two rate classes and full insurance. Journal of Risk and Insurance, 91, 721–752.
Clark, S. J. and Perry, J. N. (1989) Estimation of the negative binomial parameter $\kappa$ by maximum quasi-likelihood. Biometrics, 45, 309–316.
Consul, P. and Famoye, F. (1992) Generalized Poisson regression model. Communications in Statistics - Theory and Methods, 21(1), 89–109.
Consul, P. C. and Jain, G. C. (1973) A generalization of the Poisson distribution. Technometrics, 15(4), 791–799.
Corradin, A., Denuit, M., Detyniecki, M., Grari, V., Sammarco, M. and Trufin, J. (2022) Joint modeling of claim frequencies and behavioral signals in motor insurance. ASTIN Bulletin: The Journal of the IAA, 52(1), 33–54.
Denuit, M., Guillen, M. and Trufin, J. (2019) Multivariate credibility modelling for usage-based motor insurance pricing with behavioural data. Annals of Actuarial Science, 13(2), 378–399.
Dizaji, A. K. and Payandeh Najafabadi, A. T. (2023) Updating bonus-malus indexing mechanism to adjust long-term health insurance premiums. North American Actuarial Journal, 27(3), 546–559.
Frees, E. W., Meyers, G. and Cummings, A. D. (2011) Summarizing insurance scores using a Gini index. Journal of the American Statistical Association, 106(495), 1085–1098.
Frees, E. W., Meyers, G. and Cummings, A. D. (2014) Insurance ratemaking and a Gini index. Journal of Risk and Insurance, 81(2), 335–366.
Gao, G., Meng, S. and Wüthrich, M. V. (2019) Claims frequency modeling using telematics car driving data. Scandinavian Actuarial Journal, 2019(2), 143–162.
Gao, G. and Wüthrich, M. V. (2018) Feature extraction from telematics car driving heatmaps. European Actuarial Journal, 8, 383–406.
Guillen, M., Nielsen, J. P. and Pérez-Marín, A. M. (2021) Near-miss telematics in motor insurance. Journal of Risk and Insurance, 88(3), 569–589.
Guillen, M., Nielsen, J. P., Pérez-Marín, A. M. and Elpidorou, V. (2020) Can automobile insurance telematics predict the risk of near-miss events? North American Actuarial Journal, 24(1), 141–152.
Henckaerts, R. and Antonio, K. (2022) The added value of dynamically updating motor insurance prices with telematics collected driving behavior data. Insurance: Mathematics and Economics, 105, 79–95.
Huang, Y. and Meng, S. (2019) Automobile insurance classification ratemaking based on telematics driving data. Decision Support Systems, 127, 113156.
Hynes, L. M. and Dickey, J. P. (2008) The rate of change of acceleration: Implications to head kinematics during rear-end impacts. Accident Analysis and Prevention, 40(3), 1063–1068.
Kim, S., Kleiber, M. and Weber, S. (2023) Microscopic traffic models, accidents, and insurance losses. ASTIN Bulletin: The Journal of the IAA, 54(1), 1–24.
Lemaire, J. (2013) Automobile Insurance: Actuarial Models (Vol. 4). Springer Science and Business Media.
Lemaire, J., Park, S. C. and Wang, K. C. (2016) The use of annual mileage as a rating variable. ASTIN Bulletin: The Journal of the IAA, 46(1), 39–69.
Lord, D., Geedipally, S. R. and Guikema, S. D. (2010) Extension of the application of Conway-Maxwell-Poisson models: Analyzing traffic crash data exhibiting underdispersion. Risk Analysis: An International Journal, 30(8), 1268–1276.
Ma, Y. L., Zhu, X., Hu, X. and Chiu, Y. C. (2018) The use of context-sensitive insurance telematics data in auto insurance ratemaking. Transportation Research Part A, 113, 243–258.
Nelder, J. A. and Pregibon, D. (1987) An extended quasi-likelihood function. Biometrika, 74(2), 221–232.
Nelder, J. A. and Wedderburn, R. W. (1972) Generalized linear models. Journal of the Royal Statistical Society Series A: Statistics in Society, 135(3), 370–384.
Oh, R., Shi, P. and Ahn, J. Y. (2019) Bonus-malus premiums under the dependent frequency-severity modeling. Scandinavian Actuarial Journal, 2020(3), 172–195.
Okine, A. N. A. (2023) Ratemaking in a changing environment. ASTIN Bulletin: The Journal of the IAA, 53(3), 596–618.
Quddus, M. A., Noland, R. B. and Chin, H. C. (2002) An analysis of motorcycle injury and vehicle damage severity using ordered probit models. Journal of Safety Research, 33(4), 445–462.
Stipancic, J., Miranda-Moreno, L. and Saunier, N. (2018) Vehicle manoeuvres as surrogate safety measures: Extracting data from the GPS-enabled smartphones of regular drivers. Accident Analysis and Prevention, 115, 160–169.
Sun, S., Bi, J., Guillen, M. and Pérez-Marín, A. M. (2021) Driving risk assessment using near-miss events based on panel Poisson regression and panel negative binomial regression. Entropy, 23(7), 829.
Vallarino, A., Rabitti, G. and Chokami, A. K. (2023) Construction of rating systems using global sensitivity analysis: A numerical investigation. ASTIN Bulletin: The Journal of the IAA, 54, 1–21.
Verbelen, R., Antonio, K. and Claeskens, G. (2018) Unravelling the predictive power of telematics data in car insurance pricing. Journal of the Royal Statistical Society Series C: Applied Statistics, 67(5), 1275–1304.
Verschuren, R. M. (2021) Predictive claim scores for dynamic multi-product risk classification in insurance. ASTIN Bulletin: The Journal of the IAA, 51(1), 125.CrossRefGoogle Scholar
Xiang, Q., Neufeld, A., Peters, G. W., Nevat, I. and Datta, A. (2023) A bonus-malus framework for cyber risk insurance and optimal cybersecurity provisioning. European Actuarial Journal, 141.Google Scholar
Yanez, J. S., Boucher, J. P. and Pigeon, M. (2023) Modeling payment frequency for loss reserves based on dynamic claim scores. North American Actuarial Journal, 14, 581621.Google Scholar
Wedderburn, R. W. M. (1974) Quasi-likelihood functions, generalized linear models, and the Gauss-Newton method. Biometrika, 61(3), 439447.Google Scholar
Wüthrich, M. V. (2023) Model selection with Gini indices under auto-calibration. European Actuarial Journal, 13(1), 469477.CrossRefGoogle Scholar
Wüthrich, M. V. (2017) Covariate selection from telematics car driving data. European Actuarial Journal, 7(1), 89108.CrossRefGoogle Scholar
Figure 1. Flowchart of a weekly telematics signal BMS ratemaking scheme.

Figure 2. Case 1: Flowchart of variables from the historical dataset.

Figure 3. Case 2: Flowchart of variables from a merged dataset.

Figure 4. Case 3: Flowchart of variables from a merged dataset.

Table 1. Variables description for the telematics and historical datasets.

Table 2. Descriptive statistics, by driver, for the telematics and historical datasets.

Table 3. Descriptive statistics, by week, for the telematics and historical datasets.

Figure 5. Histograms of EBrak3 and EAclr3 counts for weeks with at least one observed telematics signal (EBrak3 or EAclr3, respectively).

Table 4. Bonus-malus parameters for EBrak3 and EAclr3 events.

Table 5. Z-tests for EBrak3 count models and claim count models using EBrak3 predictions and observations as covariates.

Table 6. AIC for the telematics signal and claim count models.

Table 7. Simulation of the total claim counts in the test dataset.

Figure 6. Total claim counts distribution of the extended test dataset: EBrak3 negative binomial model with traditional factors (left) and without traditional factors (right).

Figure 7. Total claim counts distribution of the extended test dataset: EBrak3 Poisson model with traditional factors (left) and without traditional factors (right).

Table 8. Pricing scheme with an EBrak3 bonus-malus model for a risky driver (profile 1) and a safe driver (profile 2) in the context of a 2-month insurance policy.

Figure 8. Weekly billing process with a 2-month insurance policy for a risky driver (profile 1) and a safe driver (profile 2).

Supplementary material: Yanez et al. supplementary material (File, 272.6 KB).