Biased-randomized algorithms and simheuristics in finance & insurance








Abstract

Managerial decisions in the areas of finance and insurance can often be modeled as combinatorial optimization problems. Frequently, these optimization problems are NP-hard, which justifies the use of metaheuristic algorithms when tackling large-sized instances. In addition, decision-making in real-life financial and insurance activities is usually performed in scenarios under uncertainty. Hence, stochastic versions of the aforementioned NP-hard problems have to be considered, and simulation-optimization methods are required in order to obtain high-quality solutions. This paper analyzes how biased-randomized techniques (which transform greedy heuristics into probabilistic algorithms) and simheuristics (hybridizations of simulation with metaheuristics) can be employed to efficiently cope with a variety of challenging optimization problems, including those set in scenarios under uncertainty.

Keywords: Biased-Randomized Algorithms, Finance, Insurance, Metaheuristics, Optimization, Simheuristics.

AMS Subject classifications: 90-10, 90B50, 90B99, 68W20, 68T20.

Introduction

Numerous managerial challenges in the areas of finance and insurance (F&I) can be modeled as combinatorial optimization problems. Traditionally, exact methods have been employed to determine optimal solutions to these problems. This is the case, for instance, of the classical Markowitz model (Mangram 2013), which minimizes the risk associated with a portfolio of assets while establishing a minimum threshold for its return. Exact methods, however, present certain limitations when solving large-sized portfolio optimization problems with richer, real-life constraints (e.g., investor preferences, cardinality restrictions, market frictions, investment bank policies, etc.), which easily become NP-hard in nature. Under these circumstances, many analytical methods require either simplifying assumptions or extraordinarily long computing times. These limitations call for the introduction of metaheuristic algorithms (Sörensen and Glover 2013), which do not guarantee optimal solutions but allow us to obtain near-optimal ones in reasonably short computing times (Nesmachnow 2014). Recent reviews on the applications of metaheuristics in the F&I arena are provided in (Soler-Dominguez, Juan, and Kizys 2017) and (Doering et al. 2019). In addition to the difficulties already mentioned, uncertainty plays a relevant role in many real-life F&I applications. Hence, it is not surprising that some components of the optimization problem (e.g., investment returns, currency fluctuations, or inflation rates) are better modeled as random variables, or that the mathematical model makes use of probabilistic constraints (e.g., requesting a minimum level of investment return with a user-defined probability). Solving these stochastic versions of NP-hard and large-scale optimization problems can be troublesome and usually requires the use of simulation-optimization methods (Better et al. 2008).

We analyze how biased-randomized algorithms (BRAs) (Grasas et al. 2017) and simheuristics (Juan et al. 2015) can be employed to efficiently cope with a variety of challenging optimization problems in the F&I field. While the former support massive parallelization and can be used to generate high-quality solutions to deterministic versions of rich optimization problems, the latter can be employed to solve stochastic versions of the same problems. To some extent, both methodologies combine simulation principles with heuristic algorithms. However, while biased-randomization techniques (Grasas et al. 2017) make use of Monte Carlo simulation to induce an oriented (non-uniform) random behavior in a constructive heuristic (which can also be complemented with different local search procedures and encapsulated inside a multi-start framework following (Martí 2003)), simheuristics deal with uncertainty by integrating a simulation model (of any type) inside a metaheuristic framework (Chica et al. 2020). Both approaches have been successfully employed to solve challenging optimization problems, especially in the areas of transportation & logistics and manufacturing & production. However, this paper focuses on analyzing their potential in the F&I area. To achieve this goal, the paper reviews recent works on F&I applications of biased randomization and simheuristics, and draws from their particular results more general insights that apply across different optimization problems.

The remainder of the paper is structured as follows: Section 2 provides an updated overview of the applications of metaheuristic algorithms in the finance and insurance fields. Section 3 introduces the fundamental concepts behind biased-randomization techniques, which allow us to extend a constructive heuristic into a probabilistic algorithm, while Section 4 reviews some recent applications of BRAs in the financial area. A similar strategy is followed in Sections 5 and 6 for the concept of simheuristics. Section 7 discusses the managerial perspective on how these Operations Research methods can support efficient decision-making in the area. Finally, Section 8 highlights the main findings and contributions of this work and concludes it.

Metaheuristics in Finance & Insurance

Metaheuristics are a class of versatile numerical methods that are conceptually simple, easy to implement, and require relatively little computational time, making them attractive for problem-solving in knowledge areas in which real-time decisions are required. The fast-paced nature, as well as the extraordinary internationalization and integration, of financial markets and institutions has made the decision-making process more complex. Moreover, increasing regulation of the sector has added a non-negotiable set of constraints for practitioners, calling for problem-solving approaches that can model these rich optimization problems in banks, central banks, institutional investors, and insurance firms. Overviews of financial problems that have been solved using metaheuristics are provided in (Soler-Dominguez, Juan, and Kizys 2017) and (Doering et al. 2019).

In essence, many financial optimization problems can be modeled as enriched variants of the classical portfolio optimization problem (Markowitz 1952) and include rich portfolio optimization, index tracking and its enhancement, credit risk assessment, stock investments, financial project scheduling, option pricing, feature selection, as well as bankruptcy and financial distress prediction.

From an institutional standpoint, a second target area for metaheuristic applications has evolved: asset and liability management (ALM). ALM is concerned with the optimal allocation of assets and liabilities in a way that not only allows for liabilities to be covered at all times, but also for long-term profit maximization. In a way, ALM serves as the strategic umbrella framework for the operative decisions in portfolio optimization and in the credit risk assessment of individual transactions.

More recently, (Saiz et al. 2022) identify the main clusters of research interest in portfolio optimization. A main finding is that, once large instances with complex constraints are the subject of optimization, metaheuristics are a popular approach. However, there is still a discrepancy between practitioners’ demands and the state of the art in optimization (Urli and Terrien 2010). The inclusion of more realistic constraints and components, as well as the reduction in computing times achieved through the application of metaheuristics, has broadened the F&I community’s interest in this research field, indicating that the possible gap between theory and practical applications is narrowing.

Biased-Randomized Algorithms

Greedy constructive heuristics are iterative procedures that build a solution from a list of possible candidate movements, which is sorted according to previously specified criteria (e.g., profit, savings, costs, etc.). These algorithms are deterministic, since they construct the same solution at repeated executions. The construction process is based on the list item that yields the best short-term benefit at each step (i.e., the process relies on selecting, at each step, the solution-building item that improves the objective value of the incumbent solution as much as possible). This results in a poor exploration of the solution space, unless more complex search techniques, such as local searches or perturbation movements, are incorporated into the solution-building process, which in turn increases computing times. Well-studied examples of such heuristics include the savings heuristic for the vehicle routing problem (Clarke and Wright 1964), the path-scanning heuristic for the arc routing problem (Golden, DeArmon, and Baker 1983), and the NEH heuristic for the flow-shop problem (Nawaz, Enscore Jr., and Ham 1983).

As some authors argue, better solutions can be generated through a process called biased randomization (Grasas et al. 2017). It consists of using a skewed probability distribution to assign a weighted selection probability to each item in the sorted list. The skewness ensures that the more promising items at the top of the list are more likely to be selected, but also guarantees that slightly differing solutions, which are still based on the construction logic of the underlying heuristic, are generated when the algorithm is executed multiple times. As more alternative biased-random variations are generated in this manner, the chance that some of the “near-greedy” solutions outperform the one generated by the deterministic greedy heuristic increases. Algorithm 1 shows a pseudo-code description of a basic BRA.

bestSol \(\leftarrow\) execute the deterministic (greedy) heuristic
while the stopping criterion is not met do
  newSol \(\leftarrow\) construct a new solution using biased-randomized selection
  if newSol outperforms bestSol then bestSol \(\leftarrow\) newSol
end while
return bestSol

Algorithm 1: Pseudo-code of a basic biased-randomized algorithm (BRA).
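To make this template concrete, the following Python sketch shows a minimal multi-start BRA. It is an illustrative implementation, not code from the cited works: the candidate list, the greedy sorting key, and the cost function are hypothetical placeholders to be supplied by the user, and the biased selection uses the geometric distribution discussed below.

import math
import random

def biased_select(n_candidates, p=0.5):
    """Geometric-biased index in [0, n_candidates): position 0 (the greedy
    choice) is the most likely one, and probabilities decay as p*(1-p)^k."""
    u = 1.0 - random.random()            # u in (0, 1], avoids log(0)
    k = int(math.log(u) / math.log(1.0 - p))
    return min(k, n_candidates - 1)

def construct_solution(candidates, sort_key, p=0.5):
    """Build one solution by repeatedly removing a biased-random pick from the
    sorted list of remaining candidates (the list is sorted once here; many
    BRAs re-sort after each insertion when the criterion depends on the
    partial solution)."""
    remaining = sorted(candidates, key=sort_key)
    solution = []
    while remaining:
        solution.append(remaining.pop(biased_select(len(remaining), p)))
    return solution

def basic_bra(candidates, sort_key, cost, n_iterations=1000, p=0.5):
    """Multi-start BRA: start from the (almost) deterministic greedy solution
    and keep the best biased-randomized variant found."""
    best_sol = construct_solution(candidates, sort_key, p=0.999)   # ~greedy
    best_cost = cost(best_sol)
    for _ in range(n_iterations):
        new_sol = construct_solution(candidates, sort_key, p)
        new_cost = cost(new_sol)
        if new_cost < best_cost:                 # minimization is assumed
            best_sol, best_cost = new_sol, new_cost
    return best_sol, best_cost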

It is important to note that this approach ensures a broader exploration of the solution space. Biased randomization can be seen as a natural extension of the basic greedy randomized adaptive search procedure (GRASP) (Resende and Ribeiro 2010). Whereas the use of empirical probability distributions requires a time-consuming fine-tuning of parameters, the benefit of employing a theoretical probability distribution (e.g., geometric or decreasing triangular) lies in the possibility of quickly generating different variations with few and easy-to-set parameters. Figure 1 illustrates the effect of the parameter of a geometric probability distribution (\(p \in \{0.3, 0.7\}\)) on the selection probabilities assigned to the elements of the sorted list during the iterative construction of a biased-randomized solution.

Figure 1: Sampling elements from a list using a geometric distribution.

Thus, for \(p = 0.7\), the probability of being selected next is much higher for those items at the top of the list, bringing the behavior closer to that of the classical heuristic. At one extreme (\(p \rightarrow 1\)), the construction behaves exactly like the greedy heuristic. At the other extreme (\(p \rightarrow 0\)), perfect diversification would be achieved, rendering the ordering of the list superfluous. Every parameter value between those two extremes yields a different degree of randomization. Usually, a trade-off between preserving the original sorting logic and introducing some degree of randomization, obtained by choosing a parameter value between the two extreme cases, yields the most promising results.
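As a quick numeric illustration of this trade-off (hypothetical values, mirroring the pattern in Figure 1), the snippet below prints the probability \(p(1-p)^k\) assigned to each of the first positions \(k\) of the sorted list for the two parameter values:

def geometric_weights(p, n_positions=8):
    """Selection probability of the element at (0-based) position k of the
    sorted candidate list when sampling with a geometric distribution."""
    return [p * (1.0 - p) ** k for k in range(n_positions)]

for p in (0.3, 0.7):
    print(f"p = {p}:", ", ".join(f"{w:.3f}" for w in geometric_weights(p)))
# p = 0.3 spreads the probability over many positions (more diversification),
# while p = 0.7 concentrates it near the top of the list (closer to greedy).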

BRAs in Finance & Insurance

As previously shown, the sorting logic in a greedy heuristic may only incompletely capture the factors influencing the quality of the solution. In order to explore a wider search space, randomization can be introduced to capture any effects the modeler might be unaware of. While biased-randomized algorithms have been increasingly employed in production (Gonzalez-Neira et al. 2017), logistics (Estrada-Moreno et al. 2020), and transportation (Almouhanna et al. 2020), the evaluation of financial and insurance products is a relatively new field of application.

A richer and more realistic version of the portfolio optimization problem is introduced in (Kizys et al. 2019). The authors develop an original algorithm (ARPO) to address it, based on the combination of iterated local search (LS), quadratic programming (QP), and a biased-randomization strategy. During the portfolio construction, a new asset is introduced according to a compatibility criterion, namely its covariance with the assets already in the portfolio. This is expected to favor portfolio diversification, thus reducing the portfolio risk. Assigning a selection probability based on a geometric distribution with parameter \(\beta\) introduces randomization in the construction of the solution. For illustrative purposes, some of the results obtained in (Kizys et al. 2019) are summarized in Figure 2. These results show that the ARPO algorithm (LS+QP in the figure) is able to provide the same or even better results than other state-of-the-art approaches, including the FD+QP and SD+QP algorithms proposed in (Gaspero et al. 2011), the GA+QP algorithm introduced in (Moral-Escudero, Ruiz-Torrubiano, and Suárez 2006), and the TS algorithm described in (Schaerf 2002). Moreover, ARPO is able to achieve these competitive results in the order of seconds, while other approaches report times in the order of minutes.

Figure 2: A comparison of ARPO with other state-of-the-art approaches.
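The covariance-based construction described above can be illustrated with the following sketch. It is a simplified illustration of the idea, not the actual ARPO implementation: candidate assets are ranked by their average covariance with the assets already selected, and the next asset is drawn with a geometric bias (parameter \(\beta\)) toward the most compatible candidates.

import math
import random
import numpy as np

def geometric_pick(n, beta):
    """Biased index in [0, n): position 0 is the most likely choice."""
    u = 1.0 - random.random()
    return min(int(math.log(u) / math.log(1.0 - beta)), n - 1)

def build_portfolio(cov, n_assets, beta=0.7, seed_asset=0):
    """Iteratively add assets, favoring those with a low average covariance
    with the assets already in the portfolio (better diversification)."""
    selected = [seed_asset]
    candidates = [i for i in range(cov.shape[0]) if i != seed_asset]
    while len(selected) < n_assets and candidates:
        # Rank remaining assets by compatibility with the current portfolio.
        candidates.sort(key=lambda i: cov[i, selected].mean())
        selected.append(candidates.pop(geometric_pick(len(candidates), beta)))
    return selected

# Hypothetical usage with a random positive semi-definite covariance matrix.
rng = np.random.default_rng(42)
A = rng.normal(size=(30, 30))
cov_matrix = (A @ A.T) / 30.0
print(build_portfolio(cov_matrix, n_assets=10, beta=0.7))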

Parametric catastrophe insurance is a transparent instrument that transfers the financial risk of experiencing a natural catastrophe, such as an earthquake. For seismic activity, the decision on whether or not a payment is made can depend on location and on a magnitude threshold. This setting was analyzed with different statistical and machine learning techniques in (Calvet, Lopeman, et al. 2017). The definition of the specific magnitude threshold at each location that maximizes efficiency for the insured, subject to a budget constraint, is researched in (Bayliss, Guidotti, et al. 2020). The proposed heuristic proceeds by sequentially lowering the threshold of the location that yields the greatest increase in efficiency relative to the increase in the trigger rate, while still satisfying all constraints. Biased randomization is introduced by assigning an individual selection probability to each location cube based on a geometric distribution. The parameter determining the trade-off between diversification and retaining the original sorting logic was found to be most effective at low diversification levels. The authors went one step further and saved the partial solutions from each step as a means to restart the algorithm with more meaningful initial solutions, which were then subjected to the same procedures, thus producing higher-quality initial and, in turn, final solutions.

Fundamentals of Simheuristics

Financial markets are the epitome of uncertainty, being characterized by random returns, noisy covariances, and a reliance on retrospective sample statistics for modeling (Kizys et al. 2022). It is thus a logical extension of metaheuristic approaches to consider combinations with simulation techniques in order to address stochasticity in one or more components of a combinatorial optimization problem. This can mean introducing probabilistic constraints instead of rigid ones (e.g., returns that must be achieved with a given probability), considering stochastic objective functions (e.g., random revenues), or a combination thereof. Given a set of \(n\) assets, an example of a stochastic portfolio optimization problem is given next:



\[\min \displaystyle f(x) = \Theta \left[ \sum_{i=1}^{n}\sum_{j=1}^{n} S_{ij} x_i x_j \right]
\label{eq:objective}\qquad(1)\]

subject to:



\[\sum_{i=1}^{n} x_i = 1
\label{eq:budget}\qquad(2)\]



\[P \left( \sum_{i=1}^{n} R_i x_i \geq r \right) \geq p
\label{eq:return}\qquad(3)\]



\[0 \leq x_i \leq \delta_i, \quad \forall i \in {\{1,2,\ldots,n\}}
\label{eq:delta}\qquad(4)\]



\[x_i \in [0,1], \quad \forall i \in {\{1,2,\ldots,n\}}.
\label{eq:xi}\qquad(5)\]

As declared in Equation (5), \(x_i \in [0,1]\) represents the weight or fraction of the investment allocated to asset \(i\), \(\forall i \in \{1,2, \ldots, n\}\). Likewise, \(S_{ij}\) represents the stochastic covariance of assets \(i\) and \(j\), while Equation (1) aims at minimizing the investment risk expressed as a function of the stochastic covariance in the portfolio (e.g., \(\Theta\) could represent the expected value of the aggregated covariance in the portfolio, or any other statistic that the manager wishes to minimize). Equation (2) simply states that all the available budget is used to build the portfolio. Equation (3) is a probabilistic constraint stating that the probability of obtaining at least a return value of \(r > 0\) is greater than a value \(p \in (0, 1)\) (both \(r\) and \(p\) exemplify parameters defined by the investor). Here, \(R_i\) refers to a random variable modeling the return associated with asset \(i\), \(\forall i \in \{1,2, \ldots, n\}\). Finally, Equation (4) imposes an additional threshold on the maximum quantity that can be invested in each individual asset. Notice that other realistic constraints might appear, such as: (i) minimum and maximum values for the number of assets to be included in the portfolio; (ii) a threshold on the minimum quantity that can be invested in any given asset if it is selected; or (iii) a subset of mandatory assets that need to be included in the portfolio. These rich constraints make the optimization problem NP-hard, even in its deterministic version (Kizys et al. 2019).
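The sketch below illustrates, under simplifying assumptions (a multivariate normal return model and hypothetical parameter values), how a candidate portfolio \(x\) could be evaluated against this model via Monte Carlo simulation: the risk statistic of Equation (1) and the probabilistic return constraint of Equation (3) are estimated from a set of replications.

import numpy as np

def simulate_portfolio(x, mean_returns, cov_returns, r=0.02, p=0.95,
                       n_runs=1000, rng=None):
    """Monte Carlo assessment of a portfolio x under random returns.

    Returns the expected aggregated covariance (an example of the statistic
    Theta in Eq. 1), the empirical probability of reaching the target return
    r (Eq. 3), and whether that probability meets the required level p."""
    rng = rng or np.random.default_rng()
    # Sampling returns from a multivariate normal is only a modeling assumption;
    # a simheuristic would plug in best-fit theoretical or empirical distributions.
    returns = rng.multivariate_normal(mean_returns, cov_returns, size=n_runs)
    portfolio_returns = returns @ x
    prob_target = float(np.mean(portfolio_returns >= r))
    expected_risk = float(x @ cov_returns @ x)
    return expected_risk, prob_target, prob_target >= p

# Hypothetical usage: four assets and an equally weighted portfolio.
mu = np.array([0.03, 0.05, 0.02, 0.04])
B = np.random.default_rng(1).normal(scale=0.05, size=(4, 4))
S = B @ B.T                                  # positive semi-definite covariance
risk, prob, feasible = simulate_portfolio(np.full(4, 0.25), mu, S)
print(f"risk = {risk:.4f}, P(return >= r) = {prob:.2f}, feasible = {feasible}")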

Like the other methodological approaches introduced in this paper, this simulation-optimization approach is heuristic in nature: it does not guarantee finding the optimal solution, but it will find a robust, high-quality one. Simheuristic approaches rely on two important assumptions. Firstly, the stochastic version of an optimization problem can be considered a generalization of the deterministic one, since the deterministic problem is the particular instance in which the variance of the stochastic variables equals zero. Secondly, it is assumed that, in scenarios with moderate uncertainty, the metaheuristics designed to solve well-studied deterministic optimization problems yield high-quality solutions that are likely to be good solutions for the stochastic formulation as well. This suggests that the intuitive approach of extending an existing metaheuristic framework with simulation techniques, in order to account for the added uncertainty, can be expected to yield satisfactory results. However, in environments with extreme uncertainty levels, the classical aim of maximizing traditional economic measures (e.g., maximizing return on investment) may lead to extremely diverse individual outcomes, so it might be reasonable to instead focus the search on finding robust solutions.

In environments with low to medium levels of uncertainty, the approach can thus be summarized as follows: (i) obtain the deterministic version of the stochastic problem by replacing all stochastic variables with their expected values; and (ii) develop a metaheuristic framework that iteratively and efficiently explores the solution space. This should yield a set of promising solutions. The algorithm must also evaluate both the quality and the feasibility of these solutions under uncertainty. Simulation methods offer the possibility of modeling each random variable using a theoretical or empirical best-fit probability distribution, so as not to depend on assumptions of normal or exponential behavior.

A feedback cycle between the metaheuristic and the simulation component follows this logic: in a first stage, the promising solutions found for the deterministic version are sent to the simulation component for a quick evaluation employing a reduced number of replication runs. This serves two main purposes: on the one hand, promising solutions for the stochastic problem can be ranked; on the other, the information gathered on a promising solution can provide feedback to the metaheuristic so it explores a certain region of the search space more intensively. The extensive search performed by the metaheuristic remains affordable because the computational effort of this first simulation stage, with its reduced number of replications, is kept to a minimum. In a second simulation stage, estimates of higher accuracy and precision are obtained, via more extensive simulation runs, only for the most promising solutions found (Rabe, Deininger, and Juan 2020). The simulation-optimization process in a simheuristic is summarized in Algorithm 3.

Algorithm 3: The simulation-optimization process in a simheuristic.
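The feedback cycle just described can be sketched in Python as follows. The metaheuristic move (generate_neighbor), the stochastic evaluation (simulate), and the solution representation are hypothetical placeholders; the point of the sketch is the two-stage use of simulation, with few replications during the search and many replications reserved for the elite set.

def simheuristic(initial_solution, generate_neighbor, simulate,
                 n_iterations=1000, short_runs=30, long_runs=2000,
                 elite_size=10):
    """Generic simheuristic loop: metaheuristic search guided by quick
    simulations, followed by an accurate re-evaluation of the elite set."""
    best = initial_solution
    best_cost = simulate(best, n_runs=short_runs)    # quick stochastic estimate
    elite = [(best_cost, best)]
    for _ in range(n_iterations):
        candidate = generate_neighbor(best)
        # Stage 1: cheap simulation (few replications) to rank the candidate
        # and feed stochastic information back into the search.
        cand_cost = simulate(candidate, n_runs=short_runs)
        if cand_cost < best_cost:
            best, best_cost = candidate, cand_cost
        elite.append((cand_cost, candidate))
        elite = sorted(elite, key=lambda t: t[0])[:elite_size]
    # Stage 2: accurate estimates (many replications) for the elite set only.
    refined = [(simulate(sol, n_runs=long_runs), sol) for _, sol in elite]
    return min(refined, key=lambda t: t[0])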

As already established, the inclusion of uncertainty also introduces a new dimension, robustness, into the decision maker's considerations from a risk management standpoint. With a stochastic objective function, she might be interested in comparing a set of solutions with similarly high expected values in terms of the probability distribution of that value. Simulation runs can be employed to derive information on the probability distribution of the quality of each solution. This capability of performing additional risk analysis, by first identifying a wide spectrum of promising solutions in the metaheuristic component and then evaluating them during the simulation stage, is a major strength of simulation-based approaches in general, and of simheuristics in particular.
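For instance, two solutions with similar expected returns can be compared on dispersion and downside risk using the outputs of the simulation replications, as in the following illustrative snippet (all numbers are hypothetical):

import numpy as np

def risk_profile(simulated_values):
    """Summarize the distribution of a solution's quality across replications."""
    v = np.asarray(simulated_values)
    return {"mean": round(float(v.mean()), 4), "std": round(float(v.std(ddof=1)), 4),
            "5% quantile": round(float(np.quantile(v, 0.05)), 4)}

rng = np.random.default_rng(7)
# Hypothetical replication outputs (e.g., portfolio returns) for two solutions
# with similar expected values but very different variability.
solution_a = rng.normal(loc=0.040, scale=0.010, size=2000)
solution_b = rng.normal(loc=0.041, scale=0.030, size=2000)
for name, sims in (("A", solution_a), ("B", solution_b)):
    print(name, risk_profile(sims))
# A risk-averse decision maker may prefer solution A: a slightly lower mean,
# but a much better 5% quantile (i.e., less downside risk).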

Another aspect to consider is the potential additional use of the best solution found by the metaheuristic for the deterministic version of the optimization problem. In many real-life systems, increasing the uncertainty level might generate additional costs that will eventually increase the overall system expected cost. Thus, for instance, increasing the variance in random variables such as investment returns or future incomes might lead to random observations falling short of the allowed minimum return or failing to guarantee the coverage of future liabilities, thus causing penalty costs. In those cases, it is possible to use the value \(det(s^*)\) of the near-optimal solution \(s^*\) for the deterministic version of the problem as a lower bound for the value \(stoch(s^{**})\) of the optimal solution \(s^{**}\) for the stochastic version. Whenever \(s^*\) is applied in a stochastic environment with the goal of minimizing costs, its value \(stoch(s^*)\) is an upper bound of the optimal solution for the stochastic version, i.e.: \(det(s^*) \leq stoch(s^{**}) \leq stoch(s^*)\).
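A small numerical illustration with hypothetical values: suppose the near-optimal deterministic solution yields an expected cost \(det(s^*) = 100\), and simulating \(s^*\) under uncertainty (including penalty costs for unmet return or liability targets) gives \(stoch(s^*) = 112\). The optimal stochastic cost is then bracketed as

\[100 = det(s^*) \leq stoch(s^{**}) \leq stoch(s^*) = 112,\]

so the gap of 12 units quantifies the maximum possible benefit of solving the stochastic version instead of simply reusing the deterministic solution.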

Figure 3 compares different simulation and optimization methods with respect to five dimensions: (i) capacity to generate optimal solutions (optimality); (ii) flexibility in modeling complex systems (modeling); (iii) capacity for modeling uncertainty (uncertainty); (iv) computing time required to provide the requested output (computing time); and (v) capacity for dealing with large-sized instances (scalability). Guaranteed optimality of a solution can only be achieved through exact methods, which, however, might require unreasonable computing times for large-scale NP-hard problems. Metaheuristics address this issue and can find near-optimal solutions for these large-scale NP-hard problems in relatively short computing times, but they fail to accurately capture the intricacies of system interactions, particularly when uncertainty is involved. Considered individually, simulation methods offer a plethora of techniques to model uncertainty, but they lack the optimization capabilities of exact and metaheuristic methods. Thus, by extending the strengths of metaheuristics with the uncertainty-modeling capabilities of simulation techniques, simheuristics perform well across all five dimensions, and they can also outperform exact methods on large-scale instances of NP-hard optimization problems with regard to computing times and scalability (Juan et al. 2021).

Figure 3: Comparison of different methodologies.


Simheuristics in Finance & Insurance

As previously mentioned, heuristics and metaheuristics have been shown to be excellent methods for solving various F&I problems of interest quite precisely and with a very limited use of computational resources. However, the conceptual framework of the financial world is essentially driven by uncertainty and, thus, by stochasticity. Therefore, we should not be satisfied with a deterministic solution if it is not sufficiently reasonable once uncertainty is taken into account. On the one hand, approaching our problem from a deterministic point of view limits our ability to model reality, since solutions to complex situations can escape optimization approaches that do not consider uncertainty. On the other hand, the variables that come into play in a financial system exhibit random behavior. Therefore, metaheuristic algorithms must necessarily be complemented with simulation techniques in the F&I field. Only then will we be able to obtain solutions that respond efficiently to realistic situations.

In the scientific literature, one can find excellent examples of the success of these combined techniques, covering a wide spectrum of applications. For example, a project portfolio selection problem is analyzed in (Panadero et al. 2020). A series of restrictions are established, such as a minimum budget for each project, the forced selection of specific projects, minimum and maximum numbers of selected projects, and random behavior associated with the generated cash flows. The solution to the problem is obtained through the application of a simheuristic based on a variable neighborhood search metaheuristic. Similarly, a stochastic version of the classical portfolio optimization problem is analyzed in (Kizys et al. 2022). The authors adopt a realistic assumption of uncertainty surrounding the inputs. In particular, it is considered that both the returns observed in the past and the correlations between financial assets follow a stochastic behavior, which is modeled by incorporating noise. A metaheuristic algorithm is applied to generate promising candidate solutions, which are then processed by a simulation component in order to obtain Pareto non-dominated solutions. A novel ALM model is introduced in (Bayliss, Serra, et al. 2020). Here, the optimal asset-liability assignment of an insurance firm is investigated by efficiently aggregating fixed-income assets to match the outstanding long-term liabilities, so that the firm's overall benefit at the end of the planning horizon is maximized. Uncertainty is incorporated on both the asset and the liability sides, which invites the use of simulation together with a biased-randomized heuristic. A safety margin and a minimum reliability threshold are also considered. Likewise, another study defines a multi-period portfolio optimization problem in which obligations are added over time, so that it can be understood and formulated as an ALM problem where assets are represented by equities (Nieto et al. 2022). A simheuristic algorithm determines which purchases and sales must be made in the future to fulfill the obligations and to maximize the terminal wealth, given a specific level of risk aversion. The facts that asset balances may never become negative and that prices evolve randomly are tackled by incorporating simulation in the evaluation of the objective function, while the optimization component is based on a genetic algorithm (GA) (Mirjalili 2019). Figure 4 summarizes some of the results obtained in (Nieto et al. 2022): one of the key performance indicators considered, the ratio between deviation and utility, can be improved (reduced in this case) by incorporating a simheuristic component into an already existing GA.

Figure 4: Enhancing a GA with a simheuristic for the stochastic ALM problem.

Managerial Insights

Financial institutions have the purpose of efficiently managing the economic resources that are made available to them. This purpose has two implications. On the one hand, the fact that they are entities that manage financial resources requires the search for the maximum possible profitability. On the other hand, the fact that they are financial institutions implies that these resources have an external origin; that is, the entity must answer to third parties for the outcome of management's decisions. This is common to the three main areas of the sector: banking, collective investment institutions or mutual funds, and insurance companies. The problem of jointly managing assets and liabilities is a recurring one whose first analysis dates back to 1938 (Macaulay 1938). Hence, some of the first studies are based on the duration of a cash flow, i.e., the present-value-weighted average time of its payments. The underlying idea is that, in the event of a slight change in interest rates, the present value of the cash flow basically depends on its duration. Therefore, if the assets and liabilities coincide in their duration, the final value of the balance sheet is immune to disturbances in the interest rate. This approach, although still used today, is far from comprehensive enough when obligations to third parties have to be met. It can be regarded as a solution to the valuation of the entity, but not to its management. Furthermore, the market rate has shown tremendous volatility in the long term. For this reason, over the last decades, alternatives that allow for active management in a stricter sense have been emerging.

For example, in the case of ALM we talk about matching cash flows, that is, which financial plan the manager should follow to be able to cover her obligations to third parties while also obtaining the maximum possible return. The problems involved in balancing both objectives are quite varied. The investment term is always a challenging issue, because the maturities available in the capital market are usually shorter than the obligations acquired by financial institutions, and especially by insurers. The capital market also imposes a relevant restriction: liquidity. This is a challenge when looking for the best options to match the returns on investments with the liabilities in the medium and long term. The credit quality of fixed-income issuers, or its counterpart in equities (volatility), constitutes a stochastic condition that must be taken into account. It makes no sense to try to allocate investments with a certain degree of uncertainty to obligations with minimum guaranteed conditions. On the contrary, one should match the degree of certainty or uncertainty of both investments and obligations. A similar argument can be made for other financial challenges, such as the portfolio optimization problem or, in general, any risk management problem. There is another actor in this puzzle: the legislator. Naturally, financial entities are subject to strict control by supervisory bodies. In Europe, this role is played by the European Insurance and Occupational Pensions Authority (EIOPA) in the area of insurance operations, and by the European Banking Authority (EBA) in the area of banking and investment. Each regulator has its own requirements and limitations, and, although it is not disputed that they all respond to the political or socio-economic needs of the market they regulate, from the point of view of building a model they can seem very capricious. The legislator also adds an additional source of uncertainty: regulation is neither stable over time nor predictable. Finally, another element to consider is the set of agreed conditions that define our obligations, i.e., the commercial conditions or the requirements defined by the risk profile of our client. In short, when it comes to solving a management problem of this nature, we find ourselves with many limitations of a practical nature. Most deterministic models are too simple and do not respond to these real-life needs. At the same time, some classical solving techniques are not capable of solving stochastic models that meet a good part of our needs.

It seems obvious that the only way out is to settle for a technique that, although it does not give us the exact value that we can consider optimal, can provide us with an operationally good approximation. In fact, in the financial market, having a high-quality result in a short computing time can be considered a near-optimal situation, since this market changes every second. Recent advances in heuristic optimization combined with simulation seem promising, since these techniques allow managers to propose realistic models that can solve a large part of the real-life challenges, thus guaranteeing efficient management. As a result, managers can reduce the typical transaction costs due to portfolio re-adjustments, as well as risks that otherwise cannot be avoided. In particular, the development of simheuristics in the F&I field can improve the overall efficiency of the sector, since the legislator could eliminate certain restrictions that are only justified by the absence of reliable calculation methods. It is evident that the existence of optimization-simulation methods capable of dealing with rich and real-life F&I challenges allows the legislator to impose less severe conditions on the firms.

Conclusions & Future Work

This paper has discussed how biased-randomized algorithms and simheuristics are increasingly being used in financial applications. Biased-randomization techniques allow us to easily transform a greedy heuristic into a probabilistic algorithm, which is achieved by employing a skewed probability distribution. These randomized algorithms can then be run in parallel to obtain high-quality solutions in short computing times, even for challenging optimization problems. They can also be employed inside more complex multi-start or metaheuristic frameworks if more time is available to perform the computations. In addition, simheuristics allow managers to include uncertainty in their optimization models. This is accomplished in a natural way by integrating simulation inside a metaheuristic framework. Both methodologies can also be used together, and many published simheuristics indeed employ biased-randomized strategies.

Some of the financial challenges where the aforementioned methodologies have been employed so far include rich and stochastic versions of the portfolio optimization problem and the asset-liability management problem. In different computational experiments, the benefits of using these optimization-simulation approaches have been shown. In particular, when combined with parallel computing, biased-randomized algorithms can be used to quickly generate high-quality solutions in situations in which dynamic conditions demand re-optimizing the problem frequently (agile optimization, as in (Martins et al. 2021)). Likewise, simheuristics can provide noticeable improvements over a more classical approach in which optimal or near-optimal solutions to the deterministic version of an optimization problem are applied to a real-life situation where stochastic uncertainty is present.

Since many real-life financial challenges can be related to portfolio optimization, risk management, and asset-liability problems, there is still a vast area to cover regarding the use of the proposed methodologies. In particular, some future research lines for scientists and practitioners working at the intersection between Finance and Operations Research are described next: (i) both biased-randomized algorithms and simheuristics can be combined with machine learning methods to tackle financial optimization problems with dynamic inputs (e.g., dynamic correlations between assets that might depend upon the current status of the portfolio), thus leading to learnheuristics (Calvet, Armas, et al. 2017); and (ii) simheuristics can also be combined with fuzzy logic, so that they consider not only stochastic uncertainty but also uncertainty of a non-stochastic nature (Oliva et al. 2020), which might be really useful for including expert predictions in optimization models.

Acknowledgments

This work has been partially supported by the collaboration agreement between Divina Pastora Seguros and the Universitat Oberta de Catalunya.

References

Almouhanna, A., C. L. Quintero-Araujo, J. Panadero, A. A. Juan, B. Khosravi, and D. Ouelhadj. 2020. “The Location Routing Problem Using Electric Vehicles with Constrained Distance.” Computers & Operations Research 115: 104864.
Bayliss, C., R. Guidotti, A. Estrada-Moreno, G. Franco, and A. A. Juan. 2020. “A Biased-Randomized Algorithm for Optimizing Efficiency in Parametric Earthquake (Re) Insurance Solutions.” Computers & Operations Research 123: 105033.
Bayliss, C., M. Serra, A. Nieto, and A. A. Juan. 2020. “Combining a Matheuristic with Simulation for Risk Management of Stochastic Assets and Liabilities.” Risks 8 (4): 131.
Better, M., F. Glover, G. Kochenberger, and H. Wang. 2008. “Simulation Optimization: Applications in Risk Management.” International Journal of Information Technology & Decision Making 7 (04): 571–87.
Calvet, L., J. de Armas, D. Masip, and A. A. Juan. 2017. “Learnheuristics: Hybridizing Metaheuristics with Machine Learning for Optimization with Dynamic Inputs.” Open Mathematics 15 (1): 261–80.
Calvet, L., M. Lopeman, J. de Armas, G. Franco, and A. A. Juan. 2017. “Statistical and Machine Learning Approaches for the Minimization of Trigger Errors in Parametric Earthquake Catastrophe Bonds.” SORT-Statistics and Operations Research Transactions, 373–92.
Chica, M., A. A. Juan, C. Bayliss, O. Cordón, and W. D. Kelton. 2020. “Why Simheuristics? Benefits, Limitations, and Best Practices When Combining Metaheuristics with Simulation.” SORT 44 (2): 311–34.
Clarke, G., and J. W. Wright. 1964. “Scheduling of Vehicles from a Central Depot to a Number of Delivery Points.” Operations Research 12 (4): 568–81.
Doering, J., R. Kizys, A. A. Juan, A. Fito, and O. Polat. 2019. “Metaheuristics for Rich Portfolio Optimisation and Risk Management: Current State and Future Trends.” Operations Research Perspectives 6: 100121.
Estrada-Moreno, A., A. Ferrer, A. A. Juan, A. Bagirov, and J. Panadero. 2020. “A Biased-Randomised Algorithm for the Capacitated Facility Location Problem with Soft Constraints.” Journal of the Operational Research Society 71 (11): 1799–1815.
Gaspero, L. D., G. D. Tollo, A. Roli, and A. Schaerf. 2011. “Hybrid Metaheuristics for Constrained Portfolio Selection Problems.” Quantitative Finance 11 (10): 1473–87.
Golden, B. L., J. S. DeArmon, and E. K. Baker. 1983. “Computational Experiments with Algorithms for a Class of Routing Problems.” Computers & Operations Research 10 (1): 47–59.
Gonzalez-Neira, E. M., D. Ferone, S. Hatami, and A. A. Juan. 2017. “A Biased-Randomized Simheuristic for the Distributed Assembly Permutation Flowshop Problem with Stochastic Processing Times.” Simulation Modelling Practice and Theory 79: 23–36.
Grasas, A., A. A. Juan, J. Faulin, J. De Armas, and H. Ramalhinho. 2017. “Biased Randomization of Heuristics Using Skewed Probability Distributions: A Survey and Some Applications.” Computers & Industrial Engineering 110: 216–28.
Juan, A. A., J. Faulin, S. E. Grasman, M. Rabe, and G. Figueira. 2015. “A Review of Simheuristics: Extending Metaheuristics to Deal with Stochastic Combinatorial Optimization Problems.” Operations Research Perspectives 2: 62–72.
Juan, A. A., P. Keenan, R. Martí, S. McGarraghy, J. Panadero, P. Carroll, and D. Oliva. 2021. “A Review of the Role of Heuristics in Stochastic Optimisation: From Metaheuristics to Learnheuristics.” Annals of Operations Research, 1–31.
Kizys, R., J. Doering, A. A. Juan, O. Polat, L. Calvet, and J. Panadero. 2022. “A Simheuristic Algorithm for the Portfolio Optimization Problem with Random Returns and Noisy Covariances.” Computers & Operations Research 139: 105631.
Kizys, R., A. A. Juan, B. Sawik, and L. Calvet. 2019. “A Biased-Randomized Iterated Local Search Algorithm for Rich Portfolio Optimization.” Applied Sciences 9 (17): 3509.
Macaulay, F. R. 1938. Some Theoretical Problems Suggested by the Movements of Interest Rates, Bond Yields and Stock Prices in the United States Since 1856. National Bureau of Economic Research, New York.
Mangram, M. E. 2013. “A Simplified Perspective of the Markowitz Portfolio Theory.” Global Journal of Business Research 7 (1): 59–70.
Markowitz, H. M. 1952. “Portfolio Selection.” The Journal of Finance 7 (1): 77–91.
Martí, R. 2003. “Multi-Start Methods.” In Handbook of Metaheuristics, 355–68. Springer.
Martins, L. do C., D. Tarchi, A. A. Juan, and A. Fusco. 2021. “Agile Optimization for a Real-Time Facility Location Problem in Internet of Vehicles Networks.” Networks.
Mirjalili, S. 2019. “Genetic Algorithm.” In Evolutionary Algorithms and Neural Networks, 43–55. Springer.
Moral-Escudero, R., R. Ruiz-Torrubiano, and A. Suárez. 2006. “Selection of Optimal Investment Portfolios with Cardinality Constraints.” In 2006 IEEE International Conference on Evolutionary Computation, 2382–88. IEEE.
Nawaz, M., E. E. Enscore Jr., and I. Ham. 1983. “A Heuristic Algorithm for the m-Machine, n-Job Flow-Shop Sequencing Problem.” Omega 11 (1): 91–95.
Nesmachnow, S. 2014. “An Overview of Metaheuristics: Accurate and Efficient Methods for Optimisation.” International Journal of Metaheuristics 3 (4): 320–47.
Nieto, A., M. Serra, A. A. Juan, and C. Bayliss. 2022. “A GA-Simheuristic for the Stochastic and Multi-Period Portfolio Optimisation Problem with Liabilities.” Journal of Simulation. https://doi.org/10.1080/17477778.2022.2041990.
Oliva, D., P. Copado, S. Hinojosa, J. Panadero, D. Riera, and A. A. Juan. 2020. “Fuzzy Simheuristics: Solving Optimization Problems Under Stochastic and Uncertainty Scenarios.” Mathematics 8 (12): 2240.
Panadero, J., J. Doering, R. Kizys, A. A. Juan, and A. Fito. 2020. “A Variable Neighborhood Search Simheuristic for Project Portfolio Selection Under Uncertainty.” Journal of Heuristics 26 (3): 353–75.
Rabe, M., M. Deininger, and A. A. Juan. 2020. “Speeding up Computational Times in Simheuristics Combining Genetic Algorithms with Discrete-Event Simulation.” Simulation Modelling Practice and Theory 103: 102089.
Resende, M. G. C., and C. C. Ribeiro. 2010. “Greedy Randomized Adaptive Search Procedures: Advances, Hybridizations, and Applications.” In Handbook of Metaheuristics, 283–319. Springer.
Saiz, M., M. A. Lostumbo, A. A. Juan, and D. Lopez-Lopez. 2022. “A Clustering-Based Review on Project Portfolio Optimization Methods.” International Transactions in Operational Research 29 (1): 172–99.
Schaerf, A. 2002. “Local Search Techniques for Constrained Portfolio Selection Problems.” Computational Economics 20 (3): 177–90.
Soler-Dominguez, A., A. A. Juan, and R. Kizys. 2017. “A Survey on Financial Applications of Metaheuristics.” ACM Computing Surveys (CSUR) 50 (1): 1–23.
Sörensen, K., and F. Glover. 2013. “Metaheuristics.” Encyclopedia of Operations Research and Management Science 62: 960–70.
Urli, B., and F. Terrien. 2010. “Project Portfolio Selection Model, a Realistic Approach.” International Transactions in Operational Research 17 (6): 809–26.
