Regulatory learning: what framework can be proposed to ensure that the financial environment can be controlled?
Bertrand Hassani
University College London, UK
Paris 1 University, France
J Comput Eng Inf Technol
Abstract
The arrival of big data strategies threatens the latest trends in financial regulation, namely the simplification of models and the enhanced comparability of the approaches chosen by different entities. Indeed, the intrinsically dynamic philosophy of big data strategies is almost incompatible with the current legal and regulatory framework, in particular with respect to model risk governance. Moreover, as illustrated by our application to credit scoring, the selected model may itself evolve dynamically, forcing both practitioners and regulators to develop libraries of models, strategies for switching from one model to another, and supervisory approaches that allow financial institutions to innovate in a risk-mitigated environment. The purpose of this paper is therefore to analyse the issues raised by the big data environment, and in particular by machine learning models, and to propose solutions to regulators by analysing the features of each algorithm implemented, for instance logistic regression, support vector machines, neural networks, random forests and gradient boosting.
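To make the idea of a "library of models" concrete, the sketch below compares the five algorithm families named in the abstract on a simulated credit-scoring task. This is an illustrative assumption, not the paper's implementation: the data are synthetic, scikit-learn is used as the modelling toolkit, and the hyperparameters are arbitrary defaults. The point it demonstrates is that the best-performing ("champion") model is an empirical outcome that can change as data evolve, which is exactly the model-governance problem the abstract raises.

```python
# Illustrative sketch only: a model library for a synthetic credit-scoring
# task. Dataset, hyperparameters and toolkit (scikit-learn) are assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# Synthetic, imbalanced "credit scoring" data: 1 = default, 0 = repaid.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=5,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# The candidate library an institution (and its regulator) would maintain.
library = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "support_vector_machine": SVC(probability=True, random_state=0),
    "neural_network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                                    random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# Score each model out-of-sample; the champion is whichever ranks highest
# on this data snapshot and may differ on the next one.
scores = {}
for name, model in library.items():
    model.fit(X_train, y_train)
    scores[name] = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

champion = max(scores, key=scores.get)
for name, auc in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: AUC = {auc:.3f}")
print("champion:", champion)
```

A supervisory framework of the kind the paper proposes would then govern not a single approved model but the switching rule between members of such a library.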
Biography
Bertrand Hassani is an Associate Researcher at Paris 1 University and University College London. He has written several articles on risk measures, data combination, scenario analysis and data science. He was the Global Head of Research and Innovation for the risk division of Santander Group and is now the Chief Data Scientist at Capgemini Consulting. In this role, he aims to develop novel approaches to measuring risk (financial and non-financial), understanding customers and improving predictive analytics for supply chains (among other areas), relying on methodologies from the field of data science (data mining, machine learning, A.I., etc.).