If only statistical, supervisory and resolution data are in scope, then the ECB, the EBA, the SRB, NCBs, NCAs, NRAs and the industry (including banking and financial associations) should be considered as a minimum.
Regular harmonised statistical, supervisory and resolution reporting must certainly be included. In addition, regular national statistical, supervisory and resolution requirements and ad hoc data collections should also be considered. However, the latter should be treated under a "lighter" approach, as national authorities need to accomplish their tasks in a timely manner without unnecessary administrative burdens. In our opinion, national requirements (and to some extent ad hoc requests) can be in the scope of the study, but not with the aim of regulating them, something the EU could not do, since national requirements respond to needs related to tasks that fall under national rather than European competence. Their inclusion in the study should therefore serve two objectives: taking them into account in an overall analysis of the burden on banks (banks themselves have complained a great deal about this) and enabling a possible (voluntary) harmonisation, at least at the semantic level; in other words, the same objective the IReF group is pursuing when developing the technical layer for national requirements. The underlying idea is that the single European data dictionary could also be used to classify and define national requirements, thus helping banks even in the absence of their actual inclusion in EU harmonised schemes.
With regard to some specific issues, such as the choice of a unique data model to be adopted in Europe and data sharing among authorities, the scenario considered in the DP should be enlarged to also take into account the choices and practices of the European Statistical System and other international institutions (e.g. the IMF, BIS, FSB and World Bank). Although this enlargement of the scope may be considered beyond the provisions of Article 430c of the CRR, the design of such an important change towards the integration of European banking reporting and a new model of collaboration among authorities should take these important elements of the "overall data ecosystem" into account. For example, when debating the alternative between the Data Point Model/XBRL and SDMX, as remarked above, it is crucial to also consider what Eurostat and the European national statistical institutes, as well as other international institutions, are doing. Moreover, it is important to consider that data requirements stemming from international organisations (BIS, IMF, etc.) may also have an impact on banks.
Finally, the DP does not consider how the system currently works with regard to some important transactional data, such as the EMIR and SFT "data repositories", and other unstructured data collections.
Not relevant
Somewhat relevant
Relevant
Highly relevant
Training / additional staff (skills)
X
IT changes
X
Changes in processes
X
Changes needed in the context of other counterparties / third-party providers
X
Time required to find new solutions
X
Other (please specify)
X
Highly agree
Agree
Somewhat agree
Don’t agree
Data Dictionary - Semantic level
X
Data Dictionary - Syntactic level
X
Data Dictionary - Infrastructure level
X
Data collection - Semantic level
X
Data collection - Syntactic level
X
Data collection - Infrastructure level
X
Data transformation - Semantic level
X
Data transformation - Syntactic level
X
Data transformation - Infrastructure level
X
Data exploration - Semantic level
X
Data exploration - Syntactic level
X
Data exploration - Infrastructure level
X
It should be clarified that the data dictionary should be "decentralized" at both the physical and the organizational level.
In terms of the steps of statistical data processing, we believe that data exploration is not within the scope of Article 430c of the CRR. However, considering that at a later stage it could be convenient to widen the coverage of the system to that stage as well, the discussion paper could cover it, although with less emphasis than that given to the other stages (data definition, data collection and data transformation).
The BoI has developed its own data dictionary covering all kinds of statistical, supervisory and resolution reporting. With specific reference to the EBA ITS, we have hosted the DPM in our dictionary, adapting it to our meta-model. However, from the semantic and syntactic points of view, it remains a separate partition of our dictionary. This "double" solution is, in our opinion, not efficient. We strongly believe that a deep analysis should be carried out as soon as possible in order to choose a single data model to be adopted for all banking reporting in Europe.
In our opinion, the adoption of a common data dictionary is a necessary precondition for moving European data collection towards complete integration. This is why we believe that the DP should attach more importance to the data dictionary than to the development of a central data collection point (CDCP), which may eventually be created only once the common data dictionary is in place.
The DP claims that the data dictionary must be unique and can be shared (at least in part); however, it is not explicitly clarified that uniqueness must be understood only at the logical level, not at the physical or organizational level. Moreover, it is implicit that the idea of a "single dictionary" goes hand-in-hand with that of its "centralization". In our opinion, the idea that the single dictionary should be centralized is wrong, because centralization would result in a straitjacket for the entire system: anyone who needs a definition would have to go through the centralized unit that manages the dictionary or through the centralized IT system in which the dictionary is located. As an alternative, we support the view of a decentralized dictionary, as long as its components follow the same meta-model and can communicate through appropriate interfaces. A properly governed physical and organizational distribution of the dictionary would give the overall system flexibility, robustness and better performance.
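To make the decentralization idea concrete, the following minimal sketch (in Python, with invented names; it does not reflect any existing system) shows dictionary partitions that are physically and organizationally separate but follow one meta-model and expose one common interface, so that the dictionary remains logically unique:

```python
# Minimal sketch of a logically unique but physically decentralized
# dictionary. All class and field names are hypothetical.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Optional

@dataclass
class Concept:
    """The shared meta-model: every partition describes content the same way."""
    concept_id: str
    definition: str      # semantic level
    data_type: str       # syntactic level, e.g. "decimal", "date"
    domain: str          # e.g. "statistical", "supervisory", "resolution"

class DictionaryPartition(ABC):
    """The common interface through which partitions communicate."""
    @abstractmethod
    def lookup(self, concept_id: str) -> Optional[Concept]: ...

class InMemoryPartition(DictionaryPartition):
    """One partition, e.g. hosted and governed by a single authority."""
    def __init__(self, concepts):
        self._by_id = {c.concept_id: c for c in concepts}
    def lookup(self, concept_id):
        return self._by_id.get(concept_id)

class FederatedDictionary:
    """Routes queries across partitions: no central unit or central IT system."""
    def __init__(self, partitions):
        self._partitions = list(partitions)
    def lookup(self, concept_id):
        for partition in self._partitions:
            concept = partition.lookup(concept_id)
            if concept is not None:
                return concept
        return None
```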
Data integration is better characterized in terms of common syntax, concepts and definitions across different surveys and domains (statistical, supervisory and resolution) than, strictly speaking, in terms of merging different information requirements into a single survey. To implement the broader perspective we recommend, the key ingredient is the adoption of a common data dictionary, which would contain information (metadata) about the content of each dataset and each variable. The data dictionary also describes the hierarchy of the different domains and sub-domains, their combination, the relevant transformation rules and the bridging among different concepts. In this way, (conceptual) integration of data from multiple domains is achieved using a common data model, even if the reporting itself is not fully integrated.
The main advantage of the above approach, and the reason why we have always advocated its adoption, is that the European authorities entitled to impose reporting obligations on the banking industry would "only" be required to jointly compile and share a single data dictionary. This presents two main advantages. First, the approach is already feasible under the current legal and organisational framework. Second, the existence of a single data dictionary shared among European authorities would bring some immediate advantages to reporting agents as well, since it would prevent the introduction of new reporting obligations that duplicate information already collected, as well as of slightly different concepts/definitions between data collected for statistical, supervisory and resolution purposes where such differences are unnecessary. As a by-product, this approach would also make evident some current overlaps between the three domains (i.e. data with the same semantic meaning across different databases), thus stimulating the rationalization and future decommissioning of some of the current reporting obligations.
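As an illustration of how such a dictionary could surface overlaps across domains, the sketch below (hypothetical codes and field layout) links semantically equivalent variables defined in different domains through explicit "bridges" and lists the resulting candidate duplications:

```python
# Hypothetical sketch: dictionary entries carry domain information and
# explicit "bridges" to semantically equivalent concepts in other domains.
from dataclasses import dataclass, field

@dataclass
class Variable:
    code: str
    definition: str
    domain: str                                   # statistical / supervisory / resolution
    bridges_to: set = field(default_factory=set)  # codes with the same meaning

def semantic_overlaps(variables):
    """Pairs of semantically equivalent variables across domains:
    candidates for rationalization and eventual decommissioning."""
    by_code = {v.code: v for v in variables}
    pairs = set()
    for v in variables:
        for other_code in v.bridges_to:
            other = by_code.get(other_code)
            if other is not None and other.domain != v.domain:
                pairs.add(tuple(sorted((v.code, other_code))))
    return sorted(pairs)

# Example: the same gross loan amount defined in two domains.
print(semantic_overlaps([
    Variable("STA.LOANS.GROSS", "Gross loans", "statistical",
             bridges_to={"SUP.F04.LOANS"}),
    Variable("SUP.F04.LOANS", "Gross loans", "supervisory"),
]))  # [('STA.LOANS.GROSS', 'SUP.F04.LOANS')]
```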
Semantic integration would be a powerful device for stimulating effective and closer cooperation among authorities in the field of reporting requirements, at a relatively low cost; this would make it possible to give "concrete answers" to the banks' complaints within a reasonable period of time. The value added of a common data dictionary would be even larger if it went hand-in-hand with the development of the Banks' Integrated Reporting Dictionary (BIRD). Indeed, both projects would provide crucial support to banks' reporting activity by logically and methodologically bridging the authorities' reporting requirements with the reporting agents' internal databases. Users would also benefit from datasets that follow the same standards. On the other hand, it is necessary to design and implement an organizational process that ensures flexibility, efficiency and timeliness of changes.
Significantly
Moderately
Low
Understanding reporting regulation
X
Extracting data from internal system
X
Processing data (including data reconciliation before reporting)
X
Exchanging data and monitoring regulators’ feedback
X
Exploring regulatory data
X
Preparing regulatory disclosure compliance.
X
Other processes of institutions
X
Highly costly
Moderately costly
Moderate cost reductions
High cost reductions
statistical
The statistical reporting area is the one where the use of granular data is most feasible and can help reduce redundancies in reporting. This is because the derivation of aggregates from granular data mainly involves simple calculation steps. As far as supervisory and resolution reporting is concerned, aggregated data are sometimes of more immediate practical use, even if granular data admittedly allow greater flexibility of analysis.
Option 2
Redefining all the existing statistical, supervisory and resolution requirements according to a single dictionary and eliminating the redundancies embedded in a template-based representation of users’ requirements.
It is advisable to explore the possibility of deriving some FINREP aggregates from the existing granular statistical reports, such as AnaCredit and SHSG. However, this is possible only for FINREP at the individual level, as the statistical reports are (at least for the time being) not collected at the consolidated level.
Deriving aggregated figures from data at the highest level of granularity implies the development of very complex transformations. In addition, under Option 2 the derivation of consolidated reports would require input information from all the legal entities of the banking group, plus the development, which would be up to the authorities, of the transformation rules needed to produce consolidated figures.
A first attempt could be made by reusing the existing granular information collected in AnaCredit and SHSG and integrating it with additional accounting information to produce the breakdowns of loans and securities requested in FINREP.
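A stylized sketch of this first attempt follows; the record fields and the sector breakdown are simplified stand-ins, not the actual AnaCredit, SHSG or FINREP definitions:

```python
# Stylized illustration: derive a FINREP-like breakdown of loans by
# counterparty sector from granular, AnaCredit-like records integrated
# with an accounting attribute (carrying amount). All names are invented.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class LoanRecord:
    counterparty_sector: str   # e.g. "NFC" (non-financial corporations)
    carrying_amount: float     # the additional accounting information

def loans_by_sector(records):
    """The 'simple steps of calculation' typical of statistical aggregates."""
    totals = defaultdict(float)
    for record in records:
        totals[record.counterparty_sector] += record.carrying_amount
    return dict(totals)

print(loans_by_sector([
    LoanRecord("NFC", 1_000_000.0),
    LoanRecord("NFC", 250_000.0),
    LoanRecord("HH", 120_000.0),
]))  # {'NFC': 1250000.0, 'HH': 120000.0}
```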
Highly (1)
Medium (2)
Low (3)
No costs (4)
Collection/compilation of the granular data
X
Additional aggregate calculations due to feedback loops and anchor values
X
Costs of setting up a common set of transformations*
X
Costs of executing the common set of transformations**
X
Costs of maintaining a common set of transformations
X
IT resources
X
Human resources
X
Complexity of the regulatory reporting requirements
X
Data duplication
X
Other: please specify
X
Highly (1)
Medium (2)
Low (3)
No benefits (4)
Reducing the number of resubmissions
X
Less additional national reporting requests
X
Further cross-country harmonisation and standardisation
X
Level playing field in the application of the requirements
X
Simplification of the internal reporting process
X
Reduce data duplications
X
Complexity of the reporting requirements
X
Other: please specify
X
There could be an issue with small and non-complex institutions, which have much lower budgets and might have no interest in changing the status quo: they would see only the costs to be borne in the short term, not the longer-term benefits.
The authorities
Harmonised and standardised, ready to be implemented by digital processes (fixed)
If the responsibility for defining transformations lies with the authorities, reporting agents will be spared an additional burden.
All aggregated figures that imply complex calculations (i.e. totals/subtotals) should be avoided.
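A minimal sketch of what "harmonised and standardised, ready to be implemented by digital processes (fixed)" transformations could look like; the rule format and cell codes are invented for illustration. The authority publishes declarative rules once, and every agent or authority executes them identically:

```python
# Invented rule format: each output cell is a plain sum of input cells,
# published by the authority and executed mechanically by any party.
TRANSFORMATIONS = {
    "loans_total_gross": ("loans_nfc_gross", "loans_hh_gross"),
}

def apply_transformations(inputs):
    """Because the rules are fixed and centrally defined, every executor
    derives exactly the same aggregates from the same inputs."""
    return {output: sum(inputs[term] for term in terms)
            for output, terms in TRANSFORMATIONS.items()}

print(apply_transformations({"loans_nfc_gross": 100.0, "loans_hh_gross": 40.0}))
# {'loans_total_gross': 140.0}
```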
Very important
The data exchange between the ESCB, the SSM, the Single Resolution Mechanism (SRM) and the EBA is cumbersome. Resolution and prudential data follow separate sequential processes, through the national resolution authorities (NRAs) and the national competent authorities (NCAs) to the SRB and the ECB, respectively, and then on to the EBA. Removing data model inconsistencies and creating a proper IT setup would allow authorities to share data with each other and to have immediate and simultaneous access to the data to which they are legally entitled, in line with the applicable legal framework. The BoI believes it is important to define protocols and MoUs among authorities on data sharing: as the ESCB recognised in its input into the EBA feasibility report, "to ensure that data are reported only once, authorities will need to share data with each other and to have immediate and simultaneous access to the data to which they are legally entitled in line with the applicable legal framework, as also pointed out by the banking industry". This is key if we want to move towards a genuinely efficient system of data collection. In this respect, the development of a shared repository would make efficient data sharing between authorities easier.
In order to achieve the highly desirable interoperability among the different sets of supervisory, prudential and statistical data managed by European authorities, the following initiatives could be undertaken:
(i) enabling the interoperability of the different European authorities' information systems, for example via the creation of IT networks facilitating data sharing;
(ii) adopting common registers, e.g. master data concerning the reporting population and the relevant obligations, reference data concerning the counterparties of the banking operations, databases concerning securities and derivative transactions;
(iii) standardizing the data exchange formats;
(iv) adopting a common data dictionary describing definitions and concepts used in prudential and statistical reporting;
(v) increasing the use of common identifiers for counterparties (such as the LEI) and securities (see the sketch after this list);
(vi) exploiting new technologies.
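As a small illustration of point (v), common identifiers can be verified locally by every authority with the same public algorithm. The sketch below checks the LEI check digits defined in ISO 17442 (ISO 7064 MOD 97-10); the example identifier is simply a string that satisfies the checksum:

```python
# LEI well-formedness check (ISO 17442): map letters A-Z to 10-35, read the
# result as one integer, and require it to be congruent to 1 modulo 97.
def lei_is_valid(lei: str) -> bool:
    if len(lei) != 20 or not lei.isalnum():
        return False
    digits = "".join(str(int(char, 36)) for char in lei.upper())
    return int(digits) % 97 == 1

print(lei_is_valid("506700GE1G29325QX363"))  # True
print(lei_is_valid("506700GE1G29325QX364"))  # False (corrupted check digit)
```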
The development of a "shared (ECB, EBA, SRB) repository of data", included in the concept of a "central data collection point", is completely different from that of a "single data collection platform", which we do not support. A shared repository would be a more effective and efficient route to the interoperability goal, as it would facilitate the synchronisation of data uses by all authorities.
By contrast, any action envisaging changes to the current reporting processes (including architecture, governance and operating model) should be assessed against its impact on legacy national data systems and against the need to guarantee that the latter continue to support the fulfilment of national duties. For example, the IT infrastructure that supports a common/shared data register and a common data dictionary should take the existing national platforms into account. Moreover, it should allow national authorities to maintain (at least on a voluntary basis) their current responsibilities in the collection of banks' reporting; their proximity to intermediaries and deep knowledge of local banking systems (including well-rooted relationships with the people actually in charge of statistical reporting within the various institutions) represent an important "value added" that helps guarantee the high quality of the data collected and made available to the authorities.
The BoI does not agree with the exercise of presenting the different connection topologies of IT systems as the basis for choosing the architecture of an integrated system. In our opinion, what matters at this stage is not the implementation of a computer network, but rather the design of a "data processing process", including the roles and responsibilities of the various bodies, countries and institutions. Many of the arguments raised to extend network topologies to data processing processes are not convincing. For example, "service bus", "bus topology" and "hub and spoke" are merely methods of connecting processing systems; they do not identify the tasks of those who use them. Comparing the various configurations of the system on the basis of theoretical considerations seems to us quite inappropriate at this stage: the actual configuration of the future integrated system should be defined on the basis of elements (requirements, data, etc.) that are still undefined. In analysing the configuration of the integrated system, it is also necessary to distinguish various levels (e.g. decision-making, methodological, organizational, technical); for each of them the convenience of adopting a centralized approach may differ, and this issue should be explored carefully.
All in all, we believe that the legal aspects, as well as the competences and needs of the national and international authorities, should be analysed first. From these, the general principles to be respected should then be derived, first of all in organizational terms (who decides/can do what) and in economic terms (who pays for shared components), together with the requirements of the future system and the constraints to be taken into account. Only at the end of this analysis should one deal with the possible configurations of the processes and the related IT solutions. In sum, we recommend concentrating the initial efforts on the definition of a common organization among the entities participating in this endeavour, something that is not even mentioned in the DP.
Finally, as a national authority, and despite the importance we traditionally attach to the integration of statistical reporting, we consider it crucial that the new system preserve the flexibility necessary to accommodate data requests deriving from national legislation.
To conclude, it cannot be taken for granted that a centralized system is the best solution; depending on how the whole system is organized, a decentralized solution might perform even better. The elements available at the moment do not allow us to determine the most convenient solution. The definition of a technical solution must necessarily follow that of the (common) governance, i.e. the body that decides the planning and execution of the various steps of the project and, in the running phase, manages the business activities (metadata definitions and data processing) and the technical activities.
No
not valuable at all
valuable to a degree
valuable
highly valuable
Data definition – Involvement
X
Data definition – Cost contribution
X
Data collection – Involvement
X
Data collection – Cost contribution
X
Data transformation – Involvement
X
Data transformation – Cost contribution
X
Data exploration – Involvement
X
Data exploration – Cost contribution
X
Data dictionary – Involvement
X
Data dictionary – Cost contribution
X
Granularity – Involvement
X
Granularity – Cost contribution
X
Architectures – Involvement
X
Architectures – Cost contribution
X
Governance – Involvement
X
Governance – Cost contribution
X
Other – Involvement
Other – Cost contribution
Data definition: the industry should be involved by area of reporting. For each area, work should focus on harmonising the existing frameworks and assessing the feasibility and options of new information requirements. The contribution of the experts requires a high FTE commitment, especially at the initial implementation stage.
Governance: data governance, which entails defining, implementing and monitoring strategies, policies and shared decision-making over the management and use of data, implies the full contribution of all stakeholders, including the industry, in the decision-making, implementing and operating processes.
A push approach
Costs: feeding a granular repository in real time is no less burdensome for respondents than pushing aggregated reports at set deadlines. The data repository would in any case be standardized across institutions and would require some transformations to be applied to operational data; feeding it in real time or on a daily basis would imply very high costs. It would also be very costly for the authorities, which would be in charge of developing, maintaining, disclosing and executing the transformation rules needed to build the required aggregates. Though fascinating in principle and with potential benefits, this approach is very costly for both sides, authorities and institutions.
Benefits: Much more flexibility in the use of data.
Obstacles/challenges: the main obstacle is the impact on legacy national data systems and the need to guarantee that the latter continue to ensure the fulfilment of national duties. For example, the IT infrastructure supporting a common/shared data register and a common data dictionary should take the existing national platforms into account.
Possible solutions: voluntary use of the CDCP. Countries may opt to use their national platform for data collection and validation (while using the common data dictionary) and to feed only a common repository of data, as in the sketch below.
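A purely hypothetical sketch of this solution: the national platform keeps collecting and validating against the common data dictionary and forwards only the validated result to the shared repository. The endpoint, payload layout and dictionary codes are invented; nothing here corresponds to an existing API.

```python
# Hypothetical sketch: national collection and validation, followed by a
# push of the validated report to a shared repository.
import json
from urllib import request

COMMON_DICTIONARY = {"loans_nfc_gross", "loans_hh_gross"}  # stand-in codes

def validate_nationally(report: dict) -> None:
    """The national platform checks codes against the common dictionary."""
    unknown = set(report) - COMMON_DICTIONARY
    if unknown:
        raise ValueError(f"codes not in the common dictionary: {unknown}")

def push_to_shared_repository(report: dict, endpoint: str) -> int:
    """Forward the nationally validated report; collection stays national."""
    validate_nationally(report)
    req = request.Request(endpoint,
                          data=json.dumps(report).encode("utf-8"),
                          headers={"Content-Type": "application/json"},
                          method="POST")
    with request.urlopen(req) as response:
        return response.status
```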
Obstacles/challenges:
• The legal frameworks may restrict data-sharing to support only specific tasks or uses
• The legal frameworks may allow data-sharing but limit it to specific conditions
• The legal frameworks may give rise to a variety of interpretations, resulting in a legal vacuum or in a reluctance to share data
• Some legal frameworks may foresee restrictive “blanket” confidentiality rules that cover all types of information, irrespective of whether there are real risks of a breach of confidentiality
• Apart from legal and confidentiality constraints, various organisational, cultural and technical factors can represent important obstacles to effective data-sharing.
Possible solutions:
• A legal act from the Commission authorising data sharing among authorities
• The definition of policies and protocols on data access and data sharing among authorities
The coordination mechanism suggested in the DP could be an obstacle to authorities accomplishing their tasks in a timely manner. It should not be mandatory for national authorities to provide the overall requirements (instructions, reporting schemes, new concepts, etc.) and the related cost/benefit analysis to the CDCP for its review. As a general remark, the CDCP should act as a sounding board aiming to ensure the non-redundancy and consistency of any new or amended reporting requirements issued by the NCAs to be included in the integrated system. The opinions and recommendations issued during its assessment process should not be binding per se.
Some further coordination activities are feasible and could help avoid redundancies in the longer term:
• storing in a central data inventory a description of each information requirement, to increase the transparency of data requests (see the sketch after this list);
• opting, on a voluntary basis, for the inclusion of data definitions into the EU common data dictionary; to this end, providing the authorities with some guidelines and standards for the modelling of definitions and formats would be highly beneficial.
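For the first bullet, a sketch of what an inventory record and a redundancy screen could look like (illustrative field names only, and consistent with the non-binding role argued above):

```python
# Illustrative sketch: the central inventory stores a description of each
# information requirement; a proposed requirement is screened against it.
from dataclasses import dataclass

@dataclass(frozen=True)
class RequirementEntry:
    authority: str         # who imposes the requirement
    code: str              # identifier of the requirement
    description: str       # what is collected, expressed in dictionary terms
    concepts: frozenset    # dictionary concept codes the requirement uses

def overlapping(proposal, inventory):
    """Existing requirements sharing concepts with the proposal: flagged
    for review as possible redundancies, not automatically rejected."""
    return [entry for entry in inventory if entry.concepts & proposal.concepts]
```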
Data definition
Data collection
Data transformation
Data exploration
Data definition
Data collection and data transformation would benefit from RegTech developments as much as data definition.
Frequent changes in national and international regulations, and integration with legacy systems, are certainly important challenges for reporting agents (i.e. institutions). However, reporting agents have to cope with them, as they seem to be unavoidable constraints. Initiatives on regulatory data standardisation and data integration are key priorities for supporting the uptake of more efficient RegTech reporting solutions. To this end, we consider the European integrated system a crucial step in that direction, even if there are significant initial costs to bear for both authorities and reporting agents.
in-house
Banca d’Italia already has its own internally developed IT platform for end-to-end data processing.
Technological solutions would enable digital processing of data and metadata.
Yes
The choice of a meta-model and the implementation of a data dictionary should be the first two decisions taken by the future Joint Committee in order to progress towards integration. Proper IT tools would be crucial to proceeding in an effective and efficient way.