
SKS Unternehmensberatung GmbH & Co. KG

In addition to the authorities (ECB, EBA, NCBs), and in order to include a relevant subset of the market participants, participants should be chosen according to size (big, medium, small), legislation (from different EU Member States) and market function (private banks, public banks, national special banks such as "Sparkassen" or "Genossenschaftsbanken", state development banks, ...).
As the feasibility study follows a holistic approach, all data collections in the prudential, statistical and resolution data areas should be taken into account. The complexity will, of course, rise with every data collection added to the scope of the study. On the other hand, only with a full integration of all reporting requirements can real cost relief be realised for the industry.
In addition, a distinction needs to be made between data collections by the NCAs, the ECB and the JSTs. While it should be the aim of the feasibility study that the data collections of the ECB and the JSTs are fully integrated, the NCB data collections should be integrated to the extent that the cost-benefit ratio allows.
Figure 8 (Process chain and the three levels of abstraction) gives an overview of the reporting world and the areas the discussion paper covers. From a static perspective, the approach and the included areas seem fine to us. From a dynamic perspective, however, the most important area of concern, the change process, is not addressed.

The discussion paper implies that the definition of a data model, the architecture of the infrastructure, the level of granularity etc. are activities which have to be done once and for all. This is not the case, and to be very clear: an appropriate handling of constant change is the most critical issue for all data integration processes and projects. The most important reason why every data warehouse is perceived as too slow, too inflexible and too expensive is that the change process is not handled appropriately.
Appropriate handling means to us that implementing change has to be supported by an automated documentation, analysis and test process. The use of naming conventions, standardised documentation methods, automatically evaluable mappings and links between legal sources, information warehouses, interfaces and legacy system data is a prerequisite.

Whatever solution you go for, from day one it will be incomplete, partly outdated and error-prone; in one word: imperfect. From day one, the need for change is an inevitable companion of the solution.
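As a purely illustrative sketch (all names, attributes and legal references below are assumptions chosen for the example, not part of our actual tooling or of any existing standard), the following shows how automatically evaluable mappings between legal sources, data dictionary attributes and legacy system fields could be checked whenever the data model changes:

```python
# Minimal sketch: automatically evaluable mappings between legal sources,
# data dictionary attributes and legacy source fields.
# All names are illustrative assumptions, not an actual standard or tool.

from dataclasses import dataclass

@dataclass(frozen=True)
class Mapping:
    legal_source: str          # e.g. a paragraph of a regulation or ITS
    dictionary_attribute: str  # attribute in the common data dictionary
    source_field: str          # field in the bank's legacy system

# Current version of the (simplified) data dictionary.
DATA_DICTIONARY = {"counterparty_id", "exposure_value", "maturity_date"}

MAPPINGS = [
    Mapping("CRR Art. 429", "exposure_value", "core_banking.EXP_AMT"),
    Mapping("AnaCredit Annex I", "counterparty_id", "crm.CPTY_NO"),
    Mapping("AnaCredit Annex I", "maturity_dt", "loans.MAT_DATE"),  # deliberately stale name
]

def check_mappings(mappings, dictionary):
    """Return mappings whose target attribute is missing from the dictionary,
    so that every change to the dictionary can be tested automatically."""
    return [m for m in mappings if m.dictionary_attribute not in dictionary]

if __name__ == "__main__":
    for broken in check_mappings(MAPPINGS, DATA_DICTIONARY):
        print(f"Broken mapping: {broken.legal_source} -> {broken.dictionary_attribute}")
```

Executed as part of every change, such a check turns the documentation of the mappings into an automated regression test.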
(Scale: Not relevant / Somewhat relevant / Relevant / Highly relevant)
Training / additional staff (skills): Not relevant
IT changes: Not relevant
Changes in processes: Not relevant
Changes needed in the context of other counterparties / third-party providers: Not relevant
Time required to find new solutions: Not relevant
Other (please specify): Not relevant
Non-applicable
Non-applicable
(Scale: Highly agree / Agree / Somewhat agree / Don't agree)
Data Dictionary - Semantic level: Agree
Data Dictionary - Syntactic level: Agree
Data Dictionary - Infrastructure level: Agree
Data collection - Semantic level: Agree
Data collection - Syntactic level: Somewhat agree
Data collection - Infrastructure level: Highly agree
Data transformation - Semantic level: Highly agree
Data transformation - Syntactic level: Highly agree
Data transformation - Infrastructure level: Agree
Data exploration - Semantic level: Somewhat agree
Data exploration - Syntactic level: Somewhat agree
Data exploration - Infrastructure level: Somewhat agree
- Data Dictionary / Semantic level: A common understanding of which attributes need to be reported and what they mean is a crucial basis for an integrated reporting framework. In addition, it will uncover overlaps between reports and hopefully reduce them for the institutions.
- Data Collection / Syntactic level: Technical standards are about more than just the format of the final report - they also cover the technical specifications for the input data as such (logical and physical data model); these need to be harmonised as well.
- Data collection / Infrastructure level: Harmonisation of the infrastructure used reduces media breaks and also benefits interchangeability between institutions and authorities as well as among different authorities.
- In general, a balance between the benefits and disadvantages of a holistic approach needs to be found. A holistic approach can also tend towards perfectionism, leaving the industry with a more complex system than before - this must be prevented by all means. Also, the period with two systems running in parallel needs to be kept to a minimum so as not to burden the industry even more than before the restructuring.
The areas with the most cost relief are the harmonisation of data definition and data transformation. At the moment, high investments have to be made every year to keep the transformation rules up to date and to run several transformations in parallel in order to fulfil all reporting requirements. If this burden can be relieved, it will bring great benefit to the industry. The use case of using more granular data should be investigated further, especially with a view to the question of which data still needs to be reported in aggregated form. If too many aggregations still need to be reported, it might lead to the opposite effect, leaving the institutions with no relief (aggregates still need to be calculated) but with additional burdens (additional granular data needs to be generated and provided).
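A minimal sketch of this idea (attribute names and the aggregation rule are assumptions chosen for illustration, not an official data model): a transformation defined once against the common data definition derives the aggregate from granular records, so that bank and authority would compute it in exactly the same way.

```python
# Illustrative sketch: one common transformation, defined once against the
# shared data definition, derives an aggregate from granular records.
# Attribute names and the aggregation rule are assumptions for illustration.

granular_exposures = [
    {"counterparty_sector": "NFC", "exposure_value": 120.0},
    {"counterparty_sector": "NFC", "exposure_value": 80.0},
    {"counterparty_sector": "HH",  "exposure_value": 50.0},
]

def aggregate_by_sector(records):
    """Common transformation: total exposure per counterparty sector."""
    totals = {}
    for r in records:
        sector = r["counterparty_sector"]
        totals[sector] = totals.get(sector, 0.0) + r["exposure_value"]
    return totals

print(aggregate_by_sector(granular_exposures))
# {'NFC': 200.0, 'HH': 50.0} - identical result whether run by the bank or the authority
```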
Non-applicable
Non-applicable
Non-applicable
Non-applicable
Non-applicable
Non-applicable
Non-applicable
Non-applicable
Non-applicable
Non-applicable
Non-applicable
Non-applicable
Non-applicable
Requirements of the user (typically banks) to be able to use a common data dictionary:
Relevance / Reference: The specification is integrated into the legislative process and reflects the current status of the requirements (e.g. the current version of the DPM taxonomy). The concrete legal requirements are clearly and granularly referenced from the specifications.
Functional quality: The specification is qualitatively mature, completely covers regulatory and statistical reporting as well as the resolution reporting system and comprehensively takes into account the banking products found in Europe.
Readability: The specifications are readable and understandable for the professional user.
Structure: The specification is structured appropriately in terms of business and technical aspects. Among other things, technical and functional links/derivations are shown separately.
Automated transfer: An automated transfer of specifications (data structures, transformations, validations etc.) into the bank's systems is supported (see the illustrative sketch after this list of requirements). The specification is implementation-neutral, i.e. the bank has a free hand in choosing the implementation tools as well as the database technology.
Documentation: Data structures (especially input/output) and transformations are fully documented. The description of functional content on input level uses general banking language and is independent of any concrete regulatory reporting requirement. Market standards (e.g. ISO) are considered.
Benefit: The benefit of using the data dictionary must be clearly recognizable to the banks. Concrete examples with reference to the actually existing banking environment should be available.
Compliance: It is transparent which conditions banks have to fulfill with regard to the specification and which proofs they have to provide. The consequences of non-compliance are also transparent. It is transparent which parts of the data dictionary are binding.
Future development: The process for the continuous development of the standard is transparent and suitable for meeting the requirements in the long term. There are processes in place to ensure adequate involvement of the banking industry.
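To illustrate the "Automated transfer" requirement above, the following minimal sketch assumes a hypothetical machine-readable specification file; the JSON layout and field names are purely illustrative, not an existing standard:

```python
# Illustrative sketch: loading a hypothetical machine-readable specification
# (data structures and validations) into the bank's own structures.
# The JSON layout and field names are assumptions, not an existing standard.

import json

SPEC_JSON = """
{
  "attributes": [
    {"name": "exposure_value", "type": "decimal", "unit": "EUR"},
    {"name": "maturity_date",  "type": "date"}
  ],
  "validations": [
    {"rule": "exposure_value >= 0"}
  ]
}
"""

spec = json.loads(SPEC_JSON)

# The bank stays implementation-neutral: here the attributes are simply
# turned into column definitions for whatever database technology it uses.
columns = {a["name"]: a["type"] for a in spec["attributes"]}
print(columns)                 # {'exposure_value': 'decimal', 'maturity_date': 'date'}
print(spec["validations"])     # rules that can feed automated tests
```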
For any kind of joint activity in the environment of information and information processing, a uniform understanding of terms, models, attributes/properties, rules, etc. is an absolute prerequisite. The first goal of a data dictionary should therefore be to create clarity and transparency.
(Scale: Significantly / Moderately / Low)
Understanding reporting regulation: Low
Extracting data from internal system: Significantly
Processing data (including data reconciliation before reporting): Significantly
Exchanging data and monitoring regulators' feedback: Moderately
Exploring regulatory data: Moderately
Preparing regulatory disclosure compliance: Moderately
Other processes of institutions: Moderately
A standard data dictionary helps the institutions by:
- harmonising transformation rules and streamlining the transformations for reporting ("Meldewesenvorverarbeitung", i.e. regulatory reporting pre-processing)
- reducing costs by reducing the number of transformations
- helping authorities to ease comparability between reports from different institutions
- helping institutions to compare data sent under different reporting requirements (including data reconciliation before reporting)
Non-applicable
Not applicable
Non-applicable
Non-applicable
Non-applicable
Non-applicable
When more granular data is reported, the principles of BCBS 239 (data ownership, data quality assurance) become more crucial, as more granular data is reported to and checked by the authorities. Gaps in data quality need to be closed or explained to the authorities more frequently, which might lead to higher effort in the medium term. However, by better establishing the principles of BCBS 239, the data will also have a higher quality for institutions and authorities.
  • statistical
  • resolution
  • prudential
Granular data is always the best basis for a flexible, sustainable, promising solution. But whether it is necessary is a decision only the addressee of the data can take, based on its requirements. As long as the paradigm '...banks have to remain responsible for all the data...' (paragraph 200) holds and is relevant for all results of prudential calculations, the additional value of granular data might be very limited in the resolution area. But enabling big steps forward requires rethinking paradigms. If the structural coexistence of aggregated and non-aggregated data coming from the same sources is allowed, cost and effort on both sides, industry and authorities, will increase.
  • option 2
  • option 3
Potential challenges:
- no relief for institutions, as the calculation of aggregates still needs to be performed; in addition, the requirements for granular data need to be fulfilled
- the responsibility for the calculation of aggregates should be clarified; however, there is no contradiction between calculations made by the authorities and the responsibility for them still being held by the institutions
- in some areas (e.g. track records of bookings) the use of granular data can be considered non-feasible; further investigation is needed
As pointed out in the answer to question 18, the coexistence of aggregated and non-aggregated data, which are linked to each other, will increase the effort and cost for both banks and authorities. More data and aligned transformations in both systems (including documentation, change process and reconciliation) will not add much value but will drive up the effort.
(Scale: Highly (1) / Medium (2) / Low (3) / No costs (4))
Collection/compilation of the granular data: Medium (2)
Additional aggregate calculations due to feedback loops and anchor values: Highly (1)
Costs of setting up a common set of transformations*: Highly (1)
Costs of executing the common set of transformations**: Low (3)
Costs of maintaining a common set of transformations: Low (3)
IT resources: Highly (1)
Human resources: Highly (1)
Complexity of the regulatory reporting requirements: Low (3)
Data duplication: Low (3)
Other: please specify: Low (3)
(Scale: Highly (1) / Medium (2) / Low (3) / No benefits (4))
Reducing the number of resubmissions: Highly (1)
Less additional national reporting requests: Highly (1)
Further cross-country harmonisation and standardisation: Low (3)
Level playing field in the application of the requirements: Low (3)
Simplification of the internal reporting process: Medium (2)
Reduce data duplications: Medium (2)
Complexity of the reporting requirements: Medium (2)
Other: please specify: Highly (1)
If aggregate values still need to be produced and reported by the institutions, the costs for the institutions will in the end rise: the institutions will need to keep their "old" reporting systems and IT transformations and, in addition, report granular data. The potential benefits can only be unlocked if a reduction in calculations, IT transformations and human resources can be achieved. The question of whether the institutions still need to report aggregated values and, if so, which aggregations still need to be reported, needs to be analysed in detail, especially with a focus on what this would mean for the cost side of the institutions.
The authorities
Harmonised and standardised, ready to be implemented by digital processes (fixed)
Non-applicable
Non-applicable
Non-applicable
Non-applicable
Non-applicable
Non-applicable
Non-applicable
Feedback loops could include, but are not limited to (see the sketch below):
- validation errors
- calculation results of aggregations performed by the authorities
- assessments by the regulator based on the data provided
- feedback on data quality
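A minimal sketch of how such feedback items could be represented (field names and categories are illustrative assumptions, not a defined standard):

```python
# Illustrative sketch of feedback items sent back to the reporting institution.
# Field names and categories are assumptions, not a defined standard.

from dataclasses import dataclass
from typing import Optional

@dataclass
class FeedbackItem:
    submission_id: str
    category: str              # "validation_error", "derived_aggregate",
                               # "assessment", "data_quality"
    message: str
    derived_value: Optional[float] = None  # e.g. an aggregate calculated by the authority

feedback = [
    FeedbackItem("2021-Q4-001", "validation_error", "maturity_date missing for 3 records"),
    FeedbackItem("2021-Q4-001", "derived_aggregate", "total exposure NFC", derived_value=200.0),
    FeedbackItem("2021-Q4-001", "data_quality", "counterparty_id completeness 99.2%"),
]

for item in feedback:
    print(item.category, "-", item.message)
```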
Non-applicable
Not applicable
Not applicable
Non-applicable
Non-applicable
Non-applicable
Non-applicable
A CDCP should be a standardised platform with a standardised data definition (semantic and syntactic) which enables data sharing between the authorities and enables institutions to submit the required data in one format and only once.
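As a purely illustrative sketch (class and method names are assumptions, not a concrete CDCP design), the core idea is that one submission in the common format is stored once and can then be read by several authorities:

```python
# Illustrative sketch of a central data collection point (CDCP):
# the institution submits a data set once, in one standardised format,
# and several authorities read the same submission.
# Class and method names are assumptions, not a concrete design.

class CDCP:
    def __init__(self):
        self._store = {}

    def submit(self, institution, reporting_period, dataset):
        """One submission per institution and period, in the common format."""
        self._store[(institution, reporting_period)] = dataset

    def read(self, authority, institution, reporting_period):
        """Any authorised authority reads the same data set - no second submission."""
        print(f"{authority} reads data of {institution} for {reporting_period}")
        return self._store[(institution, reporting_period)]

cdcp = CDCP()
cdcp.submit("Bank A", "2021-Q4", {"exposure_value": 200.0})
cdcp.read("ECB", "Bank A", "2021-Q4")
cdcp.read("NCA", "Bank A", "2021-Q4")
```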
Non-applicable
Non-applicable
Non-applicable
(Scale: not valuable at all / valuable to a degree / valuable / highly valuable - no ratings were marked for any item)
Data definition - Involvement / Cost contribution
Data collection - Involvement / Cost contribution
Data transformation - Involvement / Cost contribution
Data exploration - Involvement / Cost contribution
Data dictionary - Involvement / Cost contribution
Granularity - Involvement / Cost contribution
Architectures - Involvement / Cost contribution
Governance - Involvement / Cost contribution
Other - Involvement / Cost contribution
Non-applicable
Non-applicable
Non-applicable
A mixed (pull and push) approach
Please note that the definition and understanding of push and pull in the discussion paper is not fully correct (please refer to the answer to question 41 for further reasoning). However, a mixed push and pull approach is the most beneficial from our point of view, as large and standardised data sets can be pushed to the authorities, while smaller, ad hoc requests can be pulled by the authorities directly from the collection layer (not from the source systems of the bank, as described in the discussion paper).
First, let us establish a common understanding of the pull and push approaches. Pull and push describe how a transfer of data is initiated: in the first option (pull), the requester starts the transfer of the data, while in the second option (push), the sender sends the data. However, the understanding described in the discussion paper for option 1 (pull), namely that authorities could request data from the source systems of the institutions, is unfeasible: institutions have numerous systems to collect and transform the data they need for their business and regulatory purposes. Besides the fact that many of these systems are not suitable for API technology, which alone would make a pull approach unfeasible, the data format would differ between institutions, systems and interfaces, leading to an enormous harmonisation effort on the side of the authorities. Furthermore, pull approaches are, with today's technology, most suitable for small data volumes.
Therefore, the following can be suggested: institutions gather their data on a harmonised, API-ready platform that follows the semantic and syntactic definitions of the new integrated reporting. Standardised reports (with high volumes of data) are sent via push to the authorities at predefined points in time. In addition, the authorities have the possibility to pull data for ad hoc requests (mostly lower volumes of data) from the harmonised data platform. Only one interface (instead of dozens) per institution would be required, combining the advantages of both the push and the pull approach.
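The following sketch illustrates the suggested mix (all names and the query mechanism are illustrative assumptions, not a proposed technical standard): scheduled, high-volume reports are pushed from the harmonised collection layer, while ad hoc, low-volume requests are pulled by the authorities from that same layer rather than from the banks' source systems.

```python
# Illustrative sketch of the suggested mixed approach:
# - Push: the institution sends large, standardised reports at predefined dates.
# - Pull: the authority queries smaller, ad hoc data from the harmonised
#   collection layer (never directly from the bank's source systems).
# Names and the query mechanism are assumptions, not a proposed standard.

class HarmonisedCollectionLayer:
    """Single API-ready platform per institution, in the common data model."""

    def __init__(self, records):
        self.records = records  # granular data in the agreed semantic/syntactic form

    def push_scheduled_report(self, authority_endpoint, reporting_period):
        """Push: full standardised report sent at a predefined point in time."""
        report = [r for r in self.records if r["period"] == reporting_period]
        authority_endpoint.receive(report)

    def pull(self, filter_fn):
        """Pull: the authority requests an ad hoc subset via the single interface."""
        return [r for r in self.records if filter_fn(r)]

class AuthorityEndpoint:
    def receive(self, report):
        print(f"Authority received scheduled report with {len(report)} records")

layer = HarmonisedCollectionLayer([
    {"period": "2021-Q4", "sector": "NFC", "exposure_value": 120.0},
    {"period": "2021-Q4", "sector": "HH",  "exposure_value": 50.0},
])

layer.push_scheduled_report(AuthorityEndpoint(), "2021-Q4")   # scheduled push
print(layer.pull(lambda r: r["sector"] == "NFC"))             # ad hoc pull
```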
Non-applicable
Non-applicable
Non-applicable
Non-applicable
The approach described is feasible. However, in the day-to-day execution of the coordination mechanism it needs to be checked whether the approach can work in a timely manner, i.e. whether new data requests can be processed, defined and implemented within an appropriate time. Over-engineering of processes and procedures should be prevented.
See also the answer to question 3.
To handle the change process in an appropriate way, different measures have to be implemented and followed from the beginning. It starts with naming conventions, documentation standards, automated analysis and testing, separation of layers etc. It is a specific process model comprising an information model, standards, conventions, automated reporting, automated testing and more. If you want to learn more about our approach, we look forward to giving you more information about it.
From an organisational perspective, there must be a central unit that is responsible for the solution and for alignment with the process model, and that in particular coordinates requirements and change.
Non-applicable
Non-applicable
Non-applicable
Non-applicable
Data definition
Data definition and data exploration, as these can be harmonised the most and could thus benefit the most from standardised technology that can be applied to a broader customer base.
We can confirm that a standardisation of the interfaces to the authorities as well as a consolidation and standardisation of the data types and elements requested by the authorities would solve some of the obstacles highlighted in the discussion paper. Obstacles such as the integration of RegTech into the banks' IT landscape would still hold true; however, the standardisation would help the institutions to adopt automation in their own processes, making them also more independent of service and IT providers in general.
A further obstacle is the distribution of responsibilities for calculations and aggregations: if the institutions need to keep their infrastructure for the calculations up and running, the cost benefits could shrink to non-existence or even turn into an additional cost disadvantage.
Another obstacle could be data quality issues and/or special instruments which do not fit into the standardised data model. For these, mostly manual workarounds are currently in place at several institutions to fit the instruments into the reporting requirements. The solution developed under this initiative should also take these challenges into account.
Non-applicable
If the requirements of prudential, statistical and resolution reporting are integrated and harmonised in a machine-readable way, RegTechs could help to facilitate and speed up the implementation of the new reporting framework for all institutions. Looking further into the future, RegTechs could also help institutions or authorities in data exploration by analysing the reported data, uncovering anomalies etc.
No
Without doubt, data standardisation is an enabler for the application of RegTech, as standardisation is always an enabler for the use of technology in general. However, it is not the "first necessary step", as legal questions (data protection, questions about responsibility etc.) limit the use of RegTech just as much as standard make-or-buy considerations do. In addition, some services provided by RegTechs are already covered either by existing reporting systems or by internal departments of the bank.
Simone Lehnert