
Sveriges Riksbank

Given the mandate set out in Article 430c, the institutions to be considered by the feasibility study should logically be those reporting "statistical, prudential and resolution data". However, since Article 430c forms part of the Capital Requirements Regulation (EU) No 575/2013, it may be neither appropriate nor legally possible to extend the feasibility study to institutions that are not in scope of the CRR. The institutions to be considered by the feasibility study should therefore most probably be those that are in scope of the CRR.
According to Article 430c, the data collections to consider should be of a "statistical, prudential and resolution" nature. This means that transaction reporting emanating from ESMA cannot be considered in the feasibility study, even though it could have made sense to include these frameworks.

Regarding statistical frameworks specifically (which are under the responsibility of our institution):
1) The statistical regulations emanating from the ECB are not strictly mandatory for EU countries outside the euro area. It may therefore be neither appropriate nor legally possible to include statistical data collections delivered to the ECB by national central banks outside the euro area on a gentlemen's agreement basis. Such collections seem to be classified as "category 2" in Annex I. This point needs clarification from the EBA, possibly as part of the feasibility study;

2) In addition, statistical data collections from institutions within the EU are not limited to frameworks emanating from the ECB, but also include international banking statistics reported to the BIS, IMF frameworks etc. In theory, it would be beneficial and sensible to include all frameworks in a future integrated reporting, but Article 430c seems to involve only the ESCB in the cooperation around the feasibility study when it comes to statistical requirements;

3) Last, NCBs also collect statistical data for national use, based on national acts or simply on their institutional mandate, on both a regular and an ad hoc basis. Integrating all these different national requirements does not seem realistic, notably due to legal and language constraints, and because ad hoc requests are by nature difficult to reconcile with integration. However, a coordination mechanism as described in the "Governance" section, or as envisaged by the "extended layer" of the IReF, could help minimise the number of national, non-integrated requirements by seeking commonalities across regulatory requests.

In conclusion, and considering the limitations described above, it may be wise to limit the feasibility study to:

1) the statistical frameworks which will be included in the future IReF (Integrated Reporting Framework) currently being developed by the ECB, and which, if adopted, will replace a number of current statistical regulations (BSI, MIR, SHS, AnaCredit) but will only apply to euro area countries;

2) Payment Statistics, for which an effort to integrate data flows between the EBA and the ECB already exists (fraud statistics).
The issues are relevant but not complete. For example, the following issues should be part of, or developed further in, the feasibility study:
• The section about the data dictionary should cover data models
• The section about the central data collection point should cover legal aspects of data sharing between authorities
• Some important aspects, such as reporting frequencies and remittance dates, revision policies, derogation schemes, validations and data quality frameworks, are mentioned but not elaborated upon.
• Elaborated models for cost-sharing between authorities regarding the implementation of integrated reporting are missing
(Rating scale: Not relevant / Somewhat relevant / Relevant / Highly relevant)
Training / additional staff (skills): Relevant
IT changes: Highly relevant
Changes in processes: Highly relevant
Changes needed in the context of other counterparties / third-party providers: Somewhat relevant
Time required to find new solutions: Relevant
Other (please specify): Highly relevant
For our institution, the main obstacles to developing an Integrated Reporting Framework are:
• High immediate costs, which have to be weighed against longer-term benefits; costs are not easily perceived as investments by senior management;
• The legacy IT infrastructure used by our NSI, which is in charge of producing and delivering most of our statistics;
• The non-compulsory character of the future ECB Integrated Reporting Framework, since we are outside the euro area;
• A general lack of interest in and understanding of metadata management outside the Statistics division;
• Lack of dedicated resources;
• Legal obstacles to developing unlimited data access vis-à-vis other authorities;
• Organisational obstacles, since different frameworks are under the responsibility of different sections or divisions.
Yes, the findings are generally correct. A few comments:

1) Section 36: “based on ECB regulation, institutions in the euro area regularly report statistical data necessary to carry out the tasks of the ESCB” contrasts with section 52: “This reporting is harmonised across EU”. In fact, the ECB regulations apply to the institutions belonging to the euro area, but in practice the ECB also receives data from non-euro area countries on a gentlemen's agreement basis or on the basis of a recommendation. However, the reporting cannot be described as harmonised across the EU, since the principle of maximum harmonisation does not apply (this should change with the IReF). NCBs collect the data from institutions on the basis of national acts, according to their own dictionaries, remittance dates, technical standards etc., and transform the data before delivering them to the ECB. This means that a cross-border institution consisting of entities located in different EU countries has to report statistical data with very different levels of aggregation, codes, technical formats etc., depending on the national requirements.

2) Section 53: ”Regarding the level of granularity, data collections comprising two thirds of the data points have both aggregated and granular aspects, while the rest are aggregated and only a very small percentage is reported only on a granular basis.” This is due to the methodology used, according to which one attribute = one time series = one data point. This creates an unfortunate bias, since the number of time series / data points that can be created from one attribute equals the number of allowed values in the code list (domain) associated with that attribute. In turn, the number of data points that can be obtained from several attributes equals the product of the numbers of allowed values in the code lists associated with these attributes. In a framework consisting of 50 categorical attributes, each associated with a code list of 5 allowed values (which is much less than the AnaCredit regulation), the number of possible data points would already be 5^50.
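To make the order of magnitude concrete, the following minimal sketch (in Python, with purely illustrative numbers of attributes and code-list sizes) computes the number of potential data points as the product of the code-list sizes:

```python
from math import prod

# Illustrative framework: each categorical attribute is associated with a
# code list (domain); every combination of allowed values can in principle
# define a distinct time series / data point.
code_list_sizes = [5] * 50  # 50 attributes with 5 allowed values each

possible_data_points = prod(code_list_sizes)
print(f"{possible_data_points:.3e}")  # ~8.882e+34, i.e. 5**50
```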
(Rating scale: Highly agree / Agree / Somewhat agree / Don’t agree)
Data Dictionary - Semantic level: Agree
Data Dictionary - Syntactic level: Agree
Data Dictionary - Infrastructure level: Agree
Data collection - Semantic level: Agree
Data collection - Syntactic level: Agree
Data collection - Infrastructure level: Agree
Data transformation - Semantic level: Agree
Data transformation - Syntactic level: Agree
Data transformation - Infrastructure level: Agree
Data exploration - Semantic level: Agree
Data exploration - Syntactic level: Agree
Data exploration - Infrastructure level: Agree
1) Different steps of the reporting process chain
The regulatory data lifecycle (figure 5), albeit in a much less detailed way, is reminiscent of the GSBPM (Generic Statistical Business Process Model), which provides a standardised model of the statistics production process. Maybe the data processes and the models used to describe them should also be harmonised as part of the reporting integration?
(General Observations-06-T-GSBPM v1.0_1.pdf (europa.eu))
2) Different levels of integration
“Syntactic integration” (which could also be called methodological integration for the sake of clarity) is most probably relevant only for authorities, not for institutions, for which it is very far from their legitimate concerns. “Syntactic integration” is a container for taxonomies or frameworks, and probably a necessary step before the semantic integration of these frameworks, but institutions need to focus on the content in order to prepare the required data accordingly. For authorities willing to work with data integration, however, a robust information model / metamodel, able to cater for frameworks using different data models and levels of granularity, is a necessity.

With this background, it is important to have semantic integration as the end-vision and syntactic integration as an intermediary step towards achieving that end-vision. Semantic integration itself needs to be done in two steps: 1) translation / mapping of concepts, 2) alignment of concepts, even if this means alteration by simplification, as the end result will nevertheless be of better quality given the minimised reporting burden.

This last point is very well explained by David Bholat in his paper “Modelling metadata in central banks” (ECB Statistics Paper Series). In conclusion, D. Bholat suggests focusing on similarities rather than on differences: “the trick is to see commonalities underlying superficial differences”. Therefore, the argument put forward as an obstacle to semantic integration in section 246 – “one has to make sure that the ‘loan’ concept defined in statistics is actually the same as the ‘loan’ concept that is defined in FINREP” – needs to be revisited. The anomaly here is that authorities cannot agree on a unique definition of a loan and sometimes do not realise the costs that such discrepancies entail for institutions.

The position of the discussion paper on whether semantic integration is the end-vision is not clear. Section 74: “Semantic integration is considered feasible, highly desirable and one of the main steps in order to achieve integration.” vs. section 83: “The integration of the data definition at the syntactic level could be achieved without integration at the semantic level, as no alignment of concepts from the business point of view is needed.” The latter statement is highly debatable. A syntactic-only integration may be beneficial for authorities and facilitate technical integration, but it will have an unnoticed impact on institutions, which will still need to analyse the concepts and bear the costs of concept mapping and data transformations to multiple semantic dictionaries. As a result, data quality will not improve and costs will remain identical.

What is missing from section 67 is legal integration. As mentioned in the discussion paper, different levels of the harmonisation principle lead to different integration results. Legal integration is in fact a prerequisite, and a similar legal framework should ideally be used for all data collections included in the integrated reporting (such as a European regulation with implementing technical standards).
NA
General comment for all sections of this question: it is not possible to answer this question without knowing which authorities would be in charge of implementing the selected metamodel, content and infrastructure, and without knowing whether costs would be shared among the authorities participating in the data collection and/or dissemination and consumption. For our institution, which has neither a metamodel nor an integrated data collection, it would be very costly to implement an integrated reporting at national level. The benefits would be a minimised burden for the large institutions, reconciliation between frameworks “at design” instead of “at the end” resulting in better data quality, streamlined validation processes, as well as enhanced and faster data exploration, analytics and visualisation.
NA
NA
NA
NA
NA
NA
NA
NA
NA
NA
NA
If we interpret “data dictionary” as a “metadata repository”, as in the discussion paper, we do not have any metamodel in production, but we are currently implementing one, based on the ECB SDD with some simplifications.
If we interpret “data dictionary” according to its traditional meaning, i.e. a list of terms and corresponding definitions, we use around three dictionaries: one using SDMX codes for our internal time-series database, one implemented by our NSI and inspired by SDMX for the collection of our main statistical frameworks, and the SDD codes and variables for the most recent frameworks (national versions of AnaCredit, SHS, RIAD).
A metadata repository should contain core elements allowing the definition of entire frameworks/taxonomies, i.e. variables, domains and members, together with their corresponding definitions and relationships (variables to domains, domains to members etc.), as well as the models according to which the variables, and the data sets to which they belong, relate to each other. If a framework is described according to an entity-relationship model, the metadata repository needs to describe the relationships between cubes and the identifiers used for these relationships. For frameworks using templates and data points, the metadata repository should also contain the combinations of values taken by the variables to produce the data points (or time series), and a rendering package to produce the templates. The metamodel should further include the transformation rules needed to transform the data by way of mappings, aggregations according to hierarchies defined in the dictionary, and combinations defining the permutations necessary to obtain the data points. Last, the dictionary should contain a placeholder for definitions and legal references.
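As a purely illustrative sketch of the core elements listed above (in Python, with hypothetical class and field names loosely inspired by SDD/SDMX-style structures, not an actual implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Member:              # an allowed value within a domain
    code: str
    name: str

@dataclass
class Domain:              # a code list
    code: str
    members: list[Member] = field(default_factory=list)

@dataclass
class Variable:            # a reported attribute, linked to a domain and to legal sources
    code: str
    definition: str
    legal_reference: str
    domain: Domain | None = None   # None for non-categorical (e.g. amount) variables

@dataclass
class Cube:                # a data set in an entity-relationship framework
    code: str
    variables: list[Variable] = field(default_factory=list)
    identifiers: list[str] = field(default_factory=list)  # keys used in relationships

@dataclass
class CubeRelationship:    # how two cubes relate via shared identifiers
    from_cube: str
    to_cube: str
    via_identifiers: list[str] = field(default_factory=list)
```

Combinations of variable values (data points), rendering packages and transformation rules would then be further elements referencing these basic ones.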
The data requests contained in the metadata repository should be organised at least at the conceptual and logical levels. A physical level is more difficult to include in a metamodel, since authorities and institutions use different databases. However, if the technical standards (such as XBRL or SDMX) are part of the integrated reporting scope, the schemas used for reporting could be included in the dictionary as well.
Syntactic integration is a necessary prerequisite for the first step of semantic integration, i.e. the translation and mapping of concepts, before the final step, which is the alignment of concepts and the removal of data duplications. Semantic integration is crucial to reduce the reporting burden on reporting institutions and to achieve better data quality for authorities. The value chain would be improved since:
1) reporting institutions would focus on data quality at source, with minimal transformations and duplications, whereas
2) authorities would focus on the compilation (aggregation, validation etc.), which could be achieved automatically through the data dictionary, as sketched below.
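A minimal sketch (in Python, with a hypothetical sector hierarchy and hypothetical granular records) of how such a compilation step could be driven by hierarchies stored in the data dictionary rather than hard-coded at each reporting institution:

```python
from collections import defaultdict

# Hypothetical hierarchy taken from the dictionary: child sector code -> parent aggregate
sector_hierarchy = {"S122": "Deposit-taking corporations",
                    "S124": "Investment funds",
                    "S125": "Other financial intermediaries"}

# Hypothetical granular records collected from institutions (instrument-level stocks)
granular_records = [
    {"sector": "S122", "outstanding_amount": 100.0},
    {"sector": "S124", "outstanding_amount": 40.0},
    {"sector": "S125", "outstanding_amount": 60.0},
]

# Compilation performed by the authority: roll the granular data up the hierarchy
aggregates = defaultdict(float)
for record in granular_records:
    aggregates[sector_hierarchy[record["sector"]]] += record["outstanding_amount"]

print(dict(aggregates))
```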
(Rating scale: Significantly / Moderately / Low)
Understanding reporting regulation: Significantly
Extracting data from internal system: Significantly
Processing data (including data reconciliation before reporting): Significantly
Exchanging data and monitoring regulators’ feedback: Significantly
Exploring regulatory data: Significantly
Preparing regulatory disclosure compliance: Significantly
Other processes of institutions: Significantly
Highly important
Highly costly
The initial implementation would be highly costly, but in the long run it would result in significant cost reductions.
High cost reductions
The initial implementation would be highly costly, but in the long run it would result in significant cost reductions.
Small cost reductions
Ad hoc reporting is usually associated with specific data requests over a limited period of time and at short notice. There is a risk that the costs required to integrate such requirements into a common dictionary would exceed the benefits, unless the requests are likely to be re-used later.
Section 184 seems to restrict the data dictionary to a metadata repository (“the implementation of a common and unique syntactic data dictionary”). If no semantic integration is envisaged, the benefits for institutions will probably be very limited, as explained above. Costs and benefits for authorities are well described in the section.
Granular data reporting would help institutions meet principles 3 (Accuracy and Integrity), 4 (Completeness), 6 (Adaptability) and 8 (Comprehensiveness). Indeed, granular reporting implies that institutions have modelled and sourced all their risk data according to all relevant categories, themselves defined according to granular sets of values. This enables quick adaptation to new requests with complete and comprehensive data.
  • statistical
  • resolution
  • prudential
Granular data may be used for all statistical frameworks, except possibly payments due to extremely high volumes (to be discussed?), provided that data requests comply with the GDPR for data related to households. Data of another nature (prudential and resolution) could be collected granularly as well, provided that
1) institutions have resolved the question of manual adjustments, and
2) institutions do not need to be in full control of the data at the aggregated levels which are delivered to authorities after internal sign-offs.
The latter is the case when aggregated values are reconciled and audited against financial statements and when internal models are used, but this cannot objectively concern the majority of the thousands of data points that prudential frameworks consist of. In practice, banks cannot sign off every single data point individually; they rather focus on the crucial aggregates, which can be reconciled with other frameworks. In this respect, the argument developed in section 210 (“It should be recalled that for statistical purposes it is possible for banks to delegate the data aggregation to authorities, while according to the CRR and BCBS 239 principles, banks should remain responsible for the aggregated data reported to authorities”) seems to close the debate about granularity. In fact, even when national authorities aggregate statistical data in order to compile time series which are further delivered to the ECB, reporting agents remain fully responsible for the contents of the series (or data points), need to comply with the quality standards defined in the regulations, and must provide explanations and revisions when required. It is difficult to see any fundamental difference here between the processes put in place by statistical authorities on the one side and supervisory authorities on the other.

Regarding the question of manual adjustments: together with granularity, it is a “chicken and egg” problem. Granularity is of course impaired by the persistence of manual adjustments (as stated in section 205), but granularity can also force institutions to review their business processes and remove manual adjustments.

The statements contained in section 196 are subject to discussion: “Concepts defined for statistical reporting have more straightforward definitions (less complex concepts compared to prudential and resolution concepts) are harmonised by international and EU standards (e.g. System of National Accounts (SNA) and European System of Accounts (ESA) 2010).”
In fact, national authorities often use definitions deviating from EU standards. For example, national sector classifications are usually far more granular and are therefore aggregated in the delivery to the ECB. Moreover, statistical frameworks themselves use different sources as legal references within single concepts: for example, loans in the AnaCredit regulation use definitions from BSI and ESA, but also from the CRR. Last, statistical frameworks also make direct use of prudential concepts and definitions, such as default, performing status and trading book, and some statistical data can be used for supervisory purposes according to their regulations.

In summary: granular data reporting should not be excluded a priori, based on the nature of the reporting, on "ideological" grounds. There are many overlaps between statistical, resolution and prudential reporting. Therefore, if granular reporting can be used successfully for statistical reporting, the same data can most probably serve other purposes.
option 2
Option 1 is a quasi status quo (the IReF cannot be considered part of the EBA feasibility study itself, since its foundations were laid a few years ago). As such, it does not represent any noticeable improvement. Institutions will still need to transform the same (or very similar) data in different ways for different frameworks. As a result, no efficiencies will be observed, data quality will not improve, and costs for institutions will not be reduced. The fact that all frameworks will be contained in the same data dictionary (or rather, a metadata repository) according to a syntactic-only integration will not minimise the reporting burden on institutions.
Option 2 would represent a major improvement compared to the current situation, since a lot of data duplications would gradually be removed through 1) a syntactic integration, 2) a common collection layer, 3) alignment of concepts whenever possible and 4) transformations performed by authorities wherever the responsibility for aggregation can be removed from institutions. The challenge will be to review the definitions of the thousands of data points in detail in order to determine 1) whether alignment of concepts and aggregation by authorities is possible and 2) the exact transformation rules to apply in that case.
Option 3 is a conceptual dream which may never come true – a utopia – due to obstacles of various natures, such as (not exhaustively):
1) Those already mentioned in the discussion paper (manual adjustments, responsibility and ownership over aggregate figures, internal models)
2) Legal constraints within banking groups which prevent the flow of granular data and the transmission of household data within the group (this can be especially problematic when entities of the banking group belong to jurisdictions outside the EU)
3) Consolidation rules, which would require authorities to have access to the full banking group hierarchies, perfectly up to date at any time
4) Other rules, such as repo and derivatives offsetting, which would require authorities to have access to the master netting agreements in place at reporting institutions
5) Internal resistance within institutions, whose accounting and risk teams need to visualise predefined aggregate figures in order to provide a sign-off, and the fear of transforming regulatory reporting into a black box, which would result in a transfer of “power” and influence from business to IT
6) The fact that granularity is a bottomless pit and that the current level used in statistical frameworks may not suffice for some prudential frameworks anyway, in which case a complete re-foundation of the entire collections would be required (see details in question 28).
NA since we are not a reporting institution.
(Rating scale: Highly (1) / Medium (2) / Low (3) / No costs (4))
Collection/compilation of the granular data: Highly (1)
Additional aggregate calculations due to feedback loops and anchor values: Highly (1)
Costs of setting up a common set of transformations*: Highly (1)
Costs of executing the common set of transformations**: Medium (2)
Costs of maintaining a common set of transformations: Low (3)
IT resources: Medium (2)
Human resources: Medium (2)
Complexity of the regulatory reporting requirements: Highly (1)
Data duplication: Low (3)
Other (please specify): Low (3)
(Rating scale: Highly (1) / Medium (2) / Low (3) / No benefits (4))
Reducing the number of resubmissions: Medium (2)
Less additional national reporting requests: Highly (1)
Further cross-country harmonisation and standardisation: Highly (1)
Level playing field in the application of the requirements: Medium (2)
Simplification of the internal reporting process: Highly (1)
Reduce data duplications: Highly (1)
Complexity of the reporting requirements: Highly (1)
Other (please specify): Highly (1)
Reconciliations and validations would be considerably simplified.
If strong governance around new data requests is not implemented in parallel, there is a risk that data duplications will continue to arise and that new requests will need to be implemented twice by institutions: first, when a new or amended regulation is issued; second, when the streamlining described in option 2 is completed. This would be a worst-case scenario, which must be avoided at all costs.
Authorities and reporting institutions jointly
Harmonised and standardised, ready to be implemented by digital processes (fixed)
Concerning the question of responsibility for the aggregates, a possibility would be to let institutions perform all aggregations, but nevertheless to switch to regulations and ITS based on a combination of granular collections (where feasible) and aggregates to deliver, together with the exact transformation rules needed to produce these aggregates. This would be a step towards machine-readable regulations, where BIRD and RegTech could be of great help. For example, the challenge described in section 246 – “Exploring the possibility of obtaining the aggregated data required (…) from the more granular data reported in other reporting frameworks means that a variety of dimensions defined (…) across the reporting frameworks for the same business concept have to be identified, compared and possibly controlled in the aggregation process” – is exactly what the BIRD has dealt with.
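Merely to illustrate the idea of publishing the exact transformation rule alongside the requirement, a minimal sketch (in Python, with entirely hypothetical field names and an invented aggregate data point, not a representation of the actual BIRD model):

```python
# A hypothetical machine-readable rule: which granular records feed an aggregate,
# and how they are combined.
rule = {
    "target": "loans_to_nfc_performing",      # hypothetical aggregate data point
    "filter": {"counterparty_sector": "S11",  # non-financial corporations
               "performing_status": "performing"},
    "measure": "outstanding_nominal_amount",
    "operation": "sum",
}

def apply_rule(rule, records):
    """Apply a declarative filter-and-aggregate rule to granular records."""
    selected = (r for r in records
                if all(r.get(k) == v for k, v in rule["filter"].items()))
    values = [r[rule["measure"]] for r in selected]
    return sum(values) if rule["operation"] == "sum" else None

granular_records = [
    {"counterparty_sector": "S11", "performing_status": "performing",
     "outstanding_nominal_amount": 250.0},
    {"counterparty_sector": "S11", "performing_status": "non-performing",
     "outstanding_nominal_amount": 75.0},
]
print(apply_rule(rule, granular_records))  # 250.0
```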
Manual adjustments are always the result of either:
1) a lack of data integration (data sources or group entities not integrated into a central data warehouse, business processes such as debt collection resulting in manual Excel storage etc.)
2) insufficient data quality (including timeliness)
3) data warehouse complexity (multiple layers where all kinds of incidents can happen during the data journey, sometimes forcing institutions to adjust data "at the end" in order to report in time)
But also:
4) regulatory requests not accurately reflecting business processes (for example, when the required granularity is not at the right level)
Regarding 1), 2) and 3): reporting institutions need to perform a root-cause analysis of every manual adjustment in order to categorise its nature and identify solutions. Regarding 4): regulators need to make sure data requests are realistic and feasible from the start. "Forced data" usually results in poor quality due to adjustments or hazardous transformations.
The prerequisite for a consolidation performed by authorities is access to the complete and up-to-date legal structure of reporting institutions within the CRR perimeter. This could be achieved in the future using the RIAD database. The work done as part of the feasibility study for streamlining the Consolidated Banking Data could be re-used in the evaluation.
It is unclear how the offsetting rules applying to short positions, derivatives and repos could be known to authorities. The relevant accounting standards would also need to be applied in order for the figures to match the institutions' financial statements. Institutions normally include reconciliations with their financial statements in their processes before reporting. This area, together with consolidation, is probably one of the most challenging to overcome.
We are not aware of principle-based rules in the frameworks likely to be considered in the feasibility study. It is however important that the approach retained complies with BCBS 239 and helps institutions fill their remaining gaps.
The maximum harmonisation principle needs to be extended to all frameworks, and the chosen legal framework needs to be unified. At the moment, the regulatory landscape consists of directives, regulations, ITS, RTS, guidelines and recommendations, in addition to non-legal documentation such as rulebooks and manuals. National acts should be reduced to requests which are outside the scope of the integrated reporting.
The work already done on the BIRD (Banks’ Integrated Reporting Dictionary) should be reused in order not to reinvent the wheel but, instead, to leverage the business and technical analysis already performed for concept mapping.
NA
The definition of granularity provided in section 190 is correct but should be elaborated further, since the topic is complex and the opposition between “granular” data sets and “aggregated” data sets is somewhat artificial. When it comes to regulatory reporting, granularity is three-dimensional: rows (records representing individual transactions/contracts/customers etc.), columns (attributes with their associated code lists for categorical data) and time. At the moment, none of the frameworks included in the scope of the feasibility study is “fully granular” (or “atomic”), since:
1- Frequency is not higher than monthly (although at national level, some statistical data are requested on a daily or weekly basis)
2- “Granular” statistical frameworks are based on end-of-period stocks, not flows (i.e. individual transactions/payments are not reported)
3- Code lists are often much less granular than in banks’ internal systems (especially when it comes to types of instruments, types of collateral and purpose)
4- In addition, the granularity of data models is referred to as “normalisation”. Statistical frameworks using entity-relationship models (typically AnaCredit) use a very low level of normalisation (close to an ontology/semantic model), whereas banks’ internal data warehouses consist of a large number of tables and relationships where data is normalised. De-normalising the data requires numerous transformations, which can be complex, especially when it comes to calculations about collateral.
(In contrast, the transaction reporting required by ESMA is close to atomicity when it comes to row granularity and frequency.) This means that institutions already perform a lot of transformations and aggregations when reporting granular data for statistical frameworks such as SHS or AnaCredit. All other “traditional” frameworks, such as BSI, FINREP etc., are in fact also “granular”: everything is granular, only the size of the grain differs. Therefore, a thorough analysis of the right level of granularity along the three dimensions, as well as of model normalisation, is needed before making assumptions about the feasibility of going granular; the sketch below illustrates the kind of de-normalisation involved.
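A minimal sketch (in Python, with hypothetical, highly simplified tables and field names) of the de-normalisation that institutions already perform when flattening normalised internal tables into the low-normalisation records used by frameworks such as AnaCredit:

```python
# Hypothetical, highly simplified internal tables (normalised)
counterparties = {"CP1": {"sector": "S11", "country": "SE"}}
instruments = {"INSTR1": {"counterparty_id": "CP1", "type": "loan",
                          "outstanding_amount": 500.0}}
protections = {"PROT1": {"instrument_id": "INSTR1", "type": "real_estate",
                         "protection_value": 800.0}}

# De-normalisation: join the tables into flat, low-normalisation records
flat_records = []
for instrument_id, instrument in instruments.items():
    counterparty = counterparties[instrument["counterparty_id"]]
    linked_protections = [p for p in protections.values()
                          if p["instrument_id"] == instrument_id]
    flat_records.append({
        "instrument_id": instrument_id,
        "instrument_type": instrument["type"],
        "outstanding_amount": instrument["outstanding_amount"],
        "counterparty_sector": counterparty["sector"],
        "counterparty_country": counterparty["country"],
        # In practice, allocating collateral across instruments involves complex calculations
        "protection_value_allocated": sum(p["protection_value"]
                                          for p in linked_protections),
    })

print(flat_records)
```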
Not applicable
Not applicable
Multiple dictionaries
See question 8.
Different formats
No. Our institution is only responsible for statistical reporting, which uses XML and Excel, whereas the NCA collects prudential and resolution data based on XBRL.
Very important
It is very important, and we are working on setting up a common database with the NCA, for the following reasons:
1) For our internal analysis in the fields of financial stability and monetary policy
2) Some of the deliveries under our responsibility require prudential data (such as the international banking statistics delivered to the BIS, or the CBD)
3) Expanding data sharing across authorities is crucial to remove duplicate data collections and minimise the reporting burden
4) Reciprocally, the NCA needs access to our statistical data through this common database in order to gain a more detailed understanding of the institutions under its supervision through our granular data collections
The characteristics are well described in the discussion paper in section 272.
The costs would depend on the model proposed for cost sharing between authorities (and the banking industry?), which is not developed in the discussion paper.
Challenges include:
1- Legal constraints, but this depends very much on the exact scope of the integrated reporting, notably whether it includes national frameworks and statistical frameworks from non-euro area countries which are not subject to ECB regulations, as well as the extent to which the maximum harmonisation principle is applied across frameworks.
2- Organisational constraints, since resources with the right level of knowledge and expertise are currently located in multiple countries, national central banks, NCAs etc. Changing the current organisation would automatically lead to a loss of knowledge, which could take time to rebuild.
Benefits: for authorities, see question 34. For reporting institutions, a CDCP would represent a major improvement (single point of contact, unique technical format, streamlined regulatory watch etc.).
This question should be addressed again when the feasibility study has reached a more advanced stage.
Yes, to a limited extent
NA since we are not a reporting institution.
(Rating scale: Not valuable at all / Valuable to a degree / Valuable / Highly valuable)
Data definition – Involvement: not rated
Data definition – Cost contribution: not rated
Data collection – Involvement: not rated
Data collection – Cost contribution: not rated
Data transformation – Involvement: not rated
Data transformation – Cost contribution: not rated
Data exploration – Involvement: not rated
Data exploration – Cost contribution: not rated
Data dictionary – Involvement: not rated
Data dictionary – Cost contribution: not rated
Granularity – Involvement: not rated
Granularity – Cost contribution: not rated
Architectures – Involvement: not rated
Architectures – Cost contribution: not rated
Governance – Involvement: not rated
Governance – Cost contribution: not rated
Other – Involvement: not rated
Other – Cost contribution: not rated
Not analysed.
Not analysed.
A push approach
From the authorities’ perspective, it can be tempting to prefer a pull approach. However, in practice, it does not make any real difference, since institutions will not make any data available in the data layer from which they can be pulled until the data are ready to be pushed, that is, after internal sign-offs, closing of the books etc. Therefore, a traditional push approach may be preferred anyway.
NA
NA
NA
Some answers to these questions can be found under previous questions.
Yes, the approach described in the discussion paper seems correct and feasible. The IReF project, through its assessment of national requirements and the design of the extended layer, has put in place a similar process, which could be reused for the coordination mechanism.
The proposed agile coordination mechanism is already quite simple according to the discussion paper. The question is rather: how to make sure that a mechanism which is simple in concept will not be made more complex by useless bureaucracy in practice?
Some data requests do not emanate from regulations but from recommendations (such as the ESRB recommendation on closing real estate data gaps), which are then translated into national data requests. These could be part of Category 3.
The criteria seem arbitrary and not really relevant. The point is that, no matter the extent of the regulatory reporting integration, there will always be a need for ad hoc requests, limited in time, frequency or reporting population. A reasonable process may be that authorities would need to refer to the coordination mechanism before issuing any ad hoc request, but would not be prevented from doing so if the data requested are not already available in the CDCP (as described in section 396).
This question should be addressed again when the feasibility study has reached a more advanced stage.
Other (please explain)
No. The reason is that our approach to data integration is not yet mature. Besides, since statistical frameworks are currently not integrated, RegTech companies usually have a very limited offering for these and focus on supervisory frameworks.
Data transformation
All aspects above are equally relevant, but also: file production in different formats and reporting workflow (sign-off/delivery/acceptance/revision/manual adjustments etc.).
Yes, but as mentioned before, the legal and organisational obstacles are probably underestimated.
in-house
“Keen” may not be the appropriate term. We are interested in improving our processes and infrastructure, and are open to looking at different options.
RegTech firms have developed data dictionaries to which data modelled and stored in institutions’ data warehouses can be mapped in order to be further transformed and processed into an integrated flow. However, due to the lack of integration of statistical frameworks, it is not yet really possible to rely on RegTech solutions. Moreover, since the statistical data collection landscape will change radically with the introduction of the IReF and the recommendations emanating from the feasibility study, it is better to wait and see.
Yes
Yes, see the answers to questions 48 and 52. Once this is achieved, RegTech firms should propose solutions using the common dictionary instead of, or in addition to, their own. Their competitive advantage will then not lie in the dictionary itself, but in the reporting workflow, the agility to integrate new data requests emanating from the CDCP into the data dictionary, and the ability to understand data located in institutions’ or authorities’ data warehouses.
Olivia Hauet
S