
Bank of Lithuania

The priority of the Integrated Reporting framework should be credit institutions. However, it is important to consider including other sectors (e.g. insurance) as soon as feasible (at least as a second phase), particularly when considering data dictionary related aspects.
In terms of reporting domain, we agree with the proposed data collections in the scope of statistical, supervisory and resolution datasets. In terms of reporting level, it would be most effective to begin with the European (IReF, EBA, ESA) and international (BIS, IMF) data collections. However, the benefits would be notably more significant if national-level data collections could be included as well.

Data requirements of the Financial Conglomerates Directive should be covered as well, to the extent that they are addressed to credit institutions. The Integrated Reporting framework should also consider quantitative Pillar 3 disclosure requirements that are frontloaded before equivalent reporting requirements to authorities are issued (e.g. data on climate risk).
In our opinion, both reporting authorities and reporting institutions understand the benefits of having the Integrated Reporting framework in place. That said, the options proposed in the discussion paper are still rather broad and numerous, which makes it difficult to estimate their ultimate cost burden and fit.
(Scale: Not relevant / Somewhat relevant / Relevant / Highly relevant)
  • Training / additional staff (skills): Somewhat relevant
  • IT changes: Highly relevant
  • Changes in processes: Highly relevant
  • Changes needed in the context of other counterparties / third-party providers: Not relevant
  • Time required to find new solutions: Not relevant
  • Other (please specify): Not relevant
Changes needed in the context of other counterparties / third-party providers – impossible to indicate at the present moment.
Time required to find new solutions – impossible to indicate at the present moment.
We broadly agree with the findings. It could also be worthwhile to assess the European Market Infrastructure Regulation (EMIR) framework. In addition, if insurance institutions are to be included in the Integrated Reporting framework, the Solvency II reporting framework should be assessed as well.
(Scale: Highly agree / Agree / Somewhat agree / Don't agree)
  • Data Dictionary - Semantic level: Highly agree
  • Data Dictionary - Syntactic level: Highly agree
  • Data Dictionary - Infrastructure level: Agree
  • Data collection - Semantic level: Highly agree
  • Data collection - Syntactic level: Highly agree
  • Data collection - Infrastructure level: Don't agree
  • Data transformation - Semantic level: Highly agree
  • Data transformation - Syntactic level: Agree
  • Data transformation - Infrastructure level: Don't agree
  • Data exploration - Semantic level: Highly agree
  • Data exploration - Syntactic level: Highly agree
  • Data exploration - Infrastructure level: Highly agree
Data transformation - Syntactic level: National authorities are increasingly moving towards advanced data management solutions and tools themselves. Such projects typically entail considerable time and financial costs. It would therefore be useful to conduct a market consultation and assess feedback from major providers of standard data collection and management systems/tools, with the aim of exploring possibilities of adaptation to the new Integrated Reporting framework – in particular, whether providers could offer connectors or extension modules to the reporting authorities and institutions that have standard data system products in place. The ultimate goal should be not to discard the considerable investments in new technologies already made by the national authorities.

Data transformation - Infrastructure level: Data transformations should be implemented locally (at the national authority level), not centrally. Hence there is no need for a holistic approach to data transformation infrastructure.

Data exploration: The entire data exploration block should not be part of this Integrated Reporting exercise. Every institution should be able to decide on its own how data exploration is implemented, how many different schemas should be used and what tools should be employed in the process.
There are several different dedicated data dictionaries in use for specific datasets, including the Single Data Dictionary (SDD) and the EBA Data Point Model (DPM).
A data dictionary should be formulated in a language and overall manner that is comprehensible to financial market participants. In addition, it should feature data codes so that data can be referred to correctly. The data dictionary should also make it possible to identify which reporting domain each data point is relevant to.
The definition and implementation of a standard data dictionary is a necessary precondition and a crucial step for the successful implementation of the Integrated Reporting framework. In our view, it is particularly important that the data dictionary enables the use of API technologies and digitisation, thereby opening up possibilities for more extensive reliance on automation technologies (e.g. machine-to-machine). It is therefore of great importance to ensure unique and overlap-free definitions of the reporting requirements, and to develop these definitions in cooperation with financial market participants and legislators.
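As an illustration of what such machine-readable, overlap-free definitions could look like, the minimal Python sketch below models a hypothetical dictionary entry with a unique code and domain attribution; all names (DataPointDefinition, the example code, the domain tags) are our own illustrative assumptions, not part of any existing dictionary.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataPointDefinition:
    """Hypothetical machine-readable data dictionary entry."""
    code: str                  # unique, overlap-free identifier (data code)
    label: str                 # human-readable definition for market participants
    domains: tuple[str, ...]   # reporting domains the data point is relevant to
    unit: str                  # measurement unit

class DataDictionary:
    """Registry enforcing code uniqueness, so definitions stay overlap-free
    and could be served to institutions machine-to-machine via an API."""
    def __init__(self) -> None:
        self._entries: dict[str, DataPointDefinition] = {}

    def register(self, entry: DataPointDefinition) -> None:
        if entry.code in self._entries:
            raise ValueError(f"duplicate data point code: {entry.code}")
        self._entries[entry.code] = entry

    def lookup(self, code: str) -> DataPointDefinition:
        return self._entries[code]

dictionary = DataDictionary()
dictionary.register(DataPointDefinition(
    code="LN.OUTSTANDING.EUR",  # illustrative code, not a real identifier
    label="Outstanding nominal amount of a loan",
    domains=("statistical", "supervisory"),
    unit="EUR",
))
print(dictionary.lookup("LN.OUTSTANDING.EUR").label)
```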
(Scale: Significantly / Moderately / Low)
  • Understanding reporting regulation: Significantly
  • Extracting data from internal system: Significantly
  • Processing data (including data reconciliation before reporting): Significantly
  • Exchanging data and monitoring regulators’ feedback: Significantly
  • Exploring regulatory data: Moderately
  • Preparing regulatory disclosure compliance: Significantly
  • Other processes of institutions: Significantly
Highly important
Moderately costly
We see the cost of moving to a unique regulatory data dictionary as a likely large initial investment with a long-term benefit.
High cost reductions
We would expect that integrating national regulatory reporting with the harmonised reporting regulation would help achieve cost reductions on two fronts: (1) it would reduce the overall reporting burden for reporting institutions; (2) it would allow for more effective reuse of data and hence reduce the ultimate amount of data that needs to be stored and managed.
High cost reductions
High cost reductions could be expected on the condition that a unique data dictionary enables national authorities to satisfy their ad-hoc reporting needs by compiling data themselves.
We generally agree with the costs and benefits outlined.
To enable SupTech/RegTech in a data-driven economy, standardisation should receive notably more attention. Technological solutions appear where there are data and standardised protocols. Data has to be structured in such a way that technology can use it and create value on the basis of it.

The current reporting mechanism, which is based on the submission of aggregated data, does not fit well with continuously changing reporting requirements. As a result, micro-data-based reporting should be expected to achieve a breakthrough in the near future, particularly since continuous and gradual improvement of the current reporting regime entails significant costs for both the market and supervisory authorities. It is therefore highly important to find a way to adapt to these changing data needs without incurring acute costs, or at least to reduce those costs to a minimum.

In the statistical domain, the collection of granular data would allow a centralised (and therefore more effective) derivation of data aggregates. Shifting this effort from reporting agents to statistical authorities would allow for a more standardised approach and more consistent data. Conversely, in the supervisory reporting domain, reporting institutions should remain responsible for the calculation of certain key prudential ratios and limits. These indicators need to be extremely accurate, since they serve as a basis for prudential decisions and actions. National supervisors monitor the ability of credit institutions to calculate and report accurate data as part of the assessment of internal governance within the Supervisory Review and Evaluation Process (SREP), i.e. they check institutions’ compliance with the Basel Committee on Banking Supervision’s standard no. 239 (BCBS 239).
  • statistical
  • resolution
  • prudential
Option 2
Reporting of granular data is definitely desirable and feasible. That said, the need to collect key aggregates is likely to remain present in both the statistical and prudential reporting domains. For instance, prudential reporting covers a number of key data points which either need to be reported at an aggregate level in order to comply with regulations, or whose compilation requires the input of expert judgement. In such cases, the provision of granular data in lieu of aggregates would not be feasible.

If a data dictionary succeeds in defining data in a way that makes it clearly identifiable, unique and overlap-free, and makes it possible to attribute data points to specific books, asset classes, use cases, etc., this would make it possible to seek a higher data granularity level (towards Option 3).
Some of the potential challenges stem from the different calculation methods and models used in supervisory reporting data, the classification of securities and loans, as well as different credit risk assessment models and their underlying assumptions. If the data dictionary succeeds in integrating supervisory data definitions in such a way that the data is clearly identifiable, unique, overlap-free, and attributable to specific books, asset classes, use cases, etc., this would make it possible to seek the highest data granularity level, as presented in Option 3.

Another potential challenge is related to the fact that the compilation of financial statements and some of the accounting data reported by credit institutions is subject to a degree of discretion as well as to a mandatory input of expert judgement. Unlike in the statistical data domain, it might be complicated to fully delegate this function to prudential authorities. As a result, submission of prudential data at a fully granular level would likely not be entirely feasible; granular data would need to be accompanied by the relevant prudential and accounting aggregates.
If a data dictionary succeeds in defining data in a way that makes it clearly identifiable, unique and overlap-free, and makes it possible to attribute data points to specific books, asset classes, use cases, etc., this would make it possible to seek a higher data granularity level.
(Scale: Highly (1) / Medium (2) / Low (3) / No costs (4))
  • Collection/compilation of the granular data: Low
  • Additional aggregate calculations due to feedback loops and anchor values: Medium
  • Costs of setting up a common set of transformations*: Low
  • Costs of executing the common set of transformations**: Low
  • Costs of maintaining a common set of transformations: Low
  • IT resources: Medium
  • Human resources: Medium
  • Complexity of the regulatory reporting requirements: Low
  • Data duplication: Medium
  • Other (please specify): Highly
(Scale: Highly (1) / Medium (2) / Low (3) / No benefits (4))
  • Reducing the number of resubmissions: Highly
  • Less additional national reporting requests: Medium
  • Further cross-country harmonisation and standardisation: Highly
  • Level playing field in the application of the requirements: Highly
  • Simplification of the internal reporting process: Highly
  • Reduce data duplications: Medium
  • Complexity of the reporting requirements: Highly
  • Other (please specify): Highly
The major part of data should be collected at a granular rather than aggregate level. If the proportion were to shift towards a notably large share of aggregate data, the costs could exceed the benefits, which would ultimately make the solution not worth implementing.
The authorities
Harmonised and standardised, ready to be implemented by digital processes (fixed)
While the definition and maintenance of transformations will present costs to competent authorities, they should be the ones responsible for defining and executing transformations. Reporting institutions, in turn, should be responsible for the validation of data. This would ensure that granular data is collected correctly according to the predefined rules.

If transformations are to be defined jointly by credit institutions and competent authorities, the former should remain responsible for implementing the transformations to ensure data accuracy. The idea of an automated information flow from authorities to credit institutions to check the accurate execution of the transformation rules appears overly burdensome for all stakeholders.
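A minimal sketch (our own, not from the discussion paper) of the division of labour described above, assuming a hypothetical granular loan record: the institution validates granular records against predefined rules before submission, while the authority defines and executes the transformation (here, a simple aggregation).

```python
from dataclasses import dataclass

@dataclass
class LoanRecord:              # hypothetical granular record
    counterparty_id: str
    outstanding_eur: float

# Institution side: validate granular data against predefined rules.
def validate(record: LoanRecord) -> list[str]:
    errors = []
    if not record.counterparty_id:
        errors.append("missing counterparty identifier")
    if record.outstanding_eur < 0:
        errors.append("negative outstanding amount")
    return errors

# Authority side: define and execute the transformation centrally.
def total_outstanding(records: list[LoanRecord]) -> float:
    return sum(r.outstanding_eur for r in records)

records = [LoanRecord("LT0001", 1_000.0), LoanRecord("LT0002", 2_500.0)]
assert all(not validate(r) for r in records)  # institution checks before reporting
print(total_outstanding(records))             # authority derives the aggregate: 3500.0
```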
Some manual adjustments should be allowed, for instance in the case of elaborations following audit procedures.
Since the natural objectives of the different reporting domains (statistical, supervisory and resolution) differ, it is important (and will be challenging) to find technical tools and methods that allow consolidations to be carried out properly.

Another challenge is related to the issue of subsidiaries. Depending on the chosen approach (granular vs. aggregate), the number of reporting agents might increase notably. This issue could be addressed by making the parent company responsible for subsidiary reporting. The parent company would report to its home-country authorities, which would share the subsidiary information with the relevant country authorities through the central data hub envisioned in the Integrated Reporting framework.

Reporting subjects should be identified carefully and explicitly, so that it is clear precisely what information should be used in consolidations; this information also has to be easily accessible to facilitate calculations.
Every national authority should be able to define its own extensions to the transformation rules, in order to cover gaps in national-level reporting requirements and to achieve comparability with international standards. A similar process is already in place and seems to be working well.
It is important to ensure that principles are understood in the same way by all stakeholders, since principles are generally open to broader interpretation than rules.
Not all granular data can be universally revealed (some of it might be subject to a degree of discretion). Therefore, access rights and permissions to specific datasets should be managed and controlled on a case-by-case basis, so that sensitive information remains protected. At the same time, this should not limit access to non-sensitive data of a general nature.
We consider it advisable to review the evidence from the Cost of Compliance study prior to making a final decision on the reporting scope. Particular attention should be paid to the recommendations offered in the Cost of Compliance study to scrap some of the reporting requirements for small and medium-sized banks.
Feedback loops should cover the main supervisory requirements, such as capital or liquidity indicators, in order to check whether the calculations have been carried out properly. This is especially relevant for supervisory indicators that are used as a basis for decisions and recommendations by national authorities. In the absence of feedback loops, any miscalculations would increase the chance of wrongful decisions by supervisory authorities.

In the statistical domain, feedback loops could be important for ensuring data quality in the case of balancing figures. For instance, in the case of reserve requirements, financial market participants should be required to confirm that the figures are correct and can therefore be used further along the data chain (this is currently part of the reporting procedures).
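A schematic sketch of such a feedback loop, under our own assumption that the authority recomputes an anchor value (e.g. a reserve-requirement base) from granular data and sends it back to the institution for confirmation; the function names and tolerance are illustrative only.

```python
# Hypothetical feedback loop: the authority derives an anchor value from
# granular submissions and returns it for confirmation by the institution.
def authority_derived_aggregate(granular_amounts: list[float]) -> float:
    return sum(granular_amounts)

def institution_confirms(reported_aggregate: float, derived: float,
                         tolerance: float = 0.01) -> bool:
    # The institution confirms the figure only if it matches its own books
    # within a small tolerance; otherwise a resubmission is triggered.
    return abs(reported_aggregate - derived) <= tolerance

derived = authority_derived_aggregate([120.0, 80.5, 42.3])
print(institution_confirms(242.8, derived))  # True: figure can be used downstream
```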
Not applicable
Not applicable
Multiple dictionaries
Different formats
NA
In our opinion, a CDCP is certainly an important advancement. However, our view is that data collection should be centralised at the national (not international) level, featuring a single national data collection point (single country, single point of collection).

From the point of view of reporting institutions, there is no significant difference between reporting through a national data collection point or through an international CDCP. However, the legacy of several layers of reporting at the national and international level, primary European law, and the financing of a central data collection point are considerable obstacles that suggest not pursuing a fully centralised, direct international CDCP concept in the near future.
In our opinion, a Hub-and-Spoke approach is the most suitable at the international level. It has the advantage of not interfering with any data collection or data management processes and related decisions that have been made (or are planned) by the national competent authorities.

A Centralised approach, on the other hand, could potentially escalate risks related to a single point of attack. This approach would also be complicated to implement due to considerable national differences and the variety of reporting processes and systems.

At the national level, however, the data collection approach is not so critical, as long as there is a well-defined and properly executed data dictionary in place.
Yes, to a limited extent
It would depend on the approach chosen. In the case of our favoured Hub-and-Spoke approach, we think that the costs could be acceptable, up to a certain limit. The costs would also likely be shared with financial market participants.
(Scale: not valuable at all / valuable to a degree / valuable / highly valuable)
  • Data definition – Involvement: valuable
  • Data definition – Cost contribution: valuable to a degree
  • Data collection – Involvement: valuable
  • Data collection – Cost contribution: valuable to a degree
  • Data transformation – Involvement: valuable
  • Data transformation – Cost contribution: valuable
  • Data exploration – Involvement: valuable to a degree
  • Data exploration – Cost contribution: valuable to a degree
  • Data dictionary – Involvement: valuable
  • Data dictionary – Cost contribution: valuable to a degree
  • Granularity – Involvement: valuable
  • Granularity – Cost contribution: valuable to a degree
  • Architectures – Involvement: not valuable at all
  • Architectures – Cost contribution: not valuable at all
  • Governance – Involvement: not valuable at all
  • Governance – Cost contribution: not valuable at all
  • Other – Involvement: (no answer)
  • Other – Cost contribution: (no answer)
Data definition – Involvement: It is essential to define data in a way that is comprehensible to financial market participants. Authorities sometimes have limited awareness of the data granularity available on the side of reporting institutions; hence the input of the latter is highly important.

Data definition – Cost contribution: Benefits and time costs will be experienced on both sides – authorities and reporting institutions.

Generally, reporting agents could be asked to contribute to the costs of the new Integrated Reporting system (e.g. the costs of infrastructure), at least indirectly via supervisory fees. The overall costs of the integrated system may not be so appealing to reporting agents, as the return on investment may take several years to materialise.
We are in the process of implementing our own data integration initiative under the umbrella of the Data Management Maturity Program (DAMAMA). The program aims to develop new technological solutions and processes in order to effectively collect, share and integrate data. This data management initiative consists of three individual projects: Data Collection, Data Platform, and Data Governance.

Considering only the Data Collection and Data Governance projects (which seem to be the best approximation of the proposals under the vision of the Integrated Reporting framework as described in the EBA discussion paper), a rough monetary cost estimate could be around 3% of operational costs (over a period of 5 years).
A mixed (pull and push) approach
In our opinion, a national authority should be able to collect data on its own terms (using either a pull or a push approach) through a single national data collection point. The EBA, in turn, would collect data from the national authorities using a pull approach, since at the national level information would be stored in a standardised and unified way. For the pull approach to work, there ought to be clear and explicit rules and a framework in place to ensure data quality.

Some reporting requirements tend to be event-based rather than following a predefined schedule. This kind of reporting should be done using a push approach. Supervisory authorities would be responsible for taking actions to collect such data from reporting institutions. European (and international) authorities would be able to pull such data according to their needs.

Either way, reporting agents should remain responsible for the quality of the data they report, regardless of whether a push or pull approach is used. This includes the quality of data aggregates as well.
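The sketch below contrasts the two flows under the setup we describe above: a single national collection point holds standardised data that the EBA could pull on demand, while event-based reports are pushed as they occur. The class name, endpoint shapes, and the event type are our illustrative assumptions, not a proposed specification.

```python
from datetime import datetime, timezone

class NationalCollectionPoint:
    """Hypothetical single national data collection point."""

    def __init__(self) -> None:
        self._store: dict[str, dict] = {}   # data point code -> latest report
        self._event_log: list[dict] = []

    # Push flow: event-based reporting initiated by the institution.
    def push_event(self, event_type: str, payload: dict) -> None:
        self._event_log.append({
            "type": event_type,
            "payload": payload,
            "received_at": datetime.now(timezone.utc).isoformat(),
        })

    # Pull flow: the EBA (or another authority) retrieves standardised
    # data on demand, relying on predefined quality rules upstream.
    def pull(self, code: str) -> dict:
        return self._store.get(code, {})

ncp = NationalCollectionPoint()
ncp.push_event("large_exposure_breach", {"counterparty": "LT0001"})  # push
snapshot = ncp.pull("LN.OUTSTANDING.EUR")                            # pull
```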
A mixed (pull and push) approach - see our comment to question 40.
A Hub-and-Spoke approach, featuring single national data collection points, would be the most appropriate solution, in our view.
In the case of a pull approach in particular, access rights and permissions to some of the data points should not be universal and should be carefully considered.
The main challenge is related to finding a suitable solution for reporting institutions to confirm that granularly collected data is true and correct, especially in the case of a pull approach.
We broadly agree with the suggested mechanism. However, the central data collection point should provide technical support to competent authorities without being able to reject or limit their “ad-hoc” data requests. In other words, national competent authorities should maintain full control of their own data requests, while being obliged to keep the overall reporting burden under control.

Some apparent advantages could be seen for large cross-border banking groups with operations across several countries, primarily because of the potentially smaller number of national extensions. The proposed coordination mechanism under the Integrated Reporting framework also has the potential to restrain authorities from collecting unnecessary information from financial market participants. In the case of planned developments, the coordination mechanism would help ensure that they are carried out in a coordinated, planned, and harmonised manner.

The major disadvantage of the proposed mechanism is related to the timeliness of procedures. If data is needed quickly, even when planned in advance, the process is likely to be lengthy and not particularly smooth (especially at the beginning). This could be compensated for by putting in place advanced technological solutions, which would allow the necessary processing and information discovery to be carried out more quickly and with fewer human hours.
The proposed mechanism could be further improved by employing AI technologies and algorithms, which would allow for quicker checks. In addition, AI could help navigate the tremendous number of data points and enhance the use of data definitions to locate the relevant data for producing aggregates.
We agree with the proposed approach.
The EBA could investigate Regulatory Technology (RegTech) solutions that aim for forward-looking thinking and a paradigm shift in the regulatory reporting process.

One example of such an investigation could be to assess the feasibility of, and obstacles to, creating API-based “sensors” for reporting purposes that could be deployed in banks and potentially in other reporting agents as well. These sensors could serve as a common semantic layer and external interface for supervisors and regulators across FIs (banks and other financial institutions). Based on the Bank of Lithuania’s experience, sensors could be installed in an FI’s IT infrastructure. Each FI would have to internally develop processes to feed these sensors, i.e. to set up a standardised API. There could be two types of sensors: 1) sensors allowing supervisors and regulators to pull data from banks in real time; 2) sensors able to push alerts to supervisors or regulators on the occurrence of certain events, also in real time.
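A minimal, framework-free sketch of the two sensor types described above; the class name, metric, threshold, event type, and the callback standing in for the supervisor-side endpoint are our own illustrative assumptions about how such a standardised API could be shaped, not an existing implementation.

```python
from typing import Callable

class ReportingSensor:
    """Hypothetical in-FI sensor exposing a standardised interface."""

    def __init__(self, alert_handler: Callable[[dict], None]) -> None:
        self._metrics: dict[str, float] = {}  # fed by the FI's internal processes
        self._alert_handler = alert_handler   # supervisor-side endpoint (type 2)

    def feed(self, metric: str, value: float) -> None:
        # The FI's internal processes update the sensor in real time.
        self._metrics[metric] = value
        if metric == "liquidity_coverage_ratio" and value < 1.0:
            # Type-2 behaviour: push an alert to the supervisor on a defined event.
            self._alert_handler({"event": "lcr_below_threshold", "value": value})

    def pull(self, metric: str) -> float:
        # Type-1 behaviour: the supervisor pulls current data on demand.
        return self._metrics[metric]

alerts: list[dict] = []
sensor = ReportingSensor(alert_handler=alerts.append)
sensor.feed("liquidity_coverage_ratio", 0.97)  # triggers a real-time alert
print(sensor.pull("liquidity_coverage_ratio"), alerts)
```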

Sensors would be able to monitor events related to deposits, loans, payments, risks (potentially broadly transaction-level information), etc. These sensors could send data and alerts in real time to supervisors and regulators. This would allow supervisors to monitor the “health” of any FI in real time. Thereby, sensors would also act as an early warning tool.

Sensors would allow a supervisor or regulator to obtain structured micro-level data and automatically transfer it to the required reports, including the ability to access data in a specified format or manner.

It is expected that once the full application of API technology is implemented, considerable parts of the reporting process could be streamlined. As a result, financial institutions would face lower reporting-related costs, since supervisory authorities themselves would be able to produce new insights using different breakdowns.

Additionally, supervisors and regulators could develop anomaly detection mechanisms to identify unusual developments in the real-time data stream received from any FI, e.g. based on advanced Machine Learning techniques. An outlier in sensor-based information could potentially indicate an anomalous event requiring attention from the supervisor and the FI.
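As a deliberately simple stand-in for the advanced Machine Learning techniques mentioned above, the sketch below flags outliers in a sensor stream using a rolling z-score; the window size, threshold, and sample data are arbitrary illustrative choices.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(stream, window: int = 20, z_threshold: float = 3.0):
    """Flag values that deviate strongly from the recent rolling window."""
    recent = deque(maxlen=window)
    for value in stream:
        if len(recent) >= 5 and stdev(recent) > 0:
            z = (value - mean(recent)) / stdev(recent)
            if abs(z) > z_threshold:
                yield value  # candidate anomalous event for supervisory follow-up
        recent.append(value)

# Mostly stable daily flows with one outlier a supervisor might investigate.
flows = [100.0, 101.5, 99.8, 100.4, 100.9, 99.5, 100.2, 100.7, 450.0, 100.1]
print(list(detect_anomalies(flows)))  # [450.0]
```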

This would essentially mean an evolution of the supervisory approach from principle-based to insight-based supervision, where national competent authorities gain insights on potential market risks. Such a mechanism, in turn, could potentially have a positive impact on financial stability and the soundness of the financial system.
  • Data collection
  • Data exploration
Data collection
in-house
The Bank of Lithuania is very keen to invest in regulatory technology (RegTech) solutions, with the aim of finding optimal solutions for all stakeholders.
RegTech solutions could help combine different types of data, e.g. quantitative (structured) information with textual (unstructured) reporting information. This would also allow information from sources other than reporting institutions, e.g. media information, to be tracked and employed.

If reporting requirements and quality checks are defined in a machine-executable language, automated systems could take care of sending, checking, integrating and disseminating data. This automation will require several measures that would also be conducive to the Integrated Reporting framework.
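A sketch of what "machine-executable" quality checks could mean in practice, under our own assumptions: each rule is expressed as data (an identifier, a description, and a predicate) that an automated pipeline evaluates without human interpretation. The rule IDs and record fields are hypothetical.

```python
# Hypothetical machine-executable quality checks: each rule is data that an
# automated pipeline can evaluate before integrating and disseminating reports.
RULES = {
    "R001": ("outstanding amount must be non-negative",
             lambda r: r.get("outstanding_eur", 0) >= 0),
    "R002": ("counterparty identifier must be present",
             lambda r: bool(r.get("counterparty_id"))),
}

def run_checks(record: dict) -> list[str]:
    """Return the descriptions of all rules the record violates."""
    return [f"{rule_id}: {desc}"
            for rule_id, (desc, check) in RULES.items()
            if not check(record)]

print(run_checks({"outstanding_eur": -5.0}))
# ['R001: outstanding amount must be non-negative',
#  'R002: counterparty identifier must be present']
```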

RegTech also has the potential to take the European reporting framework much further than proposed in the Integrated Reporting framework. With the help of RegTech, we have the possibility to become supervisors who not only produce decisions and recommendations for financial market participants, but also aim to technologically embed these decisions into supervisory solutions. Provided a supervisory regulator has access to full financial market information, this would enable better and more accurate recognition of market patterns and delivery of decisions. In other words, regulatory decisions could be technologically embedded in the business logic on the financial market participants’ side, which would allow for automatic actions in the case of certain feedback from supervisory authorities, e.g. temporarily suspending transactions.
Yes
To enable RegTech and SupTech solutions in a data-driven economy, standardisation is essential and should receive notably more attention. Technological solutions appear where there are data and standardised protocols. Data has to be structured in such a way that technology can use it and create value on the basis of it.
Ugne Saltenyte
B