We welcome the opportunity to comment on the ESAs’ observations regarding automated financial advice tools. We generally agree with these observations. While automated advice tools can create benefits for both investors and service providers, automated advice should provide for the same level of investor protection as any other advice. To ensure the same level of investor protection, to avoid further fragmentation of regulatory requirements and to avoid any unintended consequences, it is necessary to analyse existing regulation thoroughly and clarify which regulation applies to which kind of automated advice.

For instance, some activities understood as automated advice may already be subject to regulatory requirements; e.g. the output of a tool might qualify as reception and transmission of orders in relation to financial instruments under MiFID II. The legal analysis is further complicated by the fact that there is no common legal definition of advice. The definition of advice under MiFID II differs from the one in the IDD and from definitions under national civil law. In some countries, automated tools might qualify as advice under civil law more easily than in others.

Based on this, there is a risk that automated tools give the investor the impression of receiving advice when in fact no advice is being provided. The risk of output being perceived as advice could be particularly detrimental to investors. In this regard, the very phrase “automated advice” is misleading, since the service provided might qualify as advice in some cases and not in others. The less information is required and processed, the less tailored the output will be. There is a general risk that less tailored outputs focus more on costs than on clients’ overall needs.

As a specific remark, we do not consider it a problem per se if the same advice is provided to every consumer within the same category (see situation described in para. 24). We believe, however, that based on the input (e.g. investment horizon, knowledge and experience) the same algorithm will usually produce different results. Hence, we would only consider it a problem if the output does not vary even though the input does.

Generally, the ESAs are looking mainly at automated advice tools for transactions with the consumer (business to consumer). Automated advice tools, however, are also used to provide business partners with the input they require (business to business). Any analysis should also clarify the type of relationship at hand.
We are not aware of any additional relevant characteristics of automated financial advice tools.
We are aware of automated financial advice tools. Not all of them are as detailed as described in the ESAs’ examples. For instance, automated financial advice tools in the securities sector might only ask the investor to classify their risk appetite into one of three risk categories and to provide information on their knowledge and experience, without requiring further details.
We are aware that a number of asset managers are currently exploring the possibility of offering their products through automated financial advice channels.
From a regulatory point of view, the main barrier is the fact that different definitions of advice lead to different legal requirements and consequences. In our view, a cross-sectoral harmonisation of the terminology would provide the basis for regulation that focuses on investors’ financial needs.

From a practical perspective, we believe that market participants face the challenge of building tools which ensure compliance with complex legal requirements including civil law, provide for a high level of security and also ensure that the algorithms used produce correct output. To achieve this, firms might have to acquire outside expertise, which can be difficult and costly.

From the perspective of an asset manager in particular, it might be difficult to convey information regarding specific product features through automated tools. Asset managers offering actively managed investments especially have to ensure that their services (and the related costs of such services) are properly understood by the consumer. It is also a business decision how the use of automated advice tools fits into the asset manager’s distribution relationships.

Though data protection rules and similar regulatory requirements also prevent market participants from fully realising the potential of automated tools, these rules are crucial for enabling natural persons to control the use of their own data. Consequently, we do not believe that these barriers should be questioned for the purpose of achieving the best possible advice. If, however, the customer agrees to the use, and possibly the sharing, of personal data relevant for an output which serves the customer’s needs, we think it should be permissible to store the data in one place and use it for the automated advice tool. In such a case, the automated tool has the potential to realise the full benefits of automated advice. Nevertheless, when data is shared, it is important that the other party operates to robust standards, since data protection is only as strong as the weakest link in the chain.
As a general observation, we believe that it is difficult to completely separate benefits from risks since they are closely related. For instance, although an automated tool allows the consumer to provide input and receive output more quickly, this is only a benefit if the tool provides an output which actually fits the consumer’s needs.

The development of automated advice, however, is beneficial for consumers who would otherwise be too intimidated to consult a financial advisor because they fear the information asymmetry between the advisor and themselves. Offering products through digital means provides an easy way for these consumers, too, to convert cash savings into investments. In times of low interest rates, consumers who keep their money in savings accounts essentially lose money. For some consumers, automated advice tools might be the only way they would be willing to gain experience with investing their money.

However, we do not agree with the ESAs’ notion of automated advice as a substitute for human advice. A human advisor can respond to the specific questions and information the consumer raises. A customer might not always be able to enter the information which, in his view, should be taken into account. In addition, consumers might at some point feel much more comfortable checking the automated output with a human advisor. Therefore, while it may be correct that automated advice tools in some cases come at a lower cost for the consumer, they cannot generally be considered a substitute for human advice.

The observation that automated advice may be more consistent due to algorithms (para. 36) depends on the algorithm used and the information provided by the consumer. The assumption that consumers will provide their information in a consistent way does not hold in all cases. To produce consistent output, the systems must be able to deal with inconsistencies in the input.

Further, we do not agree with the observation that automated tools, compared to non-automated advice, in principle allow consumers to more easily receive and retain the details of their financial transactions (see para. 39). Regulations such as MiFID II, the AIFMD and the UCITS Directive already require financial institutions to provide the consumer with documentation regarding his investment, and financial institutions increasingly provide such documentation electronically where legally possible.
Regarding the observation that consumers may feel intimidated to contact a human advisor (see para. 32), we have noticed that automated advice tools often encourage customers to approach them by using a more informal tone (comparable to the way the furniture store IKEA addresses its customers). Consumers might consequently feel more comfortable accessing a website (see also our response to Q6).
We generally do not see any differences regarding potential benefits arising for consumers. However, the product features (e.g. investment horizon) and applicable regulations (e.g. disclosure or conduct requirements) differ, and this may lead to differences regarding potential benefits.
We have not observed any of these potential benefits.
We consider that the potential benefits to financial institutions are accurately described, except that the costs for financial institutions will not in all cases be lower. While automated advice tools may to some extent be used by consumers instead of human advice, the tools will also be used by consumers who would not have contacted a human advisor and will thereby create new business. Moreover, to the extent human advice is replaced by automated advice tools, the financial institution will incur other costs, e.g. for development, maintenance and protecting the system against cyber-attacks. Hence, the costs will simply be different from those of delivering financial advice in the past; this does not mean that it will be any less expensive for a financial institution.
Beyond the described benefits, in particular the possibility of gaining access to new consumer groups, we do not see further benefits.
Yes, we see a difference insofar as some of the products offer specific features. For instance, insurance products provide protection against a certain (biometric) risk and might not be suitable for investment purposes at all. Such products only require an understanding of the consumer’s protection needs, not his overall financial needs. Advice regarding investment products, however, requires a more holistic exploration of the client’s needs, i.e. a full check of the consumer’s financial background and needs.
We have not observed any of these potential benefits yet. In our view it is too early to say.
We generally agree with the description of the potential risks to consumers with the following exceptions:
- The observations miss the significant risk of security. Automated advice requires consumers to provide data through electronic means, e.g. a website. There is a risk that data could be misused or lost. While these risks are also inherent in human advice, the security aspects of online tools have a different dimension.
- Regarding the transparency of compensation (para. 57 et seq.), in the securities sector it should be very clear under MiFID II how much the services provided and products offered cost. In addition, MiFID II rules, as well as the rules applicable to investment management companies under the AIFMD and the UCITS Directive, provide for thorough transparency and conduct standards.

Furthermore, we would like to stress that the ESAs in their description confuse investment objectives with the intended use of investments. This is shown by the ESAs’ suggestion that financing children’s studies would be an investment objective (see para. 54). The investment objective for consumers who want to use their investments for their children’s studies would rather be a minimum outcome at a specified investment horizon. The investment objective should not be confused with the intended use of an investment.
The main risk not addressed in the paper is the security risk, which has a different dimension for online tools. In addition, it is unclear whether automated tools will provide for an ongoing review of the customer’s portfolio. A human advisor can always contact the customer when he sees a need, e.g. because of market developments. Automated tools could trigger a contact automatically, but possibly only on the basis of quantitative data.
We generally do not see any differences regarding potential risks arising for consumers. However, the product features (e.g. investment horizon) and applicable regulations (e.g. disclosure requirements) differ, and this may lead to differences regarding potential risks.
We do not agree with the ESAs that “herding risk” may generally be more significant due to the potentially higher volume of transactions linked to similar algorithms. This would only be the case if consumers were directed into single types of products on a concentrated basis. In particular with respect to investment funds, we do not see such risks. Due to their portfolio structure and the range of products, including active and passive funds, open and closed-ended funds, and equity, fixed income and real estate funds, funds’ investments are not concentrated. In addition, the more the algorithms take specific client data into account, the more tailored the output and hence the more varied consumers’ portfolios will be. In this regard, findings from behavioural finance research and consumer testing should be used to understand consumers’ reactions.

We believe, however, that there is a possibility of portfolio copying through social media, which might have the potential to attract a broad follower community. We do not, however, expect such behaviour to become a herding risk. Understanding consumers’ choices by using insights from behavioural finance research and consumer testing might help to mitigate the risks for investors which could arise from portfolio copying.
We generally agree with the risks described. In particular, the risks relating to flaws, biased advice etc. should be monitored thoroughly. We do not see any potential in the securities sector for regulatory responsibility to be delegated inappropriately to consumers. In this regard, it is crucial that the ESAs review all existing legislation and examine whether and to what extent it is applicable to the services offered by means of specific automated tools. While we believe that automated tools should be subject to robust product governance and conduct rules, MiFID II, the AIFMD and the UCITS Directive already provide for such rules regarding funds.
Except for the risks relating to cyber security, we are not aware of any major risks missing from the Paper.
We generally do not see any differences regarding potential risks arising for financial institutions. However, the product features (e.g. investment horizon) and applicable regulations (e.g. disclosure requirements) differ, and this may lead to differences regarding potential risks.
No, we have not observed any such risks. The market for automated advice is comparatively new; hence, risks might not yet have materialised.
We generally agree. In our view, the ESAs should also consider how to prevent potentially negative developments by ensuring that new market participants are properly qualified and authorised, so as to avoid a decrease in the overall quality of advice, which would create reputational risks for such tools and for the wider advice model in general.

Prior to proposing or taking actions regarding automated advice specifically, the ESAs should review all existing legislation and examine whether and to what extent it is applicable to the services offered by means of specific automated tools.
This depends on whether the potential benefits for financial institutions or other market participants outweigh the potential risks.
We think that the ESAs should also take a closer look at the potential use of big data available to non-financial institutions such as Amazon, Facebook or Google. The use of such data in an investment process entails another set of potential benefits and risks. We believe that the question of automated advice cannot be answered properly without taking this aspect into account.
Dr. Julia Backmann