Artificial intelligence (AI) typically “learns” from the data points fed into its system. Firms using AI-powered tools in the provision of financial services will need to navigate the risks of that technology, including ensuring data integrity so as to reduce the risk of biases becoming embedded in the system’s decision-making. Transparency and explainability are key to ensuring that the AI’s decision-making processes can be clearly articulated to clients and regulators. This article considers the regulatory implications of relying on AI in financial services, using the example of investment advice and portfolio management.