Introduction
There is nothing new about Artificial Intelligence ("AI") – people have been creating and refining machines to independently apply problem-solving techniques to tasks for decades. This has been happening for so long, and in so many ways, that we don't really worry about AI when a Word document highlights a spelling mistake and suggests an alternative, when a social media provider curates a personalised feed of posts for a user, or when Google Maps suggests a change in route because of heavy traffic ahead.
What is new is the availability of powerful AI tools – including ChatGPT – which combine accessible models with vast processing power and have the potential to revolutionise how companies process information and make decisions. AI technology could have a major impact on complex tasks including pre-populating technical documents, economic forecasting, mapping health changes and risks across large populations, and modelling market research.
The use of AI introduces challenges on many fronts – not least regulatory and ethical considerations. Although offshore jurisdictions have no AI-specific legislation, the directors of offshore companies must still meet their statutory and common law obligations when their companies use, invest in, or develop AI solutions. For clarity, this article will define "offshore jurisdictions" as jurisdictions typically used for international financial services and company structures, including but not limited to the British Virgin Islands ("BVI"), the Cayman Islands, Guernsey, and Jersey. Unless otherwise specified, references to statutory duties throughout this article will be generic but primarily informed by common law principles and the legislation applicable in the BVI, the Cayman Islands, Guernsey, and Jersey.
What is AI?
AI refers to the use of computer systems to perform tasks that would otherwise require human intelligence, including problem solving, language processing and analysis. If this processing is confined to a simple calculator app on a phone or a spellcheck, it doesn't raise too many issues from a director's point of view.
The arrival of ChatGPT and similar tools has changed that. ChatGPT, developed by OpenAI, is an example of generative AI: a type of AI system that can generate new content, including text, images and audio. These systems are trained on large bodies of data, from which they learn to produce new content that differs from the training data.
AI offers an understanding of context, language capabilities, and the ability to handle complex tasks, and can produce – or appear to produce – relatively comprehensive technical documents. For example: a discrimination law policy, a summary of market conditions in a given sector and region, or an analysis of a company's projected performance.
Duty of care, diligence and skill
Directors are required to exercise care, diligence, and skill when managing a company. The use of AI, with its ability to provide quick analysis, can be a tool to support directors with decision-making.
The use of AI solutions is not without risk. Specific risks inherent in AI include algorithmic bias, operational failures and cybersecurity vulnerabilities. For example, algorithmic bias might produce discriminatory outcomes, such as biased client assessments or unfair hiring decisions. Operational failures could lead to financial losses if an AI system malfunctions during critical operations, while cybersecurity vulnerabilities might expose sensitive data to breaches.
There may also be civil liability risks: for instance, the use or inadvertent disclosure of confidential information when using AI products may give rise to liability. Similarly, there are intellectual property risks in the use of AI-generated content.
AI is a tool and not a substitute for knowledge, skill and supervision. Therefore, a director must be mindful of their duties of care, diligence and skill in deploying AI solutions, and must ensure that there is appropriate oversight, management and control.
Directors of offshore companies must not view AI as a “black box”, nor should AI be used on a “set and forget” basis. A director must not allow their discretion to be fettered by the use of AI, and must not rely on AI in substitution for exercising their own decision-making. AI should augment, not replace, human judgment, with directors retaining ultimate responsibility for decisions informed by AI outputs.
Privacy
Directors should be aware of the data privacy risks when using AI. Under data protection laws in each offshore jurisdiction, companies have a duty to protect personal data and ensure that it is not mishandled or exposed to risk.
AI systems such as ChatGPT typically save chat history and data, including the prompts a user provides and all responses generated. Depending on the user's account settings, this data may be retained by OpenAI and, in some cases, used to improve its models.
Sending confidential information to AI systems could inadvertently breach applicable data protection laws, especially if the AI system used does not have fully transparent data handling processes.
Directors should ensure that robust data protection protocols are in place when using AI systems. This could include encryption and data anonymisation.
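By way of illustration only, the sketch below shows one way personal data might be redacted from a prompt before it is sent to an external AI service. The patterns and helper names are hypothetical and deliberately simple; a production deployment would rely on dedicated PII-detection tooling and the company's documented data protection policies.

```python
import re

# Illustrative sketch: redact common personal-data patterns from a prompt
# before sending it to an external AI service. The patterns below are
# deliberately simple examples, not a complete anonymisation solution.

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(prompt: str) -> str:
    """Replace matches of each pattern with a placeholder token."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarise the complaint from jane.doe@example.com (tel. +44 7700 900123)."
    print(redact(raw))
    # -> "Summarise the complaint from [EMAIL REDACTED] (tel. [PHONE REDACTED])."
```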
AI bias
Another factor that directors should consider is AI bias. AI systems are trained on vast datasets that often contain historical biases, and these biases can carry through into AI outputs. For example, AI models have shown bias in areas such as gender, race and socio-economic factors.
Directors should be aware that using AI for decision-making could introduce unintended discrimination. This could potentially expose a company to reputational damage.
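To make the risk concrete, the sketch below shows a simple, hypothetical fairness check: comparing the rate of favourable outcomes an AI system produces across groups and flagging large disparities. The 0.8 threshold reflects the "four-fifths" rule of thumb used in some disparate impact analyses; the data and group labels are invented for illustration.

```python
from collections import defaultdict

# Illustrative sketch of a basic disparate-impact check on AI-assisted
# decisions, using invented data. Real fairness audits are considerably
# more involved and should follow professional guidance.

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group_label, favourable_outcome) pairs from an AI system."""
    totals = defaultdict(lambda: [0, 0])  # group -> [favourable, all]
    for group, favourable in decisions:
        totals[group][0] += int(favourable)
        totals[group][1] += 1
    return {group: fav / n for group, (fav, n) in totals.items()}

def disparate_impact_flagged(rates: dict[str, float], threshold: float = 0.8) -> bool:
    """Flag if any group's rate falls below `threshold` times the highest rate."""
    highest = max(rates.values())
    return any(rate < threshold * highest for rate in rates.values())

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = selection_rates(decisions)      # {"A": 0.67, "B": 0.33}
print(disparate_impact_flagged(rates))  # True: 0.33 < 0.8 * 0.67
```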
Practical steps for directors
In order to protect against liability, it is recommended that directors take the following steps when deploying AI solutions:
- ensure directors maintain ongoing education to understand the AI systems used by the company and subscribe to regulatory updates in relation to AI;
- ensure the board actively oversees the company’s AI strategy, aligning it with corporate values, risk tolerance, and legal duties;
- implement and review policies around AI deployment, ensuring they cover data governance (e.g. data quality and privacy), ethical guidelines (e.g. fairness and transparency), compliance with applicable regulations, and procedures for regular system audits;
- exercise due care when outsourcing AI services by conducting proper due diligence on vendors, such as assessing their track record, reviewing the transparency of their AI models, verifying compliance with data protection laws, and ensuring robust cybersecurity measures are in place;
- where AI is used, consider whether disclosures are necessary or required, such as informing shareholders or clients about AI’s role in decision making (e.g. investment strategies) or its potential risks and limitations, particularly for investment managers or fund vehicles;
- if AI is used in a decision-making capacity, document the AI’s recommendations, the rationale for accepting or rejecting them, and the extent of human oversight involved, ensuring a clear audit trail (a minimal record structure is sketched after this list);
- mitigate liability for AI errors by ensuring rigorous testing, validation, and monitoring of systems, and maintaining records of oversight efforts;
- be prepared for future international regulatory developments, such as the EU’s AI Act, which regulates high-risk AI systems, or guidelines from jurisdictions like the US and Singapore, by implementing horizon-scanning processes;
- examine the company’s directors’ and officers’ (“D&O”) insurance policies to ensure coverage for AI-related liabilities, such as data breaches or oversight failures, and consider negotiating clauses specific to AI risks; and
- consider ethical implications, ensuring AI systems promote fairness, transparency, and accountability, such as by addressing biases and justifying AI driven decisions.
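As an illustration of the audit trail point above, the sketch below outlines a minimal record structure for logging AI-assisted decisions. The field names are hypothetical; an actual record should follow the company's own governance and record-keeping policies.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative sketch of an audit-trail record for AI-assisted decisions.
# Field names are hypothetical examples of what a board might capture.

@dataclass(frozen=True)
class AIDecisionRecord:
    decision_id: str
    ai_system: str          # name and version of the tool used
    ai_recommendation: str  # what the system suggested
    human_rationale: str    # why the board accepted or rejected it
    accepted: bool          # whether the recommendation was followed
    reviewed_by: str        # the director(s) exercising oversight
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

record = AIDecisionRecord(
    decision_id="2025-014",
    ai_system="vendor-forecasting-model v2.1",
    ai_recommendation="Reduce exposure to sector X by 10%.",
    human_rationale="Accepted after independent review of market data.",
    accepted=True,
    reviewed_by="Board of Directors",
)
print(record)
```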
Conclusion
Directors must be mindful of their duties when using AI solutions. Good governance requires proactive engagement with AI risks and ensuring that appropriate steps are taken to understand and manage such risks.
Where a director fails to act responsibly, they risk personal liability for breach of their duties. The director and the company may also face reputational damage or civil liability, aside from any liability for breach of duty.
Conversely, by promoting a culture of responsible AI use, directors can mitigate against liability, demonstrate ethical leadership, and build stakeholder trust in an AI-driven era.