At BellchambersBarrett, we are committed to delivering exceptional service while embracing innovative technologies that enhance our capabilities and client outcomes. This means we are embracing AI safely, ethically and responsibly.
As we often work with Commonwealth agencies, our AI practices align to, and uphold, the policy and advice provided to government by the Digital Transformation Agency (DTA). Accordingly, we use the Organisation for Economic Co-operation and Development’s (OECD’s) concepts to define AI.
We allow our staff to use AI in their work with the objective of enhancing productivity and service delivery. This includes enterprise AI deployed in our secure, internal ICT environment (Microsoft 365 Copilot), and the safe use of publicly available AI tools such as ChatGPT, Claude and Canva.
We uphold our governance to assure responsible AI use
Our use of AI is governed by our internal AI Working Group and a Chief AI Officer. This group advises the firm’s leadership on AI policy, technologies and use cases based on our risk tolerance. This enables the partnership to make informed AI-related decisions.
As a firm we accept only low-risk AI usage. This means our use of AI does not directly impact clients without human intervention, put at risk the client information and data we hold, or harm the privacy of individuals, including our staff and clients.
As part of our commitment to transparent use of AI, this transparency statement will be reviewed and updated when our approach to the use of AI changes significantly, and at least every twelve months. We reassess our approach to the use of AI when:
- Our approach changes in a way that will affect client data, trust, or confidence in our service delivery.
- A use case evolves or changes in functionality, creating new risk.
- There are new or changed risks that impact our ability to deliver quality services.
Our staff use AI only in approved use cases
The tasks completed by our staff using AI fall into several usage patterns and domains as outlined by the DTA’s Standard for AI transparency statements. These are:
- The Analytics for insights usage pattern, primarily in the Service delivery, and Policy and legal domains, where the sensitivity of the data is low.
- The Workplace productivity usage pattern, primarily in the Service delivery, and Corporate and enabling domains.
Specifically, our staff use AI to:
- Summarise, interrogate, analyse and obtain insights from datasets.
- Assist in the analysis, creation or summarisation of documents, emails or other content.
- Assist in the creation of meeting minutes or interview transcripts, where client permission is granted.
- Search information repositories and retrieve documents, information or data.
Humans are in control. We do not use AI as a decision-maker with respect to client services and advice. We remain the decision-maker, and we do not disclose client data or information that could be used to identify our clients and their services.
We maintain documentation for guiding safe use of AI
Our internal policies apply to all staff and contractors and include:
- BB AI usage policy which sets the boundaries for how we use AI. It is consistent with, and supports, DTA policy and guidance. Our AI policy is reviewed and updated regularly to ensure it remains relevant to current AI capabilities and DTA guidance.
- AI use-case register to maintain an up-to-date record of how staff are using AI, aligned to our risk tolerance.
- BellchambersBarrett Quality Control Manual and Quality Assurance Policy, which uphold the need for human intervention to assure that our service to clients meets our quality standards.
We are committed to the AI literacy of our staff to assure ethical use
Staff need to understand AI in order to use it effectively and responsibly. Our staff are required to undertake the AI in government fundamentals training module prior to using AI tools, and all staff complete annual, mandatory AI training developed in house. We are rolling out Microsoft 365 Copilot in a staged approach, to test and assure that staff have the capability to use it safely, and that its outputs are used in a way that upholds our quality standards.