Our AI Charter
1. Human oversight by design
We maintain meaningful human oversight across the AI lifecycle. Humans remain accountable for AI outcomes, especially in areas of judgement, ethics, leadership, and governance. Oversight mechanisms are built into every use case so that people can intervene and outcomes stay aligned with core human values.
2. Ethical & responsible deployment
We align all AI initiatives with the Australian AI Ethics Principles and international frameworks (ISO 42001, NIST AI RMF). We commit to fairness, inclusivity, reliability and respect for privacy and data sovereignty. AI is never used to displace human judgement in strategic decisions or ethical governance.
3. Transparent & contestable systems
Wherever AI impacts people, we ensure clear disclosures and provide avenues for appeal or remediation. Users should always know when AI is involved and have the ability to challenge outcomes.
4. Risk-informed governance
Every AI use case is assessed for potential harms to individuals, communities or the environment. We document and monitor these risks over time.
5. Inclusive engagement
We engage continuously with stakeholders throughout the AI lifecycle, and we embed accessibility, fairness and social equity into our systems.
6. Strong data & security foundations
We uphold the highest standards of data governance, security and privacy, and we expect the same of any AI solution or vendor we engage.
Our AI Policy
Our business is committed to maintaining the highest standards of integrity, confidentiality, and professionalism in delivering trusted corporate valuation, economics and advisory services.
To uphold these standards, the use of AI tools and applications must adhere to the following principles:
1. Confidentiality and Data Security
All AI tools and applications must be used in compliance with our data protection policies. They must process only the information necessary for their function, and any sensitive or proprietary data must be anonymised or appropriately secured to prevent unauthorised access or disclosure.
2. Authorization and Approval
The deployment of AI tools and applications requires prior authorisation by the board (or a designated committee), in the absence of a dedicated AI-focussed data governance or IT security team. Usage must be consistent with approved systems and align with industry best practices.
3. Quality and Accuracy
AI tools should be used as support, not as a sole source of decision-making. Results from AI applications must be verified and validated against trusted data sources before being relied upon in client advice or reporting.
4. Ethical Use
AI tools must be used ethically, avoiding bias, discrimination, or misrepresentation. The selection and deployment of AI solutions should promote fairness, transparency, and accountability.
5. Legal and Regulatory Compliance
All AI applications must comply with applicable laws, regulations, and professional standards pertaining to data privacy, confidentiality, and advice accuracy.
6. Training and Awareness
Employees authorized to use AI tools must receive appropriate training to understand their capabilities, limitations, and proper handling of sensitive information.
7. Continuous Monitoring and Improvement
We will regularly review AI tool usage to ensure compliance with this policy, address emerging risks, and identify opportunities for improvement.