
Artificial Intelligence Assurance Framework


The Northern Territory Government (NT Government) AI Assurance Framework and accompanying ethics principles ensure that AI used by the NT Government meets ethical and assurance standards and is clearly focused on customer needs, while carefully managing potential risks.

AI will only be used where there is a clear use case for doing so, and where its use does not pose unmitigated risks in relation to key ethics principles for the NT Government including security, privacy, transparency or safety.

Key information

This is the current version of the statement as at 31 May 2024.

All agencies that use, or propose to use, an AI component or AI-driven tools are required to apply the assurance framework. This includes the use of large language models and generative AI, which are explicitly within scope of the framework.

There are many AI products in our everyday lives that are not intended to be assessed under this framework. With the exception of generative AI tools, a proposed use of AI does not need to be assessed if all of the following conditions apply:

  • it uses AI that is built into a widely available commercial product (e.g. Apple Siri, Google Home or Google Assistant)
  • the AI is not customised in any way and is not being used other than as designed or intended
  • no NT Government data or personally identifiable customer data is being used with the AI.

(For example, personal digital assistants, smartphones and satnav systems do not require use of the AI Assurance Framework prior to use.)
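The exemption conditions above amount to a simple decision rule. The sketch below is illustrative only; the function and parameter names are hypothetical and are not part of the framework:

```python
def assessment_required(is_generative_ai: bool,
                        built_in_commercial_product: bool,
                        customised_or_off_label: bool,
                        uses_ntg_or_personal_data: bool) -> bool:
    """Return True if the proposed AI use must be assessed under the framework.

    Generative AI tools are always in scope. Otherwise, assessment is
    waived only when ALL three exemption conditions apply.
    """
    if is_generative_ai:
        return True
    exempt = (built_in_commercial_product
              and not customised_or_off_label
              and not uses_ntg_or_personal_data)
    return not exempt

# A stock satnav system: built-in, unmodified, no NTG data -> exempt
print(assessment_required(False, True, False, False))  # False
```

Note that failing any one of the three conditions (for example, customising the built-in AI, or feeding it NT Government data) brings the use back into scope.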

The AI Assurance Framework consists of three parts:

  • AI Ethics Principles
  • AI Self-Assurance Assessment
  • AI Advisory Board.
  • The AI Ethics Principles are designed to ensure best practice use of AI in the NT Government.

    The NT Government takes a principles-based approach to AI governance, one that recognises the benefits of innovation and the possibilities opened up by responsible AI adoption, as well as the broad and in some cases irreversible harms that can arise from improper AI use.

    The NTG has developed and adopted the following AI Ethics Principles to guide the AI Self-Assurance Assessment and the AI Advisory Board.

    • Community Benefit - AI should deliver the best outcomes for Territorians and provide key insights into decision-making.
    • Safety - AI must be used safely and responsibly, and should reliably operate as intended.
    • Fairness - Use of AI will include safeguards to manage data bias or data quality risks.
    • Privacy and security - Use of AI will include the highest levels of privacy and security assurance.
    • Transparency - Review mechanisms will ensure Territorians can question and challenge outcomes where AI was involved.
    • Accountability - Decision-making remains the responsibility of organisations and individuals.
  • NT Government agencies implementing AI can demonstrate how they apply the AI Ethics Principles by undertaking an AI Self-Assurance Assessment. The assessment should be used at all major gateways or lifecycle changes of a system or solution that uses AI. Employees must assess the AI against the framework honestly and comprehensively to demonstrate how the requirements of the ethics principles are being met.

    For complex AI projects, the assessment could be completed by multiple project officers responsible for different aspects of the project. The officers can then meet, discuss and develop a recommended AI Self-Assurance Assessment profile for submission to the project governance body and sponsor, demonstrating a robust assessment.

  • The AI Advisory Board aims to support NT Government agencies to address the issues, risks and challenges with implementing AI. The AI Advisory Board reports to the ICT Governance Board. The AI Advisory Board does not govern, manage or oversee AI projects. The AI Advisory Board recognises and does not diminish:

    • the role of the agency Chief Executive (CE) as the accountable officer of their agency
    • the obligations of the agency CE as the Accountable Officer
    • agency or project governance.

    Any completed AI Self-Assurance Assessment with high-rated risks must be submitted to the AI Advisory Board for advice on how to mitigate and address those risks.
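The escalation rule above can be sketched as a filter over completed assessments. This is a hypothetical illustration; the framework does not prescribe any data format, and the field names below are invented:

```python
def needs_board_advice(assessments):
    """Select completed assessments carrying any high-rated risk;
    under the framework, these must be submitted to the AI Advisory
    Board for advice on mitigating and addressing the risks."""
    return [a for a in assessments
            if a.get("completed") and "high" in a.get("risk_ratings", [])]

escalated = needs_board_advice([
    {"project": "chatbot", "completed": True, "risk_ratings": ["high", "low"]},
    {"project": "forecast", "completed": True, "risk_ratings": ["medium"]},
])
# only the "chatbot" assessment carries a high-rated risk, so only it is escalated
```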

  • NT Government agencies are accountable for complying with legislation and adhering to digital policies and standards, including provisions addressing the implementation of AI, whether or not the solution uses NTG data.

    NT Government agencies may supplement the AI Assurance Framework and models as necessary to ensure sufficient and effective governance appropriate to their business needs, client requirements and community expectations.

    The Department of Corporate and Digital Development (DCDD) is generally responsible for assisting agencies with AI and with project development and management, unless otherwise agreed by both agency CEs.

    DCDD develops models, guides and templates to assist agencies with implementing the requirements of the AI Assurance Framework. DCDD manages and provides secretariat services for the AI Advisory Board. Further, DCDD develops and administers the NTG digital governance frameworks and coordinates related governance committees.

  • The NT Government defines artificial intelligence (AI) as intelligent technology, programs and the use of advanced computing algorithms that can augment decision making by identifying meaningful patterns in data. AI in this context should aim to help government agencies cut costs, free up labour hours for more critical tasks, and deliver better, more targeted services.

    An AI project is the development, acquisition, implementation or adoption of any artificial intelligence, deep learning algorithm, generative AI or other AI product or service. This includes acquiring and implementing a larger product suite where an AI product is built in and is required or intended to be used.

    Data is the representation of facts, concepts or instructions in a formalised (consistent and agreed) manner suitable for communication, interpretation or processing by human or automatic means. Data typically consists of numbers, words or images. The format and presentation of data may vary with the context in which it is used.

    Responsible AI is the practice of developing and using AI in a way that benefits individuals, groups, and the wider society, while minimising the risk of negative consequences.
