With recent developments in the AI space and the promise the technology delivers, it is more important than ever to revisit our practices and formulate guidelines for both the technical and non-technical constituents of the technology.
- AI Education: New education strategies need to be formulated within companies so that we can leverage AI's contribution to our line of work. Artificial Intelligence itself can help us find novel education processes through AI tutoring. Educating ourselves on the skills needed as businesses embark on this new AI ecosystem is especially important, so that we can also understand and mitigate the risks that may arise with the use of the technology.
- Regulatory frameworks: It is becoming increasingly evident that to address any risks that might arise with the use of the technology, we need regulations that govern its lifecycle. AI is an autonomous product or service that can adapt itself by training on patterns in vast amounts of data. It is this scale, as well as its stochastic autonomy, that makes AI a difficult landscape to navigate; without best-practice regulatory guidelines, tasks such as tracing decisions and assigning responsibility can become challenging.
To date, many countries are implementing their AI regulatory frameworks. The regulations now taking shape, regardless of the different approaches proposed, share many common traits: they all promote Responsible AI and the need to increase trust in its use and application. Two main regulatory approaches are emerging globally: the context-first approach and the sector-first approach. In the former, AI regulations apply on a per-application basis; in the latter, a macro-to-micro strategy, they apply on a per-industry-sector basis.
Micro-to-macro (context-first) regulations focus on outcomes rather than applying constraints to specific sectors or technologies; they target specific use cases and their results. Macro-to-micro (sector-first) regulations are closely tied to the added degree of risk an AI system may introduce in an industry sector, weighing that risk against the opportunity lost by forgoing AI in specific applications. Sector-first approaches include an enumeration of high-risk AI systems within sectors, and such lists can be adapted by the relevant sector authorities to produce the corresponding governing regulations.
Regardless of the regulatory approach, there are several common principles introduced across all emerging frameworks. They include:
Effectiveness and resiliency: Using both human-centred and automated quantitative assessment methodologies, we can better understand short- and long-term AI behaviour and performance.
Security and privacy: Introducing AI systems into the software lifecycle increases the available attack surface which, as with any software system, can be exploited by malicious users through cyber-attacks. Consequences can range from loss of data and mishandling of sensitive information, such as personal data, to denial of service or tampering with critical decision-making processes. For these reasons, it is imperative that a rigorous AI infrastructure security framework is in place from the preliminary stages of AI design. Multiple industry frameworks address the AI threat landscape, covering both traditional software cybersecurity, such as social engineering, man-in-the-middle and DoS attacks, and vulnerabilities specific to AI, such as data poisoning and AI model proxies.
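As a minimal illustration of the data-poisoning class of attacks mentioned above, the toy sketch below shows how flipping a single training label can shift a learned decision boundary. The threshold learner and the data are hypothetical illustrations, not part of any real security framework.

```python
# Toy sketch of a label-flipping data-poisoning attack on a deliberately
# simple learner. Samples are (feature_value, label) pairs.

def fit_threshold(samples):
    """Learn a decision threshold as the midpoint between the mean of
    positive examples and the mean of negative examples."""
    pos = [x for x, label in samples if label == 1]
    neg = [x for x, label in samples if label == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

clean = [(0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1)]
t_clean = fit_threshold(clean)  # midpoint of the two class means: 0.5

# An attacker flips the label of one extreme point, dragging the
# learned boundary towards the negative class.
poisoned = [(0.1, 1) if (x, y) == (0.1, 0) else (x, y) for x, y in clean]
t_poisoned = fit_threshold(poisoned)

print(t_clean, t_poisoned)  # the poisoned threshold shifts noticeably
```

Even this single flipped label moves the boundary from 0.5 to 0.4, which is why emerging frameworks stress provenance and integrity checks on training data.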
Transparency and governance: To feel comfortable with, and trust, the decisions provided by Artificial Intelligence, we need to retain deep intellectual oversight of the core components that constitute an AI system. eXplainable Artificial Intelligence (XAI) lies at the heart of a transparent AI system, allowing us to understand how the AI is trained and how the data drives it to its decisions. AI governance is equally important, as it allows us to reproduce AI pipelines and better assess performance on the task at hand.
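One widely used model-agnostic XAI technique is permutation importance: shuffle one feature's values and measure how much the model's accuracy drops; a large drop suggests the model leans heavily on that feature. The sketch below is illustrative only; the toy model and data are assumptions, not drawn from the original text.

```python
# Minimal permutation-importance sketch (model-agnostic XAI).
import random

def accuracy(model, X, y):
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature_idx, seed=0):
    """Accuracy drop after shuffling one feature's column."""
    rng = random.Random(seed)
    shuffled_col = [row[feature_idx] for row in X]
    rng.shuffle(shuffled_col)
    X_perm = [
        row[:feature_idx] + [v] + row[feature_idx + 1:]
        for row, v in zip(X, shuffled_col)
    ]
    return accuracy(model, X, y) - accuracy(model, X_perm, y)

# Toy model: predicts 1 when feature 0 exceeds 0.5 and ignores feature 1.
model = lambda row: 1 if row[0] > 0.5 else 0
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]

print(permutation_importance(model, X, y, 0))  # feature the model uses
print(permutation_importance(model, X, y, 1))  # 0.0: feature is ignored
```

Because the toy model never reads feature 1, shuffling it can never change a prediction, so its importance is exactly zero; that kind of invariance check is one way XAI helps audit what an AI system actually relies on.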
Equity and fairness: Stereotyping can unknowingly manifest in training datasets, biasing AI behaviour. A responsible AI is trained on a fair, balanced dataset that equitably represents all classes in the use case's digital twin. Counterfactual fairness assessment metrics can help ensure equitable treatment of classes with respect to sensitive attributes.
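A counterfactual fairness check can be sketched as follows: for each record, flip the sensitive attribute and see whether the model's prediction changes; a fair model should be (near-)invariant to the flip. The `predict` stand-in model, the record layout, and the `gender` attribute below are illustrative assumptions.

```python
# Toy counterfactual fairness check on a hypothetical scoring model.

def predict(record):
    # Stand-in for a trained model: scores on income only and
    # deliberately ignores the sensitive attribute.
    return 1 if record["income"] > 50_000 else 0

def counterfactual_flip_rate(records, sensitive_key="gender"):
    """Fraction of records whose prediction changes when the
    sensitive attribute is counterfactually flipped."""
    changed = 0
    for record in records:
        counterfactual = dict(record)
        counterfactual[sensitive_key] = (
            "F" if record[sensitive_key] == "M" else "M"
        )
        if predict(record) != predict(counterfactual):
            changed += 1
    return changed / len(records)

records = [
    {"gender": "M", "income": 62_000},
    {"gender": "F", "income": 48_000},
    {"gender": "F", "income": 75_000},
]
print(counterfactual_flip_rate(records))  # 0.0: predictions are invariant
```

A non-zero flip rate would flag that the sensitive attribute (or a proxy for it) is influencing decisions, which is exactly what the fairness principle asks us to detect and correct.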
Accountability and redress: To allow AI systems to assist in making, or to make, decisions, we need to ensure that they are in line with values that promote positive impact in our lives, and that suitable accountability processes ensure compliance with responsible AI principles. Accountability in AI is a complex concept because of AI's stochastic nature and the many technical entities and stakeholders, such as SMEs, engineers, UX experts, end users, and policy and legal experts, that participate in building such systems. An accountability framework should define its constituents, including, amongst others, those who accept responsibility, the entities to whom the account is due, and the rules against which the AI is assessed.