The concept of AI has come a long way since its infancy and has now developed into a more market-friendly and consumable digital innovation. AI has also entered the household through virtual assistants such as Alexa and Siri, and different types of AI systems are being implemented in key sectors such as banking, finance and healthcare.
AI systems are essentially computer programs or machines designed to imitate activities and behaviours normally associated with human intelligence or cognition, such as decision-making, learning, problem solving and knowledge representation.
Malta aims to explore the potential of AI in its digital innovation industry: how the implementation of AI could increase Malta's appeal to investors as a digital innovation hub, and how it could drive further economic growth in the sector.
The Malta Digital Innovation Authority (MDIA) has been assigned responsibility for supervising this industry locally. The Government is drafting the National AI Strategy, which will focus on placing Malta at the forefront of AI technology at a global level.
The Strategy will work on developing Malta’s strategic competitive advantages, while also analysing the potential impact that AI could have on Maltese society and recognising the industry’s economic opportunity.
This is explained further in the public consultation document entitled Malta Towards an AI Strategy[1], drafted by the Malta AI Taskforce. The Taskforce, created by the Government, is made up of academics and experts in the field of AI technology who are tasked with devising the policies and the framework on which AI shall be based.
The AI Taskforce has issued a Consultation Document on Malta’s strategy towards ethical and trustworthy AI, with the aim of making Malta the ultimate AI launchpad for foreign firms seeking to develop, test and offer AI services.
The Taskforce specified that, to ensure this, Maltese regulators need to create an innovative regulatory ecosystem which supports the development of trustworthy AI.
This will assist in mitigating the main risks that hinder the development of AI technology, namely the lack of public trust and low stakeholder adoption. To secure the public's trust, the proposed AI framework shall be centred around the following objectives:
The development of AI must be centred around certain principles, enshrined in the EU Ethics Guidelines for Trustworthy AI, to ensure that the objectives of the Ethical AI Code are reached. These principles include:
The Consultation Document also proposes a set of requirements which developers need to build into the AI system itself. These requirements must be met throughout the whole lifecycle of the AI system, and developers will need to monitor the system to ensure continued compliance.
The specific requirements that need to be implemented into the AI system are divided into the following sections:
A set of control practices, linked to the sections listed above, are also included to ensure that each one of the requirements falling under these sections is applied effectively with the ultimate goal of achieving trustworthy AI systems. These are applied on a case-by-case basis, depending on the type of AI system being created.
Governance practices and mechanisms are also discussed, particularly those which staff need to carry out, for example the obligation to inform and involve impacted workers before implementing AI in the workplace. These include introducing ethical AI considerations as corporate values and ensuring clear roles and responsibilities for the ethical deployment of AI.
Operations management processes are also set out in the document, with the aim of creating a framework that minimises risks and establishes appropriate decision-making models. These mainly involve procedures and protocols, assessments, design considerations, end-user processes and internal mechanisms.
As currently proposed, the MDIA will be responsible for the supervision of AI certification in Malta. The MDIA is in the process of establishing an AI certification framework, which will form the basis of the Innovative Technology Arrangement (ITA) certification framework. It will provide a platform for AI solutions that have been developed in an ethically aligned, transparent and socially responsible manner.
novolegal has provided feedback to the Malta.AI Taskforce in relation to the consultation document “Towards Trustworthy AI”, addressing how the legal framework can be set out to provide greater legal certainty and proposing additional safeguards in certain areas. These recommendations include the following points:
[1] Malta Towards an AI Strategy – High level policy document for public consultation, March 2019