The Maltese AI Ecosystem: Objective & Principles

Published on Sep 16, 2019

What is AI?

The concept of AI has come a long way since its infancy, maturing into a more market-friendly and widely consumable digital innovation. AI has also found its way into the household through devices such as Alexa and Siri, and different types of AI systems are being implemented in key sectors such as banking, finance and healthcare.

AI systems are essentially computer programs designed to imitate activities and behaviours normally associated with human intelligence or cognition, such as decision-making, learning, problem-solving and knowledge representation.

How Artificial Intelligence can impact Malta

Malta aims to explore the potential of AI within its digital innovation industry: how implementing AI could increase the country's appeal to investors as a digital innovation hub, and how it could drive further economic growth in the sector.

The Malta Digital Innovation Authority (MDIA) has been assigned responsibility for supervising this industry locally. The Government is drafting the National AI Strategy, which aims to put Malta at the forefront of AI technology on a global level.

The Strategy will develop Malta’s strategic competitive advantages, while analysing the potential impact of AI on Maltese society and recognising the industry’s economic opportunity.

This is explained further in the public consultation document entitled Malta Towards an AI Strategy[1], drafted by the Malta AI Taskforce. The Taskforce, created by the Government, is composed of academics and experts in the field of AI technology who are tasked with devising the policies and framework on which AI shall be based.

The Maltese AI Ecosystem: Objective & Principles

The AI Taskforce has issued a Consultation Document on Malta’s strategy towards ethical and trustworthy AI, with the aim of making Malta the ultimate AI launchpad for foreign firms seeking to develop, test and offer AI services.

The Taskforce specified that, to achieve this, Maltese regulators need to create an innovative regulatory ecosystem which supports the development of trustworthy AI.

This will help mitigate the main risks hindering the development of AI technology: the public’s lack of trust and low adoption of such systems by stakeholders. To secure the public’s trust, the proposed AI framework shall be centred around the following objectives:

  • A human-centric approach;
  • Respect for human rights, democratic values, and any other applicable laws;
  • Maximising the benefits of AI and reducing its risks;
  • Aligning with international standards and norms with respect to AI ethics.

The development of AI must be centred around certain principles, enshrined in the EU Ethics Guidelines for Trustworthy AI, to ensure that the objectives of the Ethical AI Code are reached. These principles include:

  • Human autonomy & self-determination of users;
  • The intention to prevent harm;
  • Fair development of AI;
  • Explicability to end-users of how the system works.

Requirements of the AI System

The Consultation Document sets out a list of proposed requirements which developers need to build into the AI system itself. These requirements must be met throughout the entire lifecycle of the AI system, and developers will need to monitor the system to ensure they continue to be met.

The specific requirements that need to be implemented into the AI system are divided into the following sections:

  • Performance & safety
  • Human agency
  • Fairness & unbiased
  • Accountability
  • Respect for well-being
  • Explicability & transparency
  • Privacy & data governance

A set of control practices, linked to the sections listed above, are also included to ensure that each one of the requirements falling under these sections is applied effectively with the ultimate goal of achieving trustworthy AI systems. These are applied on a case-by-case basis, depending on the type of AI system being created.

  • Performance & safety: the system must be accurate in its performance, reliable, and resilient to cyber security attacks;
  • Human agency: the system must protect fundamental human rights and ensure an appropriate level of human oversight of the AI system;
  • Fairness & unbiased: the system must be designed with a universal approach in mind, catering for users with special needs, and must include measures to mitigate the risk of any unfair bias;
  • Accountability: the system must be auditable, ensure that any potentially negative impact is minimised, and provide users with the right to seek redress for any negative impact caused to them;
  • Respect for well-being: the system must not negatively impact the environment or society as a whole;
  • Explicability & transparency: there must be a certain level of traceability of the processes and decisions taken during the system’s development, and efforts must be made to ensure that end-users understand how the system operates;
  • Privacy & data governance: individuals’ personal data must be protected, and the data used in the system must be of good quality.

Governance practices and mechanisms are also discussed, particularly practices which staff need to carry out, such as the obligation to inform and involve affected workers before implementing AI in the workplace. These practices include adopting ethical AI considerations as corporate values and ensuring clear roles and responsibilities for the ethical deployment of AI.

The AI Certification Framework in Malta

The document also sets out operations management processes aimed at creating a framework for minimising risks and establishing appropriate decision-making models. These mainly involve procedures and protocols, assessments, design considerations, end-user processes and internal mechanisms.

As currently proposed, the MDIA will be responsible for supervising AI certification in Malta. The MDIA is in the process of establishing an AI certification framework, which will be based on its Innovative Technology Arrangement (ITA) certification framework. It will provide a platform for AI solutions that have been developed in an ethically aligned, transparent and socially responsible manner.

Feedback for AI in Malta

novolegal has provided feedback on the consultation document “Towards Trustworthy AI” to the Malta.AI Taskforce, addressing how the legal framework can be set out to provide more legal certainty and proposing added safeguards in certain areas. These recommendations include the following points:

  • Use the term AIS instead of AI, and clearly delineate between the four different categories of AI:
      • Assisted intelligence;
      • Augmented intelligence;
      • Automated intelligence;
      • Autonomous intelligence;
  • Address the brittleness of AI in the legal framework and how it can be resolved;
  • Provide a process for the continuous monitoring of the data of an AI system;
  • Implement globally-accepted standards to attract foreign investors;
  • Classify the different types of AI systems according to their impact on society;
  • Implement processes for the effective monitoring of the system’s limitations.


[1] Malta Towards an AI Strategy – High level policy document for public consultation, March 2019