Regulate AI only where required

An open letter to Pope Francis, Elon Musk, Bill Gates, Mark Zuckerberg,
Tim Cook, Sundar Pichai, technology leaders, legislators and AI academics


I am writing this open letter to address the ever-increasing calls and initiatives to regulate Artificial Intelligence (AI). While many herald these calls for regulation, citing often-mentioned impending perils, many others caution that regulating AI will stifle innovation. Indeed, it is critical that safeguards be put in place where necessary, yet at the same time innovation should be allowed to flourish.

For the past two years, I have been chairing a national technology regulator (the Malta Digital Innovation Authority), which has put into place national regulatory frameworks for blockchain and has now also issued regulatory guidelines with regard to AI. During this time I have been monitoring efforts and developments to regulate AI, and I feel it is crucial to raise some concerns so that we do not go off track. What follows is a list of recommendations to consider when regulating AI.


Many of the calls for AI regulation stem from concerns related to Artificial General Intelligence (AGI): the type of AI depicted in movies, the type of AI that can learn to do just about anything (including take over the world). While we should look into how to go about regulating AGI, the reality is that AGI does not (yet) exist. Artificial Narrow Intelligence (ANI), the AI which does currently exist, allows computers to focus on a specific problem; it cannot learn to tackle problems it was not designed for. Despite being limited to predetermined problems, ANI is an extremely powerful tool, which in some cases should indeed be subject to oversight and regulation. When ANI is put into the same basket as AGI (which could potentially take over the world), it is no surprise that many call for mandatory AI regulation across the board. However, this would result in over-regulating ANI and putting in place regulation for something we do not yet know enough about (AGI). Let's start by focusing on ANI, which exists today, so as to define regulation that does not stifle ANI innovation yet puts into place any necessary safeguards. A recommendation with respect to AGI is proposed later herein.
Recommendation 1:
Differentiate between AGI and ANI, and start with ANI.

Not all ANI requires regulation

One often-cited use case in support of implementing AI regulation is that of ensuring that AI algorithms do not discriminate or introduce unwanted bias. Consider AI used in the context of criminal courts: discrimination in this context is clearly unacceptable. Similarly, should banks be allowed to approve or reject loan applications based upon unrelated demographic characteristics? Of course not. Now consider a personal AI-based music recommendation system. Here, more bias may well mean that the recommendation system makes better guesses with respect to one's taste (perhaps based upon other users with similar tastes). Should such an AI system be regulated? Mandatory regulation of all AI systems would stifle further development and innovation. Clearly, there are some types of AI-based systems that should not be subject to regulation.
Recommendation 2:
Do not regulate AI across the board. Only regulate where required.

Sector specific and safety-critical regulation of ANI

Consider, again, the case of processing bank loan applications. Should it make any difference whether a discriminatory decision is made by AI or by a software algorithm that is not classified as AI? Surely not. What matters is that the loan application is not processed in a discriminatory manner. Now, what if the application is processed manually by a person? Surely the same rule applies: applications should not be processed in a manner that may be discriminatory. If we were to regulate only AI, would these instances create a loophole for non-AI-based software? If not, then we would really be regulating software (not just AI). Moreover, regulation should (already) enforce that loan application processing does not discriminate against applicants, whether performed by a human or by software (which may use AI-based algorithms). Do we really need to replicate regulation to separately regulate AI? There are definitely cases which may require further specific regulation, e.g. autonomous vehicles; however, just because regulation may need to be updated in certain sectors or activities does not mean that we should regulate AI across the board. The onus should be on the responsible entity to ensure that their systems are law-abiding.

Indeed, we often know what an AI algorithm does, but not what it has learnt, and it is crucial to ensure both that such systems function correctly and that they are trained on adequate data. For such systems, technology audits could be mandated to ensure that adequate levels of behavioural and functional assurance are in place. Admittedly, the justice system will need to make strides with respect to identifying civil liability in certain cases; however, this does not justify mandatory regulation of AI across the board. We should focus on regulating the sector and activity, for example: put in place surveillance laws rather than mandate regulation of all facial recognition algorithms; focus on data protection and processing laws rather than implement AI data processing regulation; and so on.
Recommendation 3:
Do not mandate ANI regulation. Focus on regulating the sector and activity, not the technology.

Recommendation 4:
For regulated or safety-critical sectors and activities, technology audits should be considered for ANI-based systems (and others).

Recommendation 5:
To ensure adequate and standard levels of technology audits are undertaken (where necessary), national technology regulators should be set up.
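To make the idea of a behavioural assurance concrete, here is a minimal sketch of one check a technology auditor might run over a system's recorded loan decisions: comparing approval rates across a demographic group. The data, field names and the 0.8 threshold (the informal "four-fifths rule" used in some disparate-impact analyses) are illustrative assumptions, not part of any actual audit framework.

```python
# Hypothetical sketch of a disparate-impact check an auditor might run
# over a log of loan decisions. Field names and data are illustrative.

def approval_rate(decisions, group_key, group_value):
    """Share of approved applications within one demographic group."""
    subset = [d for d in decisions if d[group_key] == group_value]
    return sum(d["approved"] for d in subset) / len(subset)

def disparate_impact_ratio(decisions, group_key, protected, reference):
    """Ratio of approval rates between two groups. Values below ~0.8
    are often treated as a flag for further investigation (the
    informal 'four-fifths rule')."""
    return (approval_rate(decisions, group_key, protected)
            / approval_rate(decisions, group_key, reference))

# Toy audit log: each record is one decision made by the system.
decisions = [
    {"gender": "f", "approved": 1},
    {"gender": "f", "approved": 0},
    {"gender": "m", "approved": 1},
    {"gender": "m", "approved": 1},
]

ratio = disparate_impact_ratio(decisions, "gender", "f", "m")
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50 for this toy data
```

Note that nothing in this check depends on whether the decisions were produced by an AI model, conventional software or a human officer, which is precisely the point of regulating the activity rather than the technology.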

And what about AGI?

As discussed above, we should not stifle ANI innovation just because AGI may one day be upon us. Yet we need to start investigating the various regulatory, legal, ethical and geopolitical questions surrounding AGI. The world needs to be ready for the eventuality of AGI and its implications. What are the potential implications of an AGI algorithm being deployed in the wild? What if an AGI algorithm were able to operate in a decentralised manner?
Recommendation 6:
A world treaty should be established to ensure that AGI is not deployed until the crucial regulatory, legal, ethical and geopolitical questions it raises have been answered.

About the Author

As Chairperson of the Malta Digital Innovation Authority, I oversaw the implementation of a technology assurance regulatory framework. We started with Blockchain, DLT and Smart Contracts and later added regulatory guidelines for Artificial Intelligence. The Authority vets, scrutinises and authorises system auditors and associated subject matter experts to undertake system audits of innovative technology arrangements.

Chairperson // Malta Digital Innovation Authority; Director // Centre for DLT @ Uni Malta; Lecturer. Programmer. Opinions are my own.
