Are Economic Policy measures similar to Unsupervised Machine Learning Algorithms?

It may be difficult to understand how either of them works. Can that be an issue?

Policies are broad governance instruments that aim to achieve a particular goal for individuals and institutions. They are carried out through different strategies that focus on one or several scopes of the policy. For example, a policy to increase the number of students enrolled in universities in Science, Technology, Engineering and Math (STEM) subjects usually involves strategies to increase the number of schools that teach STEM subjects at Advanced Level, and to allocate more openings in the universities in the relevant degrees and study streams. After a period of 5 to 10 years, it is possible to compare the numbers before and after the policy and evaluate its effectiveness.

Numbers have increased? Great, the policy has “worked” and it is time to focus on another topic. No progress? Time to go back to the drawing board and start again. But start from where? Where might the policy have gone wrong? In fact, how do we find the ineffectiveness of the policy? Most of these questions pop up because we are not confident in the mechanics of a policy. In other words, we may not know how a policy does what it does.

Such a lack of traceability is mostly due to the complexity of the behaviors of economic agents (as economists call people) and their interactions, which is drastically different from the traditional understanding. The traditional view assumes that the economy will be in an equilibrium state most of the time (and if it isn’t, it will move to an equilibrium automatically), supported by agents with perfectly rational thinking and self-interest-led behavior.

How does this relate to policymaking? Most policymakers follow such mainstream economic theories, but due to the inherent complexity of the economy and its agents, the agents regularly change their actions and strategies in response to a certain incident (for example, a policy change). The outcomes of this process may further drive the agents to re-evaluate and re-think their behavior, and change it again according to the new context. This leads us to the concept of Unsupervised Machine Learning Algorithms.

Algorithms but not Micro-managed

Machine Learning, a prominent 4th Industrial Revolution (4IR) technology, aims to automate processes that are complex and time-consuming. There are two main approaches to creating solutions based on Machine Learning (ML): 1) supervised and 2) unsupervised machine learning. Simply put, the difference between the two approaches depends on the level of instruction and guidance given to the computer.

These algorithms search through big datasets, and a key difference is that unsupervised algorithms work with datasets that are not labeled to inform the computer about the objects it “sees”. The computer therefore has to analyze the data and, through the “patterns” it discovers, cluster or group objects, suggest relationships (associations) and reduce the number of data inputs for easier use.
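To make the clustering idea concrete, here is a minimal sketch of k-means, one of the best-known unsupervised clustering algorithms. The data points, function names and parameters are all hypothetical toy choices for illustration, not any particular library’s implementation: the algorithm is given unlabeled points and has to group them itself.

```python
import random

def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iterations=20, seed=0):
    """Minimal k-means: group unlabeled points around k centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # start from k random points
    for _ in range(iterations):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: squared_distance(p, centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = tuple(sum(xs) / len(members) for xs in zip(*members))
    return clusters

# Two obviously separate groups of unlabeled 2-D points:
points = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
clusters = kmeans(points, k=2)
```

No line of this code tells the computer which group a point “really” belongs to; the grouping emerges purely from the distances between points, which is exactly why the result can end up keying on an unintended feature.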

Since there are no data labels, the algorithm has to figure out the content by itself through a process that is not easily visible or interpretable (often called a “black box” approach). Even in instances where the algorithm has performed fairly well, the framework it followed, if studied, can seem bizarre or questionable. One algorithm trained to differentiate dogs from wolves, for example, did not do so by identifying the animal: since the dogs were photographed on grass and the wolves on snow, the computer simply clustered the data based on the two settings.

Therefore the intended goal might be achieved, but how it was achieved would not be entirely “logical”, nor would it be replicable in a different context: a different country in terms of policy, or a different dataset in terms of an algorithm.

Policymakers’ Playbook

It may be difficult to find policy measures that achieved what they were supposed to achieve at the very first attempt, due to several reasons such as time lags, knowledge gaps and implementation issues. Therefore policymakers will ideally need several iterations of the policy (not just plan A) based on the current and future states of the context.

It is also not possible to ignore the indirect impact of a certain “failed” policy, since it will reveal hidden characteristics of the context and change the situation and the issue, for better or worse. Subsequent policies need to take such changes into account in order to get one step closer to the policy goal. Policymaking therefore does not entail overnight success; it is a continuous process. This is not different from programming ML, especially unsupervised algorithms.

Programmers working with a large dataset aim to find not only the known unknowns but also expect to find unknown unknowns, especially since the algorithm will go through the dataset thousands of times and try to make “sense” of it by clustering, highlighting relationships and summarizing important details, as discussed above. However, since it is difficult to know the methodology the computer followed to reach its conclusions, the programmer has to form hypotheses and update them when necessary to fine-tune the algorithm (since at the initial stages there can be some “absurd” results).
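This hypothesis-and-update loop can itself be sketched in code. In the toy example below (a hypothetical illustration, not a production workflow), the programmer’s “hypothesis” is the number of clusters k: each candidate k is tried on the same unlabeled data, and the leftover total squared error (inertia) shows where adding more clusters stops helping, so the hypothesis gets revised accordingly.

```python
import random

def kmeans_1d(values, k, iterations=25, seed=1):
    """Tiny 1-D k-means that reports inertia (total squared error left over)."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iterations):
        groups = [[] for _ in range(k)]
        for v in values:
            groups[min(range(k), key=lambda i: (v - centers[i]) ** 2)].append(v)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return sum(min((v - c) ** 2 for c in centers) for v in values)

# Hypothesis loop: try several cluster counts and watch where inertia stops improving.
data = [1.0, 1.2, 0.9, 10.0, 10.3, 9.8]
for k in (1, 2, 3):
    print(k, round(kmeans_1d(data, k), 2))
```

On this data the error collapses when moving from one cluster to two and barely changes from two to three, so the programmer would settle on the hypothesis k=2, much like a policymaker discarding plan A once the evidence points elsewhere.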

Thus policymakers, like programmers, need to be open-minded and understand the results in a wider sense, since they may only be able to influence or control the inputs of the policy “algorithm”; the rest will be decided by the behavior of the agents in society.

The ideologies of policymakers will have an impact on the inputs, and will often serve as assumptions. Open-mindedness is important here again, as it nudges them to welcome ideas from different people and disciplines, enabling them to explore the context from a multitude of perspectives (free of the curse of knowledge). Policy inputs created after such a process have a better probability of success, as they will be closer to reality.

Therefore, not being able to understand how a policy works its way through society may not be a concern, as long as there are the right inputs, consistency over time, collaboration and open-mindedness, which allow the policy to be tweaked regularly to achieve the preferred goal.

An Economic Undergraduate that is curious about everything else

Ashen Hirantha