Artificial intelligence


  1. What is AI?

Artificial intelligence refers to the ability of a computer or a computer-enabled robotic system to process information and produce outcomes in a manner similar to the human thought process in learning, decision making and problem solving. By extension, the goal of AI is to develop systems capable of tackling complex problems in ways similar to human logic and reasoning.

John McCarthy, also known as the father of AI, defined artificial intelligence as:

“Artificial Intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs.”

Artificial intelligence learns from experience, uses that learning to reason, recognises images, solves complex problems, understands language and its nuances and, ultimately, creates perspectives.

The AI continuum comprises the following:

  1. Assisted Intelligence

Humans and machines learn from each other and redefine the breadth and depth of what they do together. Under these circumstances, the human and the machine share the decision rights.

  2. Augmented Intelligence

Enhancing the human ability to do the same tasks faster or better. Humans still make some of the key decisions, but AI executes the tasks on their behalf. The decision rights lie solely with humans.

  3. Autonomous Intelligence

Adaptive, continuously learning systems that take over decision making in some cases. They will do so only once the human decision maker starts trusting the machine, or where human involvement becomes a liability for fast transactions.

In this type of intelligence, the decision rights are with the machine and hence it is fundamentally different from assisted intelligence.


Having originated as a concept as early as the 1950s, AI research and application have come a long way during the 1980s–2000s and up to the current day. An indicative timeline, along with dominant research areas in the AI space during each period, is as follows:

  1. In 1950, Alan Turing published a paper on the possibility of machines with true intelligence. Later, John McCarthy organised the Dartmouth Conference, at which the field was named AI. It is worth noting that from 1950 to 1970, AI remained only a concept, with no real applications.

  2. In the 1980s–1990s, the Predator UAV was used by the US DoD in war. Later, world chess champion Garry Kasparov was defeated by IBM’s Deep Blue. It is worth noting that from 1980 to 2000, the military and academia began to show interest in AI.

  3. In the 2010s, IBM Watson defeated Jeopardy! game show champions. Later, Apple introduced Siri, Microsoft introduced Cortana and Amazon introduced Alexa. The AI start-up Vicarious passed the first Turing test: CAPTCHA. The DeepMind team used deep learning algorithms to create a program that wins Atari games.

  4. Google’s self-driving cars crossed the 1-million-mile mark driving autonomously.

  5. Facebook detects faces and shares photos with the friends to whom those photos belong. It is worth noting that from 2005 onwards, large tech companies have invested in commercial applications of AI and machine learning (ML).


Advances in AI have garnered extensive interest from the private and public sectors, with the field now being seen as a potential disruptor in the mass production of consumer goods and other labour-intensive activities from which human potential can be freed for higher endeavours.


AI has subtly made inroads into the daily lives of Indian citizens in the form of app-based cab aggregators and digital assistants on smartphones. The interest can be gauged from the fact that leading IT service outsourcing companies have begun thinking about, talking about and (in a few cases) launching AI platforms. But these are just small steps towards achieving the ultimate goal of AI - namely, replacing human intelligence. The systems being developed, as of now, are perfecting the process of increasing the efficiency of solving a repetitive problem. This will eventually lead to solutions to ever-changing problems.

In contrast, the start-up sector is able to directly attack these problems as it does not carry the baggage of IT outsourcing firms. Indian start-ups are working across a plethora of AI problems - identifying patterns in objects, people, styles and preferences to advise on retail shopping; building conversational services and deploying them over social media apps and for online shopping; developing better diagnostic services; bringing cognition into robotic process automation; helping in cross-channel discovery of preferences; and working in multiple languages. These are just a few of the areas that Indian start-ups are working on.

Commercial applications of AI are huge and Indian start-ups are beginning to identify them and tap into the market, which is still nascent.


Public policy in India on the application of AI has thus far lagged when compared to AI’s subtle usage by start-ups who have so seamlessly blended AI into the services provided to customers.

If we look at the applications that we use/have used at some point of time (e-commerce platforms, chat services, social media services and so on), they have all been employing AI in some form and at some level of maturity or the other. Though India is making rapid progress in terms of technology, companies and researchers are yet to utilise the full potential of AI. While the USA is currently in the process of implementing laws concerning driverless vehicles, India still lags behind. Instead of waiting for technology to reach a level where regulatory intervention becomes necessary, India could be a frontrunner by establishing a legal infrastructure in advance. Alternatively, early public-sector interest in AI could trigger a spurt of activity in the AI field in India.

The main dichotomy that the regulations will have to deal with relates to who will be liable for the activities of AI systems. These systems are designed to be creative and to continue learning from the data analysed. Hence, designers may not be able to understand how the system will work in the future.

Also, the role of an AI system, as in the case of a driverless car, could be to assist the user. In such a situation, deciding liability for what the AI system has done will be difficult. Therefore, this issue needs to be discussed and delved into deeply before arriving at any conclusion.

The digital movement in India has created data which is readable by machines. At the same time, technologies have also reached a level of maturity where they can think like humans in real time and, at times, in a cost-effective way. Thus, they are suitable for use in governance.


Compared to the West and frontrunners of AI adoption in Asia, such as China and Korea, the culture and infrastructure needed to develop a base for the adoption of AI in mainstream applications in India is in need of an impetus. Some prerequisites for an AI-supportive cultural environment include but are not limited to:

  1. Homegrown Infrastructure

Indian academics, researchers and entrepreneurs face a more acute challenge than corporates do in terms of the less-than-ideal infrastructure available for an AI revolution in India. For example, cloud computing infrastructure, which is capable of storing large amounts of data and providing the huge amount of computing power essential for AI applications, is largely located on servers abroad.

  2. An Ecosystem Fostering Innovation

Fostering a culture of innovation and research beyond the organisation is common to global technology giants. To encourage the same level of innovation in AI research efforts in India, initiatives to hold events and build user communities in the field of AI will go a long way. Examples from around the globe include the Defence Advanced Research Projects Agency’s (DARPA) Cyber Grand Challenge, which attracts a large share of AI research funding in the US; the European Union’s technology funding programme, FP7; and the BRAIN initiative, a 10-year, multi-billion-dollar funding initiative for AI research in the US.


Deep learning, a part of AI, can be employed to tackle issues of scale often prevalent in the execution of government schemes. It is essentially a process that can be used for pattern recognition, image analysis and natural language processing (NLP) by modelling high-level abstractions in data which can then be compared with various other recognised contents in a conceptual way rather than using just a rule-based method. Take for instance the Clean India Initiative directed towards the construction of toilets in rural India. Public servants are tasked with uploading images of these toilet constructions to a central server for sampling and assessment. Image processing AI can be used to flag photographs that do not resemble completely built toilets.
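The flagging step described above can be sketched in a few lines. The scores below stand in for the output of a trained image classifier (in practice, a deep learning model would produce them); the file names and the 0.8 threshold are purely illustrative assumptions, not part of any actual scheme workflow.

```python
# Sketch: flag uploaded images whose classifier confidence for the
# "completed construction" class falls below a threshold. The scores here
# stand in for the output of a trained image classifier; all names and
# numbers are illustrative.

def flag_suspect_images(scores, threshold=0.8):
    """Return IDs of images whose 'completed construction' score is too low."""
    return [image_id for image_id, score in scores.items() if score < threshold]

# Hypothetical confidence scores for three uploaded site photos
uploads = {"site_101.jpg": 0.95, "site_102.jpg": 0.41, "site_103.jpg": 0.87}
print(flag_suspect_images(uploads))  # → ['site_102.jpg']
```

Only the flagged images would then need manual review, which is what makes checking every upload feasible at scale.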

Image recognition capabilities can also be used to identify whether the same official appears in multiple images or whether photos have been uploaded from a location other than the intended site. Considering the scale of this initiative, which involves creating many more functional toilets, being able to check every image rather than a small sample will actually help increase effectiveness. Further, AI can be applied to the Prime Minister’s initiatives such as the Digital India Initiative, Skill India and Make in India with varying effects. Applications of AI techniques in such large-scale public endeavours range from crop insurance schemes and tax fraud detection to detecting subsidy leakage and defence and security strategy.

The Make in India and Skill India initiatives can be heavily augmented as well as disrupted by AI adoption in the short term. While the former is aimed at building the nation-wide capabilities required to make India a self-sustaining hub of innovation, design, production and export, the latter seeks to aggressively build and enhance human capital.

However, if investments are made in the two initiatives without due cognisance of how Industry 4.0 (the next industrial revolution, driven by robotic automation) may evolve with respect to demand for workforce size and skill sets, there is a risk of ending up with capital-intensive infrastructure and assets that fall short of being optimised for automated operations, and a large workforce skilled in areas that are outgrowing the need for manual intervention.

AI can also be used in traditional industries like agriculture. The Department of Agriculture, Cooperation and Farmers Welfare, Ministry of Agriculture, runs Kisan Call Centres across the country to respond to issues raised by farmers instantly and in their local language. An AI system could assist the call centres by linking the various sources of available information. For example, it could pick up soil reports from government agencies and link them to the environmental conditions prevalent over the years using data from a remote sensing satellite. It could then provide advice on the optimal crop that can be sown in that land pocket. This information could also be used to determine the crop’s susceptibility to pests. Necessary pre-emptive measures can then be taken - for instance, supplying the required pesticides to that land pocket as well as notifying farmers about the risk. With a high level of connectivity, this is a feasible, ready-to-deploy solution that uses AI to augment the system.
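The data-linking step described above can be illustrated with a minimal sketch: a function that combines a soil report with historical rainfall data to suggest a crop. The thresholds and crop names are purely illustrative assumptions, not an actual agronomic model.

```python
# Sketch: combine two linked data sources - a soil report (pH) and rainfall
# history (e.g. derived from remote sensing) - to suggest a crop.
# The rules below are illustrative placeholders, not real agronomy.

def recommend_crop(soil_ph, annual_rainfall_mm):
    """Very simplified crop suggestion from two linked data sources."""
    if annual_rainfall_mm > 1500 and 5.5 <= soil_ph <= 7.0:
        return "rice"
    if annual_rainfall_mm > 600:
        return "wheat"
    return "millet"

# Soil report from a government agency + rainfall from satellite data
print(recommend_crop(soil_ph=6.2, annual_rainfall_mm=1800))  # → rice
```

A deployed system would replace these hand-written rules with a model trained on regional yield data, but the linking of independent data sources is the same.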


Some of the key policy initiatives undertaken in countries around the world are as follows:


  1. United States of America

In the National AI R&D Strategic Plan, the United States government has laid stress on channelling investments to drive discovery and insight in the field of AI and ML. More specifically, the plan calls for greater focus on broad ‘general AI’ in place of ‘narrow AI’ that traditionally aims at specific tasks: for example, moving from speech recognition to video recognition and translation. General AI will find application in a broader range of cognitive domains, including learning, language, perception, reasoning, creativity and planning.


  2. South Korea

The government of South Korea (Ministry of Science, ICT and Future Planning) has been investing in ExoBrain since 2013. ExoBrain is a language analysis and self-learning system with the capacity to store large volumes of data for learning and subsequent analysis. The investment of around 83 million EUR will last for 10 years. South Korea has also announced an $840 million public-private partnership spanning six corporations to drive AI research.

  3. China

Internet giants in China are increasingly focusing on AI research, with domestic venture capital funding being directed towards this field. Many private players are fast rising in AI research capabilities, some of whom have their own AI research labs. A study by Japan’s National Institute of Science and Technology Policy found China to be a close second to the U.S. in terms of the number of AI studies presented at top academic conferences.


To reap the societal benefits of AI systems, we would need to be able to trust them and ensure that they comply with an ethical, moral and social framework analogous to that for humans. Research efforts must be concentrated on implementing regulations in AI system design that are updated on a continual basis to respond appropriately to different application fields and actual situations. In industries such as finance and healthcare, relevant professional ethical principles are encoded and practised by professionals; these could form the core of AI ethics.

A safe and secure AI system is one that acts in a controlled and well-understood manner. The design philosophy must ensure security against external attacks, anomalies and cyberattacks. ‘Adversarial machine learning’ is a key area of the NITRD cybersecurity R&D strategic plan; it evaluates the extent to which AI systems can be compromised through contaminated training data, modified algorithms, etc.
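The risk of contaminated training data can be illustrated with a toy sketch: a tiny nearest-mean classifier is trained on clean one-dimensional data, then on data into which an attacker has injected a few mislabelled points, shifting its decisions. The data and classifier are entirely synthetic and illustrative.

```python
# Illustrative sketch of training-data poisoning, the kind of risk
# adversarial machine learning research studies. A nearest-mean classifier
# is trained on clean 1-D data, then on data with injected mislabelled
# points; the poisoned model misclassifies the same input.

def nearest_mean_classifier(points):
    """points: list of (value, label) pairs, label 0 or 1. Returns a predict fn."""
    mean = {}
    for label in (0, 1):
        vals = [v for v, l in points if l == label]
        mean[label] = sum(vals) / len(vals)
    return lambda x: 0 if abs(x - mean[0]) < abs(x - mean[1]) else 1

clean = [(1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1)]
poisoned = clean + [(9.0, 0), (9.5, 0)]  # attacker injects mislabelled points

print(nearest_mean_classifier(clean)(5.4))     # → 1
print(nearest_mean_classifier(poisoned)(5.4))  # → 0
```

Even two bad points flip the prediction here; real systems trained on web-scale data face the same failure mode at much larger scale.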


A strong presence in AI R&D is a prerequisite for a nation to gain a lead in an automation-driven future. For this, the national policy needs to take accurate stock of current and future demand for AI experts. Building expertise, on the other hand, will require governments to evaluate the current educational pathways and curricula and, if required, overhaul the same to provide skill upgradation initiatives for a workforce that seeks to stay relevant in a fast-evolving technology landscape.


AI-driven automation raises the most commonly foreseen pitfall in society - the potential mass obsolescence of manual labour in the middle-skill order, such as factory workers and technicians. This also brings in the opportunity of upskilling the population so that other prevalent problems can be solved. China is expected to have installed more industrial robots than any other country - 30 robots per 10,000 workers. A few thousand workers have already been replaced by a robotic workforce in a single factory.

Make in India, one of the Prime Minister's flagship programmes, focuses on the twin goals of strengthening India's in-house innovation and production capabilities and creating jobs for the middle-skilled strata of the workforce. The former goal is likely to be facilitated by large-scale AI adoption, while difficulties can be expected in meeting the latter.
The key point here is that with robotic automation, the Make in India initiative may not end up creating nearly as many jobs as it is poised to at this point in time.

On a positive note, a scenario wherein low-skilled, repeatable labour can be assigned to robotic systems provides an incentive for part of the workforce to be trained in higher level skills such as designing, monitoring and oversight, and adjusting machine algorithms to enable AI systems to operate in a reliable and transparent manner.

It has also been argued that automation of repetitive jobs will create more time and opportunities for citizens to pursue creative endeavours such as the arts, scientific innovation and personal goals, leading to a society diverse in skills and achievements.


Universities struggle to retain AI talent, especially academicians studying the rapidly growing and in-demand field of ML, with talented individuals getting concentrated in a few organisations. This might lead to AI research priorities getting narrowed down to a few ventures focusing on the ‘now’ rather than the long-term potential across a broader range of applications.


The current sequential approach to skill building through a person’s formative academic years may face obsolescence in a society with rapid de-skilling of jobs through robotic automation. Instead, a system that addresses the following requirements will likely better serve to sustain a whole socio-economic stratum of the workforce:

  • Educate for the future: Academic policy formulation and dissemination of knowledge should migrate from the traditional curriculum to a more specific one tailored to emerging industry demands.

  • Facilitate reskilling and lifelong learning: Beyond formal education, which accounts for only the initial years of an individual's life, policymaking must take into account the pace at which skills move in and out of demand and lay down a framework for easing the transition to alternative skill sets and careers in the event of automation.


In light of technological advances, certain sectors are expected to experience a shrinkage in employment demand as robotic systems and ML algorithms take up several tasks. It can be expected that IT, manufacturing, agriculture, forestry, etc., will experience such a demand shift. According to Oxford University researchers Carl Frey and Michael Osborne,3 based on 702 occupational groupings, the following types of workers have a very high probability of being replaced by automation: telemarketers, hand sewers, mathematical technicians, insurance underwriters, watch repairers, cargo agents, tax preparers, etc. Some short- and long-term policy initiatives to cushion the impact of job losses stemming from AI-driven automation are discussed below:


If a large number of people end up unemployed for extended periods of time, there needs to be a way to provide healthcare, disability and pension benefits outside employment.


In the event of continuous unemployment or underemployment, government schemes to provide a minimum level of income to each citizen to guarantee basic needs are necessary to keep them out of destitution. Proposals must be structured in a way so as to maintain a balance between benefits and incentives for engagement - for example, by involving the unemployed in social and community initiatives.


In an era of fast technology changes, employees need an enabling environment to transition into and out of jobs. Emerging jobs will require skills different from those people learn through academics. Companies can contribute a set amount to an individual's fund, which can then be transferred when the individual switches jobs. The goal of such an initiative is to incentivise lifelong education and up-skilling.


If people have limited employment options, they can participate in a wide range of volunteer activities undertaken by social-minded organisations. This can simultaneously ensure an engaged population and drive socially beneficial goals.


The traditional academic curriculum is not well equipped to cater to technological advancements. The sequential system of education and work is outdated in an economic environment that is heavy on automation and deskilling of jobs and where skills gain and lose value within a few years. What is required is a continuous skill improvement system that does not depend on the sequence of the skills imparted to young minds.


‘Global economic impacts associated with artificial intelligence’—a study funded by one of the technology giants and conducted by Nicholas Chen, Lau Christensen, Kevin Gallagher, Rosamond Mate and Greg Rafert of the Analysis Group4 —estimated the potential economic outcome of AI using prior technological advancements such as IT investment, broadband Internet, mobile phones and, more recently, industrial robotics. The conclusion, using reasonable benchmarks, pegs the cumulative economic impact of AI to be between 1.49 trillion USD and 2.95 trillion USD through 2025. In one case, AI intervention helped prevent significant insurance pay-out leakages. An AI company helped an insurer identify fraudulent vehicle insurance claims, which, as they predicted, would save the insurer millions of dollars a year. Customers of AI solutions want economic outcomes through demonstrable efficiency gains and margin improvements. Accordingly, the next generation of applied artificial intelligence as a service (A-AIaaS) companies are expected to offer integrated solutions for specific use cases on a purely operational expenditure (OPEX) model. For customers, it translates to a direct positive impact on operating margins and bottom lines. For example, more than 500 companies have deals to use IBM Watson to develop commercial products and services.


While AI adoption offers several growth opportunities, it also poses a host of commercial and financial challenges that AI operators, investors and policymakers need to consider.

Balancing research innovation with commercialisation potential: There is a significant upfront investment to be made before an AI product is considered commercially viable. Any solution coming out of this space will be subject to benchmarking against comparable manual performance and that of legacy systems in place. It is only when AI systems can significantly outperform the above that a business case for their adoption can be established.

Engineering considerations: Early into product design, engineering and production specifics such as material requirements, capacity planning, infrastructure requirements and costs should be kept in focus so as to avoid roadblocks in actually implementing and scaling up at a later date.

Longer sales cycle: AI systems are relatively novel to the potential user base, which is likely to take longer to envisage the benefits of AI. This understanding needs to drive sales planning and conversion-cycle estimation.
The above challenges have important implications for potential investors in AI research and commercial start-ups. The visibility of investment break-even periods can be wildly uncertain in the AI space. A robust milestone-based approach to track and justify the efficacy of invested capital is needed - milestones could range from publications and establishing user communities to creating a recurring source of revenue. A defined set of milestones needs to be met before attempting to raise the next round of capital.

One of the major concerns in any conversation involving AI is the topic of ethical, legal and societal norms. AI research needs to base itself on a sound understanding of the various implications of any innovation and ensure alignment with rules and norms. Common concerns are the breach of privacy that might arise from an environment where hackers can exploit AI solutions to collect private and sensitive information.

A bigger threat is the misuse of ML algorithms by hackers to develop autonomous techniques that jeopardise the security and safety of vital information.

There is a need to define what ‘acceptable behaviour' for an AI system translates to in its respective application domain. This should ideally drive design considerations, engineering techniques and reliability. Due diligence is needed to ensure that AI technologies perform in an easy-to-understand manner and that the outcomes of their applications are in line with perceptions of fairness, equality and local cultural norms, so as to ensure broad societal acceptance.

AI development will hence need the involvement of experts from multidisciplinary fields such as computer science, social and behavioural sciences, ethics, biomedical science, psychology, economics, law and policy research.
AI algorithms might, by design, be inherently subject to errors that can lead to consequences such as unfair outcomes for racial and economic classes—for example, citizen profiling based on demographics to arrive at the probability to commit crimes or default on financial obligations. AI system actions should, therefore, be transparent and easily understandable by humans. Deep learning algorithms that are opaque to users could create hurdles in domains such as healthcare, where diagnosis and treatment need to be backed by a solid chain of reasoning to buy into patient trust. Trustworthy AI systems are built around the following tenets:

  • Transparency (operations visible to user)

  • Credibility (outcomes are acceptable)

  • Auditability (efficiency can be easily measured)

  • Reliability (AI systems perform as intended)

  • Recoverability (manual control can be assumed if required)

Owing to their vague and contextual interpretation, ethical standards pose a challenge while being encoded into AI systems. Some architectural frameworks that have been widely cited to counter the above challenge are:

  • An architecture designed with operational AI distinct from a monitor agent responsible for legal and ethical supervision of any actions

  • A framework to ensure that AI behaviour is safe for humans and implemented through a set of logical constraints on AI system behaviour.
