Artificial intelligence (AI) has arguably been one of the most disruptive technologies of the past few years. AI has played a vital role in advancing technology across a variety of industries and is set to drive big changes in the future.
Although AI has contributed a great deal of good to our society, it can, says Asheesh Mehra, Group CEO & co-founder at AntWorks, unfortunately also be misused, and that misuse has fuelled attention-grabbing headlines pitting AI against humanity. We need to remember why humans created AI in the first place: when used correctly and ethically, AI can be an incredible tool for good.
AI will continue to make drastic improvements and advancements across many sectors, which will have a profound impact on society. The use of AI within healthcare will see medical research and trials conclude more quickly. Transportation is set to change with self-driving vehicles and smart roads, which will contribute to creating smart cities. AI can also help to predict natural disasters, reducing the number of people affected, which currently stands at 160 million worldwide each year.
Real-time data can aid farming and improve agricultural productivity to help provide for the growing population. AI allows for more efficient use of resources, increased productivity, and better customer experiences. It can contribute to improved healthcare, and it even promises longer life expectancy. These are just a few examples of how AI can enhance our lives, businesses, and the world.
On the opposite end of the spectrum, there is AI bias, accelerated hacking, and AI terrorism. This is where big challenges await both government institutions and legal organisations as they will have to tackle the larger issues that can arise from the misuse of the technology.
Regulating AI: Who is responsible?
AI is a powerful technology but with great power comes great responsibility. This is where the discussion around ethical AI comes into play. There needs to be a widespread realisation of ethical AI and what is required to get there. Both businesses and governments need to address and ensure AI accountability. Accountability means ensuring people use AI engines as intended and not for fraudulent or other malicious pursuits.
In addition to this, ethical AI relies on the ability of AI solutions to enable auditability and traceability. I believe that technology vendors should be held to the highest standard of responsibility and accountability for ensuring an ethical approach to the design and application of AI products and services. Governments need to play an important role in AI accountability for ethical AI, too.
Technology providers must also educate their teams and partners on appropriate uses of AI. Some people are unaware of the power of AI engines and their potential for misuse. With the right education, such individuals can learn about the risks and how to avoid them. Education is a horizontal component that should cut across every aspect of an AI journey, and it will go a long way towards contributing to ethical AI. Legislators and regulators have a key role to play in the area of AI accountability. That involves specifying the applications for which AI can and cannot be used and holding vendors responsible if the technology is misused.
While AI needs regulation, technology as a whole shouldn't be stifled. There needs to be governmental legislation, developed in collaboration with businesses, put in place to help standardise how AI technology can be used sector by sector. For example, regulations should indicate which specific purposes and contexts are appropriate for the application of AI and, equally, which should be prohibited. One way AI solution providers address accountability is by applying strong deal qualification criteria before starting work with an organisation, for example declining the business of an organisation that planned to use AI bots to amass Broadway show tickets within seconds of their availability and then sell them to the public at elevated prices.
While there is a great deal of unwarranted fear around AI and the potential consequences it may have for businesses' workforces, we should be optimistic about a future where AI is both ethical and useful.
About the author
The author is Asheesh Mehra, Group CEO & Co-Founder at AntWorks. AntWorks™ is a global provider of AI and robotics. He believes humane, responsible AI is the future and is excited by its limitless applications to solve issues that impact business, our lives and the planet we inhabit. Prior to boarding the entrepreneurial ship, Asheesh headed Infosys BPO for Asia Pacific, Japan and the Middle East. His experience over 20 years has also spanned leadership roles in large ITeS organisations, such as Mphasis, TCS and WNS, having worked extensively across the UK and the United States.
Asheesh was named Singapore Business Review’s 2019 Innovator of the Year, distinguishing the most innovative industry leaders in Singapore who pioneered solutions leading to significant company growth. His achievements were also recognised by the Shared Services & Outsourcing Network (SSON), which presented him with the ‘People’s Choice for Personal Contribution to Industry – APAC’ award in 2011 and the ‘Thought Leader of the Year – Asia’ award in 2010. With a penchant for the unconventional, Asheesh’s vision for AntWorks remains human wellbeing, authentic customer benefit, and responsible innovation that respects the environment and all it sustains.
Follow us and Comment on Twitter @TheEE_io