How Far Are We From Explainable Artificial Intelligence?
Artificial intelligence (AI) is heralding a revolution in how we interact with technology. Its capabilities have changed how we work, travel, play and live. But this is just the beginning.
The next step is explainable AI (XAI), a form of AI whose actions are more easily understood by humans. So how does it work? Why do we need it? How will it forever change the way industries – especially in marketing – function?
The Mystery of the Black Box: The Problem With Current AI
No one would deny that artificial intelligence produces amazing results. Computers that can not only process vast amounts of data in seconds, but also learn, decide and act on their own have turned many industries on their heads. PricewaterhouseCoopers estimates that AI could add around US$15 trillion to the global economy by 2030. However, in its current form, AI has one major weakness: explanation.
Namely, it can’t explain its decisions and actions to humans. This is the “black box” problem in machine learning: the calculations and decisions are carried out behind the scenes, with no rationale given for why the AI arrived at a particular conclusion.
Why is this a problem? A system that cannot justify its decisions does not engender trust, which in turn casts doubt on its actions. Explainable AI is expected to solve that.
How XAI Works
XAI is much more transparent. The people interacting with the AI are informed not only of what decisions it reached and what actions it will take, but also of how it came to those conclusions from the available data. It aims to do this while maintaining a high level of learning performance.
Current AI takes data into its machine learning process and produces a learned function, leaving the user with a number of questions: Why did it do that? Why didn’t it do something else? When will it succeed, and when will it fail? How can I trust it? And how do I correct an error?
By contrast, XAI uses a new machine learning process to produce an explainable model with an explainable interface. This should answer all the questions above.
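To make the contrast concrete, here is a minimal sketch in Python, assuming scikit-learn is installed. The “marketing” dataset, feature names and conversion labels are all invented for illustration, and the per-feature contributions of a simple linear model stand in for the far richer explanation interfaces that real XAI research pursues.

```python
# A minimal sketch of "black box" vs. "explainable" output.
# All data and feature names below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: each row is a user, each column a behavioural feature.
feature_names = ["sessions_per_week", "avg_basket_value", "days_since_visit"]
X = np.array([[7, 80, 1], [1, 5, 30], [5, 60, 3],
              [0, 2, 60], [6, 90, 2], [2, 10, 25]], dtype=float)
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = converted, 0 = did not convert

model = LogisticRegression(max_iter=1000).fit(X, y)
new_user = np.array([[4.0, 55.0, 5.0]])

# "Black box" style: a decision with no rationale attached.
print("decision:", model.predict(new_user)[0])

# "Explainable" style: show how each feature pushed the decision.
# For a linear model, coefficient * feature value is that feature's
# contribution to the log-odds of converting, a crude stand-in for
# the explanation interface XAI promises.
contributions = model.coef_[0] * new_user[0]
for name, value in sorted(zip(feature_names, contributions),
                          key=lambda pair: -abs(pair[1])):
    print(f"{name}: {value:+.2f}")
```

Even this crude read-out begins to answer the first question above, “Why did it do that?”, by showing the marketer which behaviours pushed the model towards its decision.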
XAI carries its own risks, however. Any decision made by an AI is only as good as the data used to make it. While XAI increases trust in the decision made, that trust could be misplaced if the underlying data is unreliable.
Another problem is how well the AI explains its decisions. If the explanation is not comprehensible to the user, who may be a lay person with no technical background, it will be worthless. Solving this will involve AI scientists working with UI experts, along with complex work on the psychology of explanation.
Risk, Trust and Regulation: Why We Need XAI
In so-called “big ticket” decisions, such as military operations, finance, safety-critical systems in autonomous vehicles and diagnostics in healthcare, the risk factor is high, so it is crucial that the AI explains its decisions in order to boost trust and confidence in its ability. But XAI also offers a host of benefits for businesses in other industries.
XAI can ease regulatory pressure, as it will enable full transparency in the event of an audit. It will encourage best practice and ethics by explaining why each decision is the right one morally, socially and financially. It will also reinforce confidence in the business, which will reassure shareholders.
It will also put businesses in a stronger position to foster innovation: the more advanced the AI, the more innovative uses and new capabilities it opens up. Interacting with AIs will soon be standard business practice in many industries, including marketing, so it is vital that users can do so comfortably and with confidence.
Experts think this will empower marketers, effectively turning AI into a co-worker rather than a tool.
“In order to trust AI, people need to know what the AI is doing,” says Hsuan-Tien Lin, Chief Data Scientist, Appier. “Much like how AlphaGo is showing us new insights on how to play the board game Go, explainable AI could show marketers new insights on how to conduct marketing. For instance, AI can reach the right audience at the right time now, but if future XAI can explain this decision to humans, it would help marketers understand their audience more deeply and plan for better marketing strategies.”
It could also usher in a new way of working, with marketers accepting or rejecting XAI’s suggestions, giving reasons, in order to help the AI learn. “Today, it is likely that many great suggestions are rejected because they are not explained, and so humans overlook their power,” says Min Sun, Chief AI Scientist, Appier. Those days, however, could soon be over.
The Defense Advanced Research Projects Agency (DARPA) is currently running an XAI program through 2021. The program is expected to enable “third-wave AI systems”, in which machines build underlying explanatory models to describe real-world phenomena based on their understanding of the context and operating environment. Other experts predict XAI will become a reality within three to five years.
XAI is no doubt the next step for AI, improving trust, confidence and transparency. Businesses would be wise not to overlook its potential.