Can We Make Artificial Intelligence Accountable?
September 23, 2018
The lack of explainability in decisions made by Artificial Intelligence (AI) programs is a major problem. Not being able to understand how AI reaches its conclusions keeps it out of areas such as law, healthcare and enterprises that handle sensitive customer data. Understanding how data is handled, and how AI has reached a particular decision, matters even more under recent data protection regulation, notably the GDPR, which heavily penalizes companies that cannot provide an explanation and record of how a decision was reached (whether by a human or a computer).
IBM may have taken a major step towards tackling this issue, announcing today a software service that detects bias in AI models and tracks their decision-making process. The service should let companies trace AI decisions as they occur and flag any 'biased' actions, so that AI processes stay in line with regulation and overall business objectives.
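IBM has not published the internals of how its service scores bias, but a common check that fairness-monitoring tools of this kind apply is the disparate impact ratio: the rate of favorable outcomes for an unprivileged group divided by the rate for a privileged group, with values well below 1.0 flagging potential bias. The sketch below is an illustrative example in plain Python with made-up data, not IBM's API, showing how such a check could run over a batch of model decisions.

```python
import numpy as np

def disparate_impact(y_pred, group):
    """Ratio of favorable-outcome rates between an unprivileged (group == 0)
    and a privileged (group == 1) group; values far below 1.0 suggest bias."""
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_unpriv = y_pred[group == 0].mean()  # favorable rate, unprivileged group
    rate_priv = y_pred[group == 1].mean()    # favorable rate, privileged group
    return rate_unpriv / rate_priv

# Hypothetical batch of model decisions to monitor as they occur
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0, 1, 1])  # 1 = favorable decision
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # 0 = unprivileged, 1 = privileged

ratio = disparate_impact(y_pred, group)
if ratio < 0.8:  # the common "four-fifths rule" threshold
    print(f"Possible bias detected: disparate impact = {ratio:.2f}")
else:
    print(f"No bias flagged: disparate impact = {ratio:.2f}")
```

In this toy batch the unprivileged group receives favorable decisions 40% of the time against 80% for the privileged group, so the ratio of 0.5 would trip the alert; a production monitoring service would run checks like this continuously and keep the results as an audit record.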
If this software can truly explain the decisions taken by even the most complex deep learning algorithms, it could provide the peace of mind that many companies need before unleashing AI on their data.
Read more at Forbes