As artificial intelligence (AI) systems begin to control safety-critical infrastructure across a growing number of industries, DNV GL has released a position paper on the responsible use of AI. The paper asserts that data-driven models alone may not be sufficient to ensure safety and calls for a combination of data and causal models to mitigate risk.

Entitled “AI + Safety,” the paper details the advance of AI and how such autonomous and self-learning systems are becoming increasingly responsible for making safety-critical decisions. The paper states that as engineering systems grow more complex, and more of them are interconnected and computer-controlled, human minds are hard pressed to cope with and understand the enormous, dynamic complexity involved.

In fact, it seems unlikely that human oversight can be applied to many of these systems at the timescale required to ensure safe operation. Machines must make safety-critical decisions in real time, and industry bears the ultimate responsibility for designing artificially intelligent systems that are safe.

The operation of many safety-critical systems has traditionally been automated through control theory, with decisions made according to a predefined set of rules and the current state of the system. AI, by contrast, tries to learn reasonable rules automatically from previous experience.
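To make the contrast concrete, here is a minimal sketch, not taken from the DNV GL paper: a control-theory-style rule acting on the current state of the system alongside a rule learned from historical data. The pressure limit, the sensor logs, and the use of scikit-learn are all illustrative assumptions.

```python
# Illustrative sketch only -- the threshold, data, and model choice are
# assumptions, not details from the DNV GL paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

MAX_SAFE_PRESSURE = 100.0  # hypothetical engineering limit

def rule_based_shutdown(pressure: float) -> bool:
    """Control theory style: a predefined rule applied to the current state."""
    return pressure > MAX_SAFE_PRESSURE

# AI style: learn the shutdown rule from (made-up) historical sensor logs.
past_pressures = np.array([[80.0], [92.0], [97.0], [104.0], [118.0], [130.0]])
incident_followed = np.array([0, 0, 0, 1, 1, 1])  # 1 = an incident occurred

model = LogisticRegression().fit(past_pressures, incident_followed)

def learned_shutdown(pressure: float) -> bool:
    """AI style: the rule is inferred from previous experience."""
    return bool(model.predict([[pressure]])[0])

print(rule_based_shutdown(110.0), learned_shutdown(110.0))  # likely: True True
```

The learned rule is only as good as the experience it was trained on, which is exactly the limitation the paper raises next.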

Since major incidents in the oil and gas industry are rare, such scenarios are not well captured by data-driven models alone: too little failure data exists to support such critical decisions. AI and machine-learning algorithms, which currently rely on data-driven models to predict and act upon future scenarios, may therefore not be sufficient to assure safe operations and protect lives.
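A toy calculation, using invented numbers rather than anything from the paper, shows why rarity is a problem: when failures are absent from almost all records, a model that simply never predicts failure looks near-perfect by accuracy while missing every incident that matters.

```python
# Invented numbers for illustration: 1 major incident in 10,000 records.
import numpy as np

labels = np.zeros(10_000, dtype=int)  # 0 = normal operation
labels[0] = 1                         # a single major incident

always_safe = np.zeros_like(labels)   # a "model" that never predicts failure

accuracy = (always_safe == labels).mean()
missed = int(((labels == 1) & (always_safe == 0)).sum())

print(f"accuracy: {accuracy:.2%}")    # 99.99% -- looks excellent
print(f"missed incidents: {missed}")  # 1 -- the only case that mattered
```

This is the gap the paper's call for causal models aims to close: a causal model can encode failure mechanisms that the historical data alone never exhibits.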

Source: The Maritime Executive www.maritime-executive.com