Deviation detection and interpretability for deep learning models

Bibliographic Details
Main Authors: XIA, WEI; BASAK, SUDEPTA; RAMAMURTHY, ARUN; VENUGOPALAN, JANANI; SRIVASTAVA, SANJEEV
Format: Patent
Language: Chinese; English
Published: 12.08.2022
Summary: Systems and methods for detecting potential deviations by artificial-intelligence modeling of human decision making, using time-series prediction data and event data of investigated participants together with personal-characteristic data of participants. A deep Bayesian model solves for a deviation distribution that fits the modeled predictive distribution of the time-series event data and personal-characteristic data against a predictive probability distribution derived by a recurrent neural network. A set of population deviation clusters is evaluated for key features of the relevant personal characteristics, and a causal graph is defined by a dependency graph of those key features. Deviation interpretability is inferred by perturbing, in the deep Bayesian model, a subset of features drawn from the causal graph to determine which causal relationships are most sensitive to participants changing group membership.
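The final step of the abstract — perturbing a subset of causal-graph features and ranking causal relationships by how strongly they shift group membership — can be sketched as follows. This is a minimal illustration only: the logistic scorer, its weights, and the feature indices are hypothetical stand-ins for the patent's deep Bayesian model, not the claimed implementation.

```python
import numpy as np

# Hypothetical stand-in for the deep Bayesian model's group-membership
# output: a fixed logistic scorer over a small feature vector.
WEIGHTS = np.array([2.0, 0.1, -1.5, 0.05])

def membership_prob(x):
    """Probability that a participant belongs to a given deviation cluster."""
    return 1.0 / (1.0 + np.exp(-x @ WEIGHTS))

def feature_sensitivity(x, causal_subset, eps=0.1):
    """Perturb each feature in `causal_subset` (indices drawn from the
    causal graph) and record the absolute shift in membership probability."""
    base = membership_prob(x)
    shifts = {}
    for i in causal_subset:
        xp = x.copy()
        xp[i] += eps          # small perturbation of one feature
        shifts[i] = abs(membership_prob(xp) - base)
    return shifts

# Illustrative participant feature vector and causal-graph subset.
x = np.array([0.5, 1.0, -0.3, 2.0])
shifts = feature_sensitivity(x, causal_subset=[0, 1, 2])
most_sensitive = max(shifts, key=shifts.get)
```

Ranking `shifts` identifies which causal relationship is most sensitive to group-membership change; here feature 0, carrying the largest weight, dominates.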
Bibliography: Application Number: CN202080090940