
MoET: Mixture of Expert Trees and its application to verifiable reinforcement learning

Rapid advancements in deep learning have led to many recent breakthroughs. While deep learning models achieve superior performance, often statistically better than humans, their adoption in safety-critical settings, such as healthcare or self-driving cars, is hindered by their inability to provide safety guarantees or to expose the inner workings of the model in a human-understandable form. We present MoET, a novel model based on Mixture of Experts, consisting of decision-tree experts and a generalized linear model as the gating function. Thanks to this gating function, the model is more expressive than a standard decision tree. To support non-differentiable decision trees as experts, we formulate a novel training procedure. In addition, we introduce a hard-thresholding version, MoETh, in which predictions are made solely by a single expert chosen via the gating function. This property allows each MoETh prediction to be decomposed into a set of logical rules in a form that can be easily verified. While MoET is a general-purpose model, we illustrate its power in the reinforcement learning setting. By training MoET models with an imitation learning procedure on deep RL agents, we outperform the previous state-of-the-art technique based on decision trees while preserving the verifiability of the models. Moreover, we show that MoET can also be applied to real-world supervised problems, on which it outperforms other verifiable machine learning models. (c) 2022 Elsevier Ltd. All rights reserved.
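To make the gating mechanism concrete, below is a minimal sketch of MoET-style inference, not the authors' training procedure (the paper's contribution is a joint training scheme for non-differentiable tree experts, which is not reproduced here). The sketch assumes scikit-learn decision trees as experts and a multinomial logistic regression as the generalized linear gate; the toy data, the two-expert setup, and the hand-picked routing target are illustrative assumptions.

```python
# A minimal sketch of MoET-style inference (NOT the paper's training method).
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                                # toy observations
y = ((X[:, 0] + X[:, 1] * (X[:, 2] > 0)) > 0).astype(int)   # toy actions

# Two decision-tree experts, each fit on one region of the input space.
# The hand-picked partition on x2 stands in for learned specialization.
region = (X[:, 2] > 0).astype(int)
experts = [
    DecisionTreeClassifier(max_depth=3).fit(X[region == k], y[region == k])
    for k in (0, 1)
]

# A generalized linear gate (multinomial logistic regression) mapping
# each input to a distribution over experts.
gate = LogisticRegression().fit(X, region)

def moet_predict(X_new):
    """Soft MoET: gate-weighted mixture of the experts' class probabilities."""
    w = gate.predict_proba(X_new)                               # (n, experts)
    p = np.stack([e.predict_proba(X_new) for e in experts], 1)  # (n, experts, classes)
    return np.einsum("ne,nec->nc", w, p).argmax(axis=1)

def moeth_predict(X_new):
    """Hard MoETh: route each sample to its single highest-weight expert,
    so every prediction is one root-to-leaf path in one tree."""
    choice = gate.predict(X_new)
    out = np.empty(len(X_new), dtype=int)
    for k, e in enumerate(experts):
        mask = choice == k
        if mask.any():
            out[mask] = e.predict(X_new[mask])
    return out

print("soft:", moet_predict(X[:5]), "hard:", moeth_predict(X[:5]))

# Under hard gating, the rules behind any prediction are the decision
# path of the selected tree, which can be printed and inspected directly:
print(export_text(experts[0], feature_names=[f"x{i}" for i in range(4)]))
```

The MoETh routing is what makes verification tractable: since exactly one tree is responsible for each prediction, the prediction is fully explained by a single conjunction of threshold tests on the input features.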

Verification; Deep learning; Reinforcement learning; Mixture of Experts; Explainability; COMPUTER-AIDED DETECTION; DEEP

Vasic, Marko; Petrovic, Andrija; Wang, Kaiyuan; Nikolic, Mladen; Singh, Rishabh; Khurshid, Sarfraz


University of Texas at Austin

Singidunum University

Google

University of Belgrade

Google Brain


2022

Neural Networks

EI; SCI
ISSN: 0893-6080
Year, volume: 2022, 151