Dimlp algorithms¶
The Discretized Interpretable Multi-Layer Perceptron (DIMLP) is a specialized feed-forward neural network architecture derived from the traditional Multi-Layer Perceptron (MLP). In addition to performing predictive tasks, DIMLP generates interpretable decision rules that explain the reasoning behind the model's predictions. The DIMLP framework includes a set of algorithms that leverage this capability for model training, evaluation, and rule extraction. For more details on the Dimlp algorithm, refer to this paper, and to this one for DimlpBT.
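The interpretability of DIMLP comes from discretization: neurons in the first hidden layer use a staircase activation function, so each activation can only change value at a fixed set of thresholds, which rule extraction can interpret as axis-parallel hyperplanes. The sketch below is a minimal NumPy illustration of such a staircase approximation of the sigmoid; the step count and input range are arbitrary choices for demonstration, not the library's defaults:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def staircase_sigmoid(x, q=5, low=-5.0, high=5.0):
    """Staircase approximation of the sigmoid with q steps over [low, high].

    Inputs are snapped to the midpoint of one of q equal-width bins before
    applying the sigmoid, so the activation only changes at q fixed
    thresholds; inputs outside the range saturate in the outermost bins.
    """
    x = np.asarray(x, dtype=float)
    # Index of the bin each input falls into, clipped to [0, q-1].
    idx = np.clip(np.floor((x - low) / (high - low) * q), 0, q - 1)
    # Midpoint of that bin.
    mid = low + (idx + 0.5) * (high - low) / q
    return sigmoid(mid)

# All inputs within one bin map to the same activation value.
print(staircase_sigmoid(np.linspace(-6, 6, 7), q=5))
```

Because the activation is constant between thresholds, only the threshold locations matter for the decision boundary, which is what makes rule extraction tractable.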
Architecture¶
The architecture is built as shown below:
```mermaid
graph TD;
    DimlpTrn[DimlpTrn]
    DimlpBT[DimlpBT]
    DimlpPred[DimlpPred]
    DimlpCls[DimlpCls]
    DimlpRul[DimlpRul]
    DensCls[DensCls]
    d(Dimlp algorithms) --> DimlpTrn;
    d --> DimlpBT;
    DimlpTrn --> DimlpPred;
    DimlpTrn --> DimlpCls;
    DimlpTrn --> DimlpRul;
    DimlpBT --> DensCls;
```

Each algorithm has its purpose:
- `DimlpTrn`: Trains the `Dimlp` model using a training dataset, obtains train/test/validation predictions and model weights, and can optionally extract global rules using the `Dimlp` algorithm.
- `DimlpPred`: Generates predictions from the trained `Dimlp` model on a test dataset.
- `DimlpCls`: Calculates accuracy, generates predictions, and retrieves the values of the first hidden layer from the trained `Dimlp` model on a test dataset.
- `DimlpRul`: Generates global explanation rules using the `Dimlp` algorithm on the training dataset used to train the `Dimlp` model, and retrieves training, testing, and validation accuracy, if provided.
- `DimlpBT`: Trains the `Dimlp` model using a training dataset with bagging, obtains train/test/validation predictions and model weights, and can optionally extract global rules using the `Dimlp` algorithm.
- `DensCls`: Generates global explanation rules using the `Dimlp` algorithm and obtains train and test predictions and accuracy from a `Dimlp` model trained with bagging.
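`DimlpBT` and `DensCls` rest on bagging: several networks are trained on bootstrap resamples of the training set, and their predictions are combined by majority vote. The toy sketch below illustrates the principle only; a one-threshold decision stump stands in for a full `Dimlp` network, and every name and parameter here is illustrative rather than part of the library:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D dataset: class 1 whenever x > 0.
X = rng.normal(size=200)
y = (X > 0).astype(int)

def train_stump(Xb, yb):
    """Fit a one-threshold classifier on a bootstrap sample,
    standing in for training one network in the ensemble."""
    best_t, best_acc = 0.0, 0.0
    for t in np.linspace(Xb.min(), Xb.max(), 50):
        acc = np.mean((Xb > t).astype(int) == yb)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Bagging: each ensemble member sees a bootstrap resample of the data.
thresholds = []
for _ in range(25):
    idx = rng.integers(0, len(X), len(X))
    thresholds.append(train_stump(X[idx], y[idx]))

def ensemble_predict(x):
    # Majority vote over all ensemble members.
    votes = np.array([(x > t).astype(int) for t in thresholds])
    return (votes.mean(axis=0) >= 0.5).astype(int)

print(np.mean(ensemble_predict(X) == y))  # ensemble training accuracy
```

Resampling decorrelates the individual models, so the vote is typically more accurate and more stable than any single member, which is the motivation for `DimlpBT` over a single `DimlpTrn` run.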