Section II presents the mathematical foundations of LDA and HLDA methods.
The goal of HLDA is to find the matrix Θ which maximizes the likelihood function
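Equation (8) itself is not reproduced in this excerpt; as a hedged sketch, the HLDA objective is commonly written (following the Kumar–Andreou formulation, with p retained and n − p rejected dimensions; the paper's exact notation and equation numbering may differ) as

```latex
\mathcal{L}(\Theta) = N \log \lvert \det \Theta \rvert
  - \frac{N}{2} \log \left\lvert \operatorname{diag}_{\,n-p}\!\left(\Theta\, \hat{\Sigma}\, \Theta^{T}\right) \right\rvert
  - \sum_{j=1}^{J} \frac{N_j}{2} \log \left\lvert \operatorname{diag}_{\,p}\!\left(\Theta\, \hat{\Sigma}_j\, \Theta^{T}\right) \right\rvert ,
```

where N is the total number of observations, N_j the number of observations in class j, Σ̂ the global covariance estimate (used for the rejected dimensions), and Σ̂_j the per-class covariance estimates (used for the retained dimensions).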
For this variant of HLDA, a very efficient algorithm for the maximization of (8) is presented in .
For this reason, an additional variant of HLDA is analysed in this paper.
LDA and HLDA are methods which take into account the class membership of the observations.
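To illustrate how class information enters the transform, the following is a minimal sketch of classical LDA (not the paper's HLDA estimator): the projection is built from the leading eigenvectors of Sw⁻¹Sb, where Sw and Sb are the within-class and between-class scatter matrices. The function name and interface are illustrative, not from the paper.

```python
import numpy as np

def lda_transform(X, y, p):
    """Estimate a p-dimensional LDA projection from labelled data.

    X: (N, d) observation matrix, y: (N,) integer class labels.
    Returns a (p, d) projection matrix whose rows are the leading
    eigenvectors of Sw^{-1} Sb.
    """
    classes = np.unique(y)
    mu = X.mean(axis=0)              # global mean
    d = X.shape[1]
    Sw = np.zeros((d, d))            # within-class scatter
    Sb = np.zeros((d, d))            # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        Sw += (Xc - mu_c).T @ (Xc - mu_c)
        diff = (mu_c - mu).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)
    # Eigenvectors of Sw^{-1} Sb, sorted by descending eigenvalue.
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(-evals.real)
    return evecs[:, order[:p]].real.T
```

Unlike HLDA, this closed-form solution does not maximize a likelihood and assumes equal within-class covariances.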
In the iterative algorithm for HLDA proposed in , the initial transformation matrix is normalized so as to set its determinant to 1.
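The determinant normalization mentioned above can be done by scaling every entry of an n × n matrix by det(A)^(−1/n), since that multiplies the determinant by det(A)^(−1). A minimal sketch (the function name is illustrative; this assumes a positive determinant, ignoring the sign-handling the actual algorithm may need):

```python
import numpy as np

def normalize_determinant(A):
    """Scale a square matrix so that its determinant becomes 1.

    Assumes det(A) > 0. Dividing A by det(A)^(1/n) multiplies the
    determinant by det(A)^(-1), yielding det = 1.
    """
    n = A.shape[0]
    det = np.linalg.det(A)
    if det <= 0:
        raise ValueError("expected a matrix with positive determinant")
    return A / det ** (1.0 / n)
```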
In Tables III and V the performances of the systems which used original HLDA, defined by (8), are given in the column "All", and the performances of the relaxed variants, defined by (9), are given in the column "Discriminative".
Experiments with HLDA are conducted only for the two variants of input vectors that showed the best performance in the LDA tests, and their results are presented in Table III.
In Fig. 1, the WER values of the systems based on LDA and on the corresponding HLDA are compared.