Compared to the objective function of BDL in (10), the WBDL model in (8) deletes the weight u_{r,j} from the block sparse regularization.
Firstly, according to the equation diag(û_j)ẑ = z, the new sparse coefficient of our WBDL model can be obtained: ẑ = [5/4, 5/4, 5, 5, 0, 0, 0, 0]^T.
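The rescaling step above can be sketched with NumPy. The weights u_j and the original coefficient z below are assumed values, chosen only so that solving diag(û_j)ẑ = z reproduces the vector [5/4, 5/4, 5, 5, 0, 0, 0, 0]^T quoted in the text:

```python
import numpy as np

# Assumed block weights u_j for one proto dictionary block
u = np.array([0.8, 0.8, 0.2, 0.2, 1.0, 1.0, 1.0, 1.0])
# Assumed original sparse coefficient z
z = np.array([1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0])

# Solve diag(u) @ z_hat = z elementwise for the new sparse coefficient
z_hat = z / u
# z_hat equals [5/4, 5/4, 5, 5, 0, 0, 0, 0], matching the vector in the text
```

Since diag(u) is diagonal, the solve reduces to an elementwise division; atoms with small weights receive proportionally larger coefficients.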
In the objective function of our proposed WBDL model, there are two unknown variables, B and Z, and a variable U that can be computed directly from Z.
In our proposed WBDL model, rather than assigning each proto dictionary block to only one class, we assign it C weight values that indicate its relationship to all C class dictionaries.
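The contrast between a hard one-class assignment and the C-weights-per-block scheme can be illustrated as follows. The sizes C and K and the weight values are assumptions for illustration, not taken from the paper:

```python
import numpy as np

C, K = 3, 4  # assumed number of classes and proto dictionary blocks

# Hard assignment (one class per block): one-hot columns
hard_U = np.zeros((C, K))
hard_U[[0, 1, 2, 0], np.arange(K)] = 1.0

# WBDL-style soft assignment: each block carries C weights, one per class
soft_U = np.array([[0.7, 0.1, 0.2, 0.5],
                   [0.2, 0.8, 0.3, 0.3],
                   [0.1, 0.1, 0.5, 0.2]])

# Each column relates its block to all C class dictionaries
print(soft_U.sum(axis=0))  # prints [1. 1. 1. 1.]
```

Column j of the weight matrix describes how strongly proto block j contributes to each of the C class dictionaries, instead of tying it to a single class.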
The WBDL algorithm and its two classification algorithms, the local classification algorithm and the global classification algorithm, are described as follows.
Learn the proto dictionary and the weight matrix using the WBDL algorithm (Algorithm 1).
Algorithm 3 (WBDL global classification algorithm).
Learn the extended dictionary [B^T, G^T]^T and U using the WBDL algorithm (Algorithm 1).
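The snippets above do not give the exact classification rule, so the sketch below uses a generic residual-based scheme (in the style of sparse-representation classifiers) under the assumption that each class-c sub-dictionary is formed by scaling the proto atoms in B with the class weights in row c of U; all names and dimensions are hypothetical:

```python
import numpy as np
from numpy.linalg import lstsq

# Hypothetical dimensions: d features, K proto atoms, C classes
rng = np.random.default_rng(1)
d, K, C = 8, 6, 3
B = rng.standard_normal((d, K))  # assumed learned proto dictionary
U = rng.random((C, K))           # assumed learned weight matrix
y = rng.standard_normal(d)       # test sample

def classify(y, B, U):
    """Assign y to the class whose weighted dictionary reconstructs it best."""
    residuals = []
    for c in range(U.shape[0]):
        Bc = B * U[c]  # scale each proto atom by its class-c weight (assumption)
        zc, *_ = lstsq(Bc, y, rcond=None)
        residuals.append(np.linalg.norm(y - Bc @ zc))
    return int(np.argmin(residuals))

label = classify(y, B, U)
```

The minimum-residual rule is the standard choice for dictionary-based classifiers; the least-squares coding step stands in for whatever sparse coding the paper actually uses.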
In this section, the WBDL algorithm was evaluated on three classification tasks: a simulation experiment, face recognition, and object recognition.
Compared to a general dictionary learning algorithm, the WBDL model introduces a block structure and a weight matrix.
We used the same parameters for all four of the following methods: D-KSVD, WDL, BDL, and WBDL. WDL is the algorithm that introduces only the weight vector, and BDL is the algorithm that introduces only the block structure.
In this section, the WBDL algorithm is evaluated on a face recognition task using the AR face database.