References in periodicals archive
We can see that as the number of source domains increases, the A_ROC (area under the ROC curve) of MSTrA and MSDTrA increases and the corresponding standard deviations decrease.
MSTrA and MSDTrA use four different source-domain training sets, which contain more useful information, so they achieve higher testing accuracy than the other algorithms.
Let the variables X_1, X_2, X_3, X_4, X_5 denote the classification error rates of the MSDTrA, MSTrA, CDASVM, DTrAdaBoost, and AdaBoost algorithms, respectively.
Suppose the time complexities of training a classifier and of updating the weights are C_h and C_w, respectively. Then the time complexities of AdaBoost, DTrAdaBoost, MSTrA, and MSDTrA can be approximated as C_h·O(M) + C_w·O(n_b·M), C_h·O(M) + C_w·O(n_a·M), C_h·O(NM) + C_w·O(n_a·NM), and C_h·O(NM) + C_w·O(n_a·M), respectively.
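The cost structure described above comes from the boosting loop itself: each of the M rounds fits one weak classifier (hence C_h·O(M)) and updates the weight of every one of the n training samples (hence C_w·O(n·M)). The following is a minimal generic AdaBoost sketch illustrating that structure; it is not the paper's MSDTrA implementation, and the threshold-stump weak learner is an assumption chosen for brevity.

```python
import numpy as np

def adaboost_sketch(X, y, M=10):
    """Generic AdaBoost loop on labels y in {-1, +1}.

    Weak learner (assumed here): best threshold stump on feature 0.
    Cost per run: M classifier fits (C_h * O(M)) and M passes of
    n weight updates (C_w * O(n * M)), matching the approximation
    quoted in the excerpt above.
    """
    n = len(y)
    w = np.full(n, 1.0 / n)            # sample weights, kept normalized
    learners = []                       # (threshold, polarity, alpha)
    for _ in range(M):                  # C_h * O(M): one fit per round
        best = None
        for thr in np.unique(X[:, 0]):  # exhaustive stump search
            for pol in (1, -1):
                pred = pol * np.where(X[:, 0] >= thr, 1, -1)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, thr, pol)
        err, thr, pol = best
        err = min(max(err, 1e-10), 1 - 1e-10)       # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)       # learner weight
        pred = pol * np.where(X[:, 0] >= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)  # C_w * O(n*M): n updates/round
        w /= w.sum()
        learners.append((thr, pol, alpha))
    return learners

def predict(learners, X):
    """Weighted vote of all weak learners."""
    score = sum(a * p * np.where(X[:, 0] >= t, 1, -1)
                for t, p, a in learners)
    return np.where(score >= 0, 1, -1)
```

In the transfer-learning variants quoted above, the same loop runs over N source domains (giving the extra factor N in C_h·O(NM)), and the weight-update rule additionally reweights source-domain samples; only the update rule changes, not the overall cost structure.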