References in periodicals archive
A similar conclusion can be drawn from Table 3, where the performance of the approaches that combine a fine-tuned deep representation with an SVM classifier, that is, FTAN + SVM, FTGN + SVM, FTVGG + SVM, and FTRN + SVM, is significantly superior to that of LFBR and GFBR.
Lastly, we conclude this section by reporting the best performance of each strategy, comparing three groups of strategies: (1) the approaches that fine-tune deep CNNs (i.e., FTAN, FTGN, FTVGG, and FTRN), (2) the methods that combine fine-tuned deep architectures with traditional classifiers (i.e., FTVGG + kNN, FTGN + kNN, FTAN + kNN, FTRN + kNN, FTVGG + random forest, FTGN + random forest, FTAN + random forest, FTRN + random forest, FTAN + SVM, FTGN + SVM, FTVGG + SVM, and FTRN + SVM), and (3) the strategies employing handcrafted features (i.e., GFBR and LFBR).
The results of Figure 18 demonstrate that (1) the approaches that combine a fine-tuned deep representation with a kNN classifier, that is, FTVGG + kNN, FTGN + kNN, FTAN + kNN, and FTRN + kNN, consistently outperform the methods that adopt hand-crafted features, such as GFBR and LFBR, and (2) nearly all the strategies are insensitive to the value of k, especially when k is greater than 3.
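The snippet above pairs fine-tuned CNN features with a kNN classifier and reports insensitivity to k. As an illustration only, here is a minimal pure-Python kNN sketch; the 2-D "feature vectors", labels, and query below are invented toy stand-ins for the deep features, not data from the indexed paper:

```python
import math
from collections import Counter

def knn_predict(train, query, k):
    """Classify `query` by majority vote among its k nearest training points.

    `train` is a list of (feature_vector, label) pairs; distance is Euclidean.
    """
    dists = sorted((math.dist(vec, query), label) for vec, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy stand-ins for deep features: two well-separated clusters.
train = [((0.0, 0.1), "benign"), ((0.2, 0.0), "benign"), ((0.1, 0.2), "benign"),
         ((1.0, 1.1), "malign"), ((0.9, 1.0), "malign"), ((1.1, 0.9), "malign")]
query = (0.95, 1.05)

# When the feature space separates the classes well, the vote is stable
# across k -- one plausible reading of the reported insensitivity to k.
for k in (1, 3, 5):
    print(k, knn_predict(train, query, k))
```

The stability across k here comes from class separation in the feature space; with overlapping classes the choice of k matters much more.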
We find that (1) for all the strategies, performance clearly tends to improve as nTree increases, and (2) the performance of the approaches that combine a fine-tuned deep representation with a random forest classifier, that is, FTVGG + random forest, FTGN + random forest, FTAN + random forest, and FTRN + random forest, is significantly superior to that of LFBR and GFBR.
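The nTree effect described above reflects how bagging works: each tree votes, and more trees stabilize the vote. A minimal sketch of the idea, using bagged decision stumps in place of full trees and invented toy data (none of this reproduces the indexed paper's setup):

```python
import random
from collections import Counter

def fit_stump(sample):
    """Find the single (feature, threshold) split that best separates labels."""
    best = None
    for f in range(len(sample[0][0])):
        for vec, _ in sample:
            thr = vec[f]
            left = [lab for v, lab in sample if v[f] <= thr]
            right = [lab for v, lab in sample if v[f] > thr]
            # Score a split by how many points its per-side majority gets right.
            score = sum(Counter(side).most_common(1)[0][1]
                        for side in (left, right) if side)
            if best is None or score > best[0]:
                lmaj = Counter(left).most_common(1)[0][0]
                rmaj = Counter(right).most_common(1)[0][0] if right else lmaj
                best = (score, f, thr, lmaj, rmaj)
    _, f, thr, lmaj, rmaj = best
    return lambda v: lmaj if v[f] <= thr else rmaj

def fit_forest(data, n_tree, rng):
    """Bag n_tree stumps, each fit on a bootstrap resample of the data."""
    return [fit_stump([rng.choice(data) for _ in data]) for _ in range(n_tree)]

def forest_predict(forest, vec):
    """Majority vote across the bagged stumps."""
    return Counter(tree(vec) for tree in forest).most_common(1)[0][0]

# Toy stand-ins for deep features: two well-separated clusters.
data = [((0.0, 0.1), "benign"), ((0.2, 0.0), "benign"), ((0.1, 0.2), "benign"),
        ((1.0, 1.1), "malign"), ((0.9, 1.0), "malign"), ((1.1, 0.9), "malign")]
rng = random.Random(0)
forest = fit_forest(data, n_tree=25, rng=rng)
print(forest_predict(forest, (1.0, 1.0)))
print(forest_predict(forest, (0.05, 0.1)))
```

A single bootstrapped stump can be unlucky (e.g. drawing a one-class resample); the majority vote over many stumps washes such trees out, which is the usual explanation for accuracy improving and then plateauing as nTree grows.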
A similar conclusion can be drawn from Table 5, where the performance of the approaches that combine a fine-tuned deep representation with an SVM classifier, that is, FTAN + SVM, FTGN + SVM, FTVGG + SVM, and FTRN + SVM, is significantly superior to that of LFBR and GFBR.
in [13]), (2) the methods that combine fine-tuned deep architectures with traditional classifiers (i.e., FTVGG + kNN, FTGN + kNN, FTAN + kNN, FTRN + kNN, FTVGG + random forest, FTGN + random forest, FTAN + random forest, FTRN + random forest, FTAN + SVM, FTGN + SVM, FTVGG + SVM, and FTRN + SVM), and (3) the strategies employing hand-crafted features (i.e., GFBR and LFBR).
In our study, the effects of lead removal and protection against oxidative damage in the low-LFBR group were almost no different from those in the high-LFBR group, and the dosages of L.
Thus, we preliminarily inferred that at least three mechanisms underlie the lead-removal and anti-oxidative effects of LFBR in the selected tissues.
This study mainly demonstrated the lead-removal and anti-oxidative effects of LFBR in some representative tissues that readily accumulate lead.
Furthermore, LFBR would not affect the absorption and metabolism of calcium, iron, magnesium and zinc in administered mice in vivo.
In this study, the results demonstrated that LFBR could effectively reduce lead levels and lead-induced oxidative damage in selected tissues of lead-exposed mice.
palustris: 0.02, 5; Low LFBR: 0.02, 2.5 (f); Mean LFBR: 0.02, 5 (f); High LFBR: 0.02, 10 (f). (a) Lead acetate was used to establish the high-BLL mouse model and was administered via peritoneal injection for 7 days.