FTRN

Acronym    Definition
FTRN       Fair Trade Resource Network
FTRN       Fight the Right Network (Philadelphia, PA)
Copyright 1988-2018 AcronymFinder.com, All rights reserved.
References in periodicals archive:
A similar conclusion can be drawn from Table 3, where the performance of the approaches that combine a fine-tuned deep representation with an SVM classifier, that is, FTAN + SVM, FTGN + SVM, FTVGG + SVM, and FTRN + SVM, is significantly superior to that of LFBR and GBFR.
Lastly, we conclude this section by reporting the best performance of each strategy in order to compare three groups of strategies: (1) the approaches that adopt fine-tuned deep CNNs (i.e., FTAN, FTGN, FTVGG, and FTRN); (2) the methods that combine fine-tuned deep architectures with traditional classifiers (i.e., FTVGG + kNN, FTGN + kNN, FTAN + kNN, FTRN + kNN, FTVGG + random forest, FTGN + random forest, FTAN + random forest, FTRN + random forest, FTAN + SVM, FTGN + SVM, FTVGG + SVM, and FTRN + SVM); and (3) the strategies employing handcrafted features (i.e., GFBR and LFBR).
However, combining FTRN with traditional classifiers yields little improvement.
In general, FTRN converged slightly more slowly than the other three.
The results of Figure 18 demonstrate that (1) the approaches that combine a fine-tuned deep representation with a kNN classifier, that is, FTVGG + kNN, FTGN + kNN, FTAN + kNN, and FTRN + kNN, consistently outperform the methods that adopt handcrafted features, such as GFBR and LFBR, and (2) nearly all of the strategies are insensitive to the value of k, especially when k is greater than 3.
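The reported insensitivity to k can be illustrated with a minimal kNN classifier written from scratch; the toy data and the `knn_predict` helper below are hypothetical examples, not from the cited work. When the classes are well separated in the feature space, the majority vote gives the same answer for a range of k values.

```python
from collections import Counter
import math

def knn_predict(train_X, train_y, x, k):
    # sort training points by Euclidean distance to the query point
    dists = sorted((math.dist(x, t), y) for t, y in zip(train_X, train_y))
    # majority vote among the k nearest neighbours
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# two well-separated synthetic classes
train_X = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
train_y = [0, 0, 0, 1, 1, 1]

# the prediction is stable across k = 1, 3, 5
print([knn_predict(train_X, train_y, (0.5, 0.5), k) for k in (1, 3, 5)])
# → [0, 0, 0]
```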
We find that (1) for all the strategies, performance clearly tends to improve as nTree increases, and (2) the performance of the approaches that combine a fine-tuned deep representation with a random forest classifier, that is, FTVGG + random forest, FTGN + random forest, FTAN + random forest, and FTRN + random forest, is significantly superior to LFBR and GBFR.
A similar conclusion can be drawn from Table 5, where the performance of the approaches that combine a fine-tuned deep representation with an SVM classifier, that is, FTAN + SVM, FTGN + SVM, FTVGG + SVM, and FTRN + SVM, is significantly superior to that of LFBR and GBFR.
Lastly, we conclude this section by reporting the best performance of each strategy to compare three groups of strategies; they are (1) the approaches that adopt fine-tuned deep architectures (i.e., FTAN, FTGN, FTVGG, FTRN, and the method proposed by Bianco et al.