LEADER 01000caa a22002652 4500
001 NLM151563519
003 DE-627
005 20250205224509.0
007 tu
008 231223s2004 xx ||||| 00| ||eng c
028 52 |a pubmed25n0505.xml
035 __ |a (DE-627)NLM151563519
035 __ |a (NLM)15484917
040 __ |a DE-627 |b ger |c DE-627 |e rakwb
041 __ |a eng
100 1_ |a Wang, Chi-Hsu |e verfasserin |4 aut
245 10 |a Dynamical optimal training for interval type-2 fuzzy neural network (T2FNN)
264 _1 |c 2004
336 __ |a Text |b txt |2 rdacontent
337 __ |a ohne Hilfsmittel zu benutzen |b n |2 rdamedia
338 __ |a Band |b nc |2 rdacarrier
500 __ |a Date Completed 16.11.2004
500 __ |a Date Revised 10.12.2019
500 __ |a published: Print
500 __ |a CommentIn: IEEE Trans Syst Man Cybern B Cybern. 2006 Oct;36(5):1206-9. doi: 10.1109/tcsi.2006.873184. - PMID 17036826
500 __ |a Citation Status MEDLINE
520 __ |a A type-2 fuzzy logic system (FLS) cascaded with a neural network, the type-2 fuzzy neural network (T2FNN), is presented in this paper to handle uncertainty with dynamical optimal learning. A T2FNN consists of a type-2 fuzzy linguistic process as the antecedent part and a two-layer interval neural network as the consequent part. A general T2FNN is computationally intensive due to the complexity of type-2 to type-1 reduction; therefore, the interval T2FNN is adopted in this paper to simplify the computational process. The dynamical optimal training algorithm for the two-layer consequent part of the interval T2FNN is first developed. The stable and optimal left and right learning rates for the interval neural network, in the sense of maximum error reduction, can be derived for each iteration of the back-propagation training process. It can also be shown that the two learning rates cannot both be negative. Furthermore, variation of the initial membership function (MF) parameters, i.e., the spread level of the uncertain means or the deviations of the interval Gaussian MFs, may affect the performance of the back-propagation training process. To achieve better overall performance, a genetic algorithm (GA) is designed to search for the optimal spread rate of the uncertain means and the optimal learning rate for the antecedent part. Several examples are fully illustrated. Excellent results are obtained for truck backing-up control and nonlinear system identification, which yield better performance than those using a type-1 FNN.
650 _4 |a Comparative Study
650 _4 |a Evaluation Study
650 _4 |a Journal Article
650 _4 |a Research Support, Non-U.S. Gov't
650 _4 |a Validation Study
700 1_ |a Cheng, Chun-Sheng |e verfasserin |4 aut
700 1_ |a Lee, Tsu-Tian |e verfasserin |4 aut
773 08 |i Enthalten in |t IEEE transactions on systems, man, and cybernetics. Part B, Cybernetics : a publication of the IEEE Systems, Man, and Cybernetics Society |d 1997 |g 34(2004), 3 vom: 20. Juni, Seite 1462-77 |w (DE-627)NLM098252887 |x 1083-4419 |7 nnns
773 18 |g volume:34 |g year:2004 |g number:3 |g day:20 |g month:06 |g pages:1462-77
912 __ |a GBV_USEFLAG_A
912 __ |a SYSFLAG_A
912 __ |a GBV_NLM
912 __ |a GBV_ILN_350
951 __ |a AR
952 __ |d 34 |j 2004 |e 3 |b 20 |c 06 |h 1462-77
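The abstract in field 520 above describes an antecedent built from interval Gaussian membership functions with uncertain means. The following Python sketch is only an illustration of that antecedent step, assuming uncertain-mean Gaussian MFs and a product t-norm; the function and variable names are assumptions, and the paper's two-layer interval consequent network, its optimal left/right learning rates, and the GA tuning are not reproduced here.

import math

def interval_gaussian_mf(x, m1, m2, sigma):
    # Lower/upper membership grades of an interval type-2 Gaussian MF
    # with uncertain mean in [m1, m2] and fixed spread sigma.
    def gauss(x, m):
        return math.exp(-0.5 * ((x - m) / sigma) ** 2)
    # Upper MF: equal to 1 over the uncertainty interval of the mean,
    # with Gaussian shoulders outside it.
    if x < m1:
        upper = gauss(x, m1)
    elif x > m2:
        upper = gauss(x, m2)
    else:
        upper = 1.0
    # Lower MF: the smaller of the two boundary Gaussians.
    lower = gauss(x, m2) if x <= (m1 + m2) / 2.0 else gauss(x, m1)
    return lower, upper

def firing_interval(inputs, rule_mfs):
    # Rule firing strength [f_lower, f_upper] using the product t-norm
    # over one interval MF per input dimension.
    f_lo, f_up = 1.0, 1.0
    for x, (m1, m2, sigma) in zip(inputs, rule_mfs):
        lo, up = interval_gaussian_mf(x, m1, m2, sigma)
        f_lo *= lo
        f_up *= up
    return f_lo, f_up

# Example: one rule over two inputs, (m1, m2, sigma) per input.
rule = [(0.0, 0.4, 1.0), (1.0, 1.5, 0.8)]
print(firing_interval([0.2, 1.1], rule))  # -> (lower, upper) firing strengths

The returned (f_lower, f_upper) pair is the interval firing strength on which the consequent part, and hence the left and right learning rates discussed in the abstract, would operate.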