Data source: CROSBI

A Cascade-Correlation Learning Network with Smoothing (CROSBI ID 466494)

Conference paper in proceedings | original scientific paper | international peer review

Petrović, Ivan ; Baotić, Mato ; Perić, Nedjeljko: A Cascade-Correlation Learning Network with Smoothing // Proceedings of the International ICSC/IFAC Symposium on Neural Computation - NC'98 / Heiss M. (ed.). Academic Press, 1998, pp. 1023-1029

Authorship details

Petrović, Ivan ; Baotić, Mato ; Perić, Nedjeljko

English

A Cascade-Correlation Learning Network with Smoothing

A cascade-correlation learning network (CCLN) is a popular supervised learning architecture that gradually grows hidden neurons with fixed nonlinear activation functions, adding them to the network one by one during training. Because the activation functions are fixed, cascaded connections from the existing neurons to each new candidate neuron are required to approximate high-order nonlinearity. The major drawback of a CCLN is that its error surface is very jagged and unsmooth, because the maximum-correlation criterion consistently pushes the hidden neurons towards their saturated extreme values instead of their active region. To alleviate this drawback of the original CCLN, two new cascade-correlation learning networks (CCLNS1 and CCLNS2) are proposed that smooth the error surface. Smoothing is performed by (re)training the gains of the hidden neurons' activation functions. In CCLNS1 smoothing is enabled by using the sign functions of the neurons' outputs in the cascaded connections, while in CCLNS2 each hidden neuron has two activation functions: a fixed one for the cascaded connections and a trainable one for the connections to the neurons in the output layer. The performance of the network structures is tested by training them to approximate three nonlinear functions. Both proposed structures perform much better than the original CCLN, and CCLNS1 gives slightly better results than CCLNS2.
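The candidate-correlation step described above can be made concrete with a short sketch. The following NumPy code is not the authors' implementation: the finite-difference training loop, the toy target, and the way the gain is adapted during candidate training are simplifying assumptions, intended only to illustrate the correlation criterion that drives hidden neurons towards saturation and the trainable-gain idea behind the proposed smoothing.

import numpy as np

rng = np.random.default_rng(0)

def candidate_correlation(v, residuals):
    # Fahlman-style correlation criterion S: magnitude of the covariance between
    # a candidate neuron's output v and the residual output errors (one column
    # per output unit), summed over output units.
    v_c = v - v.mean()
    e_c = residuals - residuals.mean(axis=0)
    return np.abs(v_c @ e_c).sum()

def train_candidate(inputs, residuals, epochs=200, lr=0.05, gain=1.0, train_gain=False):
    # Train a single candidate hidden neuron to maximise its correlation with
    # the residual error (the CCLN candidate phase). With train_gain=True the
    # activation gain is adapted as well, loosely mirroring the gain
    # (re)training that the abstract uses to smooth the error surface.
    n_in = inputs.shape[1]
    w = rng.normal(scale=0.1, size=n_in)
    eps = 1e-5
    for _ in range(epochs):
        z = inputs @ w
        base = candidate_correlation(np.tanh(gain * z), residuals)
        # Crude finite-difference gradient ascent, kept simple on purpose;
        # the original formulation uses analytic gradients.
        grad_w = np.zeros_like(w)
        for i in range(n_in):
            w_p = w.copy()
            w_p[i] += eps
            grad_w[i] = (candidate_correlation(np.tanh(gain * (inputs @ w_p)), residuals) - base) / eps
        w += lr * grad_w
        if train_gain:
            grad_g = (candidate_correlation(np.tanh((gain + eps) * z), residuals) - base) / eps
            gain += lr * grad_g
    return w, gain

# Toy problem: residual error of a bias-only output on a 1-D nonlinear target.
x = np.linspace(-1.0, 1.0, 64).reshape(-1, 1)
inputs = np.hstack([x, np.ones_like(x)])      # input plus bias column
target = np.sin(3.0 * x)
residuals = target - target.mean()

w_fixed, _ = train_candidate(inputs, residuals)                   # plain CCLN candidate
w_gain, g = train_candidate(inputs, residuals, train_gain=True)   # gain trained as well
print("correlation, fixed gain  :", candidate_correlation(np.tanh(inputs @ w_fixed), residuals))
print("correlation, trained gain:", candidate_correlation(np.tanh(g * (inputs @ w_gain)), residuals))

In the full cascade-correlation procedure the winning candidate is then frozen and installed as a new hidden unit, its output is added to the inputs of later candidates, and the output weights are retrained; the sketch stops after the candidate phase and omits the CCLNS1/CCLNS2 specifics (sign functions in the cascaded connections, separate fixed and trainable activation functions).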

Cascade Correlation; Neural Network; Learning Network; Smoothing


Paper details

1023-1029

1998.

published

Parent publication details

Proceedings on the International ICSC/IFAC Symposium on Neural Computation - NC'98

Heiss M.

Academic Press

Conference details

International ICSC/IFAC Symposium on Neural Computation, NC'98

lecture

23.09.1998-25.09.1998

Vienna, Austria

Related research fields

Electrical engineering