(i) Compare RBF and MLP. (ii) How do you achieve fast learning in an ART 2 network?
1 Answer

(i) Compare RBF and MLP:

| RBF | MLP |
| --- | --- |
| 1. An RBFN has a single hidden layer. | 1. An MLP can have multiple hidden layers. |
| 2. In an RBFN the hidden-layer computation nodes are different from the output nodes. | 2. An MLP follows a common computational model in the hidden as well as the output layer. |
| 3. In an RBFN the hidden layer is non-linear and the output layer is linear. | 3. In an MLP the hidden and output layers are usually both non-linear. |
| 4. The argument of the RBF activation function is the Euclidean norm (distance) between the input vector and the centre. | 4. Each hidden unit computes the inner product of the input vector and its synaptic weight vector. |
| 5. Exponentially decaying, localized characteristics. | 5. Global approximation to the non-linear input-output mapping. |
| 6. An RBFN is fully connected. | 6. An MLP can be partially connected. |
| 7. In an RBFN the hidden nodes operate differently, i.e. they have different models. | 7. In an MLP the hidden nodes share a common model, though not necessarily the same activation function. |
| 8. In an RBFN we take the difference between the input vector and the weight (centre) vector. | 8. In an MLP we take the inner product of the input vector and the weight vector. |
| 9. In an RBFN, one layer is trained at a time. | 9. In an MLP, all layers are trained simultaneously. |
| 10. An RBFN trains faster. | 10. An MLP trains more slowly. |
| 11. An RBFN is slower at recall (when used in practice). | 11. An MLP is faster at recall (when used in practice). |
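
The contrast in rows 3, 4, and 8 is easiest to see in code. Below is a minimal NumPy sketch (an illustration, not part of the original answer; the weight values, centres, and sigma are all assumed) of how one hidden layer computes its activations in each model: the RBF units apply a Gaussian to the Euclidean distance from a centre, while the MLP units apply a sigmoid to an inner product.

```python
import numpy as np

def rbf_hidden(x, centres, sigma=1.0):
    """RBF hidden layer: Gaussian of the Euclidean distance
    between the input and each centre (table row 4)."""
    r = np.linalg.norm(centres - x, axis=1)    # r_j = ||x - c_j||
    return np.exp(-r**2 / (2 * sigma**2))      # local, exponentially decaying

def mlp_hidden(x, W, b):
    """MLP hidden layer: sigmoid of the inner product of the
    input and each weight vector (table row 4)."""
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))  # global response

x = np.array([0.5, -1.0])
centres = np.array([[0.0, 0.0], [1.0, -1.0]])  # assumed RBF centres
W = np.array([[0.5, -1.0], [1.0, 0.3]])        # assumed MLP weights
b = np.zeros(2)

print(rbf_hidden(x, centres))  # difference-based (row 8, RBF column)
print(mlp_hidden(x, W, b))     # product-based (row 8, MLP column)
```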

(ii) How do you achieve fast learning in an ART 2 network?

  • RBF networks are increasingly popular neural networks with diverse applications, and are the main rival to the multilayer perceptron.
  • The input layer is simply a fan-out and does no processing.
  • The second, or hidden, layer performs a non-linear mapping from the input space into a higher-dimensional space in which the patterns become linearly separable.
  • The weights of the hidden layer correspond to cluster centres; the output function is usually a Gaussian.
  • The hidden units provide a set of functions that constitutes an arbitrary "basis" for the input patterns when they are expanded into the hidden space. These functions are called radial basis functions.
  • In most applications the hidden space is of high dimensionality.
  • The output layer performs a simple weighted sum with a linear output.
  • A mathematical justification for the rationale of a non-linear transformation followed by a linear transformation may be traced back to an early paper by Cover.
  • The underlying justification is found in Cover's theorem, which states that "a complex pattern-classification problem cast in a high-dimensional space non-linearly is more likely to be linearly separable than in a low-dimensional space."
  • This is the reason for making the dimension of the hidden space in an RBF network high.
  • We know that once we have linearly separable patterns, the classification problem is easy to solve.
  • It is easy to have an RBF network perform classification. We simply need an output function y_k(x) for each class k, with appropriate targets; when the network is trained it will automatically classify new patterns.
  • The most commonly used radial basis function is a Gaussian function.
  • In an RBF network, r is the distance from the cluster centre.
  • The distance measure from the cluster centre is usually the Euclidean distance.
  • For each neuron in the hidden layer, the weights represent the coordinates of the centre of the cluster; a small numerical sketch of this is given after the list.
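
As a concrete illustration of Cover's theorem and Gaussian basis functions, here is a hedged sketch (not part of the original answer; the centres, sigma, and output weights are chosen by hand, not trained). It uses the Gaussian basis φ(r) = exp(−r² / 2σ²), where r = ||x − c|| is the Euclidean distance from the input to a centre. XOR is not linearly separable in the input space, but after the non-linear RBF expansion a plain linear output layer solves it:

```python
import numpy as np

def gaussian_phi(x, centres, sigma):
    """Gaussian basis: phi_j(x) = exp(-||x - c_j||^2 / (2 sigma^2))."""
    r = np.linalg.norm(centres - x, axis=1)  # Euclidean distance to each centre
    return np.exp(-r**2 / (2 * sigma**2))

# Centres placed at the two "XOR = 0" corners -- a standard textbook choice.
centres = np.array([[0.0, 0.0], [1.0, 1.0]])

# Linear output layer found by inspection (hypothetical weights):
# y = phi_1 + phi_2 - 1, with y > 0 meaning XOR = 0.
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    phi = gaussian_phi(np.array(x, dtype=float), centres, sigma=np.sqrt(0.5))
    y = phi[0] + phi[1] - 1.0                # simple weighted sum, linear output
    print(x, "-> XOR =", 0 if y > 0 else 1)
```

In the expanded (phi_1, phi_2) space, the two "XOR = 1" inputs collapse onto one point and the two "XOR = 0" inputs land on the far side of the line phi_1 + phi_2 = 1, so a linear boundary suffices, exactly as Cover's theorem suggests.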