
**Mumbai University > Computer Engineering > Sem 7 > Soft Computing**

**Marks:** 5 Marks

**Year:** May 2016

**1 Answer**


Explain the perceptron learning with example.


The Perceptron learning algorithm has been proved to converge for pattern sets that are linearly separable.

No such guarantee exists for the linearly non-separable case, because no solution cone exists in weight space. When the set of training patterns is linearly non-separable, then for any weight vector $W$ there will exist some training example $X_k$ that $W$ misclassifies.

Consequently, the Perceptron learning algorithm will continue to make weight changes indefinitely.
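The behaviour described above can be seen by running the perceptron learning rule on a non-separable pattern set such as XOR. The sketch below is illustrative: the learning rate, epoch limit, and the AND/XOR pattern sets are assumptions for the demo, not part of the original answer.

```python
# Minimal perceptron learning sketch (illustrative values, not from the answer).
# On a linearly separable set (AND) training reaches zero errors; on XOR,
# which is not linearly separable, some pattern is always misclassified.

def step(z):
    return 1 if z >= 0 else 0

def train(patterns, epochs=1000, lr=0.1):
    w = [0.0, 0.0]   # weights
    b = 0.0          # bias
    for _ in range(epochs):
        errors = 0
        for x, t in patterns:
            y = step(w[0] * x[0] + w[1] * x[1] + b)
            if y != t:
                errors += 1
                # Perceptron rule: w <- w + lr * (t - y) * x
                w[0] += lr * (t - y) * x[0]
                w[1] += lr * (t - y) * x[1]
                b    += lr * (t - y)
        if errors == 0:
            return w, b, True    # converged: all patterns classified
    return w, b, False           # weight changes never stopped

xor = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
print(train(xor)[2])  # False: no separating line exists for XOR
```

Swapping in the AND patterns (`[((0,0),0), ((0,1),0), ((1,0),0), ((1,1),1)]`) makes `train` return `True`, matching the convergence guarantee for separable sets.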

Example:

Input $x = (I_1, I_2, I_3) = (5,\ 3.2,\ 0.1)$.

Summed input $$= \sum_i w_iI_i = 5 w_1 + 3.2 w_2 + 0.1 w_3$$
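The summed input above can be evaluated numerically once weights and a threshold are chosen. The weight values and threshold below are illustrative assumptions (the original example does not specify them); the input vector is the one given above.

```python
# Computing the summed input and step output for x = (5, 3.2, 0.1).
# The weights and threshold are assumed for illustration only.

x = [5.0, 3.2, 0.1]
w = [0.2, -0.5, 1.0]   # assumed weights w1, w2, w3
theta = 0.0            # assumed threshold

# net = 5*w1 + 3.2*w2 + 0.1*w3, as in the formula above
net = sum(wi * xi for wi, xi in zip(w, x))
output = 1 if net >= theta else 0   # step activation
print(net, output)   # net is -0.5 here, so the output is 0
```

With these assumed weights the summed input is $5(0.2) + 3.2(-0.5) + 0.1(1.0) = -0.5$, which falls below the threshold, so the perceptron outputs 0.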

