Activity 13: Perceptron

12:37, 28 Oct 2019

For this activity [1], I used the features extracted from the fruit images (50 each of apples, oranges, and bananas) in the previous activity. The feature space in $a^*$-$b^*$ (obtained from the $L^*a^*b^*$ color space) is shown in Fig. 1. Since we are working with a linear classifier for now, we process only two classes at a time.
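As a rough illustration of how such features could be obtained, here is a minimal sketch assuming each fruit's feature vector is simply the mean $a^*$ and $b^*$ of its image; the file path and the averaging step are my own assumptions, since the actual features were computed in the previous activity.

```python
import numpy as np
from skimage import color, io

def ab_features(path):
    """Mean a* and b* of an image, taken here as a 2-D feature vector.
    (Hypothetical helper; the real features come from the previous activity.)"""
    rgb = io.imread(path)[..., :3]   # drop alpha channel if present
    lab = color.rgb2lab(rgb)         # convert RGB to CIE L*a*b*
    return np.array([lab[..., 1].mean(), lab[..., 2].mean()])
```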

We design the perceptron so that it follows a simple weight update rule

$$\mathbf{w} \leftarrow \mathbf{w} + \eta\,(y - \hat{y})\,\mathbf{x},$$

where $\mathbf{x}$ is the input feature vector, $y$ is the ground truth label, $\hat{y}$ is the predicted label, and $\eta$ is the learning rate, which we set to $10^{-2}$. The perceptron is trained for 100 epochs or until the sum of squares error (SSE) drops below a selected tolerance, and the decision boundary is obtained from the final weights. The decision boundaries and decision contours for each class pair are shown in Fig. 2.
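A minimal sketch of this training loop is shown below, assuming 0/1 class labels and a unit-step activation; the bias handling, tolerance value, and the boundary_b helper for plotting the line $w_0 a^* + w_1 b^* + w_2 = 0$ are my own choices, not necessarily those used in the activity.

```python
import numpy as np

def train_perceptron(X, y, eta=1e-2, max_epochs=100, tol=1e-6):
    """Perceptron with update rule w <- w + eta * (y - y_hat) * x.
    Stops after max_epochs or once the per-epoch SSE drops below tol.
    Assumes X has shape (n_samples, 2) and y holds 0/1 labels."""
    Xb = np.hstack([X, np.ones((len(X), 1))])    # append bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(max_epochs):
        sse = 0.0
        for xi, yi in zip(Xb, y):
            y_hat = 1.0 if xi @ w >= 0 else 0.0  # unit-step activation
            w += eta * (yi - y_hat) * xi         # weight update rule
            sse += (yi - y_hat) ** 2
        if sse < tol:                            # SSE stopping criterion
            break
    return w

def boundary_b(w, a):
    """b* value on the decision boundary w0*a* + w1*b* + w2 = 0."""
    return -(w[0] * a + w[2]) / w[1]
```

For the banana-apple pair in Fig. 2(a), for example, X would be the 100 stacked $(a^*, b^*)$ vectors with y = 0 for bananas and 1 for apples, and the boundary is drawn by evaluating boundary_b over the range of $a^*$ values.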

Figure 1: Feature space in $a^*$-$b^*$.

(a) banana-apple
(b) banana-orange
(c) apple-orange

Figure 2: Decision boundaries and decision contours for each class pair.

References

  1. M. N. Soriano, A13 - Perceptron (2019).

Keywords

image processing
computer vision
feature extraction
perceptron
object classification