Lab 10: Principal Component Analysis

This lab will introduce the technique of principal component analysis (PCA). We will see how essential information about a data set is contained in its covariance matrix, and we will use the eigenvectors of this matrix to “transform” our signal to the PCA domain. This is analogous to the frequency domain for a time signal! We will reconstruct a given face using a varying number of principal components, and we will examine the reconstruction error to quantitatively measure our accuracy. How should this accuracy compare to the DFT?

• Click here to download the assignment.

• Click here to download the data for the assignment.

• Due on April 6th by 5pm.

PCA decomposition (1.1 and 1.2)

In this lab, we will use the PCA transform to perform image compression on the ORL face database, which contains 10 different images of 40 distinct subjects. To do this, the first step is to load the faces, the mean face, and the sample covariance matrix from the pickle file $\p{lab10data.pkl}$.

import pickle as pkl

# Loading the data

dictionary = pkl.load(open('lab10data/lab10data.pkl', 'rb'))
faces = dictionary['faces']
mu = dictionary['mu']
sigma = dictionary['sigma']

This pickle file contains three arrays: $\p{faces}$, $\p{mu}$ and $\p{sigma}$. The array $\p{faces}$ contains all the faces in the dataset and has shape $\p{(112,92,40)}$. The array $\p{mu}$ contains the mean face and has shape $\p{(112,92)}$. The array $\p{sigma}$ contains the sample covariance matrix of the vectorized faces and has shape $\p{(10304,10304)}$.
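Although not required by the assignment, it is worth confirming that the loaded arrays have the expected shapes before moving on:

# Optional sanity check on the loaded arrays
print(faces.shape)   # expected: (112, 92, 40)
print(mu.shape)      # expected: (112, 92)
print(sigma.shape)   # expected: (10304, 10304)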

Given the dataset’s covariance matrix $\p{sigma}$, we use the unitary matrix corresponding to the first $K$ eigenvectors of $\p{sigma}$ to define the PCA transform on $K$ components. Thus, we need to implement a class that, given a sample covariance matrix and the number of principal components $K$, calculates the first $K$ eigenvectors of the sample covariance matrix (i.e., the $K$ eigenvectors corresponding to the $K$ largest eigenvalues). This class is defined as follows.

import scipy.sparse.linalg

# 1.1

class K_eigenvectors():
    
    def __init__(self, X, K):
        
        # Sample covariance matrix and number of principal components
        self.X = X
        self.K = K
        
    def solve(self):
        
        # eigs returns the K eigenpairs of largest magnitude (the default
        # which='LM'); since a covariance matrix is positive semidefinite,
        # these correspond to the K largest eigenvalues
        L, V = scipy.sparse.linalg.eigs(self.X, k=self.K)
        
        # Keep the first K columns (a no-op here, since eigs already
        # returns exactly K eigenvectors)
        return V[:,0:self.K]

We test this class by instantiating an object with arguments $\p{sigma}$ and $\p{K=20}$. As a sanity check, we also verify that the resulting eigenvector matrix (stored in the variable $\p{K\_eig}$) is unitary by comparing $\p{K\_eig}^H\p{K\_eig}$ with the $20 \times 20$ identity matrix. For larger values of $\p{K}$, this matrix might not be unitary due to numerical errors involved in the computation of the eigendecomposition using $\p{scipy.sparse.linalg.eigs}$.

import numpy as np
import lab10classes as classes

# 1.1

K = 20

eig_obj = classes.K_eigenvectors(sigma, K)
K_eig = eig_obj.solve()

# Checking if unitary

print('Is the K eigenvector matrix unitary?', 
      np.allclose(np.eye(K),np.matmul(np.conj(np.transpose(K_eig)),K_eig),rtol=1e-03, atol=1e-03))
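If exact unitarity matters for larger $\p{K}$, one alternative (not used in this lab, and assuming SciPy 1.5 or newer) is the dense symmetric solver $\p{scipy.linalg.eigh}$, whose eigenvectors are orthonormal to machine precision, at the cost of substantially more computation for a $10304 \times 10304$ matrix:

import scipy.linalg

# Sketch of a dense alternative: eigh sorts eigenvalues in ascending
# order, so we request the top K and reverse the column order
N = sigma.shape[0]
L_exact, V_exact = scipy.linalg.eigh(sigma, subset_by_index=[N - K, N - 1])
K_eig_exact = V_exact[:, ::-1]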

Using the $\p{K\_eigenvectors}$ class, we can now compute the PCA transform of the faces in the dataset. To do this, we build a class that takes as inputs an image containing a face, the mean face, and the eigenfaces and returns the projection in the space of eigenfaces. Note that in this class both the sample face and the mean face have to be vectorized, i.e., they need to be represented as vectors corresponding to a stacking of the columns of their original matrix representations.

import numpy as np

# 1.2
    
class Eigendecomposition():
    
    def __init__(self, face, mu, eigenfaces):
        
        # Vectorize the face and the mean face by stacking their columns
        # (Fortran, i.e., column-major, order)
        self.x = np.ravel(face, 'F')
        self.mu = np.ravel(mu, 'F')
        self.V = eigenfaces
        
    def solve(self):
        
        # Project the centered face onto the eigenfaces
        return np.matmul(np.conj(np.transpose(self.V)),self.x-self.mu)
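
In matrix form, with $V_K$ denoting the matrix whose columns are the $K$ eigenfaces, the $\p{solve}$ method computes the PCA coefficients

$$ w = V_K^H (x - \mu), $$

where $x$ is the vectorized face and $\mu$ the vectorized mean face. The inverse transform, used for reconstruction in section 2.1, is $\hat{x} = V_K w + \mu$.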

We test this class on the face with index 20 using the following script.

# 1.2

face_idx = 20

eigdec_obj = classes.Eigendecomposition(faces[:,:,face_idx], mu, K_eig)
coefficients = eigdec_obj.solve()
print('The coefficients of the ', K, ' principal components are:', coefficients)

Reconstruction (2.1 and 2.2)

After compressing the faces, we now want to reconstruct them to observe the error incurred when representing them with only $K$ principal components. To do that, we write the class $\p{Reconstruction}$.

# 2.1

class Reconstruction():
    
    def __init__(self, mu, eigenfaces, coefficients):
        
        # Vectorize the mean face by stacking its columns
        self.mu = np.ravel(mu, 'F')
        # The number of principal components is inferred from the inputs
        self.K = eigenfaces.shape[1]
        self.V = eigenfaces
        self.w = coefficients
        
    def solve(self):
        
        # Inverse PCA: V w + mu. The magnitude discards the spurious
        # imaginary parts introduced by eigs, and reshaping to (92,112)
        # followed by a transpose undoes the column-major vectorization
        return np.transpose((np.abs(np.matmul(self.V,self.w) + self.mu)).reshape((92,112)))

This class takes the mean face, the $K$ eigenfaces, and the corresponding $K$ PCA coefficients of the face we are trying to reconstruct. Note that objects of this class do not need to take $K$ as an argument, as it can be inferred from $\p{eigenfaces}$ or $\p{coefficients}$. Also note that the mean face $\p{mu}$ is assumed to be in its original $\p{(112,92)}$ format and is vectorized column-first inside the class. The $\p{solve}$ method implements the inverse PCA and reshapes the resulting reconstructed image to its original format. Because we originally vectorized the image by stacking its columns, we reshape it to shape $\p{(92,112)}$ and then transpose it to recover the original dimensions.
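To see why this reshape-and-transpose recovers the image, here is a minimal round-trip check (not part of the assignment) on an arbitrary face:

# Optional check: column-major raveling is undone by a C-order reshape
# to the transposed shape followed by a transpose
x = np.ravel(faces[:,:,0], 'F')            # stack columns: shape (10304,)
x_img = np.transpose(x.reshape((92,112)))  # back to shape (112,92)
print(np.allclose(x_img, faces[:,:,0]))    # expected: True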

Next we test this class and observe the error that we make when compressing face 20 with $K=20$ principal components. The following script reconstructs the image, then displays and saves both the original image and its reconstruction.

# 2.1

from PIL import Image as im

recon_obj = classes.Reconstruction(mu, K_eig, coefficients)
recon_face = recon_obj.solve()

og_face = faces[:,:,face_idx]

# Display and save the original face
image = im.fromarray(og_face.astype(np.uint8), mode='L')
image.show()
image.save("original.png")

# Display and save the reconstructed face
recon_image = im.fromarray(recon_face.astype(np.uint8), mode='L')
recon_image.show()
recon_image.save("recon.png")

As an example, the faces of subjects 20–23 and the images obtained after reconstruction with $K=20$ principal components are shown below.

To conclude, we calculate the error involved in the reconstruction of the face of subject 20 and compare it with the error made when using no principal components (i.e., $K=0$), which is equivalent to approximating the face with the mean face.
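Explicitly, writing $\mathcal{E}_K = \|x - \hat{x}_K\|^2$ for the energy of the reconstruction error with $K$ principal components (so that $\hat{x}_0 = \mu$), the error reduction ratio for any $K > 0$ is

$$ \frac{\mathcal{E}_0 - \mathcal{E}_K}{\mathcal{E}_0}. $$

Both error energies and this ratio are calculated using the following script.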

# 2.2

# Error energies: squared Frobenius norms of the reconstruction errors
error_energy = np.linalg.norm(og_face-recon_face)**2
error_K_0 = np.linalg.norm(og_face-mu)**2
print('Error energy: ', error_energy)
print('Error energy for K = 0: ', error_K_0)
print('Error reduction: ', (error_K_0-error_energy)/error_K_0)

For subject 20, the script reports a substantial error reduction ratio, confirming that $K=20$ principal components approximate the face far better than the mean face alone.
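To study how the accuracy improves with the number of components (and, eventually, to compare with the DFT), one can sweep over several values of $K$; the following sketch is illustrative, with the $K$ values and variable names being our own choices rather than part of the assignment:

# Sketch: error reduction ratio as a function of K
for K_test in [5, 10, 20, 50]:
    V_test = classes.K_eigenvectors(sigma, K_test).solve()
    w_test = classes.Eigendecomposition(og_face, mu, V_test).solve()
    face_hat = classes.Reconstruction(mu, V_test, w_test).solve()
    err = np.linalg.norm(og_face - face_hat)**2
    print('K =', K_test, 'error reduction:', (error_K_0 - err)/error_K_0)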

Code links

The code used to implement the steps above can be downloaded from ESE224_LAB10.