In standard PCA, one collects n observed vectors x_1, . . . , x_n in R^d and forms the matrix X = [x_1, . . . , x_n]. One then finds the best rank-r approximation by truncating the singular values, as prescribed by the Eckart–Young theorem. One of the main drawbacks of PCA is that the singular vectors are very sensitive to outliers. Robust PCA (RPCA) is a matrix decomposition that allows one to recover a low-rank matrix L_0 corrupted both by noise and by a sparse matrix S_0 of outliers with potentially very large values. RPCA has been used extensively in a very wide range of applications, from genetics to video processing. One of the main ideas behind Robust PCA is to reconstruct the matrices L_0 and S_0 by solving the convex programming problem
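As a point of comparison, the truncated-SVD step underlying standard PCA can be sketched as follows (a minimal NumPy illustration; the matrix X and rank r are arbitrary placeholders, not data from this work):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 30))  # placeholder data matrix
r = 5                              # target rank

# Full SVD, then keep only the r largest singular values.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
L = U[:, :r] * s[:r] @ Vt[:r, :]

# By the Eckart-Young theorem, L is the best rank-r approximation of X
# in both Frobenius and spectral norm; the Frobenius error equals the
# root-sum-of-squares of the discarded singular values.
```

A single large outlier column in X can tilt the leading singular vectors arbitrarily, which is the sensitivity RPCA is designed to avoid.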

min_{L,S} ||L||_{∗} + λ ||S||_1 s.t. ||X − L − S|| ≤ η.
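In the noiseless case (η = 0), this program is commonly solved by an ADMM scheme that alternates singular value thresholding for L with entrywise soft-thresholding for S. The sketch below is one such standard solver, not the method of this work; the choices λ = 1/√max(m, n) and the step size μ are conventional heuristics from the RPCA literature:

```python
import numpy as np

def soft(A, t):
    # Entrywise soft-thresholding: proximal operator of t * ||.||_1.
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def svt(A, t):
    # Singular value thresholding: proximal operator of t * ||.||_*.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U * np.maximum(s - t, 0.0) @ Vt

def rpca_admm(X, lam=None, mu=None, n_iter=500, tol=1e-7):
    # ADMM for: min ||L||_* + lam ||S||_1  s.t.  X = L + S (eta = 0).
    m, n = X.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))          # conventional choice
    if mu is None:
        mu = 0.25 * m * n / np.sum(np.abs(X))   # heuristic step size
    S = np.zeros_like(X)
    Y = np.zeros_like(X)                        # scaled dual variable
    for _ in range(n_iter):
        L = svt(X - S + Y / mu, 1.0 / mu)
        S = soft(X - L + Y / mu, lam / mu)
        Y = Y + mu * (X - L - S)
        if np.linalg.norm(X - L - S) <= tol * np.linalg.norm(X):
            break
    return L, S
```

Under the usual incoherence and sparsity conditions, this recovers (L_0, S_0) from X = L_0 + S_0 to high accuracy.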

The original theoretical analysis of this estimator's accuracy is well known to be technically very involved. The goal of the present work is to propose an elementary analysis of Robust PCA using the descent cone approach of Amelunxen et al. and a null space-type property on the eigenvectors of the original low-rank matrix.