3D models are essential to modern-day Computer Vision (CV) applications, as 3D data representations are now widely used alongside the already popular 2D image representation for tasks such as object detection, classification, retrieval, and modelling. However, many problems are associated with the representation, description, indexing, matching, retrieval, and classification of the 3D models found in rapidly emerging domain-specific and 3D benchmark datasets. One such problem is the development of a robust, compact, yet computationally efficient 3D shape descriptor. Although knowledge-based 3D shape representation and matching methods ranging from simple to complicated have been proposed, the simple methods usually lack efficiency and the needed robustness (i.e. discriminating power), while the complicated methods, despite remarkably high retrieval and classification accuracies, are either computationally prohibitive or rely on remarkably large numbers of surface points for optimum performance, which negatively impacts processing speed and storage. Recently, Deep Learning (DL) methods have been used to develop highly robust and compact 3D shape descriptors that also achieve remarkably high retrieval and classification accuracies. However, DL approaches are highly data-driven, requiring large amounts of training data and high-powered GPUs to run successfully. Another important research problem is developing a 3D shape descriptor that can generalise across a wide range of datasets, each of which presents unique retrieval and classification challenges to the shape descriptor.

This thesis focuses on the knowledge-based approach and proposes three novel, robust, and computationally efficient methods for 3D shape retrieval and classification. Our first novel research contribution is a local 3D shape descriptor called the Augmented Point Pair Features Descriptor (APPFD). Our second novel contribution is the Hybrid Augmented Point Pair Signature (HAPPS), developed to further improve the overall robustness of the APPFD while providing invariance for rigid 3D objects. Finally, we propose an improved method called the Agglomeration of local APPFDs with Fisher Kernel and Gaussian Mixture Model (APPFD-FK-GMM), which aggregates d-dimensional local descriptors into a single, more compact vector representation with improved performance.

The proposed methods are statistically based and able to effectively describe the local and global geometry of 3D mesh or point-cloud surfaces using as few as 3500 points sampled from each surface, and they are capable of generalising across a wide range of datasets containing rigid and non-rigid 3D objects. The latter method produces robust, compact, and concise 3D shape signatures that support more efficient indexing and matching for retrieval and classification tasks. The accuracy and robustness of our methods have been thoroughly examined and compared against several other state-of-the-art (data-driven and knowledge-based) methods using nine different standard SHape REtrieval Contest (SHREC) 3D benchmark datasets, including the most recent. This thesis provides exhaustive quantitative and qualitative comparative analyses of the performance evaluation results for each dataset and retrieval challenge.
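To make the aggregation step of the APPFD-FK-GMM method concrete, the following Python sketch shows a generic Fisher Vector encoding of a set of d-dimensional local descriptors with a diagonal-covariance GMM, in the spirit described above. It is a minimal illustration under stated assumptions only: the descriptor dimensionality (64), the number of mixture components (16), and the fisher_vector helper are illustrative, not the exact pipeline or parameters used in this thesis.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(local_descriptors, gmm):
    """Encode a set of d-dimensional local descriptors (e.g. local APPFDs)
    into a single 2*K*d Fisher Vector using a fitted diagonal-covariance GMM."""
    X = np.atleast_2d(local_descriptors)               # (N, d)
    N, _ = X.shape

    gamma = gmm.predict_proba(X)                       # (N, K) soft assignments
    mu = gmm.means_                                    # (K, d)
    sigma = np.sqrt(gmm.covariances_)                  # (K, d) std devs (diag covariances)
    w = gmm.weights_                                   # (K,)

    # Normalised gradients of the log-likelihood w.r.t. the GMM means and std devs.
    diff = (X[:, None, :] - mu[None, :, :]) / sigma[None, :, :]              # (N, K, d)
    g_mu = (gamma[:, :, None] * diff).sum(axis=0) / (N * np.sqrt(w)[:, None])
    g_sigma = (gamma[:, :, None] * (diff ** 2 - 1)).sum(axis=0) / (N * np.sqrt(2 * w)[:, None])

    fv = np.concatenate([g_mu.ravel(), g_sigma.ravel()])
    fv = np.sign(fv) * np.sqrt(np.abs(fv))             # power (signed square-root) normalisation
    return fv / (np.linalg.norm(fv) + 1e-12)           # L2 normalisation

# Example usage: fit the GMM on local descriptors pooled from a training set,
# then encode each 3D object's set of local descriptors into one compact vector.
rng = np.random.default_rng(0)
train_pool = rng.normal(size=(5000, 64))               # placeholder for pooled local descriptors
gmm = GaussianMixture(n_components=16, covariance_type='diag', random_state=0).fit(train_pool)
object_descriptors = rng.normal(size=(3500, 64))       # one descriptor per sampled surface point
signature = fisher_vector(object_descriptors, gmm)     # fixed-length vector of size 2 * 16 * 64
```

In this generic scheme the GMM is fitted once on pooled local descriptors, and each 3D object is then encoded independently, yielding a fixed-length signature of size 2·K·d regardless of how many surface points were sampled, which is what enables the compact indexing and fast matching described above.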
The following Information Retrieval (IR) evaluation metrics were adopted to assess the performance (accuracy and robustness) of all shape descriptors: Nearest Neighbour (NN), First-Tier (FT), Second-Tier (ST), E-Measure (E), Discounted Cumulative Gain (DCG), mean Average Precision (mAP), normalised Discounted Cumulative Gain (nDCG), Area Under Curve (AUC), and Precision-Recall Curve (PRC) plots. The results of the experimental evaluations reveal outstanding retrieval performance by our proposed methods compared with several other state-of-the-art methods. We demonstrate the superiority of the HAPPS method over several other state-of-the-art methods on the SHREC’18 protein dataset. In several other experimental evaluations, our methods still outperform many of the state-of-the-art methods, ranking in the top two or three positions in most cases and competing very closely with the overall best-performing methods for each of those retrieval challenges and datasets. Generally, the APPFD method is robust to objects with holes (i.e. with large missing surface parts) and noise, and we demonstrate that both the APPFD and HAPPS methods are highly discriminative, efficient, and capable of effectively representing 3D point clouds and triangular meshes. In addition, we demonstrate the high performance of the APPFD-FK-GMM method, which rivals both the APPFD and HAPPS methods even with about a 98% reduction in the size of the final feature vector, thus providing both robustness and a compact representation of 3D objects for easier indexing and faster matching.
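For reference, the sketch below illustrates the standard SHREC-style computation of a subset of these metrics (NN, FT, ST, and mAP) from a pairwise distance matrix, treating every object in turn as a query; the remaining measures follow analogous definitions. The retrieval_metrics helper and the toy data are illustrative assumptions rather than the evaluation code used to produce the reported results.

```python
import numpy as np
from scipy.spatial.distance import cdist

def retrieval_metrics(dist, labels):
    """Compute NN, First-Tier, Second-Tier and mAP from a pairwise distance
    matrix and per-object class labels, treating every object as a query."""
    dist = np.asarray(dist, dtype=float)
    labels = np.asarray(labels)
    n = len(labels)
    nn, ft, st, ap = [], [], [], []

    for q in range(n):
        order = np.argsort(dist[q])
        order = order[order != q]                          # drop the query itself
        rel = (labels[order] == labels[q]).astype(float)   # binary relevance per rank
        k = int(rel.sum())                                 # class size minus the query
        if k == 0:
            continue
        nn.append(rel[0])                                  # Nearest Neighbour
        ft.append(rel[:k].sum() / k)                       # First-Tier
        st.append(rel[:2 * k].sum() / k)                   # Second-Tier
        precision_at_rank = np.cumsum(rel) / np.arange(1, n)
        ap.append((precision_at_rank * rel).sum() / k)     # Average Precision

    return {m: float(np.mean(v)) for m, v in
            zip(('NN', 'FT', 'ST', 'mAP'), (nn, ft, st, ap))}

# Example: rank objects by Euclidean distance between their shape signatures.
signatures = np.random.default_rng(1).normal(size=(40, 128))  # placeholder signatures
labels = np.repeat(np.arange(8), 5)                           # 8 classes, 5 objects each
print(retrieval_metrics(cdist(signatures, signatures), labels))
```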