Principal component analysis

From Citizendium

Latest revision as of 16:52, 4 August 2010


Principal component analysis (PCA) is a popular data analysis method with applications in data compression, signal and noise separation, model order reduction, pattern recognition, and feature extraction. In some contexts, PCA is also known as the Karhunen-Loève transform or the Hotelling transform.

The principal components (PCs) of a data set are its orthogonal components, ascertained by eigenanalysis of the data set's covariance matrix. The most important components lie along the directions of largest variance, and they reveal the uncorrelated structure of the data set. The data or signal can then be processed independently along each of these directions. The goal of PCA, therefore, is to decompose a data set (such as an image from a satellite) into an orthogonal set of principal components so that it can be represented accurately, and compactly, without storing the entire signal.
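The procedure just described can be sketched in a few lines of Python with NumPy (an illustrative sketch on synthetic data, not code from this article; the variable names are ours): center the data, form the covariance matrix, take its eigenvectors as the principal components, and project onto the leading component to compress the data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data set: 200 correlated 2-D samples (a stand-in for real measurements).
X = rng.normal(size=(200, 2)) @ np.array([[2.0, 0.0], [1.2, 0.5]])

# Center the data and form its covariance matrix.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)

# Eigenanalysis of the covariance matrix: the eigenvectors are the
# principal components, the eigenvalues the variance along each one.
eigvals, eigvecs = np.linalg.eigh(cov)

# Order components by decreasing variance (most important first).
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Compress: project onto the leading component only (2-D -> 1-D),
# then reconstruct an approximation of the original signal from it.
scores = Xc @ eigvecs[:, :1]
approx = scores @ eigvecs[:, :1].T + X.mean(axis=0)
```

Keeping only the components with the largest eigenvalues is what makes PCA useful for compression: `approx` stores one number per sample instead of two, yet captures most of the variance in the data.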