%0 Conference Proceedings
%T Probabilistic Principal Components and Mixtures, How This Works
%+ University of Wrocław [Poland] (UWr)
%+ Wroclaw University of Science and Technology
%A Bartkowiak, Anna, M.
%A Zimroz, Radoslaw
%Z Part 1: Full Keynote and Invited Papers
%< with peer review
%( Lecture Notes in Computer Science
%B 14th Computer Information Systems and Industrial Management (CISIM)
%C Warsaw, Poland
%Y Khalid Saeed
%Y Władysław Homenda
%I Springer
%3 Computer Information Systems and Industrial Management
%V LNCS-9339
%P 24-35
%8 2015-09-24
%D 2015
%R 10.1007/978-3-319-24369-6_2
%K Probabilistic principal components
%K Multi-variate normal distribution
%K Mixture models
%K Un-mixing multivariate data
%K Condition monitoring
%K Gearbox diagnostics
%K Healthy state
%K Probabilities a posteriori
%K Outliers
%Z Computer Science [cs]
%Z Humanities and Social Sciences/Library and information sciences, Conference papers
%X Classical Principal Components Analysis (PCA) is widely recognized as a method for dimensionality reduction and data visualization. It is a purely algebraic method: it solves an optimization problem fitted exactly to the gathered data vectors, with all their particularities, and no statistical significance tests are possible. An alternative is probabilistic principal component analysis (PPCA), which is formulated on probabilistic grounds. To do so, one has to know the probability distribution of the analyzed data; usually the Multi-Variate Gaussian (MVG) distribution is assumed. But what if the analyzed data are decidedly not MVG? We met such a problem when analyzing multivariate gearbox data derived from a heavy-duty machine, and we show here how we dealt with it. In our analysis, we assumed that the considered data are a mixture of two MVG groups; specifically, each sub-group follows a probabilistic principal component (PPC) distribution with an MVG error function. By applying Bayesian inference, we were then able to calculate, for each data vector x, its a posteriori probability of belonging to the data generated by the assumed model. After estimating the parameters of the assumed model, we obtained means, resting on a sound statistical basis, for constructing confidence boundaries of the data and finding outliers.
%G English
%Z TC 8
%2 https://inria.hal.science/hal-01444479/document
%2 https://inria.hal.science/hal-01444479/file/978-3-319-24369-6_2_Chapter.pdf
%L hal-01444479
%U https://inria.hal.science/hal-01444479
%~ SHS
%~ IFIP-LNCS
%~ IFIP
%~ IFIP-TC
%~ IFIP-TC8
%~ IFIP-CISIM
%~ IFIP-LNCS-9339
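
Note: the workflow described in the abstract (fit a two-component mixture, read off each vector's a posteriori membership probabilities, flag outliers via a likelihood-based confidence boundary) can be roughly illustrated with an off-the-shelf Gaussian mixture in Python. This is only a hedged sketch on synthetic data, not the authors' PPCA-mixture implementation; scikit-learn's GaussianMixture uses full-covariance Gaussian components rather than probabilistic principal component factor models, and the 99% cut-off below is an assumed illustrative threshold.

    # Minimal sketch (assumption: GaussianMixture as a stand-in for the PPCA mixture).
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Synthetic 15-dimensional data drawn from two Gaussian groups (illustrative only).
    X = np.vstack([
        rng.normal(loc=0.0, scale=1.0, size=(400, 15)),
        rng.normal(loc=3.0, scale=0.5, size=(100, 15)),
    ])

    # Fit a two-component mixture, mirroring the "two MVG sub-groups" assumption.
    gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(X)

    # A posteriori probability of each data vector x belonging to each component.
    posterior = gmm.predict_proba(X)

    # Per-sample log-likelihood under the fitted model; the lowest 1% are flagged
    # as lying outside an assumed 99% confidence boundary, i.e. candidate outliers.
    loglik = gmm.score_samples(X)
    threshold = np.quantile(loglik, 0.01)
    outliers = np.where(loglik < threshold)[0]

    print(posterior[:5].round(3))
    print(outliers[:10])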