%0 Conference Proceedings
%T Gaussian Mixture Models for Classification and Hypothesis Tests Under Differential Privacy
%+ Amazon
%+ Purdue University [West Lafayette]
%+ Department of Computer Science [Dallas] (University of Texas at Dallas)
%+ Adana Science and Technology University
%A Tong, Xiaosu
%A Xi, Bowei
%A Kantarcioglu, Murat
%A Inan, Ali
%Z Part 2: Privacy
%< peer reviewed
%( Lecture Notes in Computer Science
%B 31st IFIP Annual Conference on Data and Applications Security and Privacy (DBSEC)
%C Philadelphia, PA, United States
%Y Giovanni Livraga
%Y Sencun Zhu
%I Springer International Publishing
%3 Data and Applications Security and Privacy XXXI
%V LNCS-10359
%P 123-141
%8 2017-07-19
%D 2017
%R 10.1007/978-3-319-61176-1_7
%K Differential privacy
%K Statistical database
%K Mixture model
%K Classification
%K Hypothesis test
%Z Computer Science [cs]
%Z Conference papers
%X Many statistical models are built from very basic statistics: mean vectors, variances, and covariances. Gaussian mixture models are such models. When a data set contains sensitive information and cannot be released directly to users, such models can still be constructed easily from noise-added query responses and provide preliminary results to users. Although the queried basic statistics satisfy the differential privacy guarantee, the more complex models built from these statistics may not. However, it is up to the users to decide how to query a database and how to further use the query results. In this article, our goal is to understand the impact of the differential privacy mechanism on Gaussian mixture models. Our approach queries basic statistics from a database under differential privacy protection and uses the noise-added responses to build classifiers and perform hypothesis tests. We find that adding Laplace noise can have a non-negligible effect on the model outputs; for example, the variance-covariance matrix after noise addition may no longer be positive definite. We propose a heuristic algorithm to repair the noise-added variance-covariance matrix. We then examine the classification error when using the noise-added responses, through experiments with both simulated and real-life data, and demonstrate under which conditions the impact of the added noise can be reduced. We compute the exact type I and type II errors under differential privacy for the one-sample z test, the one-sample t test, and the two-sample t test with equal variances. We then show under which conditions a hypothesis test returns a reliable result given differentially private means, variances, and covariances.
%G English
%Z TC 11
%Z WG 11.3
%2 https://inria.hal.science/hal-01684357/document
%2 https://inria.hal.science/hal-01684357/file/453481_1_En_7_Chapter.pdf
%L hal-01684357
%U https://inria.hal.science/hal-01684357
%~ IFIP-LNCS
%~ IFIP
%~ IFIP-TC
%~ IFIP-WG
%~ IFIP-TC11
%~ IFIP-WG11-3
%~ IFIP-DBSEC
%~ IFIP-LNCS-10359
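
The abstract notes that Laplace noise added to queried covariances can break positive definiteness and that the paper proposes a heuristic repair. As a minimal illustration only, and not the authors' algorithm, the following Python sketch perturbs a sample variance-covariance matrix with Laplace noise and then repairs it by clipping negative eigenvalues; the privacy budget, noise scale, and clipping floor are assumed values chosen for demonstration.

# Illustrative sketch (not the paper's heuristic): add Laplace noise to a
# sample variance-covariance matrix, as a differentially private query might,
# then restore positive semidefiniteness by clipping negative eigenvalues.
# The values of epsilon, the noise scale, and the floor are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy data set: 200 observations from a 3-dimensional Gaussian.
X = rng.multivariate_normal(mean=[0.0, 0.0, 0.0],
                            cov=[[1.0, 0.8, 0.3],
                                 [0.8, 1.0, 0.5],
                                 [0.3, 0.5, 1.0]],
                            size=200)
cov = np.cov(X, rowvar=False)

# Add symmetric Laplace noise entrywise; in a real system the scale would be
# derived from the query sensitivity and the privacy budget.
epsilon = 0.5                      # assumed privacy budget
scale = 1.0 / epsilon              # assumed noise scale for illustration
noise = rng.laplace(0.0, scale, size=cov.shape)
noise = (noise + noise.T) / 2.0    # keep the perturbed matrix symmetric
noisy_cov = cov + noise
print("smallest eigenvalue after noise:", np.linalg.eigvalsh(noisy_cov).min())

def repair_covariance(m, floor=1e-6):
    """Project onto the positive semidefinite cone by eigenvalue clipping."""
    vals, vecs = np.linalg.eigh(m)
    vals = np.clip(vals, floor, None)
    return vecs @ np.diag(vals) @ vecs.T

repaired = repair_covariance(noisy_cov)
print("smallest eigenvalue after repair:", np.linalg.eigvalsh(repaired).min())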