close all;

% Normalize and visualize the data
% (z-score normalization: subtract the column means of X and divide each
% column by its population standard deviation)
Xn = (X - repmat(mean(X), size(X,1), 1)) * inv(diag(std(X,1)'));
% classr(X,L,pays)

% Normalize and project Xc, using the mean and standard deviation of X
Xnc = (Xc - repmat(mean(X), size(Xc,1), 1)) * inv(diag(std(X,1)'));
% figure
% classr2(X,Xnc,L,pays,paysc)

% Options for the knnclassify function:
%   'nearest'    Majority rule with nearest-point tie-break
%   'random'     Majority rule with random-point tie-break
%   'consensus'  Consensus rule
% The default behavior is majority rule: a sample point is assigned to the
% class from which the majority of its K nearest neighbors come. Use
% 'consensus' to require a consensus rather than a simple majority. With
% the consensus option, points whose K nearest neighbors are not all from
% the same class are not assigned to any class; the output CLASS for these
% points is NaN for numerical groups or '' for string-named groups. When
% classifying into more than two groups, or when K is even, it may be
% necessary to break a tie in the number of nearest neighbors. The options
% are 'random', which selects a random tiebreaker, and 'nearest', which
% uses the nearest neighbor among the tied groups to break the tie. The
% default behavior is majority rule with nearest tie-break.
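% Illustrative sketch (toy data, not part of the original exercise) of the
% 'consensus' rule described above: with K = 2, a query whose two nearest
% neighbors belong to different classes is left unassigned, so knnclassify
% returns NaN for it. (Note: knnclassify has been removed from recent
% MATLAB releases; fitcknn/predict is the current replacement.)
Xtoy = [0 0; 0 1; 1 0; 1 1];   % four training points
Ltoy = [1; 1; 2; 2];           % their class labels
q    = [0.5 0];                % equidistant from one point of each class
knnclassify(q, Xtoy, Ltoy, 2, 'euclidean', 'consensus')  % returns NaN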
% Apply the knnclassify function
class1n = knnclassify(Xnc,Xn,L,1,'euclidean','nearest')
class2n = knnclassify(Xnc,Xn,L,2,'euclidean','nearest')
class3n = knnclassify(Xnc,Xn,L,3,'euclidean','nearest')
class4n = knnclassify(Xnc,Xn,L,4,'euclidean','nearest')
class1r = knnclassify(Xnc,Xn,L,1,'euclidean','random')
class2r = knnclassify(Xnc,Xn,L,2,'euclidean','random')
class3r = knnclassify(Xnc,Xn,L,3,'euclidean','random')
class4r = knnclassify(Xnc,Xn,L,4,'euclidean','random')
class1c = knnclassify(Xnc,Xn,L,1,'euclidean','consensus')
class2c = knnclassify(Xnc,Xn,L,2,'euclidean','consensus')
class3c = knnclassify(Xnc,Xn,L,3,'euclidean','consensus')
class4c = knnclassify(Xnc,Xn,L,4,'euclidean','consensus')

% How the discriminant functions are computed:
% Linear discrimination fits a multivariate normal density to each group,
% with a pooled estimate of covariance.
% Quadratic discrimination fits MVN densities with covariance estimates
% stratified by group.
% Mahalanobis discrimination uses Mahalanobis distances with stratified
% covariance estimates.

% Apply the classify function
% (classify returns [class, err, posterior]: the apparent error rate is
% the second output and the posterior probabilities are the third)
[classl,errl,Pl] = classify(Xnc,Xn,L,'linear')
[classq,errq,Pq] = classify(Xnc,Xn,L,'quadratic')
[classm,errm,Pm] = classify(Xnc,Xn,L,'mahalanobis')
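% A quick comparison sketch (an addition, not part of the original
% exercise): classify's second output is the apparent (resubstitution)
% error rate estimated on the training data, so printing it for the three
% discrimination types gives a rough sense of how each rule fits.
[~, err_lin]  = classify(Xnc, Xn, L, 'linear');
[~, err_quad] = classify(Xnc, Xn, L, 'quadratic');
[~, err_mah]  = classify(Xnc, Xn, L, 'mahalanobis');
fprintf('Apparent error  linear=%.3f  quadratic=%.3f  mahalanobis=%.3f\n', ...
        err_lin, err_quad, err_mah);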