Chapter 39 Learning Vector Quantization

Learning Vector Quantization (LVQ) is a supervised classification algorithm for binary and multiclass problems. It is a prototype-based method and can be viewed as a special case of an artificial neural network.
An LVQ model learns a set of codebook vectors (prototypes) from the training data. Each codebook vector is assigned to a class and represents a region of that class in the feature space. During training, the codebook vector nearest to a training example is moved toward the example if their classes match and away from it if they do not. Once trained, the codebook classifies new data by assigning each observation the class of its nearest codebook vector.
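The following is a minimal sketch of this update rule in the style of LVQ1, assuming squared Euclidean distance, a single pass over the data and a fixed learning rate alpha. The function name lvq1_sketch() and its arguments are illustrative only and are not part of the class package.

# Illustrative LVQ1-style update (not the class package API).
# x: numeric matrix of training features; cl: vector of class labels
# cb_x: matrix of codebook vectors; cb_cl: their class labels; alpha: learning rate
lvq1_sketch = function(x, cl, cb_x, cb_cl, alpha = 0.03) {
  for (i in seq_len(nrow(x))) {
    xi = x[i, ]
    # squared Euclidean distance from this example to every codebook vector
    d = rowSums(sweep(cb_x, 2, xi)^2)
    j = which.min(d)
    if (as.character(cb_cl[j]) == as.character(cl[i])) {
      # same class: pull the nearest codebook vector toward the example
      cb_x[j, ] = cb_x[j, ] + alpha * (xi - cb_x[j, ])
    } else {
      # different class: push it away from the example
      cb_x[j, ] = cb_x[j, ] - alpha * (xi - cb_x[j, ])
    }
  }
  cb_x # updated codebook vectors
}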

There are several versions of the LVQ training function: lvq1(), olvq1(), lvq2(), lvq3(), and dlvq(). The first four are provided by the class package and share the same interface, so a codebook trained with one of them can be refined with another (see the sketch after the worked example below).

library(class) # lvqinit(), olvq1(), lvqtest()
library(caret) # createDataPartition(), confusionMatrix()

# use the built-in iris dataset
df <- iris

# stratified 80/20 split on the class label
id = caret::createDataPartition(df$Species, p = 0.8, list = FALSE)

train = df[id, ]
test = df[-id, ]

# initialize an LVQ codebook from the training data
cb = class::lvqinit(train[1:4], train$Species)

# train the codebook with olvq1() (optimized LVQ1)
build.cb = class::olvq1(train[1:4], train$Species, cb)

# classify the test set with the trained codebook
pred = class::lvqtest(build.cb, test[1:4])

# confusion matrix on the test set
caret::confusionMatrix(test$Species, pred)
## Confusion Matrix and Statistics
## 
##             Reference
## Prediction   setosa versicolor virginica
##   setosa         10          0         0
##   versicolor      0         10         0
##   virginica       0          1         9
## 
## Overall Statistics
##                                           
##                Accuracy : 0.9667          
##                  95% CI : (0.8278, 0.9992)
##     No Information Rate : 0.3667          
##     P-Value [Acc > NIR] : 4.476e-12       
##                                           
##                   Kappa : 0.95            
##                                           
##  Mcnemar's Test P-Value : NA              
## 
## Statistics by Class:
## 
##                      Class: setosa Class: versicolor Class: virginica
## Sensitivity                 1.0000            0.9091           1.0000
## Specificity                 1.0000            1.0000           0.9524
## Pos Pred Value              1.0000            1.0000           0.9000
## Neg Pred Value              1.0000            0.9500           1.0000
## Prevalence                  0.3333            0.3667           0.3000
## Detection Rate              0.3333            0.3333           0.3000
## Detection Prevalence        0.3333            0.3333           0.3333
## Balanced Accuracy           1.0000            0.9545           0.9762
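
As mentioned above, the other training functions in the class package share the same interface, so the codebook produced by olvq1() can be fine-tuned further. A brief sketch, reusing the objects from the example above; the object names tuned.cb and pred.tuned are ours, and the resulting output is not shown here.

# optional fine-tuning of the olvq1() codebook with lvq3() (default parameters)
tuned.cb = class::lvq3(train[1:4], train$Species, build.cb)

# classify the test set with the refined codebook and compare
pred.tuned = class::lvqtest(tuned.cb, test[1:4])
caret::confusionMatrix(test$Species, pred.tuned)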