Q&A 13 How do you train a k-nearest neighbors (KNN) model?
13.1 Explanation
K-Nearest Neighbors (KNN) is a simple, instance-based classification method: it predicts a data point's class by majority vote among its k closest neighbors in the feature space.
KNN doesn't require training in the usual sense. Fitting amounts to storing the training data, and all distance computations happen at prediction time.
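To make this concrete, here is a minimal from-scratch sketch of the prediction step. The helper name knn_predict is hypothetical; it assumes X_train is a NumPy array and x_new a single feature vector, and it uses Euclidean distance, which matches scikit-learn's default metric.
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=5):
    # Distance from the query point to every stored training point
    distances = np.linalg.norm(X_train - x_new, axis=1)
    # Indices of the k nearest training points
    nearest = np.argsort(distances)[:k]
    # Majority vote among those neighbors decides the class
    return Counter(y_train[nearest]).most_common(1)[0][0]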
13.2 Python Code
# Train a KNN model in Python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import StandardScaler
# Load and preprocess
df = pd.read_csv("data/titanic.csv")
df['Age'] = df['Age'].fillna(df['Age'].median())
df['Embarked'] = df['Embarked'].fillna(df['Embarked'].mode()[0])
df['Sex'] = df['Sex'].map({'male': 0, 'female': 1})
df = pd.get_dummies(df, columns=['Embarked'], drop_first=True)
# Features and target
X = df[['Pclass', 'Sex', 'Age', 'Fare', 'Embarked_Q', 'Embarked_S']]
y = df['Survived']
# Standardize features (important for distance-based models)
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
# Split
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, test_size=0.2, random_state=42)
# Train KNN
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
# Predict and evaluate
y_pred = knn.predict(X_test)
print("Accuracy:", accuracy_score(y_test, y_pred))Accuracy: 0.7821229050279329
13.3 R Code
# Train a KNN model in R
library(readr)
library(dplyr)
library(fastDummies)
library(caret)
library(class)
# Load and preprocess
df <- read_csv("data/titanic.csv")
df$Age[is.na(df$Age)] <- median(df$Age, na.rm = TRUE)
mode_embarked <- names(sort(table(df$Embarked), decreasing = TRUE))[1]
df$Embarked[is.na(df$Embarked)] <- mode_embarked
df$Sex <- ifelse(df$Sex == "male", 0, 1)
df <- fastDummies::dummy_cols(df, select_columns = "Embarked", remove_first_dummy = TRUE, remove_selected_columns = TRUE)
# Feature and target
features <- df %>% select(Pclass, Sex, Age, Fare, Embarked_Q, Embarked_S)
target <- df$Survived
# Normalize features (KNN is distance-based)
features_scaled <- as.data.frame(scale(features))
# Split
set.seed(42)
split_index <- createDataPartition(target, p = 0.8, list = FALSE)
X_train <- features_scaled[split_index, ]
X_test <- features_scaled[-split_index, ]
y_train <- target[split_index]
y_test <- target[-split_index]
# Train and predict using KNN
y_pred <- knn(train = X_train, test = X_test, cl = y_train, k = 5)
# Evaluate
confusionMatrix(y_pred, as.factor(y_test))

Confusion Matrix and Statistics

          Reference
Prediction  0  1
         0 99 26
         1 15 38

               Accuracy : 0.7697
                 95% CI : (0.7008, 0.8293)
    No Information Rate : 0.6404
    P-Value [Acc > NIR] : 0.0001436

                  Kappa : 0.4803

 Mcnemar's Test P-Value : 0.1183498

            Sensitivity : 0.8684
            Specificity : 0.5938
         Pos Pred Value : 0.7920
         Neg Pred Value : 0.7170
             Prevalence : 0.6404
         Detection Rate : 0.5562
   Detection Prevalence : 0.7022
      Balanced Accuracy : 0.7311

       'Positive' Class : 0
✅ Takeaway: KNN is easy to understand and use, but it is sensitive to feature scaling and to the choice of k. Standardizing the features is essential for meaningful distance calculations.
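To check the scaling claim empirically, one can refit the same k = 5 classifier on the unscaled features and compare test accuracy. This sketch reuses X and y from the Python section; the same random_state reproduces the earlier split, and the resulting accuracy will depend on the data.
# Same split indices as above, but on the unscaled features
X_tr_u, X_te_u, y_tr_u, y_te_u = train_test_split(X, y, test_size=0.2, random_state=42)
knn_unscaled = KNeighborsClassifier(n_neighbors=5).fit(X_tr_u, y_tr_u)
print("Unscaled accuracy:", accuracy_score(y_te_u, knn_unscaled.predict(X_te_u)))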