Q&A 14 How do you train a Naive Bayes model?

14.1 Explanation

Naive Bayes is a probabilistic classifier based on Bayes’ theorem. It makes the “naive” assumption that all features are conditionally independent of one another given the class, a simplification that often works surprisingly well in practice.
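
Concretely, for a class y and features x_1, …, x_n, the model picks the class that maximizes the posterior, dropping the evidence term from Bayes’ theorem since it is the same for every class:

\hat{y} = \arg\max_{y} \; P(y) \prod_{i=1}^{n} P(x_i \mid y)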

Naive Bayes is fast, interpretable, and often used in text classification and simple tabular problems.
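
To make the text-classification use concrete, here is a minimal sketch with scikit-learn's MultinomialNB (the tiny corpus and labels are invented for illustration):

# Toy text-classification sketch with Multinomial Naive Bayes
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["free prize now", "meeting at noon", "win cash prize", "lunch tomorrow"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam (made-up data)

# CountVectorizer turns text into word counts; MultinomialNB models those counts
clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(texts, labels)
print(clf.predict(["claim your free cash prize"]))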


14.2 Python Code

# Train a Naive Bayes model in Python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# Load and preprocess
df = pd.read_csv("data/titanic.csv")
df['Age'] = df['Age'].fillna(df['Age'].median())
df['Embarked'] = df['Embarked'].fillna(df['Embarked'].mode()[0])
df['Sex'] = df['Sex'].map({'male': 0, 'female': 1})
df = pd.get_dummies(df, columns=['Embarked'], drop_first=True)

# Features and target
X = df[['Pclass', 'Sex', 'Age', 'Fare', 'Embarked_Q', 'Embarked_S']]
y = df['Survived']

# Split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
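# (with an imbalanced target, adding stratify=y would preserve class ratios in both splits)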

# Train Naive Bayes (GaussianNB assumes each feature is normally distributed within each class)
nb_model = GaussianNB()
nb_model.fit(X_train, y_train)

# Predict and evaluate
y_pred = nb_model.predict(X_test)
print("Accuracy:", accuracy_score(y_test, y_pred))
Accuracy: 0.7653631284916201
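
If you need probability estimates rather than hard labels, GaussianNB also exposes predict_proba and the learned class priors; a small optional addition, reusing nb_model and X_test from the script above:

# Inspect predicted probabilities and learned class priors
print(nb_model.predict_proba(X_test[:5]))  # columns follow nb_model.classes_, here [0, 1]
print(nb_model.class_prior_)               # class priors estimated from y_train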

14.3 R Code

# Train a Naive Bayes model in R
library(readr)
library(dplyr)
library(fastDummies)
library(caret)
library(e1071)

# Load and preprocess
df <- read_csv("data/titanic.csv")
df$Age[is.na(df$Age)] <- median(df$Age, na.rm = TRUE)
mode_embarked <- names(sort(table(df$Embarked), decreasing = TRUE))[1]
df$Embarked[is.na(df$Embarked)] <- mode_embarked
df$Sex <- ifelse(df$Sex == "male", 0, 1)
df <- fastDummies::dummy_cols(df, select_columns = "Embarked", remove_first_dummy = TRUE, remove_selected_columns = TRUE)

# Features and target
features <- df %>% select(Pclass, Sex, Age, Fare, Embarked_Q, Embarked_S)
target <- df$Survived

# Split
set.seed(42)
split_index <- createDataPartition(target, p = 0.8, list = FALSE)
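# (createDataPartition samples within groups of the target, so the 0/1 balance is roughly preserved)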
X_train <- features[split_index, ]
X_test <- features[-split_index, ]
y_train <- target[split_index]
y_test <- target[-split_index]

# Train Naive Bayes
nb_model <- naiveBayes(x = X_train, y = as.factor(y_train))

# Predict and evaluate
y_pred <- predict(nb_model, X_test)
confusionMatrix(y_pred, as.factor(y_test))
Confusion Matrix and Statistics

          Reference
Prediction  0  1
         0 87 23
         1 27 41
                                         
               Accuracy : 0.7191         
                 95% CI : (0.647, 0.7838)
    No Information Rate : 0.6404         
    P-Value [Acc > NIR] : 0.01622        
                                         
                  Kappa : 0.3983         
                                         
 Mcnemar's Test P-Value : 0.67137        
                                         
            Sensitivity : 0.7632         
            Specificity : 0.6406         
         Pos Pred Value : 0.7909         
         Neg Pred Value : 0.6029         
             Prevalence : 0.6404         
         Detection Rate : 0.4888         
   Detection Prevalence : 0.6180         
      Balanced Accuracy : 0.7019         
                                         
       'Positive' Class : 0              
                                         

✅ Takeaway: Naive Bayes is lightweight and fast, especially useful when you want a quick benchmark. While the independence assumption is rarely true, the model still performs well on many tasks.
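
For that quick-benchmark use, a cross-validated score is more stable than a single train/test split; a minimal sketch, reusing X and y from the Python example above:

# Cross-validated accuracy benchmark for Naive Bayes
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

scores = cross_val_score(GaussianNB(), X, y, cv=5, scoring="accuracy")
print("Mean CV accuracy:", scores.mean())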