Final Year Project · Computer Games Development

FIVE REALMS

By Stephen Dunne

A Hearthstone-inspired card battle game rooted in Irish mythology, featuring a Deep Q-Network AI opponent trained through thousands of episodes of self-play.


The Game

Gameplay Overview

A single-player card battle game where deck archetypes, resource management, and tactical decision-making decide the outcome.

Five Realms gameplay

Strategic Combat

Play minion cards onto the battlefield, trade favourably, and push for lethal damage. Every mana crystal counts; playing on curve wins games.

Taunt & Keywords

Taunt forces opponents to attack protected minions first. Charge allows immediate attacks. Battlecries and Deathrattles add depth to every turn.
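The Taunt rule reduces to a filter on attack targets. A minimal TypeScript sketch, with an assumed Minion shape rather than the game's actual types:

```typescript
// Illustrative Taunt check: while any enemy minion has Taunt, only Taunt
// minions are legal attack targets. The Minion shape here is an assumption.
interface Minion {
  name: string;
  taunt: boolean;
}

function legalAttackTargets(enemyBoard: Minion[]): Minion[] {
  const taunts = enemyBoard.filter((m) => m.taunt);
  // No Taunts on board: every enemy minion is attackable.
  return taunts.length > 0 ? taunts : enemyBoard;
}
```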

Two Deck Archetypes

Aggressive fire-based tempo, or defensive earth-based survival. Each archetype plays to a distinct strategic identity.

Structure & Random Modes

Play curated 30-card structure decks, or randomly generated decks drawn from each element's full card pool, for a different game every time.
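Random mode can be sketched as a Fisher-Yates shuffle over the element's card pool followed by taking the first 30 cards. The function below is an illustration of that idea, not the project's actual generator:

```typescript
// Sketch of random-mode deck building: Fisher-Yates shuffle a copy of the
// element's card pool, then take the first 30 cards.
function buildRandomDeck<T>(pool: T[], deckSize = 30): T[] {
  const cards = [...pool]; // copy so the pool itself is untouched
  for (let i = cards.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [cards[i], cards[j]] = [cards[j], cards[i]]; // swap
  }
  return cards.slice(0, deckSize);
}
```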

Artificial Intelligence

The AI System

Two interchangeable opponents built on a shared Strategy Pattern interface. Swap between a trained neural network and a heuristic AI without changing a line of game logic.

lib/ai/dqn/

Deep Q-Network Agent

A neural network trained through thousands of episodes of self-play learns to select actions by predicting Q-values - the expected future reward for every possible move from a given board state. The agent never sees hardcoded rules; it discovers strategy entirely through experience.

  • 6,000 training episodes across all matchup combinations
  • 121-feature state vector encoding the full board
  • 68 possible actions per turn with legal action masking
  • Epsilon-greedy exploration decaying from 100% to 1%
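The epsilon-greedy behaviour above can be sketched as follows. The function names and the exponential decay shape are illustrative assumptions; only the 1.0 to 0.01 range and the episode count come from the training setup described here:

```typescript
// Hypothetical epsilon-greedy selection with exponential decay from 1.0
// to 0.01 over the training run; names are illustrative, not the project's API.
const EPS_START = 1.0;
const EPS_END = 0.01;
const DECAY_EPISODES = 6000;

// Exponential decay: reaches EPS_END at the final training episode.
function epsilonAt(episode: number): number {
  const rate = Math.pow(EPS_END / EPS_START, 1 / DECAY_EPISODES);
  return Math.max(EPS_END, EPS_START * Math.pow(rate, episode));
}

function selectAction(
  qValues: number[],
  legalActions: number[],
  epsilon: number,
): number {
  // Explore: uniform random choice among *legal* actions only.
  if (Math.random() < epsilon) {
    return legalActions[Math.floor(Math.random() * legalActions.length)];
  }
  // Exploit: argmax over Q-values, restricted to legal actions.
  return legalActions.reduce((best, a) =>
    qValues[a] > qValues[best] ? a : best,
  );
}
```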

State -> Action Pipeline

Game state: 121 features · normalised [0, 1]
Hidden layer 1: 128 neurons · ReLU
Hidden layer 2: 128 neurons · ReLU
Hidden layer 3: 64 neurons · ReLU
Q-values output: 68 actions · linear

Legal action masking · ε-greedy exploration · Target network · Experience replay
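Legal action masking, the first post-processing step in the pipeline, can be sketched like this (function names are illustrative):

```typescript
// Legal-action masking: illegal actions get -Infinity so the argmax can
// never choose them, regardless of what Q-value the network predicted.
function maskQValues(qValues: number[], legalMask: boolean[]): number[] {
  return qValues.map((q, i) => (legalMask[i] ? q : -Infinity));
}

// Index of the highest value; combined with masking, the best *legal* action.
function argmax(values: number[]): number {
  return values.reduce((best, v, i) => (v > values[best] ? i : best), 0);
}
```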
lib/ai/aiStrategy.ts

Rule-Based AI

A heuristic opponent that plays on curve, prioritises favourable board trades, and detects lethal damage. Fully explainable - every decision follows explicit scoring logic. Serves as the training baseline and a solid opponent in its own right.

  • Scores cards by mana efficiency and combined stat value
  • Enforces Taunt rules on all attack selections
  • Detects lethal face damage before trading on board
  • No training required - plays immediately on selection
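In the spirit of the scoring described above, a hedged sketch of card evaluation. The Card shape, the weights, and the curve bonus are assumptions for illustration, not the project's actual scoring logic:

```typescript
// Score a card by stats-per-mana efficiency, with a bonus for playing on
// curve (spending all available mana). Weights here are illustrative.
interface Card {
  cost: number;
  attack: number;
  health: number;
}

function scoreCard(card: Card, manaAvailable: number): number {
  if (card.cost > manaAvailable) return -Infinity; // unplayable this turn
  const statValue = card.attack + card.health;
  const efficiency = statValue / Math.max(1, card.cost);
  const curveBonus = card.cost === manaAvailable ? 1 : 0;
  return efficiency + curveBonus;
}
```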

Strategy Pattern

«interface» AIStrategy
  selectAction(state) → AIAction
  onGameEnd(state, won)

RuleBasedAI implements AIStrategy
  getAIAction() · scoreCard() · scoreTrade()

DQNStrategy implements AIStrategy
  DQNModelBrowser.predict(state) · → fallback

createAI(type) factory — swap opponents at runtime without touching game logic
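The diagram above corresponds to an interface plus a factory. A simplified sketch using the names shown, with placeholder internals standing in for the real scoring and prediction code:

```typescript
// Strategy Pattern sketch: both opponents satisfy one interface, so game
// logic depends only on AIStrategy. Internals here are placeholders.
interface GameState { /* board, hands, mana, ... */ }
interface AIAction { type: string; }

interface AIStrategy {
  selectAction(state: GameState): AIAction;
  onGameEnd(state: GameState, won: boolean): void;
}

class RuleBasedAI implements AIStrategy {
  selectAction(_state: GameState): AIAction {
    return { type: "end-turn" }; // placeholder for heuristic scoring
  }
  onGameEnd(): void {} // heuristics carry no state to update
}

class DQNStrategy implements AIStrategy {
  constructor(private fallback: AIStrategy = new RuleBasedAI()) {}
  selectAction(state: GameState): AIAction {
    // Would call the trained model's predict(state); here it always
    // delegates to the fallback to keep the sketch self-contained.
    return this.fallback.selectAction(state);
  }
  onGameEnd(): void {}
}

// Factory: swap opponents at runtime without touching game logic.
function createAI(type: "dqn" | "rules"): AIStrategy {
  return type === "dqn" ? new DQNStrategy() : new RuleBasedAI();
}
```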
lib/ai/dqn/DQNModel.ts

Network Architecture

A feed-forward network connecting a 121-dimensional state vector to 68 Q-value outputs. Trained using the Bellman equation with a separate frozen target network updated every 1,000 steps for stability.

  • 121 → 128 → 128 → 64 → 68 layer sizes
  • ReLU activations with He Normal initialisation
  • Adam optimiser at learning rate 0.00001
  • Discount factor γ = 0.95 — prevents Q-value explosion
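The layer stack above reduces to repeated dense layers. A minimal plain-TypeScript forward pass for one layer; the real model runs on TensorFlow.js, and the weight shapes here are only illustrative:

```typescript
// One dense layer: output_j = activation(b_j + Σ_i input_i · W[i][j]).
// ReLU for the three hidden layers, linear for the 68 Q-value outputs.
// Chaining four of these mirrors 121 → 128 → 128 → 64 → 68.
function dense(
  input: number[],
  weights: number[][], // shape [inputSize][outputSize]
  biases: number[],
  relu: boolean,
): number[] {
  return biases.map((b, j) => {
    let sum = b;
    for (let i = 0; i < input.length; i++) {
      sum += input[i] * weights[i][j];
    }
    return relu ? Math.max(0, sum) : sum;
  });
}
```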

Self-Play Training Loop

AutoPlay.trainUniversalAgent()
↳ playEpisode() — headless game
↳ encodeGameState() → 121-vec
↳ DQNAgent.selectAction() ε-greedy
↳ executeAction() → new state
↳ calculateReward()
↳ ExperienceReplay.add()
↳ DQNModel.trainOnBatch()
Bellman: Q(s,a) = r + γ·max_a′ Q(s′,a′)
flipStatePerspective() — same weights play both sides
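The Bellman update in the loop reduces to a scalar target per replayed transition. A sketch of that step, with illustrative names and γ taken from the configuration above:

```typescript
// Bellman target for one transition: terminal states return the raw
// reward; otherwise reward plus the discounted best Q-value of the next
// state, taken from the frozen target network for stability.
const GAMMA = 0.95;

function bellmanTarget(
  reward: number,
  nextQValues: number[], // target network's Q(s′, ·)
  done: boolean,
): number {
  if (done) return reward;
  return reward + GAMMA * Math.max(...nextQValues);
}
```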

Engineering

Tech Stack

A modern full-stack TypeScript project with clear separation between rendering, state, game logic, and AI — every layer independently replaceable.

Technologies

Next.js 16: React framework, App Router, API routes
React 19: Declarative UI, escape hatch pattern
TypeScript: Type safety across all layers
PixiJS v8: WebGL canvas rendering engine
Zustand 5: Lightweight state management, 5 slices
TensorFlow.js: DQN training in Node, inference in browser

Architecture Layers

React UI: battle/page · GamePanel · DeckSelector
Zustand Store: 5 composed slices · FSM AI turn loop
Game Logic: Pure functions · immutable BattleState
AI Layer: Strategy pattern · DQNStrategy · RuleBasedAI
Pixi Renderer: PixiBoard · CardRenderer · ScaleManager

Browser / Node split: Training runs in Node.js · Inference in browser · Weights served as JSON from public/models/