Macabre game (available on Steam)

Macabre is a cooperative extraction horror game built with Unreal Engine 5 that I worked on with the team at WeForge. My main responsibilities were implementing voice over IP (VoIP) and player travel between maps, as well as assisting with the weather, cosmetics, and inventory systems.

To implement VoIP I used Steam's peer-to-peer networking API, paired with a task-based multi-threaded design for sending, receiving, and buffering voice packets to eliminate in-game stutters. I also implemented a worker thread that down-samples voice input to 16 kHz and up-samples incoming voice packets back to 44.1 kHz using libsamplerate.
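The buffering half of that design can be sketched as a small jitter buffer sitting between the network task and the audio thread. This is a minimal illustration, not the shipped code; the class and method names are my own for this sketch. Holding a few packets before playback starts absorbs network jitter, so a single late packet never stalls the audio thread:

```cpp
#include <cstdint>
#include <deque>
#include <mutex>
#include <optional>
#include <vector>

// Thread-safe buffer between the receive task and the audio thread.
// Playback stays silent until a small prebuffer is filled, then packets
// drain in arrival order.
class VoicePacketBuffer {
public:
    explicit VoicePacketBuffer(size_t prebufferPackets = 3)
        : prebuffer_(prebufferPackets) {}

    // Called from the network worker thread.
    void Push(std::vector<uint8_t> packet) {
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push_back(std::move(packet));
    }

    // Called from the audio thread. Non-blocking: returns nothing until
    // the prebuffer threshold is reached, then drains packets in order.
    std::optional<std::vector<uint8_t>> Pop() {
        std::lock_guard<std::mutex> lock(mutex_);
        if (!primed_ && queue_.size() < prebuffer_) return std::nullopt;
        primed_ = true;
        if (queue_.empty()) return std::nullopt;
        auto packet = std::move(queue_.front());
        queue_.pop_front();
        return packet;
    }

private:
    size_t prebuffer_;
    bool primed_ = false;
    std::deque<std::vector<uint8_t>> queue_;
    std::mutex mutex_;
};
```

The `Pop` side is deliberately non-blocking because audio callbacks must never wait on a lock held across a network operation.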

I used C++17 for the performance-intensive parts and exposed configuration fields to Blueprint, giving the designers an easy tuning interface.

Missile Madness!

This mini-game was presented at the Western Sydney University Welcome Week. It is a VR game in which the player uses the controllers to catch rockets flying at them. The hectic gameplay and art were meant to highlight the branding of our event partner, WSU LaunchPad, and were designed together with No Brainer Games.

This was the first public game made with my Reboot game engine. The systems that made it possible were written from the ground up, powered by Vulkan, OpenXR, and OpenAL Soft:

  • Sparse-array-based entity component system (ECS) used for event propagation
  • API layer for zero-overhead interop with high-performance C and C++ libraries
  • Custom shading, lighting and compute-shader-based outline rendering for a stylised look
  • GPU-driven particle-based physics inspired by Nvidia’s GPU Gems article on rigid body simulation
  • Spatial audio integrated with the GPU-driven pipeline for immersion
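The sparse-array ECS mentioned above can be illustrated with a minimal sparse-set component store. This is a generic sketch of the technique, not Reboot's actual API: a sparse array maps an entity id to a slot in a densely packed array, so lookup, insert, and remove are all O(1) while iteration stays cache-friendly:

```cpp
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

using Entity = uint32_t;

// Minimal sparse-set component storage.
template <typename Component>
class SparseSet {
public:
    void Add(Entity e, Component c) {
        if (e >= sparse_.size()) sparse_.resize(e + 1, kInvalid);
        sparse_[e] = dense_.size();
        dense_.push_back(std::move(c));
        entities_.push_back(e);
    }

    bool Has(Entity e) const {
        return e < sparse_.size() && sparse_[e] != kInvalid;
    }

    Component* Get(Entity e) {
        return Has(e) ? &dense_[sparse_[e]] : nullptr;
    }

    // Swap-and-pop keeps the dense array contiguous after removal.
    void Remove(Entity e) {
        if (!Has(e)) return;
        size_t idx = sparse_[e];
        dense_[idx] = std::move(dense_.back());
        entities_[idx] = entities_.back();
        sparse_[entities_[idx]] = idx;
        dense_.pop_back();
        entities_.pop_back();
        sparse_[e] = kInvalid;
    }

    size_t Size() const { return dense_.size(); }

private:
    static constexpr size_t kInvalid = static_cast<size_t>(-1);
    std::vector<size_t> sparse_;    // entity id -> dense index
    std::vector<Component> dense_;  // packed component data
    std::vector<Entity> entities_;  // dense index -> entity id
};
```

Because the dense array never develops holes, systems can iterate components linearly, which is what makes this layout a good fit for event propagation and per-frame updates.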

I made a post on LinkedIn about it.

Nobby the desktop assistant

Nobby was demoed at the Western Sydney Innovation Festival and at a public pitching event, where it was presented to investors and other participants.

This is a prototype desktop assistant for macOS created using C++17 and Vulkan. The assistant was rendered as an animated robot on a transparent background and interacted intelligently with the user using a local Phi-4 multi-modal LLM, the Kokoro TTS model, the Silero VAD model, the Moonshine STT model, and the Voyage multi-modal embedding model. These models were run using Microsoft's ONNX Runtime C++ library.

The application received voice, a desktop screenshot, and the mouse position as inputs, ran them through the multi-modal LLM, and rendered the LLM's text output as speech, animated robot avatar actions, and tool invocations through my custom scripting system.

The application rendered smoothly thanks to my multi-threaded design: each local AI model behaves as a data source on its own background thread, while the main thread sends queries and routes outputs between the models and the rendering backend.
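That threading pattern can be sketched as one worker per model with a query queue. This is an illustrative skeleton, not the assistant's actual code, and `infer` stands in for a real ONNX Runtime session call:

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <utility>

// One background worker per model: the main thread posts a query plus a
// completion callback and returns immediately; inference runs off the
// render thread and the callback delivers the result.
class ModelWorker {
public:
    using Callback = std::function<void(const std::string&)>;

    explicit ModelWorker(std::function<std::string(const std::string&)> infer)
        : infer_(std::move(infer)), thread_([this] { Loop(); }) {}

    ~ModelWorker() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            done_ = true;
        }
        cv_.notify_one();
        thread_.join();  // drains any queued jobs before exiting
    }

    // Called from the main/render thread; never blocks on inference.
    void Query(std::string input, Callback onResult) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            jobs_.push({std::move(input), std::move(onResult)});
        }
        cv_.notify_one();
    }

private:
    struct Job { std::string input; Callback onResult; };

    void Loop() {
        for (;;) {
            Job job;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                cv_.wait(lock, [this] { return done_ || !jobs_.empty(); });
                if (done_ && jobs_.empty()) return;
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job.onResult(infer_(job.input));  // inference off the main thread
        }
    }

    std::function<std::string(const std::string&)> infer_;
    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<Job> jobs_;
    bool done_ = false;
    std::thread thread_;  // declared last so Loop() sees initialized members
};
```

The main thread stays a pure router: it never calls a model directly, so a slow LLM response can delay a reply but never a rendered frame.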

AI adventure RPG demo

This demo was written in C++ and compiled to WebAssembly with Emscripten. The NPC was powered by Microsoft's Phi-4 LLM, the Piper TTS model, and the Moonshine STT model, all dynamically loaded and run locally in the browser using web workers.

The graphics were powered by WebGPU and WGSL, rendering at 120 FPS on a MacBook Pro with near-instant response times from the AI models. This was thanks to GPU-driven rendering and animation, which left both the CPU and the GPU free to be shared between AI inference and rendering.

Eldervine VR experience

This was a VR experience created with the team at the Ubiquitous Computing VR Lab, with the goal of researching novel hand-based input modes in VR.

For this project we used C++ with Unreal Engine 4, with a Leap Motion device mounted on the Oculus Quest 1 to detect hand and finger motion inside the game.

Research focused on accurately detecting collisions between in-world objects and fast-moving player hands tracked by the Leap Motion framework. Collisions with objects deformed the virtual hands accurately, and the skin folding around the finger joints was simulated for added realism.
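Fast-moving hands are hard to collide against because a fingertip can tunnel straight through a thin object between two frames. A standard remedy, shown here as a generic illustration rather than the project's actual code, is continuous collision detection: sweep the fingertip's collider along its per-frame motion and solve for the time of impact:

```cpp
#include <cmath>
#include <optional>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    float Dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
};

// Sweep a moving sphere (e.g. a fingertip collider travelling from
// `from` to `to` in one frame) against a static sphere. Returns the
// normalized time of impact in [0, 1], or nullopt on a miss.
// Solving |from + t*d - center|^2 = (r1 + r2)^2 yields a quadratic in t,
// so the hit is found even when the motion crosses the sphere entirely.
std::optional<float> SweptSphereHit(Vec3 from, Vec3 to, float movingRadius,
                                    Vec3 center, float staticRadius) {
    Vec3 d = to - from;           // motion over the frame
    Vec3 m = from - center;
    float r = movingRadius + staticRadius;
    float a = d.Dot(d);
    float b = 2.0f * m.Dot(d);
    float c = m.Dot(m) - r * r;
    if (c <= 0.0f) return 0.0f;   // already overlapping at frame start
    if (a == 0.0f) return std::nullopt;    // not moving
    float disc = b * b - 4.0f * a * c;
    if (disc < 0.0f) return std::nullopt;  // path misses the sphere
    float t = (-b - std::sqrt(disc)) / (2.0f * a);
    if (t < 0.0f || t > 1.0f) return std::nullopt;  // outside this frame
    return t;
}
```

The returned time of impact can then drive the hand-deformation response at the exact contact point instead of wherever the hand happens to be on the next frame.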

We also explored detecting and using gaze as input, which resulted in a published research paper that I co-authored with the team.