
Hardware & Systems
Digital Flower Garden
A handmade tech-art project built with my kids as a birthday gift for my wife. Coat-hanger flower stems hide embedded wiring, a routed wooden base conceals the power system, and each flower is lit by an Adafruit Circuit Playground Express running custom CircuitPython code that animates soft, shifting LED colors. A mix of hardware, code, and family creativity.

This project began as a birthday gift for my wife — something meaningful, lasting, and built with the help of our kids. The flowers are constructed from coat-hanger wire wrapped in green pipe cleaner, which doubles as a clever way to hide the power cables running up each stem. At the base, the “grass” is made from cutouts of our kids’ handprints in green construction paper.
Each flower’s center contains an Adafruit Circuit Playground Express board running custom CircuitPython code I wrote to animate the LEDs. The colors shift in gentle, random patterns, diffused through tracing paper to create a soft glow rather than harsh points of light. All stems mount into a routed wooden block that conceals the power supply and wiring, giving a clean, sculpture-like finish.
It’s part art piece, part hardware build, part embedded-systems project — and one of the more personal things I’ve made.
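The animation logic behind that soft, shifting glow is simple: each flower holds a hue that takes a small random walk every frame. Here is a minimal sketch in plain Python so it runs anywhere; the function names and parameters are illustrative, not the actual project code. On the board itself, the same hue values would drive the NeoPixels via the Circuit Playground Express libraries.

```python
import random
import colorsys

def drift_hue(hue, step=0.01):
    """Nudge a hue in [0, 1) by a small random amount, wrapping around."""
    return (hue + random.uniform(-step, step)) % 1.0

def hue_to_rgb(hue, brightness=0.4):
    """Convert a hue to a dimmed (r, g, b) tuple of 0-255 ints.

    Keeping brightness low, plus the tracing-paper diffuser, gives
    a soft glow instead of harsh points of light.
    """
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, brightness)
    return (int(r * 255), int(g * 255), int(b * 255))

# One independent hue per flower; each frame, every hue drifts slightly.
hues = [random.random() for _ in range(3)]
for _frame in range(5):
    hues = [drift_hue(h) for h in hues]
    frame = [hue_to_rgb(h) for h in hues]
    # On the board, each frame would be written to the pixels and
    # followed by a short sleep to set the animation speed.
```

Because each flower drifts independently, the garden never repeats itself and no two flowers stay in sync for long.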
PaperMC Minecraft Server on AWS Lightsail
A cloud-hosted Minecraft server I built for my kids, originally running Bedrock and later upgraded to PaperMC to support both Java and Bedrock players. It’s hosted on AWS Lightsail for simpler firewall rules, remote access, and playing with friends. It currently runs with 7GB of RAM, which shows some lag under multi-user load.

This server started as a way for me to learn Minecraft Bedrock alongside my kids, and later grew into a cross-platform PaperMC setup. I launched the first version in 2022 running Minecraft Bedrock so the kids could connect with tablets and phones. As their skills grew, I migrated the deployment to PaperMC in 2023, letting Java and Bedrock players join the same world.
Running the server on AWS Lightsail made remote play simple: no home-network port forwarding, no dynamic DNS, and easier firewall rules. That made it easy to invite their friends and to reach the world while traveling. The current instance runs with 7GB of RAM (above PaperMC’s recommended minimum), though I’ve seen block-placement lag when multiple players join.
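Opening the game port on Lightsail can be done from the web console or the AWS CLI. Here is a sketch of the CLI route, assuming a placeholder instance name and Minecraft Java Edition's default port of 25565; the snippet only prints the command so it can be reviewed before running.

```shell
# Hypothetical instance name -- replace with your Lightsail instance.
INSTANCE="minecraft-server"
# Minecraft Java Edition listens on TCP 25565 by default.
PORT_INFO="fromPort=25565,toPort=25565,protocol=TCP"

CMD="aws lightsail open-instance-public-ports --instance-name ${INSTANCE} --port-info ${PORT_INFO}"
echo "${CMD}"   # review, then run the printed command
```

Compared with port forwarding on a home router, this is the whole networking story: one firewall rule on the instance, and the kids’ friends can connect from anywhere.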
This project sits at the intersection of cloud hosting, game server optimization, and family gaming — and has turned into one of our most-used systems builds.
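For reference, a minimal sketch of the kind of start script a setup like this typically uses. The jar name and the 5G heap are assumptions: on a 7GB instance the Java heap is capped below total RAM to leave room for the OS and JVM overhead. (Bedrock cross-play on a PaperMC server is usually provided by a plugin such as GeyserMC rather than by Paper itself.) The script prints the command instead of executing it so it stands alone.

```shell
#!/bin/sh
# Heap sized below the instance's 7 GB so the OS and JVM overhead fit.
HEAP="5G"
JAR="paper.jar"   # assumed jar file name

# -Xms equal to -Xmx avoids heap-resizing pauses; --nogui skips the server GUI.
CMD="java -Xms${HEAP} -Xmx${HEAP} -jar ${JAR} --nogui"
echo "Would run: ${CMD}"   # swap echo for exec once the jar is in place
```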
Raspberry Pi 5 Local LLM Deployment
An AI project I built to give my kids a safe, hands-on way to explore artificial intelligence. I installed Ollama on a Raspberry Pi 5 with a 256 GB microSD card, starting with TinyLlama and eventually pushing the hardware to run Mistral 7B, about the upper limit of the Pi’s capabilities. To make it easy for the kids to use, I added OpenWebUI as a front end with individual accounts. It’s a little slow and constrained, but it’s a great way to experiment with how AI works.
A hands-on AI project I built to help my kids explore artificial intelligence safely and understand it from the hardware up. I wanted them to be able to experiment with AI, but my wife and I weren’t ready to hand them unrestricted access to cloud tools like ChatGPT. I had a spare Raspberry Pi 5, so I decided to build a local AI system — something they could touch, configure, break, and experiment with.
I flashed Raspberry Pi OS onto a 256 GB microSD card and installed Ollama to host the models. The Pi’s limited compute meant starting small with TinyLlama, which worked but tended to drift and hallucinate. Over time I pushed the system further, settling on Mistral 7B, which seems to be about the largest practical model the Pi 5 can handle without falling apart. The project became a great exercise in real-world constraints: memory, quantization, and what “model quality” really means.
To make the system kid-friendly, I added a proper interface: OpenWebUI running as a front-end layer on top of Ollama, complete with individual user accounts so they could explore independently. It’s never going to feel as powerful or fast as cloud models, but that’s part of the point. It shows them that AI is a tool — not magic — and that every tool has tradeoffs in speed, accuracy, and capability.
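Under OpenWebUI, requests ultimately reach Ollama’s local REST API, which listens on port 11434 by default. A minimal sketch of talking to that API directly from Python, assuming the default endpoint and that a `mistral` model has been pulled; the helper name is mine, and the actual network call is left commented out so the sketch stands on its own.

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def build_request(prompt, model="mistral"):
    """Build a non-streaming generate request for the local Ollama API."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_request("Why is the sky blue?")
# On the Pi this blocks for a while -- a 7B model on a Pi 5 is not fast:
# with request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

Seeing the raw request-and-wait cycle makes the tradeoff concrete for the kids: the same question that a cloud service answers instantly takes real, visible time on hardware they can hold.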
It’s a simple device, but it’s become a surprisingly good platform for learning: about hardware, local compute, open-source AI, and the difference between running models yourself versus relying on giant cloud systems.