I applied MCR² coding rate metrics to a frozen PixArt-α and found progressive subspace separation, dense activations, and text-independent structure — evidence that a subspace lens may complement SAE-based interpretability.
The engineering behind hambajuba2ba — SAE-steered, audio-reactive diffusion at 50 FPS on a single RTX 5090
Music visualization, generative models, and building a tool to see what's inside.
A quick experiment in interpretable audio features
A reflection on LLM usage while learning at the Recurse Center
My return statement for the Spring 1 2025 Recurse Center batch