Research Scientist at Alegion, PhD student at Rice University

Video object detection poses several unique challenges that differentiate it from object detection in static images.


Object detection is now one of the most common applications in computer vision, especially as vanilla classification problems have become easier to solve with modern deep learning architectures. Due to the wide availability of large-scale datasets (e.g., COCO [1]), object detection research has become more accessible to the computer vision…

Thoughts and Theory

An overview of online learning techniques, focusing on those that are most effective for the practitioner.


In this blog post, I will take a deep dive into the topic of online learning — a very popular research area within the deep learning community. Like many research topics in deep learning, online learning has wide applications in the industrial setting. Namely, the scenario in which data becomes…

Figure 1: A depiction of the training pipeline for GIST. The sub-GCNs operation divides the GCN model into multiple sub-GCNs. Every sub-GCN is trained by subTrain using mini-batches constructed with the Cluster operation. Sub-GCN parameters are intermittently aggregated into the global model through the subAgg operation. [Figure created by author.]

In this post, I will overview a recently proposed distributed training framework for large-scale graph convolutional networks (GCNs), called graph independent subnetwork training (GIST) [1]. GIST massively accelerates the GCN training process for any architecture and can be used to enable training of large-scale models, which exceed the capacity of…
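The three-step pipeline from Figure 1 (partition into sub-GCNs, train each sub-GCN independently, aggregate back into the global model) can be sketched at a high level. The code below is an illustrative toy, not the actual GIST implementation: the parameters are a flat NumPy array standing in for GCN weights, and the "training" objective is a trivial quadratic, so every name and detail beyond the partition/train/aggregate structure is my own stand-in for the real subGCNs, subTrain, and subAgg operations.

```python
import numpy as np

def gist_train(global_params, num_subnets=2, rounds=3, local_steps=5, lr=0.1):
    """Toy sketch of GIST-style independent subnetwork training.

    Each round: (1) randomly partition the parameter indices into
    disjoint sub-networks, (2) train each sub-network independently
    (here, a gradient step on a toy objective ||sub||^2), and
    (3) write the updated sub-parameters back into the global model.
    """
    rng = np.random.default_rng(0)
    for _ in range(rounds):
        # Partition indices into disjoint sub-networks ("sub-GCNs" step).
        perm = rng.permutation(len(global_params))
        partitions = np.array_split(perm, num_subnets)
        for idx in partitions:  # in GIST, these run in parallel
            sub = global_params[idx].copy()
            for _ in range(local_steps):  # local training ("subTrain")
                grad = 2 * sub  # gradient of the toy objective ||sub||^2
                sub -= lr * grad
            global_params[idx] = sub  # aggregation ("subAgg")
    return global_params

params = gist_train(np.ones(8))
```

The key property the sketch preserves is that each sub-network only touches its own disjoint slice of the global parameters between aggregation points, which is what lets the real method train sub-GCNs independently across machines.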

Thoughts and Theory

How more sophisticated momentum strategies can make deep learning less painful.

(from “Introduction to Optimization” by Boris Polyak)


Momentum is a widely used strategy for accelerating the convergence of gradient-based optimization techniques. It was designed to speed up learning in directions of low curvature without becoming unstable in directions of high curvature. In deep learning, most practitioners set the momentum value to 0.9 without attempting to further tune…
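The mechanism described above is the classic heavy-ball update: a velocity term accumulates an exponentially decaying average of past gradients, and parameters move along the velocity rather than the raw gradient. Here is a minimal sketch (the function name and the toy quadratic objective are my own for illustration; beta=0.9 matches the common default mentioned above):

```python
def momentum_step(params, velocity, grads, lr=0.01, beta=0.9):
    """One heavy-ball momentum update.

    velocity <- beta * velocity + gradient
    params   <- params - lr * velocity
    """
    new_velocity = [beta * v + g for v, g in zip(velocity, grads)]
    new_params = [p - lr * v for p, v in zip(params, new_velocity)]
    return new_params, new_velocity

# Minimize the toy objective f(x) = x^2, whose gradient is 2x.
x, v = [5.0], [0.0]
for _ in range(200):
    grads = [2 * xi for xi in x]
    x, v = momentum_step(x, v, grads)
# x[0] is now very close to the minimizer at 0.
```

In low-curvature directions the gradient keeps pointing the same way, so the velocity compounds and progress accelerates; in high-curvature directions the gradient flips sign and the accumulated velocity partially cancels, damping oscillation.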

In this post, I aim to outline the theoretical foundations for one of the most canonical quantum phenomena — entanglement. I aim to do this in a way that can be fully understood by anyone with a basic understanding of arithmetic and (very basic) linear algebra. Consequently, the majority of…
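As a concrete preview of the kind of linear algebra involved, the canonical example of an entangled state (standard textbook material, not taken from the post itself) is the Bell state, which cannot be factored into a tensor product of single-qubit states:

```latex
|\Phi^{+}\rangle
  = \frac{1}{\sqrt{2}}\bigl(|00\rangle + |11\rangle\bigr)
  \neq |\psi_1\rangle \otimes |\psi_2\rangle
```

Writing a candidate product state as $(a|0\rangle + b|1\rangle) \otimes (c|0\rangle + d|1\rangle)$ would require $ac = bd = 1/\sqrt{2}$ while $ad = bc = 0$, which is impossible; that contradiction is exactly what "entangled" means.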

A comprehensive explanation of the theory behind CPPNs


For the last two years, I have been researching Compositional Pattern Producing Networks (CPPN), a type of augmenting topology neural network proposed in [1]. However, throughout my research, I was always baffled by some of the concepts and theory behind CPPNs, and I struggled to understand how they work. Although…

Cameron Wolfe
