The Astrofake Project
The Astrofake project aims to provide a tool that can quickly generate high-resolution, gigaparsec-scale cosmological N-body simulations without the need for a supercomputer. This will be done using an unsupervised machine learning architecture known as a Generative Adversarial Network (GAN; Goodfellow et al., 2014).

Below is an example of an N-body simulation with a box size of 50 h^-1 Mpc and 128^3 particles, using ΛCDM initial conditions.
[Figure: N-body simulation snapshot]
A GAN consists of two neural networks competing against each other in a game. One network (the discriminator) is trained to become extremely good at judging whether an image is real or fake relative to a provided sample, while the other (the generator) is trained to produce ever more realistic images. Training continues until the generator produces images that look indistinguishable from the originals to both human and machine. Once trained, the GAN can produce realistic images that are not merely direct copies of the training set.
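The adversarial game can be sketched in a few lines. The toy NumPy example below is purely illustrative and is not the Astrofake architecture: a logistic-regression discriminator plays against a one-parameter generator that learns to shift Gaussian noise toward a "real" one-dimensional distribution.

```python
import numpy as np

# Toy illustration of the GAN minimax game (not the Astrofake model):
# the discriminator D outputs the probability that a sample is real;
# the generator G maps noise to samples.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 1). The generator is a single learnable
# shift applied to unit Gaussian noise; the discriminator is logistic
# regression with weight d_w and bias d_b.
real = rng.normal(4.0, 1.0, size=256)
g_shift = 0.0          # generator parameter (starts far from the data)
d_w, d_b = 1.0, 0.0    # discriminator parameters

for step in range(500):
    noise = rng.normal(0.0, 1.0, size=256)
    fake = noise + g_shift
    # Discriminator ascends  E[log D(real)] + E[log(1 - D(fake))]
    p_real = sigmoid(d_w * real + d_b)
    p_fake = sigmoid(d_w * fake + d_b)
    d_w += 0.01 * np.mean((1 - p_real) * real - p_fake * fake)
    d_b += 0.01 * np.mean((1 - p_real) - p_fake)
    # Generator ascends  E[log D(fake)]  (the "non-saturating" objective)
    p_fake = sigmoid(d_w * (noise + g_shift) + d_b)
    g_shift += 0.05 * np.mean((1 - p_fake) * d_w)
# After training, the generator's samples have drifted toward the real
# distribution as the two players push against each other.
```

The same competitive structure carries over to images: the generator and discriminator simply become deep convolutional networks, and the "samples" become simulation snapshots.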

If we apply this technique to N-body simulations, we get a tool that can generate large-scale simulations in a fraction of a second. This will allow me to create a catalog of cosmic voids of unprecedented size and to constrain statistics such as the growth rate of voids or the largest and smallest void size at any redshift. A single Gpc-scale N-body simulation would take thousands of computation hours to run and would produce only a handful of voids; for comparison, the Dark Energy Survey's void catalog reports only ~80 voids. This project will be the first to truly characterize voids with large-number statistics, which will allow us to further constrain the growth of large-scale structure in our universe.

Cluster Research
I study the largest gravitationally bound objects in the universe (galaxy clusters) and how they lens light in accordance with Einstein's general theory of relativity. Currently, I work with Professor Tereasa Brainerd at Boston University to constrain the relative distribution of dark and luminous material in the universe in order to test predictions from ΛCDM simulations.

We are using a subset of the redMaPPer galaxy cluster catalog to study massive clusters at redshifts z ~ 0.4. To conduct a number of studies that cannot be done with the shallow SDSS images underlying redMaPPer, we make use of the Large Monolithic Imager on the Discovery Channel Telescope. Currently, I am looking at the 50 richest, most massive redMaPPer clusters, focusing on: (1) weak gravitational lensing, (2) cluster "satellite" galaxies, and (3) diffuse intracluster light. We use the weak lensing measurements to determine the individual ellipticities of the dark matter mass distributions, the degree of alignment of mass and light within each individual cluster, the frequency with which the brightest cluster galaxy coincides with the peak of the dark matter distribution, and the robustness of weak lensing magnification as a mass estimator for clusters.
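As one concrete illustration of an ellipticity measurement, a cluster's complex ellipticity can be estimated from the second moments of its projected mass map. The sketch below is a hypothetical setup: it applies the standard moment formula to a mock elliptical Gaussian rather than to a real lensing reconstruction.

```python
import numpy as np

def ellipticity(kappa):
    """Complex ellipticity e1 + i*e2 from mass-weighted second moments."""
    ny, nx = kappa.shape
    y, x = np.mgrid[0:ny, 0:nx]
    w = kappa / kappa.sum()                      # normalized weights
    xc, yc = (w * x).sum(), (w * y).sum()        # centroid
    qxx = (w * (x - xc) ** 2).sum()
    qyy = (w * (y - yc) ** 2).sum()
    qxy = (w * (x - xc) * (y - yc)).sum()
    denom = qxx + qyy + 2.0 * np.sqrt(qxx * qyy - qxy ** 2)
    return (qxx - qyy + 2j * qxy) / denom

# Mock "cluster": a Gaussian twice as extended along x as along y
# (axis ratio q = 1/2), standing in for a convergence map.
y, x = np.mgrid[-64:64, -64:64]
kappa = np.exp(-(x / 20.0) ** 2 - (y / 10.0) ** 2)
e = ellipticity(kappa)
```

For an axis ratio q the definition above gives |e| = (1 - q)/(1 + q), so the mock map with q = 1/2 should return e1 ≈ 1/3 and e2 ≈ 0, with the phase of e encoding the position angle of the mass distribution.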

We also measure the radial distribution of satellites as a function of rest-frame color to constrain the faint end of the satellite luminosity function. We plan to use the intracluster light (ICL) studies to improve constraints on the contribution of the ICL to the total cluster light, which will inform the field about the processes that occur within clusters.
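A minimal sketch of the satellite measurement, using mock data: the color cut, bin edges, and radial distributions below are all illustrative assumptions, and a real measurement would also require background subtraction and completeness corrections.

```python
import numpy as np

def radial_profile(r, color, r_bins, red_cut=0.8):
    """Projected surface number density of red/blue satellites per annulus."""
    area = np.pi * np.diff(r_bins ** 2)                          # annulus areas
    red = np.histogram(r[color > red_cut], bins=r_bins)[0] / area
    blue = np.histogram(r[color <= red_cut], bins=r_bins)[0] / area
    return red, blue

# Mock satellites: red galaxies drawn centrally concentrated, blue ones
# uniform in projected area (pure assumptions for illustration).
rng = np.random.default_rng(1)
r_red = rng.power(1.5, 3000)       # surface density falling as ~r^-0.5
r_blue = rng.power(2.0, 3000)      # flat surface density
r = np.concatenate([r_red, r_blue])
color = np.concatenate([np.full(3000, 1.0), np.full(3000, 0.5)])
r_bins = np.linspace(0.0, 1.0, 5)
red, blue = radial_profile(r, color, r_bins)
# red declines with radius; blue stays roughly flat
```

Comparing the shapes of the red and blue profiles is what lets one ask whether quenched satellites are more centrally concentrated than star-forming ones.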

Undergraduate Research
As an undergraduate at the University of South Florida I pursued thesis work in both mathematics and planetary science. On the physics side, I worked with Professor Maria Womack, writing an algorithm (available on GitHub here) that creates the best cometary lightcurves to date from amateur astronomers' observations. On the math side, I worked with Professor Mohamed Elhamdadi on algebraic knot theory.
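The GitHub repository is the authoritative reference for the actual algorithm; as a hedged illustration of one common reduction step for heterogeneous comet photometry, the sketch below corrects apparent magnitudes for geocentric distance and median-combines observations by night. The function name and binning scheme are my own assumptions, not necessarily what the thesis code does.

```python
import numpy as np

def nightly_lightcurve(jd, mag, delta):
    """Return (mean JD, median distance-corrected magnitude) per night.

    jd    : Julian dates of the observations
    mag   : apparent magnitudes reported by the observers
    delta : geocentric distances of the comet in au
    """
    # Reduce to a common geocentric distance of 1 au so that observations
    # taken weeks apart are comparable.
    corrected = mag - 5.0 * np.log10(delta)
    # Group observations by integer Julian day (night), then median-combine
    # to suppress observer-to-observer scatter.
    nights = np.floor(jd + 0.5).astype(int)
    out_jd, out_mag = [], []
    for night in np.unique(nights):
        sel = nights == night
        out_jd.append(jd[sel].mean())
        out_mag.append(np.median(corrected[sel]))
    return np.array(out_jd), np.array(out_mag)
```

Median combining is a deliberate choice here: amateur magnitude estimates can contain large observer-dependent outliers, and a median per night is far more robust to them than a mean.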