Estimation and validation of LOF-intolerant variants in essential transcription factors

Wastewater contains mtDNA at ~10^{5} copies per mL, substantially higher than other accessible sources (~10^{4} copies per 100 mL in river water, for instance), and thus wastewater will be used for this initial study \cite{Caldwell_2009,20208426}. The copy number in direct fecal samples is ~10^{9} per gram, and wastewater is approximately 99.5% water; thus 1 L of wastewater is expected to contain ~5 g of feces, corresponding to ~5×10^{9} mtDNA and ~5×10^{6} nuclear DNA copies per liter. This is comparable to the estimates obtained from direct wastewater samples and will be used going forward.
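As a sanity check, the per-liter arithmetic above can be written out explicitly. The per-gram copy numbers and the solids fraction are the assumptions stated in the text (the nuclear copy number of ~10^{6} per gram is implied by the quoted per-liter estimate):

```python
# Back-of-the-envelope check of the expected DNA yield per liter of
# wastewater; all inputs are assumptions taken from the text above.
mtdna_per_gram = 1e9        # mtDNA copies per gram of feces
nuclear_per_gram = 1e6      # nuclear copies per gram (implied by the text)
solids_fraction = 0.005     # wastewater is ~99.5% water
feces_grams_per_liter = 1000 * solids_fraction   # ~5 g of feces per liter

mtdna_per_liter = mtdna_per_gram * feces_grams_per_liter       # ~5e9
nuclear_per_liter = nuclear_per_gram * feces_grams_per_liter   # ~5e6
print(f"{mtdna_per_liter:.0e} mtDNA, {nuclear_per_liter:.0e} nuclear copies/L")
```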

# Aim 1: A sensitive and specific method for isolation of human DNA sequences from wastewater.

## Simulation of allele frequencies

## DNA purification

## Library preparation and sequencing

## Comparison to ExAC

# Aim 2: Population modeling of functional diversification in the homeobox transcription factor HOXC6.

# Aim 3: Downstream functional characterization of constrained HOXC6 variants with a high throughput sequencing assay.

# Conclusions

Mining Theorems in Compressed Sensing for NMR Gold

and 1 collaborator

## Abstract

**Introduction**

The *phase diagram* linking sampling coverage and spectral sparsity, and the *phase transition* separating the regimes of successful and unsuccessful recovery, have important implications for experiment design in multidimensional NMR, including the effect of higher magnetic fields. In addition to discussing these implications, we consider some recent results from CS.

**Reconstruction guarantees**

Reconstruction guarantees in CS are based on *coherence* \cite{Donoho_2006,Tropp_2004,Monajemi2016ACHA}, *restricted isometry property (RIP)* \cite{CandesStable}, and *phase transition* \cite{DoTa10,MoDo17,DoTa05} theories. Coherence and RIP arguments provide sufficient conditions for a successful reconstruction, which often lead to pessimistic lower bounds on the number of required samples. Phase transition theories, on the other hand, measure the exact probability of successful reconstruction and lead to accurate theoretical lower bounds that match the empirical results exactly \cite{Monajemi_2012}.
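To make the coherence-based guarantee concrete, the sketch below computes the mutual coherence of a sensing matrix and the sparsity level it certifies via the classical bound k < (1 + 1/μ)/2. A random Gaussian matrix stands in for an NMR sampling operator here, purely for illustration:

```python
import numpy as np

# Mutual coherence mu(A) = max_{i != j} |<a_i, a_j>| over unit-norm columns.
# Coherence guarantees state (roughly) that k-sparse recovery succeeds when
# k < (1 + 1/mu) / 2 -- a sufficient, and typically pessimistic, condition.
def mutual_coherence(A):
    A = A / np.linalg.norm(A, axis=0)   # normalize columns
    G = np.abs(A.T @ A)                 # absolute inner products (Gram matrix)
    np.fill_diagonal(G, 0.0)            # exclude self-inner-products
    return G.max()

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))      # stand-in for an undersampling operator
mu = mutual_coherence(A)
k_max = (1 + 1 / mu) / 2                # sparsity level certified by coherence
print(f"mu = {mu:.3f}, certified sparsity < {k_max:.2f}")
```

For random matrices of this size the certified sparsity is tiny, which is exactly the pessimism of coherence bounds that phase transition theory avoids.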

## New results for multidimensional NMR

### Coherence guarantee

Implementing Socially Aware LSTMs for Effective Crowd Navigation

and 1 collaborator

## Abstract

# Introduction

# Related Works

Stochastic Deconvolution

and 2 collaborators

# Introduction

In the deep learning literature there are many methods to produce images that correspond to particular classes or to specific neurons in a CNN [Zeiler]. Two main approaches dominate. Deconvolution methods start from an input image and highlight the pixels that activate a neuron of interest; they therefore require an image to be present. Other methods try to maximize class scores or neuron activations with respect to pixel intensities [Simonyan]. However, these methods only work well for lower-level features, i.e., shallower neurons. At higher layers, neurons represent more abstract concepts such as a dog, so an "optimal" image may contain ten dogs scattered across the image in different orientations, along with tails and ears that are not attached to any dog. We propose several potential methods that do not rely on an input image and that can create realistic images of abstract concepts corresponding to neurons "deep" in the CNN.

The key reason abstract concepts such as "dog" cannot be generated with the above method is that multiple features in multiple locations may fire the "dog" neuron, whereas real dog images do not have dogs all over the sky or gigantic ears that exist by themselves without an attached body. Since, intuitively, shallower neurons correspond to smaller features and higher-level neurons correspond to combinations of shallower features, a natural fix for unrealistic images is to gather statistics on the joint distribution of shallower features. These statistics could be used in several ways. For example, we could run the optimization method mentioned in class and then inspect the activations it generates; if the shallow-feature activations are outliers of the joint distribution, we decide to reduce the activations of certain neurons. Once those neurons are identified, we can backpropagate from each one and take gradient steps that decrease, rather than increase, its activation. This can be seen as a method combining Deconv with the method introduced by Simonyan.
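The outlier test described above can be sketched very simply. Here per-neuron z-scores against activations collected over natural images stand in for a full joint-distribution model (an assumption; the thresholds and shapes are illustrative):

```python
import numpy as np

# Flag neurons whose activation in a generated image lies far outside the
# distribution of activations observed over many natural images.
def outlier_neurons(natural_acts, generated_act, z_thresh=3.0):
    mu = natural_acts.mean(axis=0)
    sigma = natural_acts.std(axis=0) + 1e-8   # avoid division by zero
    z = (generated_act - mu) / sigma
    return np.where(np.abs(z) > z_thresh)[0]  # candidates to push back down

rng = np.random.default_rng(0)
natural = rng.normal(0.0, 1.0, size=(1000, 16))  # 1000 images x 16 neurons
generated = np.zeros(16)
generated[3] = 8.0                               # one implausibly strong neuron
flagged = outlier_neurons(natural, generated)
print(flagged)
```

The flagged indices are exactly the neurons from which we would backpropagate a *decreasing* gradient step.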

One could also conceptually maintain joint distributions over the activations of layers k and k+1 for every k below the number of layers. Suppose we want to generate the abstract concept that a neuron N represents. Initially, we could find which activations of neurons in the previous layer are associated with N firing; these most likely follow some distribution, so we can sample activations from the joint distribution with the activation of N held fixed. Repeating this over and over, we proceed back toward the image, each time fixing the activations of layer k+1 in the joint distribution and sampling the conditional for layer k.
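One step of this layer-wise procedure can be sketched under a simplifying assumption: model the joint activations of layers k and k+1 as a multivariate Gaussian fit to many images, then sample layer k conditioned on fixed layer-(k+1) activations (the Gaussian model and the toy shapes are assumptions for illustration):

```python
import numpy as np

# Sample layer-k activations conditioned on fixed layer-(k+1) activations,
# using the standard conditional formulas for a multivariate Gaussian.
def sample_layer_k_given_k1(acts_k, acts_k1, fixed_k1, rng):
    mu_k, mu_k1 = acts_k.mean(0), acts_k1.mean(0)
    joint = np.cov(np.hstack([acts_k, acts_k1]).T)   # joint covariance
    d = acts_k.shape[1]
    S_kk, S_kk1, S_k1k1 = joint[:d, :d], joint[:d, d:], joint[d:, d:]
    cond_mean = mu_k + S_kk1 @ np.linalg.solve(S_k1k1, fixed_k1 - mu_k1)
    cond_cov = S_kk - S_kk1 @ np.linalg.solve(S_k1k1, S_kk1.T)
    return rng.multivariate_normal(cond_mean, cond_cov)

rng = np.random.default_rng(0)
acts_k = rng.normal(size=(500, 4))                    # toy layer-k activations
acts_k1 = acts_k @ rng.normal(size=(4, 2)) + 0.1 * rng.normal(size=(500, 2))
sample = sample_layer_k_given_k1(acts_k, acts_k1, acts_k1[0], rng)
```

Chaining this step from the neuron N back to the input layer gives the full sampling procedure described above.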

As one can see many potential ideas seem plausible with the extra information of statistics generated from many images going through the convnet. We aim to try a few methods, improve our understanding, and then iterate to think of improved methods that might generate better images.

# Problem Statement

In this project we will use a pretrained CNN to generate random images corresponding to abstract concepts. We will use the 16-layer pretrained VGGNet model from Oxford University. We will pass many images of a specific class (obtained from ImageNet) through the network to capture statistics of neuron activations, and then use our methods to generate random images corresponding to abstract concepts. We expect to generate more realistic images than those of Simonyan, which we can test by directly comparing our generated images to those created by Simonyan.

CS221 Project Proposal: Sheepherding

Strong Lens Time Delay Challenge: I. Experimental Design

and 7 collaborators

**Abstract**: The time delays between point-like images in gravitational lens systems can be used to measure cosmological parameters as well as to probe the dark matter (sub-)structure within the lens galaxy. The number of lenses with measured time delays is growing rapidly due to dedicated efforts. In the near future, the upcoming *Large Synoptic Survey Telescope* (LSST) will monitor ∼10^{3} lens systems consisting of a foreground elliptical galaxy producing multiple images of a background quasar. In an effort to assess the present capabilities of the community to accurately measure the time delays in strong gravitational lens systems, and to provide input to dedicated monitoring campaigns and future LSST cosmology feasibility studies, we pose a “Time Delay Challenge” (TDC). The challenge is organized as a set of “ladders,” each containing a group of simulated datasets to be analyzed blindly by participating independent analysis teams. Each rung on a ladder consists of a set of realistic mock observed lensed quasar light curves, with the rungs’ datasets increasing in complexity and realism to incorporate a variety of anticipated physical and experimental effects. The initial challenge described here has two ladders, TDC0 and TDC1. TDC0 has a small number of datasets and is designed to be used as a practice set by the participating teams as they set up their analysis pipelines. The non-mandatory deadline for completion of TDC0 will be December 1, 2013. The teams that perform sufficiently well on TDC0 will then be able to participate in the much more demanding TDC1. TDC1 will consist of 10^{3} light curves, a sample designed to provide the statistical power to make meaningful statements about the sub-percent accuracy that will be required to provide competitive Dark Energy constraints in the LSST era. 
In this paper we describe the simulated datasets in general terms, lay out the structure of the challenge, and define a minimal set of metrics that will be used to quantify the goodness-of-fit, efficiency, precision, and accuracy of the algorithms. The results for TDC1 from the participating teams will be presented in a companion paper to be submitted after the closing of TDC1, with all TDC1 participants as co-authors.
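The four metric categories named above could be computed along the following lines. The exact definitions are fixed by the challenge itself; the forms below (efficiency as the fraction of light curves with a submitted estimate, a reduced chi-squared against the reported uncertainty, precision as the mean relative uncertainty, and accuracy as the mean relative bias) are plausible stand-ins for illustration only:

```python
import numpy as np

# Illustrative goodness-of-fit, efficiency, precision, and accuracy metrics
# for submitted time-delay estimates; NaN marks a light curve with no estimate.
def tdc_metrics(dt_true, dt_est, dt_err):
    ok = ~np.isnan(dt_est)                                   # submitted only
    f = ok.mean()                                            # efficiency
    chi2 = np.mean(((dt_est[ok] - dt_true[ok]) / dt_err[ok]) ** 2)
    P = np.mean(dt_err[ok] / np.abs(dt_true[ok]))            # precision
    A = np.mean((dt_est[ok] - dt_true[ok]) / dt_true[ok])    # accuracy (bias)
    return f, chi2, P, A

dt_true = np.array([10.0, 20.0, 40.0, 80.0])   # true delays (days)
dt_est  = np.array([11.0, 19.0, np.nan, 80.0]) # one unsubmitted estimate
dt_err  = np.array([1.0, 1.0, 1.0, 2.0])       # reported uncertainties
f, chi2, P, A = tdc_metrics(dt_true, dt_est, dt_err)
```

Sub-percent accuracy in this language means |A| well below 0.01 over the full TDC1 sample.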