The Design of HyperFETs

# Model

## Transistor

The transistor is modeled generically by a heavily simplified virtual-source (short-channel) MOSFET model \cite{Khakifirooz_2009}. Although this model was first defined for silicon transistors, it has been successfully adapted to numerous other contexts, including graphene \cite{Han_Wang_2011} and gallium nitride devices, both HEMTs \cite{RadhakrishnaThesis} and MOSHEMT+VO_{2} HyperFETs \cite{Verma_2017}. Following Khakifirooz \cite{Khakifirooz_2009}, the drain current *I*_{D} is expressed as \begin{equation}
\frac{I_D}{W}=Q_{ix_0}v_{x_0}F_s
\end{equation} where *Q*_{ix0} is the charge at the virtual source point, *v*_{x0} is the virtual source saturation velocity, and *F*_{s} is an empirically fitted “saturation function” which smoothly transitions between the linear (*F*_{s} ∝ *V*_{DS}/*V*_{DSSAT}) and saturation (*F*_{s} ≈ 1) regimes. The charge in the channel is described by the following semi-empirical form, first proposed for CMOS-VLSI modeling \cite{Wright_1985} and employed frequently since, often with modifications (e.g. \cite{Khakifirooz_2009, RadhakrishnaThesis}): \begin{equation}
Q_{ix_0}=C_\mathrm{inv}nV_\mathrm{th}\ln\left[1+\exp\left\{\frac{V_{GSi}-V_T}{nV_\mathrm{th}}\right\}\right]
\end{equation} where *C*_{inv} is an effective inversion capacitance for the gate, *nV*_{th} ln 10 is the subthreshold swing of the transistor, *V*_{GSi} is the transistor gate-to-source voltage, *V*_{T} is the threshold voltage, and *V*_{th} is the thermal voltage *kT*/*q*.

For precise modeling, Khakifirooz includes further adjustments of *V*_{T} due to the drain voltage (DIBL parameter) and the gate voltage (strong vs. weak inversion shift), as well as a functional form of *F*_{s}. For a first pass, we will ignore these effects, employ a constant *V*_{T}, and assume the supply voltage is maintained above the gate overdrive so that *F*_{s} ≈ 1. However, we will add a leakage floor with conductance *G*_{leak}. Altogether, the final current expression (for the analytical part of this analysis) is \begin{equation}
\frac{I_D}{W}=nv_{x_0}C_\mathrm{inv}V_{th}\ln\left[1+\exp\left\{\frac{V_\mathrm{GSi}-V_\mathrm{T}}{nV_{th}}\right\}\right]+\frac{G_\mathrm{leak}}{W}V_\mathrm{DSi}\label{eq:transistor_iv}
\end{equation}
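As a quick numerical sketch of this final expression (all parameter values below are illustrative assumptions, not fitted device data):

```cpp
#include <cmath>

// Sketch of the final current expression: the virtual-source channel charge
// with F_s ≈ 1, plus a linear leakage floor. Parameter defaults are
// illustrative assumptions only.
double drain_current_per_width(double Vgs, double Vds,
                               double Vt = 0.3,       // threshold voltage [V]
                               double n = 1.2,        // subthreshold ideality factor
                               double Vth = 0.0259,   // thermal voltage kT/q [V]
                               double Cinv = 2.5e-2,  // inversion capacitance [F/m^2]
                               double vx0 = 1.0e5,    // virtual-source velocity [m/s]
                               double Gleak_W = 1e-9) // leakage conductance / width [S/m]
{
    // Q_ix0 = C_inv * n * V_th * ln(1 + exp((V_GSi - V_T) / (n V_th)))
    double Qix0 = Cinv * n * Vth * std::log1p(std::exp((Vgs - Vt) / (n * Vth)));
    return vx0 * Qix0 + Gleak_W * Vds;   // I_D / W
}
```

Note the two regimes of the logarithm: well below *V*_{T} the current decays exponentially down to the leakage floor, while well above *V*_{T} the charge term becomes linear in the gate overdrive.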

AEP 4830 HW9 Monte Carlo Calculations

The purpose of this homework is to explore the Monte Carlo algorithm and apply it to a simplified 2D protein folding model.

# Monte Carlo Method

The Monte Carlo method randomly generates possible solutions to a problem in a solution space and tests their degree of goodness against certain physical requirements\cite{NumRec}. There are usually two ways of generating the possible solutions. First, we can generate them entirely at random; for example, we use a random number generator to perform Monte Carlo integration. Second, we can generate a possible solution from the previous step by randomly changing some parameters of the previous one. We will use the latter approach to generate our 2D protein structures in this homework.

The general flow of the Monte Carlo method is as follows. Note that we use the term “conformation space” instead of “solution space” since we are dealing with protein structures here.

1. Start from an initial state in the conformation space.
2. Randomly change the previous state, subject to requirement 1.
3. Determine the degree of goodness by criterion 2.
4. Accept or reject this state by physical rule 3.
5. If it is accepted, pass this state on and repeat steps 2 through 5 for a certain number of steps.
6. If it is rejected, do not pass the state on and repeat steps 2 through 4 until a new state is accepted.

Requirement 1, criterion 2, and rule 3 are problem-specific, and we will specify them for our protein folding problem.

# 2D Protein Folding

Proteins are composed of 20 different amino acids (AAs) in a polypeptide chain, and due to the mutual interactions between those AAs, proteins favor certain folded states that lower the Gibbs free energy. The interactions are mostly negative (attractive) because of hydrophobic effects or ion-ion interactions. Their unique structures are essential for proteins to perform their biological functions. We can use a simple bead-and-chain model for a 2D protein chain\cite{S_ali_1994}, assuming that all AAs are the same size and that the peptide bond between two AAs is rigid and unstretchable, exactly one unit long. Each AA occupies one grid point of the 2D lattice and cannot occupy the same point as any other AA. When the protein folds, non-covalent interactions act between two non-bonded AAs separated by one unit. We can then calculate the relative Gibbs free energy *ΔG* by summing the interactions of all non-bonded neighbors.

\begin{equation}
\Delta G = E_0 = \sum_{(i,j)} E_{t(i)t(j)}
\end{equation} where (*i*, *j*) are the indices of two neighboring AAs of types (*t*(*i*), *t*(*j*)) and *E* is a 20 × 20 interaction matrix.
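As an illustration, the sum above can be evaluated directly from a list of bead positions. The struct, function name, and the tiny interaction matrix in this sketch are assumptions for demonstration; the actual code uses a 20 × 20 matrix and a different protein representation.

```cpp
#include <vector>
#include <cstdlib>

// Sketch of the energy sum: add interaction-matrix entries over all pairs of
// AAs that are lattice neighbors (one unit apart) but are NOT adjacent along
// the chain. Names and sizes are illustrative assumptions.
struct AA { int type, x, y; };

double foldingEnergy(const std::vector<AA>& chain,
                     const std::vector<std::vector<double>>& E)
{
    double E0 = 0.0;
    for (size_t i = 0; i < chain.size(); ++i) {
        for (size_t j = i + 2; j < chain.size(); ++j) {  // skip bonded pairs (i, i+1)
            int dx = std::abs(chain[i].x - chain[j].x);
            int dy = std::abs(chain[i].y - chain[j].y);
            if (dx + dy == 1)                            // exactly one unit apart
                E0 += E[chain[i].type][chain[j].type];
        }
    }
    return E0;
}
```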

With this model in mind, we can determine the requirements mentioned in the previous section.

Requirement 1:

- The modified AA cannot occupy another AA’s position.
- The modified AA must be one unit away from its chain neighbor(s).
- The best way to modify an AA’s position is to move it by one of the eight possible displacements (1, 0), (1, 1), (0, 1), (−1, 1), (−1, 0), (−1, −1), (0, −1), and (1, −1).

Criterion 2: Evaluate the interaction energy *E*_{0} and use this number to determine the goodness of the state. The lower, the better.

Rule 3:

- If the new state has lower *E*_{0}, the protein will adopt this state in order to reach the minimum of the folding landscape.
- If the new state has higher *E*_{0}, the protein does not favor such a state. However, there is still some probability of jumping from a lower energy state to a higher one, *P* = *e*^{−(*E*_{new} − *E*_{0})/*kT*}.
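Rule 3 is the Metropolis criterion, which can be sketched as follows. Here *rand01()* is an assumed stand-in, built on the C library *rand()*, for the homework's own generator:

```cpp
#include <cmath>
#include <cstdlib>

// Metropolis criterion (rule 3): always accept a move that lowers E_0;
// accept an uphill move with probability exp(-(E_new - E_old)/kT).
// rand01() is an assumed uniform [0,1) generator based on rand().
double rand01() { return std::rand() / (RAND_MAX + 1.0); }

bool acceptMove(double E_old, double E_new, double kT)
{
    if (E_new <= E_old) return true;                 // downhill: always accept
    return rand01() < std::exp(-(E_new - E_old) / kT);
}
```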

Once the model is set and the steps are clear, we can start to do the simulation.

# Program Codes

First, we need a general random number generator, *myrand(seed)*. We will test its validity and then use it to alter the position of a randomly selected AA. Given different *seed* values, the function produces different random number sequences. Our seeds for generating the interaction matrix *E* and the AA sequence of the protein are two distinct but fixed values, which guarantees that we use exactly the same protein and interactions throughout the calculation. All other seeds are set by *time(NULL)* and are therefore independent of our bias.

Second, there are several subroutines that perform the Monte Carlo calculations and make sure the protein satisfies the requirements. Note that the protein information is stored in a 45 × 3 matrix, with the first column holding the AA types, the second the x positions, and the third the y positions.

- *neighbor()*: inputs a Protein Vector and outputs the pairs of indices of non-bonded neighboring AAs.
- *Energy()*: inputs the pairs of neighbor indices, the Protein Vector, and the interaction matrix *E*, and outputs the energy *E*_{0}.
- *n2ndistance()*: inputs a Protein Vector and outputs the end-to-end distance of the protein.
- *pcheck()*: inputs a Protein Vector and the index of a certain AA and checks whether that AA occupies another’s position. The function outputs *true* if the protein is not allowed, *false* otherwise.
- *conformationchange()*: inputs a Protein Vector, makes a position change to one of its AAs, and outputs the modified new Protein Vector.
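As a sketch of the validity check performed by *pcheck()*, combined with the bond-length requirement, assuming a simple struct-based representation rather than the 45 × 3 matrix:

```cpp
#include <vector>

// After moving AA k, the conformation is invalid if AA k lands on another
// AA's site, or is no longer exactly one unit from its chain neighbor(s).
// Following pcheck()'s convention, true means "not allowed".
struct Bead { int x, y; };

bool invalidConformation(const std::vector<Bead>& p, int k)
{
    for (int i = 0; i < (int)p.size(); ++i)          // overlap check
        if (i != k && p[i].x == p[k].x && p[i].y == p[k].y)
            return true;
    for (int nb : {k - 1, k + 1}) {                  // bond-length check
        if (nb < 0 || nb >= (int)p.size()) continue;
        int dx = p[nb].x - p[k].x, dy = p[nb].y - p[k].y;
        if (dx * dx + dy * dy != 1) return true;
    }
    return false;                                    // conformation allowed
}
```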

# Results

First we tested the random number generator *myrand()*. It gives a uniform distribution of numbers between 0 and 1, and the points (*x*_{n + 1}, *x*_{n}) cover the 1 × 1 square without noticeable patterns, as shown in Fig. 1. We further tested it by estimating *π*.

\begin{equation}
\frac{\pi}{4} = \frac{N_{in}}{N}
\end{equation} where *N*_{in} is the number of points inside the quarter circle and *N* is the total number of points. As the total number of points increases, the RHS approaches $\frac{\pi}{4}$ asymptotically, as shown in Fig. 2.
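The estimate above can be reproduced with a few lines, here using the C library *rand()* in place of *myrand()*:

```cpp
#include <cmath>
#include <cstdlib>

// Throw N uniform points into the unit square and count those inside the
// quarter circle x^2 + y^2 < 1; the hit fraction estimates pi/4.
double estimatePi(int N, unsigned seed = 12345)
{
    std::srand(seed);
    int Nin = 0;
    for (int n = 0; n < N; ++n) {
        double x = std::rand() / (RAND_MAX + 1.0);
        double y = std::rand() / (RAND_MAX + 1.0);
        if (x * x + y * y < 1.0) ++Nin;
    }
    return 4.0 * double(Nin) / N;
}
```

The statistical error shrinks as $1/\sqrt{N}$, which is the slow but dimension-independent convergence characteristic of Monte Carlo integration.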

Macalester POTW 1201: Problem 1201. What Goes Up Might Not Come Down

## Problem Statement

A random walk on the 2-dimensional integer lattice begins at the origin. At each step, the walker moves one unit either left, right, or up, each with probability $\frac13$. (No downward steps ever.) A walk is a success if it reaches the point (1, 1). What is the probability of success?

Note: One can vary the problem by varying the target point, e.g., using (1, 0) or (0, 1) instead. Perhaps there is a good method to resolve the general case of target (*a*, *b*).

Source: Bruce Torrence, Randolph-Macon College
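A Monte Carlo simulation gives a numerical estimate of the success probability (a sketch under the stated rules, not a closed-form solution). Once the walker's height exceeds 1 it can never return, so each trial terminates quickly:

```cpp
#include <cstdlib>

// Simulate the walk: left/right/up each with probability 1/3, and count
// trials that ever visit (1, 1). A trial ends once y exceeds 1.
double successProbability(int trials, unsigned seed = 2024)
{
    std::srand(seed);
    int hits = 0;
    for (int t = 0; t < trials; ++t) {
        int x = 0, y = 0;
        while (y <= 1) {
            if (x == 1 && y == 1) { ++hits; break; } // reached the target
            int r = std::rand() % 3;
            if (r == 0) --x;
            else if (r == 1) ++x;
            else ++y;                                // no downward steps
        }
    }
    return double(hits) / trials;
}
```

Running this with a few hundred thousand trials gives an estimate clustered near 0.4, suggesting the exact answer may be a simple rational number.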

AEP 4830 HW6 Boundary Value Problem and Relaxation

# Finite Difference Algorithm

We have used the Runge-Kutta method to solve ODEs with initial conditions; however, many physical systems cannot be described by ODEs alone. Partial differential equations with boundary conditions are unavoidable for systems such as the time-independent Schrodinger equation and electromagnetic fields. These equations involve partial derivatives in the spatial coordinates with specified boundaries and/or sources. When solving such systems numerically, the finite difference algorithm is a straightforward and easy-to-program method.

We will use the 2D Laplace equation as an example to demonstrate the algorithm.

\begin{equation}
\nabla^2\phi(x,y) = 0
\end{equation} The finite difference algorithm requires meshing the space of interest, so that the value on each grid point (*i*, *j*) represents *ϕ*(*x* = *ih*_{1}, *y* = *jh*_{2}) = *ϕ*_{ij}, where *h*_{1} and *h*_{2} are the spacings along the *x* and *y* axes. It is also necessary to set up a “flag” grid which records whether a given point (*i*, *j*) lies on a boundary or source (electrode). Next, based on the finite differences between *ϕ*_{ij} and its adjacent values, the second partial derivatives can be approximated, and we can solve for *ϕ*_{ij} in terms of its neighbors at each iteration.\cite{NumRec}

\begin{equation}
\phi_{ij}^{FD} = \frac{1}{4}\left( \phi_{i-1,j}+\phi_{i+1,j}+\phi_{i,j+1}+\phi_{i,j-1} \right)
\end{equation} The flag grid acts like a mask, leaving all the specified boundary values unchanged during the iterations. Similar to the Runge-Kutta method, we evaluate the maximum error in each iteration.

\begin{equation}
\Delta\phi = \max_{i,j} \left| \phi_{ij}^{FD}-\phi_{ij} \right|
\end{equation} If the maximum error exceeds the specified tolerance, *Δϕ* > *ϵ*_{max}, we pass the new values of *ϕ*_{ij} into Eq. (2) and repeat the calculation until the error falls below the tolerance or the number of iterations exceeds a set upper bound. The new value *ϕ*_{ij}^{new} can take the following form, a weighted average of *ϕ*_{ij} and *ϕ*_{ij}^{FD}.

\begin{equation}
\phi_{ij}^{new} = (1-\omega)\phi_{ij} + \omega\phi_{ij}^{FD}
\end{equation} where 0 < *ω* < 2. If 1 < *ω* < 2, the algorithm converges more quickly; this is known as the successive over-relaxation algorithm. The regime 0 < *ω* < 1 corresponds to under-relaxation, where the solution converges relatively slowly.
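The finite-difference update, error check, and over-relaxation step described above can be sketched as follows for the Cartesian case; the grid representation, flag convention, and names are illustrative (the actual code uses *arrayt.h* matrices):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

using Grid = std::vector<std::vector<double>>;

// Sweep the grid, replacing each non-flagged interior point by the weighted
// average of its old value and the four-neighbor mean phi_FD, until the
// maximum per-sweep change falls below epsMax. Returns the iteration count.
int relaxLaplace(Grid& phi, const std::vector<std::vector<int>>& flag,
                 double omega, double epsMax, int maxIter)
{
    int nx = phi.size(), ny = phi[0].size();
    for (int iter = 1; iter <= maxIter; ++iter) {
        double err = 0.0;
        for (int i = 1; i < nx - 1; ++i)
            for (int j = 1; j < ny - 1; ++j) {
                if (flag[i][j]) continue;   // boundary/electrode: keep fixed
                double fd = 0.25 * (phi[i-1][j] + phi[i+1][j]
                                  + phi[i][j+1] + phi[i][j-1]);
                double newv = (1.0 - omega) * phi[i][j] + omega * fd;
                err = std::max(err, std::fabs(newv - phi[i][j]));
                phi[i][j] = newv;           // in-place (Gauss-Seidel) update
            }
        if (err < epsMax) return iter;      // converged
    }
    return maxIter;
}
```

Updating in place, so that each point immediately sees its already-updated neighbors, is what makes over-relaxation with *ω* > 1 stable and fast.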

Here, we will solve the Laplace equation in a cylinder containing three disk electrodes with given inner and outer radii. Instead of using Eq. (2), we use the finite difference form in cylindrical coordinates and assume azimuthal symmetry.

\begin{equation}
\phi_{ij}^{FD} = \frac{1}{4}\left( \phi_{i-1,j}+\phi_{i+1,j}+\phi_{i,j+1}+\phi_{i,j-1} \right)+\frac{1}{8j}\left( \phi_{i,j+1}-\phi_{i,j-1} \right)
\end{equation} where *z* = *ih* and *r* = *jh*. The above equation applies at points where *r* ≠ 0. On the central axis, where *r* = 0, we should have

\begin{equation}
\phi_{ij}^{FD} = \frac{1}{6}\left( 4\phi_{i,j=1}+\phi_{i+1,j=0}+\phi_{i-1,j=0}\right)
\end{equation} In order to use the finite difference algorithm, we need to set up matrices in C++. We used *arrayt.h* to create the matrices and perform the calculations.

# Over-Relaxation

First, we investigated the important role of *ω*. We set the grid spacing to *h* = 0.1 and the error tolerance to *ϵ*_{max} = 2, and calculated the number of iterations needed for the error to fall below the tolerance. The results are shown in the following table.

| *ω* | 1.0 | 1.2 | 1.4 | 1.6 | 1.8 | 1.9 |
|---|---|---|---|---|---|---|
| Number of iterations | 655 | 511 | 431 | 371 | 359 | 1077 |

The best choice of *ω* seems to be around *ω* = 1.8 and we would use this over-relaxation value for the following calculation.

# Solutions

For solving the electric potential inside the cylinder with three disk electrodes and boundaries, the grid spacing is the same, *h* = 0.1, while the error tolerance is much smaller, *ϵ*_{max} = 0.01. This one percent error increases the number of iterations significantly, to *N* = 4231. The contour plot of the numerical solution is shown in Fig. 1, where the red line represents the middle disk electrode held at *ψ* = 2000 V and the black line and rectangle are the two grounded electrodes at *ψ* = 0 V. The electric potential contours are distorted by the two grounded electrodes, which compress the potential lines in the z direction. The potential at about *r* = 10 mm is symmetric, while that closer to the central axis is slightly asymmetric due to the mismatch in the inner radii of the two grounded electrodes. The electric potential is mainly confined among the disks and near the origin; outside of the disks, the potential is negligibly small everywhere. This configuration is used in the electron guns of electron microscopes to focus electrons by adjusting the fixed voltage of the center electrode.

AEP 4830 HW5 Three or More Body Problem

AEP 4830 HW4 Ordinary Differential Equations and Chaotic Systems

PHYS6562 F2 Daily Question

PHYS6562 M3 Daily Question

## Weirdness in high dimensions

Most of the volume of a high-dimensional sphere is near its surface. In 3N dimensions the volume scales as *V* ∝ *r*^{3N}, so near the center, where *r* ≈ 0, essentially no volume accumulates, while radii near the surface, where *r*^{3N} is largest, contribute most of the volume.
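This can be made quantitative: since the volume scales as *r*^{d}, the fraction lying within the outer shell of fractional thickness *t* is 1 − (1 − *t*)^{d}, which rushes to 1 as *d* = 3N grows. A small sketch:

```cpp
#include <cmath>

// Volume fraction of a d-dimensional ball lying within the outer shell of
// fractional thickness t (radii between (1 - t)R and R). Since the ball's
// volume scales as r^d, this fraction is 1 - (1 - t)^d.
double shellFraction(int d, double t)
{
    return 1.0 - std::pow(1.0 - t, d);
}
```

For N = 1000 particles (d = 3000), the outer 1% shell already contains essentially all of the volume, while in d = 3 it holds only about 3%.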

For statistical mechanics, instead of taking the whole energy sphere into account, which would involve a high-dimensional integral, it’s easier to focus on the shell of the energy sphere. The momentum magnitude can then be taken as $p = \sqrt{2mE}$ instead of evaluating

\begin{equation} \int_0^{\sqrt{2mE}} (....)dp \end{equation}

PHYS6562 W2 Daily Question

## Local conservation

a. The density of tin atoms *ρ*_{Sn}(**x**) is locally conserved since there is no tin atom created or annihilated.

b. It depends. Under the constraint that no one can change his or her political belief, yes, the density of Democrats is locally conserved. However, if changes of political belief are allowed, the density of Democrats is not locally conserved: Democrats may turn into Republicans or vice versa, which means Democrats or Republicans can effectively be created or annihilated.

## Density-dependent diffusion

Suppose there is no external force. We can write the current as

\begin{equation} J = -D(\rho)\nabla\rho \end{equation}

and apply the continuity equation

\begin{equation} \frac{\partial\rho}{\partial t} = -\nabla\cdot J \end{equation}

which, by the chain rule on D(ρ), gives

\begin{equation} \frac{\partial\rho}{\partial t} = \frac{\partial D(\rho)}{\partial\rho} (\nabla\rho)^2 + D(\rho)\nabla^2\rho \end{equation}

This is the diffusion equation with a density-dependent diffusion coefficient.