Public Articles
Analysis of soccer matches with clustering of player trajectories and Mutual Information based metric
Abstract
The main goal of this project is to extract high level semantic cues from soccer match video sequences, using machine learning and computer vision techniques.
During the first year, I explored state-of-the-art techniques for extracting player trajectories from videos of a match recorded with several static cameras.
During the second year, I focused on the problem of clustering player trajectories, and in particular on designing a reliable and efficient metric to establish correspondences between trajectories.
Rebalancing and leverage
Rebalancing does not need to have a contrarian flavour. For this reason, the bare claim that rebalancing has benefits is empty: it needs further qualification. I illustrate this using a leveraged portfolio.
As usual, I take a world with two assets, cash bearing zero interest rate and a risky asset with price \(p\). At inception, investment is initiated with one dollar. The initial price is \(p_{0}=1\). The initial amount is leveraged to reach an exposure to the risky asset of \(\pi>1\). The position in cash is thus \(1-\pi \lt 0\).
Assuming the portfolio is continuously rebalanced and the price trajectory is smooth, its value as a function of the price is given by:
\[V(p)=p^{\pi},\]
and the number of shares held as a function of the price is:
\[n(p)=\pi p^{\pi-1}.\]
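For the record, the value function follows from the self-financing condition: rebalancing to a constant exposure \(\pi\) means the portfolio return is always \(\pi\) times the asset return, so along a smooth path
\[\frac{dV}{V}=\pi\,\frac{dp}{p},\qquad V(1)=1\quad\Rightarrow\quad V(p)=p^{\pi},\qquad n(p)=\frac{\pi V(p)}{p}=\pi p^{\pi-1}.\]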
This is an increasing function of the price so that the investment policy consists in buying more shares as they go up, and selling shares as they go down. Rebalancing to a leveraged exposure is a momentum policy, as opposed to a contrarian policy.
I show the leveraged trading policy and the leverage value function versus the corresponding buy-and-hold quantities in the graphs below.
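The continuous-time formula can be checked numerically. The sketch below (my own illustration, not part of the original post) rebalances to a constant exposure \(\pi\) at every step of a finely discretized smooth path and compares the terminal value with \(p^{\pi}\).

```python
def rebalanced_leverage(prices, pi=2.0):
    """Rebalance to a constant exposure pi (leveraged if pi > 1,
    with a negative cash position 1 - pi) at every price observation."""
    value = 1.0
    shares = pi * value / prices[0]
    cash = value - shares * prices[0]
    for p in prices[1:]:
        value = shares * p + cash   # mark to market
        shares = pi * value / p     # rebalance back to exposure pi
        cash = value - shares * p
    return value

# A smooth path from p0 = 1 up to p = 1.5, finely discretized.
steps = 10_000
path = [1.0 + 0.5 * k / steps for k in range(steps + 1)]
v = rebalanced_leverage(path, pi=2.0)
# With frequent rebalancing on a smooth path, v approaches p**pi = 1.5**2.
```

For \(\pi=1\) the rule degenerates to buy-and-hold and simply returns the asset performance.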
Prediction of the carbonyl index of expanded polystyrene using function approximation with artificial neural networks
In this article, a mathematical function-approximation model based on Artificial Neural Networks (ANN) is fitted to the degradation behaviour of EPS (expanded polystyrene) in a CPC (compound parabolic concentrator). The objective is to obtain the carbonyl index (CI) of the EPS. The input values, obtained experimentally while solar energy is concentrated on the absorber of the CPC, consist of some of the factors that drive the deterioration of EPS: temperature (T), sample exposure time (t), and UVB radiation (W/m2). The model is useful for predicting the deterioration index of the polymer subjected to solar concentration in the CPC throughout the year.
Facilitating pediatric diagnosis using dynamic programming approach
The scenario of maternal and pediatric health in the Philippines has been an ongoing problem. This paper discusses a tool developed to expedite the process of diagnosing, informing the patient and prescribing the appropriate medication using novel techniques in dynamic programming. By integrating a medical knowledge base, existing patient data and an inference engine, we are able to generate case-specific advice, acting as a decision support system for medical practitioners. The goal is to facilitate improved delivery of diagnoses, aiding physicians in giving timely and appropriate advice. A team of pediatric physicians piloted and tested this tool, garnering an average usability rating of 9.2 out of 10.
Keywords: inference engine, dynamic programming, knapsack, e-health
Ideas Adventure 2016 diary
Characterizing Homogeneous Raw Material via Non-Invasive X-ray Fluorescence Spectroscopy
Rebalancing: confirming your intuition
As a follow-up to the post, "Rebalancing: testing your intuition", I give the solution.
The value of the buy and hold portfolio is always:
\[V_{bh}(p)=0.5+0.5\,p.\]
It is a linear function of the price since the number of shares held is constant.
Assuming the price trajectories are smooth, the value of the continuously rebalanced portfolio is the solution of:
\[\frac{dV_{rb,u}}{V_{rb,u}}=0.5\,\frac{dp_{u}}{p_{u}},\]
and this is just:
\[V_{rb}(p)=p^{0.5},\]
assuming standard college calculus applies (this solves the equation and fits the initial condition).
Both value functions equal \(1\) for \(p=1\) since both portfolios are initialized with one dollar. As soon as the price deviates from one, the rebalanced portfolio holds an active position, relative to the buy-and-hold portfolio, with the wrong sign (underweight if the price has gone up, overweight if it has gone down). The value functions are graphed below.
Under these assumptions, the rebalanced portfolio always underperforms the buy-and-hold portfolio. Its value does not depend on the precise price trajectory, it only depends on the endpoint. Continuous rebalancing does not seem to be able to add value in this context. I'll tackle discrete rebalancing in the next post.
Remember that \(p\) can be interpreted as a relative price when the portfolio comprises two risky assets. Clearly, rebalancing is not offering good prospects if the relative price drifts away from its starting point.
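To confirm the path-independence claim numerically, here is a small sketch (my own, with an arbitrary illustrative path): a 50/50 portfolio rebalanced at every step of a finely discretized path that rises to 1.8 and falls back to 1.44 should end near \(\sqrt{1.44}=1.2\), while buy-and-hold ends at \(0.5+0.5\times 1.44=1.22\) regardless of the route taken.

```python
def portfolio_value(prices, w=0.5, rebalance=True):
    """Terminal value of a one-dollar portfolio holding weight w in the
    risky asset, rebalanced at every step or left as buy-and-hold."""
    value = 1.0
    shares = w * value / prices[0]
    cash = value - shares * prices[0]
    for p in prices[1:]:
        value = shares * p + cash
        if rebalance:
            shares = w * value / p
            cash = value - shares * p
    return value

# Piecewise-linear path: 1.0 -> 1.8 -> 1.44, in many small steps.
up = [1.0 + 0.8 * k / 10_000 for k in range(10_001)]
down = [1.8 - 0.36 * k / 10_000 for k in range(1, 10_001)]
path = up + down
v_rb = portfolio_value(path, rebalance=True)    # close to 1.44 ** 0.5 = 1.2
v_bh = portfolio_value(path, rebalance=False)   # exactly 0.5 + 0.5 * 1.44 = 1.22
```

The rebalanced value depends only on the endpoint, matching \(p^{0.5}\) up to discretization error.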
Rebalancing: testing your intuition
If you are confident that you understand the benefits of rebalancing, read this post to test your intuition. Confronting these intuitions is, I believe, the only way to really understand the rebalancing problem.
As in a previous post, I assume there are two stocks. The first stock is taken as the numeraire so that its price is constant. The second stock is quoted in terms of the first one, and its relative price is \(p\). Alternatively, one can see this as a universe with cash (paying zero interest rate) and a risky asset with price \(p\). In all cases, the return of the first asset is always zero. I assume the price is initialized at \(1\).
I consider two policies. The first one continuously rebalances so that the weight of the portfolio on the risky asset is \(50\%\) at all times. The second one starts with a weight of \(50\%\) on the risky asset but never rebalances. Both strategies are funded with one dollar at inception. I then look at the relative pay-off \(R\) of the two strategies, i.e. long the rebalanced portfolio, short the buy-and-hold portfolio.
Assuming the risky asset experiences a cycle, i.e. its price moves up from \(1\) to \(p_{max}>1\) and then back to \(1\), what is the relative return \(R\)?
Proposed answer and intuition: Both portfolios make the same return. Indeed, the rebalanced strategy is short the risky asset (vis-à-vis the buy-and-hold portfolio) on the way up and this leads it to underperform. But symmetrically, it is short the risky asset on the way down. This leads it to outperform. This outperformance matches the initial underperformance, and the net result is zero.
The situation is symmetric if the risky asset price moves down and then back to \(1\).
Assuming the risky asset drifts away from \(1\) to end at \(p_{end} > 1\), what is the relative return \(R\)?
Proposed answer and intuition: The rebalanced portfolio is underweight (vis-à-vis the buy-and-hold portfolio) the risky asset which in the end outperforms. Thus the rebalanced portfolio underperforms along such trajectories.
The situation is symmetric if the end price is strictly lower than \(1\).
Tentative conclusion:
If the intuition above is correct, the pay-off of the relative strategy is never positive: it is zero along price cycles and negative along diverging trajectories.
Advanced analytical question:
What is the pay-off of the buy-and-hold portfolio? What is the pay-off of the rebalanced portfolio? What is the relative return of the two strategies?
Rebalancing as contrarian trading
Rebalancing is the act of restoring portfolio weights or numbers of shares to prespecified values. The need to rebalance is usually created by the natural tendency of portfolio weights to drift in the absence of any action. Seen at the level of holdings, rebalancing materializes as a need to change the number of shares held.
As explained in the previous post, rebalancing is easily understood when the target portfolio weights are constant. In the absence of rebalancing, portfolio weights drift away from their targets unless all asset returns turn out to be equal. But the term rebalancing is often used even when targeted portfolio weights are time varying. In the equity context, smart beta indices typically have time varying portfolio weights. A conjecture in Bouchey[2015] is that smart beta indices beat cap weighted portfolios because they rebalance. Technically, they rebalance since they are not buy-and-hold portfolios, but why should that contribute to performance? What assumptions are needed to rationalize the idea that rebalancing towards time varying weights adds value?
Rebalancing is just trading and the question can then be recast as: which characteristic of the trading process induced by smart beta indices would ensure that it adds value? No explicit answer is given to that question by defenders of the rebalancing edge, as far as I am aware. So here is my proposition.
One should distinguish contrarian trading from momentum trading. Let's cast this in a set up where there are two stocks. The first stock is taken as the numeraire, so its price is constant. The second stock has a time varying price \(p\) quoted in terms of the numeraire, which I initialize at \(1\) at inception. I assume that the portfolio is initialized with \(n_{0}\) shares of the second stock. Subsequently, trading ensures that I hold \(n(p)\) shares of the second stock if the price is \(p\). I'll fix \(n(1)=n_{0}\) as a simplification.
As a baseline, buy-and-hold is the rule \(n(p)=n_{0}\) whatever the price \(p\). Contrarian trading is the situation where \(n(\cdot)\) is a decreasing function of \(p\): I sell shares as the price rises, or buy shares as the price falls. Momentum trading is the situation where \(n(\cdot)\) is an increasing function of \(p\): I buy shares as the price rises, or sell shares as the price falls.
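The distinction can be made mechanical. The sketch below (my own illustration of the definitions, with hypothetical policies) classifies a trading rule \(n(p)\) by probing its monotonicity on a price grid; the constant-50%-weight rule \(n(p)=0.5\,p^{-0.5}\) comes out contrarian, while the leveraged rule \(n(p)=2p\) comes out momentum.

```python
def classify_policy(n, prices):
    """Classify a trading rule n(p) (shares held as a function of price)
    as contrarian, momentum, or buy-and-hold by probing a price grid."""
    values = [n(p) for p in sorted(prices)]
    diffs = [b - a for a, b in zip(values, values[1:])]
    if all(d == 0 for d in diffs):
        return "buy-and-hold"
    if all(d <= 0 for d in diffs):
        return "contrarian"
    if all(d >= 0 for d in diffs):
        return "momentum"
    return "mixed"

grid = [0.5 + 0.1 * k for k in range(11)]   # prices 0.5 .. 1.5
labels = [
    classify_policy(lambda p: 100.0, grid),           # constant shares -> buy-and-hold
    classify_policy(lambda p: 0.5 * p ** -0.5, grid), # constant 50% weight -> contrarian
    classify_policy(lambda p: 2.0 * p, grid),         # leveraged pi = 2 -> momentum
]
```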
Intuitively, contrarian trading might beat buy-and-hold if the price has a tendency to move away from its initial value and then revert back. Momentum trading might beat buy-and-hold if the price moves away and never comes back. I say might because we need other assumptions to make sense of these intuitions (discrete trading, for instance). If these intuitions are robust, then we might hope that the following situation holds for smart beta indices versus cap weighted indices (which buy and hold): smart beta indices trade in a contrarian fashion, and the relevant relative prices cycle rather than trend.
Now the case is not won. Market prices are not cyclical in any obvious way. Smart beta indices have time varying weights and it is therefore not obvious at all that smart beta indices induce contrarian trading. One needs other assumptions to make sense of these claims and subsequently, a structured empirical investigation is needed to check whether these assumptions hold in the data. But it looks like we now have a plan.
Bouchey [2015]: P. Bouchey, V. Nemtchinov and T.L. Wong, Volatility Harvesting in Theory and in Practice, The Journal of Wealth Management, Vol. 18, pp. 89-100.
GADFLY: A pandas-based Framework for Analyzing GADGET Simulation Data
\label{sec:intro}
In the past decade, astrophysical simulations have increased dramatically in both scale and sophistication, and the typical size of the datasets produced has grown accordingly. However, the software tools for analyzing such datasets have not kept pace, such that one of the primary barriers to exploratory investigation is simply manipulating the data. This problem is particularly acute for users of the popular smoothed particle hydrodynamics (SPH) code gadget \citep{SpringelYoshidaWhite2001,Springel2005}. Both gadget and gizmo \citep{Hopkins2015}, which uses the same data storage format, are widely used to investigate a range of astrophysical problems; unfortunately this also leads to fragmentation of the data storage format as each research group modifies the output to suit its needs. This state of affairs has historically forced significant duplication of effort, with individual research groups separately developing their own unique analysis scripts to perform similar operations.
Fortunately, the issue of data management and analysis is not endemic to astronomy, and the resulting overlap with the needs of the broader scientific community and of industry at large provides a large pool of scientific software developers to tackle these common problems. In recent years, this broader community has settled on python as its programming language of choice due to its efficacy as a ‘glue’ language and the rapid development it allows. This has led to a robust scientific software ecosystem, with packages for numerical data analysis like numpy \citep{VanderWaltColbertVaroquaux2011}, scipy \citep{JonesOliphantPeterson2001}, and pandas \citep{McKinney2010}; matplotlib \citep{Hunter2007} for plotting; and packages for machine learning, statistics, and modeling like emcee \citep{Foreman-Mackeyetal2013}.
Python is quickly becoming the language of choice for astronomers as well, with the Astropy project \citep{Robitailleetal2013} and its affiliated packages providing a coordinated set of tools implementing the core astronomy-specific functionality needed by researchers. Additionally, the development of flexible python packages like yt \citep{Turketal2011b} and pynbody \citep{Pontzenetal2013}, capable of analyzing and visualizing astrophysical simulation data from several different simulation codes, has greatly improved the ability of computational researchers to perform useful, insight-generating analysis of their datasets.
Recently, the scientific python community has begun to converge on the DataFrame provided by the high-performance data analysis library pandas as a common data structure. As a result, once data is loaded into a DataFrame, it becomes much easier to take advantage of the powerful analysis tools provided by the broader scientific computing ecosystem. With this in mind, we present a pandas-based framework for analyzing gadget simulation data, gadfly: the GAdget DataFrame LibrarY. Rather than providing an alternative to the existing yt and pynbody projects, the aim of the project is to ease interoperability with the python ecosystem at large, lowering the barrier for access to the tools created by this broader community.
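To illustrate why a DataFrame is a convenient container for particle data, here is a small sketch. The column names and values are invented for illustration; they do not reflect gadfly's actual schema or API.

```python
import numpy as np
import pandas as pd

# Invented particle data standing in for a snapshot's gas particles;
# the column names are illustrative only.
rng = np.random.default_rng(42)
n = 1_000
particles = pd.DataFrame({
    "x": rng.uniform(0.0, 10.0, n),
    "y": rng.uniform(0.0, 10.0, n),
    "z": rng.uniform(0.0, 10.0, n),
    "mass": rng.uniform(1e5, 1e7, n),
    "temperature": rng.uniform(1e3, 1e6, n),
})

# With the data in a DataFrame, generic ecosystem tooling applies directly,
# e.g. a mass-weighted mean temperature in one line:
mw_temp = (particles["mass"] * particles["temperature"]).sum() / particles["mass"].sum()
```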
In this paper we present the first public release (v0.1) of gadfly, which is publicly available online. The framework design and organizational structure are outlined in Section [sec:framework], followed by a description of the included SPH particle rendering in Section [sec:vis]. Our plans for future development are outlined in Section [sec:future], and a summary is provided in Section [sec:summary].
Some numbers
I highly recommend looking at this blog, https://thepfengineer.com/, especially the following posts:
Shannon's Demon
The Mythical Rebalancing Bonus - Part 1
If you have been following my own posts you'll remember that rebalancing works when relative prices experience cycles, while it does not work when relative prices diverge. Shannon's Demon is a case of noisy but stable relative prices. It is the golden case for rebalancing. However, it is not a realistic situation.
In The Mythical Rebalancing Bonus - Part 1, the author looks at various multi-asset mixes and various time frames. The observation is that over recent samples (1996 to recent years), annual rebalancing has brought tremendous benefits. Indeed, the relative price of equities to bonds has been quite stable thanks to stellar bond returns, while also experiencing very ample cycles in times of crisis. If you look at longer periods, or if you change the duration of the bond index, this feature breaks down. The relative price is just less stable and less cyclical. Rebalancing is much less favourable.
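For readers who want to reproduce Shannon's Demon, here is a minimal sketch (my own, not from the linked blog): an asset whose price doubles or halves with equal probability has a stable relative price in distribution and zero expected log-growth, yet a 50/50 rebalanced mix of the asset and cash compounds positively in log terms.

```python
import random

def shannons_demon(n_steps=2_000, seed=1):
    """An asset doubles or halves each period with equal probability.
    Hold 50% in the asset and 50% in cash, rebalancing every period."""
    random.seed(seed)
    price, v_rb = 1.0, 1.0
    for _ in range(n_steps):
        r = 1.0 if random.random() < 0.5 else -0.5   # +100% or -50%
        price *= 1.0 + r
        v_rb *= 1.0 + 0.5 * r    # half the portfolio earns the asset return
    return price, v_rb

price, v_rb = shannons_demon()
# Each period the mix multiplies by 1.5 or 0.75; its expected log-growth is
# 0.5 * (ln 1.5 + ln 0.75) ≈ 0.059 > 0, while the asset's own is exactly 0.
```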
Not rebalancing is not an option in the multi-asset space. The main choice is between slow rebalancing and fast rebalancing.
In all cases, rebalancing needs to be relatively slow (semi-annual to annual) if calendar based. Trigger based rebalancing (as a function of observed deviations in weights) is also quite a sound practice. Fast rebalancing can serve as a reference but it is in most cases beaten by slow rebalancing.
Literature Review
We first introduce the problem and examine several domains where it has been examined. We then address the two primary research questions that arise when comparing different methods: how best to model and evaluate timelines. In each of these two areas we organise the key papers thematically. The contributions of each paper are synthesised into a general discussion about the pros and cons of each approach.
Timeline generation (TLG) is a way of representing a large amount of temporally dependent information concisely. It is query driven; we retrieve a corpus of text linked to some entity, event or other term. The canonical TLG model clusters these articles into topics or stories, selects the most important of these clusters and returns timestamped summaries. It can be seen as a generalisation of the multi-document summarisation task, where we have introduced temporal dependency and structure.
Clustering: Our model handles both current and historical articles. Several domains where the TLG model is desirable (e.g. news) are characterised by a large historical catalog and a high publication frequency. Our clustering model has to be scalable and ideally functional in a streaming context.
Topic models: Incorporating a topic structure into our model gives us an intuitive and understandable document representation. Looking at the topic distribution also lets us prioritise certain kinds of sentences or stories.
Timelines: TLG differs from regular multi-document summarisation in its temporal dependence. As such, we value diversity in both the kind of content selected as well as when it was published.
Summarisation: While clustering and topic modeling are useful structuring tools, we must still compress the data for human readability.
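As a concrete (toy) illustration of the canonical cluster-select-summarise pipeline described above, the sketch below clusters documents naively by date, ranks clusters by size, and summarises each selected cluster with the first sentence of its first article. Real systems replace each stage with topic models and proper summarisers; the data here is invented.

```python
from collections import defaultdict

def generate_timeline(docs, max_entries=3):
    """A toy timeline-generation pipeline: cluster documents (naively, by
    date), rank clusters by size, and summarise each selected cluster by
    the first sentence of its first document."""
    clusters = defaultdict(list)
    for date, text in docs:
        clusters[date].append(text)
    ranked = sorted(clusters.items(), key=lambda kv: len(kv[1]), reverse=True)
    selected = sorted(ranked[:max_entries])   # back into chronological order
    return [(date, texts[0].split(". ")[0]) for date, texts in selected]

docs = [
    ("2015-03-01", "Company X announces merger. Markets react."),
    ("2015-03-01", "Analysts weigh in on the X merger. Shares climb."),
    ("2015-06-12", "Regulators open a probe into the merger. X responds."),
    ("2015-06-12", "Probe details emerge. Analysts cautious."),
    ("2015-06-12", "X shares fall on probe news. Volume spikes."),
    ("2015-09-30", "Merger approved with conditions. Deal closes."),
]
timeline = generate_timeline(docs, max_entries=2)
# Picks the two largest date clusters and returns timestamped summaries.
```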
Timeline generation has been applied in a range of domains. Several approaches focus on specific sub-problems in TLG. Althoff et al. generate timelines for entities in the Freebase repository \cite{Althoff:2015dg}. This knowledge base comprises subjects and objects linked by predicates; 'Robert Downey Junior' might be linked by 'Starred In' to 'The Avengers'. The TLG model seeks to group and select a subset of these subject-predicates that summarise the query entity. Working from a knowledge base instead of a corpus of text provides useful structure: we are spared the tasks of preprocessing, entity linking, consolidation and data validation on our text corpora. Of course, this relies on the existence of a knowledge base of timestamped facts about the query entity, which is an issue if we want to be able to query lesser-known entities. A similar approach was used by Ahmed et al. on a corpus of academic papers \cite{Ahmed:2012vh}.
Single-document variations of the TLG model have been applied to Wikipedia and Wikinews articles \cite{Bauer:2015cz, Minard:2015tw}. The single-document nature of the task is characterised by a focus on the summarisation and selection tasks over clustering. Timelines took the form of timestamped subsets of the document sentences. A similar approach has been applied to Twitter feeds; Hong et al. applied a single-document version of the TLG model to a user's tweet-history \cite{Hong:2011du}.
The most frequent application of TLG models is to a corpus of news articles \cite{Chieu:2004id, Hong:2011du, Yan2011, Yan2011a, Allan:2001bx, Swan:2000dy, Ahmed2011}. A document retrieval service is given a query term: a person, event, company or generic term. The body of documents retrieved forms the input for the TLG task, which is then to group and summarise a potentially large body of articles and provide a compact representation of the query term.
Desmos Activities for the course Introduction to College Math
I have created a course based on Desmos activities. The course provides supplementary once-a-week content for the Introduction to College Math course at Nevada State College. The outline of the course is below.
Students should bring laptops or tablets to every class. They can work in pairs, but they will need a computer.
Prior to each class period you will create each activity using your account at teacher.desmos.com. Send students to student.desmos.com, and have them enter the codes you give them for each activity. These exercises are supposed to be exploratory in nature. Students will learn how Desmos can help them to answer their questions and they will become comfortable learning math and expressing their answers.
Note that students must create their own Desmos account if they want to save their work for future use.
The Varying Mass Distribution of Molecular Clouds Across M83
The work of \citet{Adamo_2015} identified characteristic mass scales in the mass distributions of young massive clusters in the nearby galaxy M83. Here we present a cloud-based analysis of ALMA CO(1 → 0) data to search for such characteristic mass scales in the molecular cloud population. We identify a population of molecular clouds in M83 that is largely similar to those found in the Milky Way and Local Group galaxies; however, clouds in the center of the galaxy show high surface densities and enhanced turbulence, as is common for clouds in high density nuclear environments. Like the work on young massive clusters, we find a characteristic mass scale for the populations that decreases radially in the galaxy. We find that the most massive young massive cluster tracks the most massive molecular cloud radially, with the cluster mass being ∼0.01 times that of the most massive molecular cloud. The maximum mass of the molecular cloud distribution also corresponds well to the value expected from shear instabilities, except in the nuclear and bar regions, where our simple shear model is inappropriate. Outside the nuclear region of M83 (Rgal > 0.5), there is no evidence for changing internal conditions in the population of molecular clouds, with the average internal pressures, densities, and free-fall times remaining constant for the cloud population over the galaxy. Thus, the observed variation in the fraction of star formation that results in bound clusters must result from the changing fragmentation process of the molecular gas.
To What Extent is an Oligopolistic Market Structure of Health Insurance Providers to Blame for Inefficiency in the United States Healthcare Market?
Reading and listening for self-development
Social attitudes cannot be predicted from federal court decisions and judge characteristics
Federal US circuit courts often make rulings in areas that are socially relevant to the American public, such as capital punishment, affirmative action, or racial discrimination. In this project, we seek to measure how these rulings may affect Americans' social and political attitudes. Our goal is to determine whether court rulings tend to move attitudes in the direction "intended" by the ruling, move attitudes in the opposite direction, or polarize attitudes. We will focus on rulings about gender discrimination and their impact on attitudes about gender roles.
To address this question we used two datasets.
First, we used the US General Social Survey (GSS), which is a long running (1972–) survey on social attitudes and behaviors of US American citizens \citep{smith2013general}. In addition to social attitudes, the GSS provides demographic and life-course data about respondents. Each row represents one respondent.
Second, we used a database of federal appeals court cases that were decided at the level of circuit courts \citep{sunstein2007judges}. The cases were separated by issue (e.g. affirmative action, gender discrimination, racial discrimination). The federal court system is divided into 12 circuits, each of which establishes legal precedent for a group of several states. A circuit court case is decided by a randomly assigned panel of three judges chosen from the pool of judges appointed to that circuit. The dataset provides information about the outcome of each case, coded as the number of judges who voted in favor of the outcome that can be considered more "progressive" (for example, pro-affirmative action, or against racial or gender discrimination).
Another dataset was used to assign judge characteristics to each case \cite{eb60c240-7076-4b7f-829d-9b68e9f5494c}. This included, for instance, the number of panel judges who were female, or who were appointed by a Democratic president. It also included the average number of female judges (and other characteristics) in the pool of judges for that circuit at the time of the ruling.
For the analyses reported below, we focused on the issue of gender discrimination, and restricted ourselves to court case data pertaining to that issue. In total, we used 100 cases (one case per row) decided between 1995 and 2004. We chose this subset because it had the longest period of intersection with several relevant questions on the GSS.
Understanding rebalancing
In portfolio management contexts, rebalancing is often believed to add value. But what is rebalancing? Rebalancing is best understood by contrasting buy-and-hold portfolios and constant proportion portfolios. Buy-and-hold portfolios naturally concentrate over time into the best performing assets. They don't trade. Constant proportion portfolios trade to counter the concentration tendency. They sell the winners and buy the losers.
Under what condition does rebalancing pay off? Rebalancing will pay off as seen from a terminal date if, on average, the price at which a quantity has been bought (respectively sold) is lower (resp. higher) than either the price at which it is subsequently sold (resp. bought) if that happens or the terminal price. This condition is meant to hold across all trades triggered by rebalancing.
What is the likelihood that this condition holds? It depends. First, all that matters for the relative performance of a constant proportion portfolio versus its buy-and-hold counterpart is relative prices. Clearly, if relative prices keep oscillating around an average value, rebalancing will in the end pay off. Oscillations create occasions to buy low and sell high. Rebalancing has great potential in such contexts. But relative prices can typically diverge. In a stock context, Apple has beaten many other competitors over a long period of time.
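A two-period numeric example (my own) makes the oscillation mechanism explicit: with a 50/50 mix of cash and a risky asset whose price goes 1 → 0.5 → 1, rebalancing buys shares cheaply at the bottom and ends at 1.125 while buy-and-hold merely returns to 1.0; along a diverging path 1 → 2 → 4, the ordering reverses.

```python
def cycle_vs_hold(path=(1.0, 0.5, 1.0), w=0.5):
    """Rebalance to weight w in the risky asset at each point of `path`,
    and compare with buy-and-hold; both start with one dollar."""
    shares_rb, cash_rb = w / path[0], 1.0 - w
    shares_bh, cash_bh = shares_rb, cash_rb
    for p in path[1:]:
        value = shares_rb * p + cash_rb
        shares_rb = w * value / p          # buy low, sell high
        cash_rb = value - shares_rb * p
    v_rb = shares_rb * path[-1] + cash_rb
    v_bh = shares_bh * path[-1] + cash_bh
    return v_rb, v_bh

print(cycle_vs_hold((1.0, 0.5, 1.0)))  # cycle: rebalancing wins, (1.125, 1.0)
print(cycle_vs_hold((1.0, 2.0, 4.0)))  # divergence: buy-and-hold wins, (2.25, 2.5)
```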
In a multi-asset context the most important quantity is the relative price of the equity index versus the price of the bond index. The graph below shows that this quantity has been relatively stable over the last thirty years, indicating that this period was favourable to rebalancing. One should however caution that this relative price should in principle trend. After all, bonds and equities don't have the same volatility. Even if they had similar Sharpe ratios, their risk premia should be different. It could be that the historical stationarity of the relative price should not be expected to persist.
Rebalancing is best understood starting with buy-and-hold. Buy-and-hold consists in keeping the number of contracts held in a portfolio constant. If \(n=(n_{i})_{i=1,\ldots,N}\) denotes the constant number of shares (say) held in the portfolio, the change in value of the portfolio will reflect the change in value of the contracts over the given interval \([t,t+1]\), i.e. \(p_{i,t+1}-p_{i,t}\), times the quantities held. This change in value is thus:
\[V_{bh,t+1}-V_{bh,t}=\sum_{i=1}^{N}n_{i}(p_{i,t+1}-p_{i,t}).\]
To understand rebalancing, it is useful to introduce the proportion of the value invested on a given contract:
\[\pi_{i,t}=\frac{n_{i}p_{i,t}}{\sum_{j=1}^{N}n_{j}p_{j,t}}.\]
As a reminder, using proportions, the change in value of a portfolio is accounted for by:
\[\frac{V_{\pi,t+1}}{V_{\pi,t}}=\sum_{i=1}^{N}\pi_{i,t}\frac{p_{i,t+1}}{p_{i,t}}.\]
In a buy-and-hold portfolio, the proportions invested on the most performing assets rise while the proportions invested on the least performing assets fall, as can be easily checked. Rebalancing can be conceived as leaning against this natural tendency, i.e. as trading in such a way that the asset proportions are less distorted by relative performance than implied by the buy-and-hold policy. This principle is best illustrated by constant proportion portfolios. These are portfolios that trade so as to keep the proportions \(\pi_{i,t}\) constant. The trading operation carried out to prevent the change in proportions is the rebalancing operation. This amounts to selling the winners and buying the losers.
When does rebalancing work? The value of a constant proportion portfolio evolves according to:
\[V_{\pi,t+1}-V_{\pi,t}=\sum_{i=1}^{N}n_{i,\pi,t}(p_{i,t+1}-p_{i,t}),\]
where we have accounted for the fact that the quantities held \(n_{i,\pi,t}\) change with time. Assume the proportions and quantities have been chosen such that the buy-and-hold portfolio and the constant proportion portfolio are identical at date \(0\). We can subsequently track the difference in quantities induced by rebalancing. Let's call
\[\Delta n_{i,t}=n_{i,\pi,t}-n_{i},\]
the difference in quantities held in the constant proportion portfolio and in the buy-and-hold portfolio. Note that we made sure that \(\Delta n_{i,0}=0\). Then, assuming both portfolios are initialized with the same amount of money, the difference in the value of the two portfolios at terminal date \(T\) is just:
\[V_{\pi,T}-V_{bh,T}=\sum_{t=0}^{T-1}\sum_{i=1}^{N}\Delta n_{i,t}(p_{i,t+1}-p_{i,t}).\]
If we designed a convention to keep track of individual trades (for example, last-in first-out accounting), we could split this pay-off into individual trades, each initiated and then either closed out or carried to date \(T\). The cumulated performance \(V_{\pi,T}-V_{bh,T}\) would then be the sum of the P&Ls of these individual trades.
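The decomposition can be verified numerically. The sketch below (with illustrative two-asset prices of my own choosing) builds a constant proportion portfolio and its buy-and-hold twin, and checks that the terminal value gap equals the accumulated \(\Delta n\)-weighted price moves.

```python
def check_identity(prices, weights):
    """Check V_pi,T - V_bh,T = sum_t sum_i dn_{i,t} (p_{i,t+1} - p_{i,t})
    for a constant-proportion portfolio versus its buy-and-hold twin.
    `prices` lists price vectors over time; both portfolios start with $1."""
    n_assets = len(weights)
    p0 = prices[0]
    n_bh = [weights[i] / p0[i] for i in range(n_assets)]  # buy-and-hold shares
    n_rb = list(n_bh)                                     # identical at t = 0
    v_rb = 1.0
    rhs = 0.0
    for p_prev, p in zip(prices, prices[1:]):
        # accumulate the trade P&L with the shares held over [t, t+1]
        rhs += sum((n_rb[i] - n_bh[i]) * (p[i] - p_prev[i]) for i in range(n_assets))
        v_rb = sum(n_rb[i] * p[i] for i in range(n_assets))
        n_rb = [weights[i] * v_rb / p[i] for i in range(n_assets)]  # rebalance
    v_bh = sum(n_bh[i] * prices[-1][i] for i in range(n_assets))
    return v_rb - v_bh, rhs

prices = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1), (1.0, 1.0), (1.5, 0.7)]
lhs, rhs = check_identity(prices, weights=(0.5, 0.5))
# lhs and rhs agree up to floating-point error.
```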
In practice, the outcome depends on relative prices. Indeed, one can quote prices against any convenient numeraire. Whether \(V_{\pi,T}-V_{bh,T}\) is positive or negative does not depend on the chosen numeraire. We can thus choose the first asset as the numeraire. This amounts to assuming \(p_{1,t}=1\) at all dates. Other prices are then prices quoted relatively to that of the first asset. Then, quite intuitively, the best situation is one where all relative prices oscillate without ever diverging. One can show (this is beyond this simple note) that rebalancing almost surely beats buy-and-hold if one is sufficiently patient. In contrast, if relative prices have different trends, buy-and-hold should outperform. In practice, both behaviors will be observed in a sample and the outcome will depend on the mix between cycles and trends.
Mindfulness as a treatment for Bipolar disorder
Mindfulness has become increasingly popular in recent years. Research has found mindfulness-based therapy to be successful as part of a treatment for depression \citep{hofmann_effect_2010}. This is of particular interest for the treatment of bipolar disorder, where treating depressive symptoms has to be carefully managed to avoid an increase in manic symptoms. This article will outline the main points relating to bipolar disorder, mindfulness, mindfulness-based cognitive therapy (MBCT), and six studies which have looked specifically at MBCT and its effectiveness as a treatment for bipolar disorder.