Public Articles
Utilizing Gabor Deconvolution to Improve Resolution in the Image Domain
Gabor deconvolution, a nonstationary extension of Wiener deconvolution, was utilized as a post-depth-migration processing filter to determine its benefit for improved resolution and multiple attenuation in the image domain. Tests were carried out on prestack depth-migrated 2D gathers from deep offshore data. The geology of the region combines complex salt tectonics with layered evaporite sequences (LES). The LES generates short-period multiples which are difficult to attenuate using velocity-based algorithms.
Traditional deconvolution compensates for absorption through its assumption of a “white” reflectivity spectrum, with most implementations therefore implying an infinite “Q” attenuation function. In contrast, the nonstationary approach to deconvolution approximates the values of the attenuation function “Q”. We therefore expect the nonstationary approach to be more suitable in the presence of complex velocities, where the assumption of an infinite “Q” would provide suboptimal results (i.e., failure of the underlying assumption of stationary deconvolution). We performed a series of tests which applied Gabor deconvolution in the image domain in order to suppress multiples and to balance reflection amplitudes. We established a systematic approach to testing this method by applying Gabor deconvolution either before or after a velocity-based multiple attenuator. Results were then compared against each other as control groups through spectral analyses.
Application of Gabor deconvolution in the image domain resulted in the recovery of frequencies between 20 and 40 Hz and the slight suppression of low-frequency noise between 0 and 10 Hz. As a result, we obtained a laterally well-focused stacked section with improved amplitude balancing. A validation study was undertaken in order to compare our results with those obtained from conventional predictive deconvolution. Our final results showed an overall improvement in resolution, better continuity of the reflections, and suppression of multiples in several zones.
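For readers unfamiliar with the idea, the core of a Gabor-style (nonstationary) deconvolution can be sketched as spectral balancing in the time-frequency plane: each time window is whitened separately, which is what lets a finite-Q, time-varying attenuation be compensated, unlike a single stationary operator. The following Python sketch is purely illustrative, with hypothetical function and parameter names; it is not the algorithm used in the study:

```python
import numpy as np
from scipy.signal import stft, istft
from scipy.ndimage import uniform_filter1d

def gabor_decon(trace, fs, nperseg=64, eps=1e-3):
    """Toy nonstationary (Gabor-style) deconvolution of a single trace.

    The trace is mapped to the time-frequency plane with an STFT, each
    cell is divided by a smoothed amplitude estimate (a stand-in for the
    slowly varying source/attenuation spectrum), and the result is
    inverse-transformed. Hypothetical sketch, not production code.
    """
    f, t, Z = stft(trace, fs=fs, nperseg=nperseg)
    amp = np.abs(Z)
    # smooth along time to estimate the nonstationary (Q-like) envelope
    smooth = uniform_filter1d(amp, size=5, axis=1)
    Z_white = Z / (smooth + eps * smooth.max())
    _, rec = istft(Z_white, fs=fs, nperseg=nperseg)
    rec = rec[:len(trace)]
    if len(rec) < len(trace):  # istft may return a slightly different length
        rec = np.pad(rec, (0, len(trace) - len(rec)))
    return rec
```

Dividing by the smoothed time-frequency amplitude balances the spectrum window by window; a production implementation would instead factor the Gabor spectrum into wavelet, Q, and reflectivity terms.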
Seismic Interpretation of the D3 Sand, Gulf of Mexico Block ST-54
I. Introduction [Narrow Azimuth OBC 3D Post Stack Time Migrated Seismic Survey on South Timbalier Block 54]:
We interpreted the D3 Sand within the South Timbalier 54 seismic volume to evaluate its potential for HC exploration. Various interpretation techniques were integrated to explain physical observations and anomalies. The D3 Sand was deposited on the shelf during a steady to slightly dropping sea level, resulting in an increase in accommodation space, and is thus a continuous aggradational to progradational, heterogeneous deltaic sand [1]. In the seismic data, the D3 appears as a continuous layer across ST-54 with almost uniform thickness (Fig. 3 & 5), which fits the previously established geological background. However, resolving its deltaic heterogeneities is beyond our seismic resolution; further seismic analysis could nevertheless improve their detectability.
II. Methodology [Landmark Decision Space, GeoProbe]:
We started by mimicking the seeded points on inlines (200, 400, 600, 800 & 1000) and crosslines (50, 100, 150 & 200), which provided the legacy 200x50 mesh for the top and the bottom of the D3 (Fig 1). We then refined the mesh down to 50x25. We altered the mesh size to minimize the effect of tracking along the low-fold crossline direction while keeping the mesh coarser along the inline direction, where the S/N is sufficient for the software to track (Fig 1 & 5). Picks were made on the minimum-phase peak, assuming minimum-phase data, wherever we found a coherent, continuous reflection. Before tracking the mesh, we implemented a user-controlled tracking process to account for the encountered irregularities within the mesh rather than using the unguided auto-tracker, blocking tracking against polygons defined from attribute-driven anomalies (Fig 1 & 2). Observed irregularities such as abrupt terminations of reflectivity, mis-ties, vertical offsets, and abrupt changes in seismic attributes (amplitude, frequency & discontinuity) were later interpreted as faults based on our structural and stratigraphic background of the region. Results were quality-controlled in 3D, where we extracted the discontinuity along the fault plane (Fig 3). Structure Analysis [Pre-existing topography; graben generated by deep faults originating at a salt diapir]: To better illustrate our picks, consider a diagonal NE-SW traverse line that orthogonally intersects most of our lateral irregularities, as they commonly share a NW-SE trend. Moving from SW to NE, we observed a 50 ms vertical offset within our section (Anomaly B); the offset is not unique to our picked horizon at 1750 ms, as it appears from 1100 ms all the way down to around 2400 ms or even beyond.
Since the data are offshore, severe near-surface processing issues are excluded from our analysis, and the lateral and temporal continuity of this marginal offset indicates a major regional normal fault; the truncations do not reach the surface but instead gradually disperse upward. Moreover, we quality-controlled this interpretation against a discontinuity volume, extracting the discontinuity RMS amplitude and transparently overlaying it on the time horizon (Fig 5). A similar approach was used to interpret anomalies B-H (Fig 5). In addition, discontinuity volume-slicing in 3D showed an incoherency signature when slicing through our horizon and showed continuous discontinuities as we time-sliced the volume, which supported our initial fault interpretation. Moving NE, we continued to observe these mis-tie anomalies with different magnitudes and less truncation toward the middle. The observed offset decreases toward the middle, then increases toward the sides. Interpreting these features as before, we concluded that they are mainly normal growth faults starting from the zero-reflection zone beyond 2400 ms. We found the D3 to be a geometrically depressed block of sand bordered by semi-parallel faults; therefore, we interpreted the general structure as a graben. The graben is located above the center of the zero-reflectivity body, possibly the Louann Salt. Combining horizon and fault interpretation with burial history, we observed that the sand could have been deposited conformably in a semi-syncline area on top of active salt (Mov 1). The geological background, the observed zero-reflectivity zone in the middle, and the strike and dip directions of the faulting system suggest that ST-54 resides on top of a salt diapir/dome, as the two regional major faults can be traced down to the salt and thus originated there before the deposition of the D3 Sand.
Kinematic modeling illustrated lower strain on the sides of the graben, hence more weight and compaction on the sides compared to the center, which was the driving force behind expelling the salt upward in a semi-vertical direction in the center of the graben, where the model showed higher strain and hence more deformation (Mov 1). In addition, fault strain modeling showed higher strain in the middle, indicating that those smaller faults could not have been generated by the salt directly; rather, they originated from other faults due to salt collapse, which is common during rapid increases in sedimentation [3,4]. Such conclusions were possible considering that the D3 was deposited during a local sea-level drop during the closing of the gulf in the early stage of the graben collapse; the faults then increased the temporal stratigraphic offset between its compartments, while an overall hot climate played a vital role in increasing the sediment supply [2,3,4].
III. Stratigraphic Analysis [On Shelf Deltaic sand with good vertical resolution in seismic]:
Extracting the RMS frequency at the horizons showed a dominant frequency around 20 Hz for both, yielding a vertical resolution of around 30 m, assuming a constant 2400 m/s at the AOI (Fig 4). Since frequency is inversely related to thickness, this frequency response suggests that sediments diverge against the normal faults as we move from the footwall to the hanging-wall block, which ties with the earlier analysis.
IV. Hydrocarbon Potential [Yes, bright spot in post-stack seismic section with 4.12 BCF]: Utilizing the most successful DHI, RMS amplitude, bright spots were observed and 3 major gas pools were identified (Fig 6). Since it is below seismic resolution to draw early conclusions about multiple producing layers within the D3, we can safely assume that the D3 is a homogeneous layer from a reserve-booking point of view only. Hence, attribute calculations were generated from the top to the bottom of the D3. Surprisingly, we did not focus our attention on the brightest area; rather, we shifted our auto-polygon function cut-off value down to 4500-5200 for two main reasons (Fig 6). First, the amplitude distribution within the D3 is quite good despite contamination from a striping effect (Fig 6). A cut-off of 4500 (highest 75%-50%) is a good estimate, considering that we had the same amplitude response in A3, where we used a 35% cut-off on the amplitude histogram. Moreover, the D3 has been primarily in oil production since 1979, and the tiny bright spots on the topographic highs should not correlate to leads, as they are quite small for associated gas and oddly sit within the topographic highs (Fig 4 & 6). Therefore, those tiny bright spots are secondary gas caps generated as a result of rapid oil production: as pressure drops during production, the gas dissolved in the oil is pushed upward toward the topographic highs while the liquid is being produced.
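The quoted ~30 m vertical resolution follows from the quarter-wavelength criterion, λ/4 = v/(4f), with the stated 2400 m/s velocity and 20 Hz dominant frequency; a minimal check:

```python
def vertical_resolution(velocity_mps, dominant_freq_hz):
    """Quarter-wavelength (Rayleigh) vertical resolution: v / (4 * f)."""
    return velocity_mps / (4.0 * dominant_freq_hz)

# 2400 m/s at a 20 Hz dominant frequency, as stated in the text
res = vertical_resolution(2400.0, 20.0)  # 30.0 m
```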
The 3 gas pools contain many leads; however, with no marginal structural change nor resolvable stratal change between them, we could assume that they belong to the same gas pool, but that does not necessarily mean they are pressure-connected. On the other hand, the three pools are segregated structurally by faults; therefore, they cannot communicate in pressure. In short, our pools/leads are not a textbook example of four-way closure: stratigraphic thinning of beds altered the rock properties and helped define the trapping mechanism, which ties with the heterogeneous nature of deltaic sand deposits in general. Using a full-cycle minimum-phase wavelet assumption, the average thickness was estimated, and OGIP estimates and probabilities were generated (Table 1).
V. Conclusion & Recommendation (Re-Processing, AVO, Inversion & Pore-Pressure Model):
Seismic interpretation and analysis played a key role in understanding the structural style within ST-54. In addition, re-processing AVO-friendly gathers should be a by-product if we decide to re-process the data, as the DHI approach did not quite represent the booked gas reserve within this field, underestimating it threefold. Processing/acquisition striping artifacts were observed along the crossline direction in the amplitude attribute, and interpreters should be careful, as those stripes have masked the true background amplitude signature. We recommend PSDM to better image the deep faulting system and the salt. Post-stack seismic inversion, with enough well control, could also be beneficial for generating an accurate porosity map, while pre-stack seismic inversion could help us identify the saturated zones within the reservoir itself. Moreover, pore-pressure prediction based on seismic velocity should be considered, as it aids in optimizing the drilling program as well as the drilling-mud weight design, lowering the risk associated with drilling hazards.
References:
[1] Bose, S., and S. Mitra, 2014, Structural analysis of a salt-cored transfer zone in the South Timbalier Block 54, offshore Gulf of Mexico: Implications for restoration of salt-related extensional structures: AAPG Bulletin, v. 98, no. 4 (April 2014), p. 825–849.
[2] Stude, G. R., 1978, Depositional Environments of the Gulf of Mexico South Timbalier Block 54 Salt Dome and Salt Dome Growth Models. Transactions-Gulf Coast Association of Geological Societies, v. 28, p. 627-646.
[3] Hudec, M. R., and M. P. A. Jackson, 2006, Advance of salt sheets in passive margins and orogens: AAPG Bulletin, v. 90, p. 1535–1564.
[4] Vendeville, B. C., and M. P. A. Jackson, 1992, The rise of diapirs during thin-skinned extension: Marine and Petroleum Geology, v. 9, p. 331–353.
[5] Investor Relations, 2013, Annual Report Filing to SEC, Energy XXI.
Pore Pressure Prediction Based on Borehole Acoustic Logs
This paper will investigate the use of borehole acoustic logs to predict pore pressure at the borehole and how pore pressure can be inferred from velocity logs. We will review common approaches to predicting pore pressure at the borehole using acoustic logs. Two specific velocity-to-effective-pressure equations will be the focus of this study: Eaton’s equation \cite{eaton:aa} and Bowers’ equation \cite{bowers1995pore}. We will discuss normal compaction trend analysis and how it can help identify overpressured zones using Eaton’s equation. In conclusion, we aim to examine the advantages and disadvantages of these methods. This paper contributes by advocating the use of borehole acoustics to infer pressure at the borehole in the absence of direct pressure measurements.
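As background, the velocity form of Eaton's relation, one of the two methods the abstract names, can be sketched as follows (variable names are mine; units, the normal-trend velocity, and the exponent are field-dependent calibration choices):

```python
def eaton_pore_pressure(overburden, hydrostatic, v_obs, v_normal, exponent=3.0):
    """Eaton's method, velocity form (all pressures in the same units):

        Pp = S - (S - Pn) * (V_obs / V_normal) ** E

    Velocities slower than the normal compaction trend (V_obs < V_normal)
    imply undercompaction and hence pore pressure above hydrostatic.
    E = 3 is Eaton's classic exponent for velocity/sonic data.
    """
    return overburden - (overburden - hydrostatic) * (v_obs / v_normal) ** exponent
```

On the normal compaction trend (V_obs = V_normal) the estimate collapses to the hydrostatic pressure, as expected; this is exactly why a reliable normal-trend analysis is a prerequisite for the method.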
The Danger of the Spread of Pseudoscientific Publications for the Development of Science in Ukraine
and 1 collaborator
The Journal of Neuroscience Template
Cancer Discovery - Research Article
eNeuro Template
Korean Semantic Role Labeling using Sense Information
and 1 collaborator
Taking Two Steps Back (With Publication Data): The Epistemological Presupposition of Scholarly Communication
\label{introduction}
Social media have become an integral part of society, including the scientific community. Not only are popular services, such as Facebook and Twitter, used and embraced by academics (van Noorden, 2014), but platforms tailored towards the needs of researchers, like ResearchGate and Academia.edu, are highly successful (Ponte & Simon, 2011; Moran, Seaman, & Tinti-Kane, 2011). In the wake of the recent publication crisis and the rise of social media, scholarly communication and especially non-traditional metrics for science impact (altmetrics) are gaining traction. In his current project on Understanding the Societal Impact of Research through Social Media, Dr. Alperin and colleagues (n.d.) point out that it is time for the scholarly communication community to take a step back and examine the theoretical frameworks that underpin how we construct research metrics, as well as how we evaluate and interpret researchers’ activities and outputs. Despite the explosion of new ‘alternative’ metrics (beyond citations), there has been little work to develop a theory of scholarly metrics beyond the literature that exists on citations (Haustein et al., 2015). One of the few examples that they point out is Haustein et al. (2015), who build on the existing theories of citations and propose a compound of research programs from various academic disciplines to investigate the motivations of social media behavior in scholarship (e.g., social constructivism, normative theory, and other social theories).
Despite the promising nature of this work, I propose to take a second step back, in order to further examine the very epistemological presuppositions of a research program in the field of scholarly communication, and its implications for empirical research in scholarly communication. Haustein et al.’s (2015) suggestion of a polytheoretic framework for social media behaviour exemplifies the demand for scrutiny, as the overarching term of constructivism (i.e., social, radical, and cognitive constructivism) often represents a diverging variety of assumptions about central ideas such as the nature of personhood, the existence of objective knowledge or intersubjectivity. I propose to study the conceptual scaffolding of research and practices in scholarly communication, especially in regard to epistemological assumptions, from two disjoint perspectives: the human agent (authors, readers, “sharers”, etc.) and the underlying technology used to distribute, read and share (with a special focus on social media). While the former is motivated by my experience in cognitive science and will draw inspirations from the philosophy of mind, the latter part will be chiefly guided by Dr. Feenberg’s application of critical theory to a philosophy of technology.
Coming from an engineering background and engaging in a data-driven field such as scholarly communication, I want to avoid a complete “armchair philosopher” method. As a proponent of empirically-informed philosophy (Prinz, 2008) (e.g., experimental moral philosophy, experimental epistemology), empirical data-driven research will play a vital role. Inspired by methods from phenomenological research in the brain sciences (e.g., Neurophenomenology (Varela, 1996), First-Person Neuroscience (Northoff & Heinzel, 2006)) that try to combine first- and third-person data as well as traditional social sciences, I hope to bridge the gap between epistemology and methodology in scholarly communication research.
The present research proposal comprises both theoretical and practical work; the former drawing inspirations from the critical theory and philosophy of technology, and philosophy of mind; while the latter builds on the work of Dr. Alperin and his research group on scholarly communication and social media. I hope that these contributions will lead to the development of a holistic framework of scholarly communication research; holistic in a twofold way: (1) a theory that considers the individual participant in scholarly communication as an experiencing subject and the involved technology in a non-instrumentalist and critical manner and (2) avoids the “armchair philosopher” approach by embracing empirical research.
\label{objectives}
Scholarly communication has been changing rapidly over the last two decades. While some scholars might disagree on the specific name and details, many voices can be heard that are speaking of a revolution taking place in science communication—whether it is Harnad’s (1991) cognitive revolution, Cronin’s (2012) velvet revolution, or Nielsen’s (2016) open science revolution. As Sugimoto et al. (2016) observe, the common theme across all these revolutions is an increased visibility and heterogeneity of science itself, especially driven by the Web 2.0 and, more specifically, the recent rise of social media in scholarship. Simultaneously, the emergence of new vehicles for disseminating science has caused an increased demand for alternative indicators and metrics of science impact—namely the aforementioned altmetrics.
Despite an abundance of collectable data (scientometrics, bibliometrics, altmetrics), scholarly communication remains a profoundly human and social activity, which brings forth the question of the subjective experience of producing, disseminating and consuming knowledge. While from a positivist or instrumental rationalist standpoint it might be desirable to reduce scholarly activity to “objective” measures, I believe that an approach that acknowledges the irreducibility of the individual might be a better fit to understand the dynamics of scholarly communication. In the fashion of empirically-informed philosophy (i.e., I want to avoid restricting myself to purely theoretical considerations), I am planning to explore the interplay of individuals as experiencing subjects and the community/public from the perspective of modern scholarly communication, especially in the digital domain.
While I want to emphasize the importance of the individual researcher's experience and contribution in scholarly communication, I believe that another essential aspect is the structure of the underlying technology. The recent changes in the way scholarly communication is practiced, as well as researched, have been strongly driven by technological advances (i.e., the use of social media to disseminate research, and the emergence of altmetrics as a form of measurement and assessment were largely enabled by technological progress, namely the advance of the social web) (Sugimoto et al., 2016). At the same time, social media and altmetrics are changing the very nature of scholarship by influencing the academic incentive system. Altmetrics just being one example, I want to engage in a critical analysis of technology in scholarly communication. I will use Dr. Feenberg’s work in the critical theory of technology, especially the instrumentalization theory, which combines approaches from the philosophy of technology as well as empirical, constructivist methods. This framework, a “synthesis of theoretical and empirical approaches” (Feenberg, 2009, p.62), will serve as a scaffolding for a critical and empirical account of technology in scholarly communication.
Guided by two perspectives, one focussing on the individual actor and the other on the underlying technology, I want to evaluate current practices in scholarly communication, as well as epistemological and methodological assumptions of research in scholarly communication, social media, and altmetrics.
Plos One Template
Arabian Journal for Science and Engineering
Cell Template
Mobile standards research - part one: iOS
Seismic Design of Moment Resisting Steel Framed Buildings. An Investigation of the European Approaches
The research focuses on the investigation of design methods for moment-resisting steel framed structures (MRF) in seismic areas. Eurocode 8, the main reference for earthquake design in Europe, allows four design approaches. Two of these (i.e. the lateral force method and the modal response spectrum method) are based on linear finite element analysis, while the other methods (i.e. pushover and time-history analysis) are based on non-linear analysis. In current design practice, linear analyses are preferred because they lead to simpler structural modelling and easier interpretation of the results. It is a common idea that Eurocode 8 should guarantee the same structural performance independently of the analysis method used. In order to assess the validity of this idea, it is useful to estimate, with non-linear analyses, the performance of an extended sample of structures designed with linear methods. In accordance with the research purpose, this paper shows and explains the preliminary results obtained by analysing a single MRF.
Creating a flight simulation educational game as a method of stimulating students’ learning and teamwork
and 3 collaborators
Web Programming - Final Project
Correspondence Course
Let n be a natural number and let S be a subset of {1, 2, ..., n} such that no pair of elements of S is relatively prime and no element of S is divisible by another element of S. Find the maximal number of elements of S (as a function of n).
Solution. Any subset S satisfying the requirements must consist of some number of elements, since there can always be at least one element, and for n = 1 the size of S is 1. From now on we work with n ≥ 2. Every element of S necessarily has a largest odd divisor less than or equal to n. Two elements of S cannot have the same largest odd divisor, since they would then differ only by a factor that is a power of 2, so one of the two elements would divide the other. Hence the number of elements of S is at most the number of odd numbers less than or equal to n, i.e. the maximal possible number of elements of S is bounded by $\lceil \frac{n}{2} \rceil$.
It is now shown that a subset S with $\lfloor \frac{n+2}{4} \rfloor$ elements satisfying the required conditions can be chosen. Select all odd numbers less than or equal to $\frac{n}{2}$. There are exactly $\lfloor \frac{n+2}{4} \rfloor$ of them:
If $n\equiv 0 \ \pmod{\ 4}$, then $\frac{n}{2}$ is even, so there are $\frac{n}{4}$ odd numbers less than or equal to $\frac{n}{2}$, and $\lfloor \frac{n+2}{4} \rfloor = \frac{n}{4} + \lfloor \frac{2}{4} \rfloor = \frac{n}{4}$.
If $n\equiv 1 \ \pmod{\ 4}$, then $\frac{n}{2}$ is not an integer and $\frac{n-1}{2}$ is even, so there are $\frac{n-1}{4}$ odd numbers less than or equal to $\frac{n}{2}$, and $\lfloor \frac{n+2}{4} \rfloor = \frac{n-1}{4} + \lfloor \frac{3}{4} \rfloor = \frac{n-1}{4}$.
If $n\equiv 2 \ \pmod{\ 4}$, then $\frac{n}{2}$ is odd, so there are $\frac{n+2}{4}$ odd numbers less than or equal to $\frac{n}{2}$, and $\lfloor \frac{n+2}{4} \rfloor = \frac{n+2}{4} + \lfloor \frac{0}{4} \rfloor = \frac{n+2}{4}$.
If $n\equiv 3 \ \pmod{\ 4}$, then $\frac{n}{2}$ is not an integer and $\frac{n-1}{2}$ is odd, so there are $\frac{n+1}{4}$ odd numbers less than or equal to $\frac{n}{2}$, and $\lfloor \frac{n+2}{4} \rfloor = \frac{n+1}{4} + \lfloor \frac{1}{4} \rfloor = \frac{n+1}{4}$.
For any odd factor x with $1\leq x \leq \frac{n}{2}$, one can multiply by a power of two $2^k$, $k \in \mathbb{N}$, so that $\frac{n}{2}< 2^k x \le n$. This is seen fairly easily: $1\leq x \leq \frac{n}{2}$, so $2 \le 2x \le n$. If $2x>\frac{n}{2}$ we are done; otherwise $2\le 2x\le \frac{n}{2}$, and we can again multiply through by 2. After doing this k times we have a number with $\frac{n}{2}< 2^k x \le n$. This is done for all $\lfloor \frac{n+2}{4} \rfloor$ odd numbers less than or equal to $\frac{n}{2}$, and these numbers become exactly the elements of S. These $\lfloor \frac{n+2}{4} \rfloor$ numbers satisfy the requirements: since they are all even, no pair is relatively prime; and since they all have different largest odd divisors, no two of them are equal, and no number can divide another, because the smallest and largest differ by less than a factor of 2.
It is now argued that this is the largest possible set: If an odd divisor greater than $\frac{n}{2}$ is chosen, then multiplying it by a power of two, or by any integer greater than 1, gives a number necessarily greater than n, which therefore cannot be part of S. Hence such an odd factor, if it occurs in an element of S, must occur as the number itself. For an odd number y we have gcd(y, y ± 2) = gcd(y, 2) = 1 and gcd(y, y ± 4) = gcd(y, 4) = 1. That is, including an odd number in the set S excludes at least 2 other odd numbers, which then also cannot occur as the largest odd divisor of any element, since they are relatively prime to it. If there are no more than 2 odd numbers less than or equal to n, then they are just 1 and 3, and 1 cannot be in S together with any other element, since 1 divides every number. In case the two newly excluded odd numbers overlap, the number of new odd numbers that can be included is still no greater than the number excluded, so the set cannot be made larger. Hence the maximal number of elements of S is $\lfloor \frac{n+2}{4} \rfloor$, except in the case n = 1, where the maximal number of elements is 1.
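The claimed maximum $\lfloor \frac{n+2}{4} \rfloor$ (with the exception n = 1) can be verified by brute force for small n; an illustrative Python sketch:

```python
from math import gcd
from itertools import combinations

def max_valid_subset(n):
    """Exhaustively find the largest S within {1, ..., n} such that no two
    elements are relatively prime and no element divides another.
    Feasible only for small n (checks subsets by decreasing size)."""
    for size in range(n, 0, -1):
        for S in combinations(range(1, n + 1), size):
            # combinations are increasing, so only a | b with a < b can occur
            if all(gcd(a, b) > 1 and b % a != 0
                   for i, a in enumerate(S) for b in S[i + 1:]):
                return size
    return 0
```

For n = 2 through 10 this agrees with $\lfloor \frac{n+2}{4} \rfloor$, and for n = 1 it returns 1, matching the stated exception.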
A stack of boxes is stable if no box has a heavier box on top of it. If the stack above a box is stable, we can move that box to the top of the entire stack. This is called a move. We start with a stable stack which, for every positive integer k ≤ n, contains k boxes weighing k each, and place a box weighing n + 1 on top of the stack. How many moves do we need to make in order to make the entire stack stable?
Solution. Let $a_n$ denote the number of moves required to make the entire stack stable. For a given n ≥ 2, consider the stack. To make a move with one of the boxes of weight n, the entire stack above it must be stable, i.e. all boxes weighing less than n must first be moved above the box of weight n + 1; otherwise a box lighter than n + 1 would sit between the boxes of weight n and the box of weight n + 1 above them, and no move could be made with a box of weight n, since the stack above it would not be stable. Moving all boxes lighter than n above the box of weight n + 1, so that the stack from the box of weight n + 1 upward is stable, requires exactly $a_{n-1}$ moves, since this is precisely the situation for n − 1. The stack above the n boxes of weight n is then stable. One of the boxes of weight n can therefore be moved to the top, after which all boxes lighter than n must again be moved above the box of weight n now on top before a move can be made with the next box of weight n. This must be done a further n − 1 times, after which the entire stack is stable. The number of moves required to make the entire stack stable is therefore: $$a_n=a_{n-1}+n(a_{n-1}+1)=(n+1)\cdot a_{n-1}+n$$ It is seen that $a_1 = 1$.
It is now shown by induction that $a_n = (n+1)! - 1$.
Base case: For n = 1 we indeed have: $$(n+1)!-1=2-1=1=a_1$$
Induction hypothesis: Assume that $a_n = (n+1)! - 1$.
Induction step: For n + 1 we then get: $$a_{n+1}=(n+2)\cdot a_n + n+1=(n+2)((n+1)!-1)+n+1=(n+2)!-n-2+n+1=(n+2)!-1$$ which is exactly what was required. Hence the number of moves required to make the entire stack stable is $a_n = (n+1)! - 1$.
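The recurrence and the closed form $a_n = (n+1)! - 1$ can also be checked numerically with a short sketch:

```python
from math import factorial

def moves(n):
    """Iterate the recurrence a_n = (n+1) * a_{n-1} + n with a_1 = 1."""
    a = 1
    for k in range(2, n + 1):
        a = (k + 1) * a + k
    return a

# e.g. moves(1) = 1 = 2! - 1, moves(2) = 5 = 3! - 1, moves(3) = 23 = 4! - 1
```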
Let n be a positive integer. Find the smallest integer k with the following property: Given any real numbers a1, a2, ..., ad such that a1 + a2 + ⋯ + ad = n and 0 ≤ ai ≤ 1 for i = 1, 2, ..., d, it is possible to partition these numbers into k groups (some of which may be empty) such that the sum of the numbers in each group is at most 1. (IMO shortlist 2013)
Solution. For a given n and real numbers a1, a2, ..., ad, note that in a partition into k groups, if there exist two groups whose combined element sum is at most 1, then one group could hold the elements of both, and k would not be the smallest integer with the stated property. For the smallest possible k, the combined sum of the elements of any two groups must therefore exceed 1. Summing the elements over every possible pairing of two groups gives ${k\choose 2}$ sums that must each be greater than 1, since there are exactly ${k\choose 2}$ possible pairings, and adding all these sums together gives: $$(k-1)\cdot(a_1+a_2+ \cdots+a_d)$$ since each group is counted exactly (k − 1) times (each group is paired with each of the (k − 1) other groups), and since every number a1, a2, ..., ad appears in exactly one group, every element is counted exactly (k − 1) times. Since this total consists of ${k\choose 2}$ sums that must each exceed 1, and a1 + a2 + ⋯ + ad = n, the smallest k must satisfy: $$(k-1)\cdot n=(k-1)\cdot(a_1+a_2+ \cdots+a_d)>{k\choose 2}\cdot 1=\frac{k\cdot(k-1)}{2}$$ and hence: $$2n>k$$ For a given n, taking d = 2n − 1 with elements $a_1=a_2=\cdots=a_{2n-1}=\frac{n}{2n-1}>\frac{1}{2}$ shows that no group can contain 2 elements. Hence k ≥ 2n − 1. For the smallest possible k we therefore get: $$2n>k\ge 2n-1$$ and since k is an integer, this implies that the smallest k satisfying the conditions is k = 2n − 1.
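The pairwise-sum argument above has a constructive counterpart: if the numbers are packed greedily by first-fit, any two resulting groups must together sum to more than 1 (otherwise the later group's first item would have fit into the earlier group), so the same counting shows at most 2n − 1 groups are ever opened. An illustrative sketch (function names are mine):

```python
import random

def first_fit(items, capacity=1.0):
    """Place each number in the first group it fits into; open a new group
    otherwise. Any two resulting groups sum to more than `capacity`."""
    groups = []
    for a in items:
        for i, s in enumerate(groups):
            if s + a <= capacity:
                groups[i] = s + a
                break
        else:
            groups.append(a)
    return groups
```

The tight instance from the solution, 2n − 1 copies of n/(2n − 1), forces exactly 2n − 1 groups, since any two items together exceed 1.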
First partial cool down of the SPIRAL 2 LINAC
and 9 collaborators
Determination of the Dynamic and Acoustic Performances of Composite Materials Based on Innovative Resin Foam Cores
Internship Report
and 2 collaborators
Monograph - Gustavo and Gamaliel
and 1 collaborator
Empresas são entidades que têm como objetivo utilizar seus recursos para fazer a circulação de bens e serviços. Uma empresa é formada por diversas figuras entre elas: a estrutura, que corresponde a forma hierárquica de como a empresa se organiza; pessoas, que são responsáveis pela parte operacional e desenvolver as atividades da empresa; capital, que é o dinheiro envolvido e necessáario para funcionamento da organização.
Hoje, existem diversas formas de se conseguir capital para realizar as operações da empresa ou até mesmo ampliá-las, como por exemplo: financiamento bancário, BNDES, notas promissórias, debêntures, eurobonds, e também a abertura de capital.
IPO, do inglês Initial Public Offering, corresponde a primeira emissão de ações da empresa para a o mercado, ou seja, é ele que faz com que a empresa abra seu capital e passe a ter sócios anônimos. Esse é um processo muito grande e um passo muito importante na trajetória da empresa durante sua vida útil, e consequentemente envolve diversos agentes: bancos, escritórios de advocacia, auditoria, a própria empresa.
Among the many benefits of going public, \citet{Ljungqvist_2004} highlights, for example, the fact that the company's shares come to be traded in an organized and favorable venue, the stock exchange, and the access to market capital, making the company a potential target for both large and small investors. On the other hand, the same author argues that going public imposes certain burdens on the organization, for example: an IPO can be a very expensive process, and it requires several governance practices aimed at transparency, among others.
Beyond the advantages and disadvantages, \citet{Stoll_1970}, \citet{McDonald_1972}, \citet{Logue_1973}, \citet{Ibbotson_1975}, \citet{Ljungqvist_2004}, and many other authors around the world have documented and highlighted two phenomena surrounding a company's going public: short-run underpricing and long-run underperformance.
Short-run underpricing is a phenomenon in which the price of the shares offered to the market rises considerably by the close of the first trading day. Underperformance, in turn, is when the shares perform below investors' expectations.
\citet{Ljungqvist_2004} and \citet{Ritter_2002} argue for the importance and recurrence of the underpricing phenomenon; their studies found abundant evidence that underpricing is a recurring fact in IPOs in many countries around the world. These researchers also emphasized the importance of the phenomenon: one of a company's main goals when going public is to raise as many resources as possible, and when the offering is underpriced, the company forgoes resources it could have obtained simply by issuing the shares at a price X, the price that ends up prevailing in the secondary market at the close of the first trading day.
Given this importance, several studies propose ways to measure IPO underpricing and thus to calculate the amount of money the company failed to raise in the offering. Among these calculation methods, two stand out: the percentage difference between the price at which the IPO shares were sold to investors and their market price at the close of the first trading day, and the "money left on the table"; both will be presented later in this text.
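The two measures just mentioned reduce to simple arithmetic. The Python sketch below illustrates them; the function names and the example offering (10 million shares priced at 20 and closing at 25) are hypothetical, chosen only for illustration:

```python
def underpricing(offer_price: float, first_day_close: float) -> float:
    """Short-run underpricing: percentage difference between the offer
    price and the closing price on the first trading day."""
    return (first_day_close - offer_price) / offer_price

def money_left_on_table(offer_price: float, first_day_close: float,
                        shares_sold: int) -> float:
    """Proceeds the issuer forgoes: the first-day price rise applied
    to every share sold in the offering."""
    return (first_day_close - offer_price) * shares_sold

# Hypothetical offering: 10 million shares at 20, closing at 25.
print(underpricing(20.0, 25.0))                      # 0.25, i.e. 25%
print(money_left_on_table(20.0, 25.0, 10_000_000))   # 50000000.0
```

Under these assumptions the issuer "left" 50 million on the table: the amount it could have raised had the shares been sold at the first-day closing price.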
Although less documented, the underperformance phenomenon has also been studied by authors such as \citet{RITTER_1991}, \citet{Stoll_1970}, and \citet{McDonald_1972}, who likewise found the phenomenon to be recurrent and important, since investors also expect positive returns on the capital they invest in the company. However, several authors documented positive results, which ended up making the studies on underperformance more subjective and debatable. Among them, the studies by \citet{BRAV_1997}, \citet{Dawson_1987}, and many others stand out.
As noted above, the study of the underpricing and underperformance phenomena is of great importance to the field of corporate finance. A company's main goal in becoming publicly traded is to maximize the volume of investment raised in the market, whether to optimize its operations or to carry out any procedure aligned with the organization's strategy. This amount must also be aligned with the long-run interests of shareholders, who seek profitability and appreciation of the money they have invested in the company. Understanding the main causes of and theories behind underpricing and underperformance, measuring these events as precisely as possible so as to make them more tangible, and conveying information faithful to market reality can therefore improve companies' share pricing and deepen investors' understanding, producing maximized proceeds for the company and returns in line with investors' expectations.
Correspondence course
Let n be a natural number and let S be a subset of {1, 2, ..., n} such that no two elements of S are relatively prime and no element of S is divisible by another element of S. Find the maximal number of elements of S (as a function of n).
Solution. Every element of a subset S satisfying the requirements has a largest odd divisor, which is at most n. Two elements of S cannot have the same largest odd divisor, since they would then differ only by a factor that is a power of 2, so one of the two elements would divide the other. Hence the number of elements of S is at most the number of odd numbers less than or equal to n, i.e., the maximal possible number of elements of S is at most $\lceil \frac{n}{2} \rceil$.
We now show that a set of $\lfloor \frac{n+2}{4} \rfloor$ elements can be constructed. Take all odd numbers less than or equal to $\frac{n}{2}$; there are exactly $\lfloor \frac{n+2}{4} \rfloor$ of them:
If $n\equiv 0 \ \pmod{\ 4}$ then $\frac{n}{2}$ is even, so there are $\frac{n}{4}$ odd numbers less than or equal to $\frac{n}{2}$, and $\lfloor \frac{n+2}{4} \rfloor = \frac{n}{4}$.
If $n\equiv 1 \ \pmod{\ 4}$ then $\frac{n}{2}$ is not an integer and $\frac{n-1}{2}$ is even, so there are $\frac{n-1}{4}$ odd numbers less than or equal to $\frac{n}{2}$, and $\lfloor \frac{n+2}{4} \rfloor = \frac{n-1}{4}$.
If $n\equiv 2 \ \pmod{\ 4}$ then $\frac{n}{2}$ is odd, so there are $\frac{n+2}{4}$ odd numbers less than or equal to $\frac{n}{2}$, and $\lfloor \frac{n+2}{4} \rfloor = \frac{n+2}{4}$.
If $n\equiv 3 \ \pmod{\ 4}$ then $\frac{n}{2}$ is not an integer and $\frac{n-1}{2}$ is odd, so there are $\frac{n+1}{4}$ odd numbers less than or equal to $\frac{n}{2}$, and $\lfloor \frac{n+2}{4} \rfloor = \frac{n+1}{4}$.
For every odd factor x with $1\leq x \leq \frac{n}{2}$, one can multiply by a power of 2, namely $2^k$ with 1 ≤ k ∈ ℕ, so that $\frac{n}{2}< 2^k x \le n $. This is easy to see: from $1\leq x \leq \frac{n}{2}$ we get 2 ≤ 2x ≤ n. If $2x>\frac{n}{2}$ we are done; otherwise $2\le 2x\le \frac{n}{2}$, and we can again multiply by 2. After k such steps we obtain a number with $\frac{n}{2}< 2^k x \le n $. Doing this for each of the $\lfloor \frac{n+2}{4} \rfloor$ odd numbers less than or equal to $\frac{n}{2}$ yields the desired set: all of its elements are even, so no two are relatively prime, and all lie in the interval $(\frac{n}{2}, n]$, so no element divides another.
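The construction above can be verified mechanically for small n. The following Python sketch (not part of the original solution; `constructed_set` is a hypothetical helper name) builds the set by doubling each odd x ≤ n/2 into the interval (n/2, n] and checks both conditions as well as the claimed size ⌊(n+2)/4⌋:

```python
from math import gcd

def constructed_set(n):
    """Build the set from the solution: take each odd x <= n/2 and
    multiply by 2 until the result lands in the interval (n/2, n]."""
    s = set()
    for x in range(1, n // 2 + 1, 2):   # odd numbers up to n/2
        y = x
        while 2 * y <= n:               # double while still <= n/2
            y *= 2
        s.add(y)
    return s

for n in range(4, 60):
    s = constructed_set(n)
    assert len(s) == (n + 2) // 4                # claimed size
    assert all(2 * y > n and y <= n for y in s)  # all in (n/2, n]
    # no two elements are relatively prime (all are even)
    assert all(gcd(a, b) > 1 for a in s for b in s if a != b)
    # no element divides another
    assert all(b % a != 0 for a in s for b in s if a != b)
```

For example, `constructed_set(8)` yields {6, 8}: both even, neither dividing the other, and of the claimed size ⌊10/4⌋ = 2.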