Discover and publish cutting edge, open research.

Browse 66,354 multi-disciplinary research preprints

Featured documents

Michael Weekes

and 11 more

Nick K. Jones1,2*, Lucy Rivett1,2*, Chris Workman3, Mark Ferris3, Ashley Shaw1, Cambridge COVID-19 Collaboration1,4, Paul J. Lehner1,4, Rob Howes5, Giles Wright3, Nicholas J. Matheson1,4,6¶, Michael P. Weekes1,7¶

1 Cambridge University NHS Hospitals Foundation Trust, Cambridge, UK
2 Clinical Microbiology & Public Health Laboratory, Public Health England, Cambridge, UK
3 Occupational Health and Wellbeing, Cambridge Biomedical Campus, Cambridge, UK
4 Cambridge Institute of Therapeutic Immunology & Infectious Disease, University of Cambridge, Cambridge, UK
5 Cambridge COVID-19 Testing Centre and AstraZeneca, Anne Mclaren Building, Cambridge, UK
6 NHS Blood and Transplant, Cambridge, UK
7 Cambridge Institute for Medical Research, University of Cambridge, Cambridge, UK
* Joint first authorship
¶ Joint last authorship
Correspondence: [email protected]

The UK has initiated mass COVID-19 immunisation, with healthcare workers (HCWs) given early priority because of the potential for workplace exposure and the risk of onward transmission to patients. The UK’s Joint Committee on Vaccination and Immunisation has recommended maximising the number of people vaccinated with first doses at the expense of early booster vaccinations, based on single-dose efficacy against symptomatic COVID-19 disease.1-3

At the time of writing, three COVID-19 vaccines have been granted emergency use authorisation in the UK, including the BNT162b2 mRNA COVID-19 vaccine (Pfizer-BioNTech). A vital outstanding question is whether this vaccine prevents asymptomatic SARS-CoV-2 infection, as well as symptomatic COVID-19 disease, because sub-clinical infection following vaccination could continue to drive transmission. This is especially important because many UK HCWs have received this vaccine, and nosocomial COVID-19 infection has been a persistent problem.

Through the implementation of a comprehensive, 24-hour-turnaround, PCR-based HCW screening programme at Cambridge University Hospitals NHS Foundation Trust (CUHNFT), we previously demonstrated the frequent presence of pauci- and asymptomatic infection amongst HCWs during the UK’s first wave of the COVID-19 pandemic.4 Here, we evaluate the effect of first-dose BNT162b2 vaccination on test positivity rates and cycle threshold (Ct) values in the asymptomatic arm of our programme, which now offers weekly screening to all staff.

Vaccination of HCWs at CUHNFT began on 8th December 2020, with mass vaccination from 8th January 2021. Here, we analyse data from the two weeks spanning 18th to 31st January 2021, during which: (a) the prevalence of COVID-19 amongst HCWs remained approximately constant; and (b) we screened comparable numbers of vaccinated and unvaccinated HCWs. Over this period, 4,408 (week 1) and 4,411 (week 2) PCR tests were performed on individuals reporting well to work. We stratified HCWs as <12 days or ≥12 days post-vaccination, because this was the point at which protection against symptomatic infection began to appear in phase III clinical trials.2

26/3,252 (0·80%) tests from unvaccinated HCWs were positive (Ct<36), compared to 13/3,535 (0·37%) from HCWs <12 days post-vaccination and 4/1,989 (0·20%) tests from HCWs ≥12 days post-vaccination (p=0·023 and p=0·004, respectively; Fisher’s exact test, Figure). This suggests a four-fold decrease in the risk of asymptomatic SARS-CoV-2 infection amongst HCWs ≥12 days post-vaccination, compared to unvaccinated HCWs, with an intermediate effect amongst HCWs <12 days post-vaccination.

A marked reduction in infections was also seen when analyses were repeated with: (a) inclusion of HCWs testing positive through both the symptomatic and asymptomatic arms of the programme (56/3,282 (1·71%) unvaccinated vs 8/1,997 (0·40%) ≥12 days post-vaccination, 4·3-fold reduction, p=0·00001); (b) inclusion of PCR tests which were positive at the limit of detection (Ct>36; 42/3,268 (1·29%) vs 15/2,000 (0·75%), 1·7-fold reduction, p=0·075); and (c) extension of the period of analysis to include the six weeks from December 28th 2020 to February 7th 2021 (113/14,083 (0·80%) vs 5/4,872 (0·10%), 7·8-fold reduction, p=1×10⁻⁹). In addition, the median Ct value of positive tests showed a non-significant trend towards increase between unvaccinated HCWs and HCWs ≥12 days post-vaccination (23·3 to 30·3, Figure), suggesting that samples from vaccinated individuals had lower viral loads.

We therefore provide real-world evidence for a high level of protection against asymptomatic SARS-CoV-2 infection after a single dose of BNT162b2 vaccine, at a time of predominant transmission of the UK COVID-19 variant of concern 202012/01 (lineage B.1.1.7), and amongst a population with a relatively low frequency of prior infection (7·2% antibody positive).5

This work was funded by a Wellcome Senior Clinical Research Fellowship to MPW (108070/Z/15/Z), a Wellcome Principal Research Fellowship to PJL (210688/Z/18/Z), and an MRC Clinician Scientist Fellowship (MR/P008801/1) and NHSBT workpackage (WPA15-02) to NJM. Funding was also received from Addenbrooke’s Charitable Trust and the Cambridge Biomedical Research Centre. We also acknowledge contributions from all staff at CUHNFT Occupational Health and Wellbeing and the Cambridge COVID-19 Testing Centre.
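The headline comparison can be reproduced from the published counts alone. A minimal sketch, assuming SciPy is available; the 2x2 table uses the unvaccinated and ≥12-day groups, and the printed p-value should agree with the reported p=0·004 up to rounding:

```python
# A minimal sketch reproducing the unvaccinated vs >=12-days comparison from
# the published counts; SciPy is assumed.
from scipy.stats import fisher_exact

table = [
    [26, 3252 - 26],   # unvaccinated: positive, negative
    [4,  1989 - 4],    # >=12 days post-vaccination: positive, negative
]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")  # expect p ~ 0.004
```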

Guangming Wang

and 4 more

Tam Hunt

and 1 more

Tam Hunt, Jonathan Schooler
University of California Santa Barbara

Synchronization, harmonization, vibrations, or simply resonance in its most general sense seems to have an integral relationship with consciousness itself. One of the possible “neural correlates of consciousness” in mammalian brains is a combination of gamma, beta and theta synchrony. More broadly, we see similar kinds of resonance patterns in living and non-living structures of many types. What clues can resonance provide about the nature of consciousness more generally? This paper provides an overview of resonating structures in the fields of neuroscience, biology and physics and attempts to coalesce these data into a solution to what we see as the “easy part” of the Hard Problem, which is generally known as the “combination problem” or the “binding problem.” The combination problem asks: how do micro-conscious entities combine into a higher-level macro-consciousness? The proposed solution in the context of mammalian consciousness suggests that a shared resonance is what allows different parts of the brain to achieve a phase transition in the speed and bandwidth of information flows between the constituent parts. This phase transition allows for richer varieties of consciousness to arise, with the character and content of that consciousness in each moment determined by the particular set of constituent neurons. We also offer more general insights into the ontology of consciousness and suggest that consciousness manifests as a relatively smooth continuum of increasing richness in all physical processes, distinguishing our view from emergentist materialism. We refer to this approach as a (general) resonance theory of consciousness and offer some responses to Chalmers’ questions about the different kinds of “combination problem.”

At the heart of the universe is a steady, insistent beat: the sound of cycles in sync…. [T]hese feats of synchrony occur spontaneously, almost as if nature has an eerie yearning for order.
Steven Strogatz, Sync: How Order Emerges From Chaos in the Universe, Nature and Daily Life (2003)

If you want to find the secrets of the universe, think in terms of energy, frequency and vibration.
Nikola Tesla (1942)

I. Introduction

Is there an “easy part” and a “hard part” to the Hard Problem of consciousness? In this paper, we suggest that there is. The harder part is arriving at a philosophical position with respect to the relationship of matter and mind. This paper is about the “easy part” of the Hard Problem, but we address the “hard part” briefly in this introduction.

We have both arrived, after much deliberation, at the position of panpsychism or panexperientialism (all matter has at least some associated mind/experience and vice versa). This is the view that all things and processes have both mental and physical aspects. Matter and mind are two sides of the same coin.

Panpsychism is one of many possible approaches that addresses the “hard part” of the Hard Problem. We adopt this position for all the reasons various authors have listed (Chalmers 1996, Griffin 1997, Hunt 2011, Goff 2017). This first step is particularly powerful if we adopt the Whiteheadian version of panpsychism (Whitehead 1929).

Reaching a position on this fundamental question of how mind relates to matter must be based on a “weight of plausibility” approach, rather than on definitive evidence, because establishing definitive evidence with respect to the presence of mind/experience is difficult.
We must generally rely on examining various “behavioral correlates of consciousness” in judging whether entities other than ourselves are conscious, even with respect to other humans, since the only consciousness we can know with certainty is our own.

Positing that matter and mind are two sides of the same coin explains the problem of consciousness insofar as it avoids the problems of emergence, because under this approach consciousness doesn’t emerge. Consciousness is, rather, always present, at some level, even in the simplest of processes, but it “complexifies” as matter complexifies, and vice versa. Consciousness starts very simple and becomes more complex and rich under the right conditions, which in our proposed framework rely on resonance mechanisms. Matter and mind are two sides of the coin. Neither is primary; they are coequal.

We acknowledge the challenges of adopting this perspective, but encourage readers to consider the many compelling reasons to consider it that are reviewed elsewhere (Chalmers 1996, Griffin 1998, Hunt 2011, Goff 2017, Schooler, Schooler, & Hunt 2011, Schooler 2015).

Taking a position on the overarching ontology is the first step in addressing the Hard Problem. But this leads to related questions: at what level of organization does consciousness reside in any particular process? Is a rock conscious? A chair? An ant? A bacterium? Or are only the smaller constituents, such as atoms or molecules, of these entities conscious? And if there is some degree of consciousness even in atoms and molecules, as panpsychism suggests (albeit of a very rudimentary nature, an important point to remember), how do these micro-conscious entities combine into the higher-level and obvious consciousness we witness in entities like humans and other mammals?

This set of questions is known as the “combination problem,” another now-classic problem in the philosophy of mind, and is what we describe here as the “easy part” of the Hard Problem. Our characterization of this part of the problem as “easy” is, of course, more than a little tongue in cheek. The authors have discussed frequently with each other which part of the Hard Problem should be labeled the easier part and which the harder part. Regardless of the labels we choose, however, this paper focuses on our suggested solution to the combination problem.

Various solutions to the combination problem have been proposed, but none has gained widespread acceptance. This paper further elaborates a proposed solution to the combination problem that we first described in Hunt 2011 and Schooler, Hunt, and Schooler 2011. The proposed solution rests on the idea of resonance, a shared vibratory frequency, which can also be called synchrony or field coherence. We will generally use resonance and “sync,” short for synchrony, interchangeably in this paper. We describe the approach as a general resonance theory of consciousness or just “general resonance theory” (GRT). GRT is a field theory of consciousness wherein the various specific fields associated with matter and energy are the seat of conscious awareness. A summary of our approach appears in Appendix 1.

All things in our universe are constantly in motion, in process. Even objects that appear to be stationary are in fact vibrating, oscillating, resonating, at specific frequencies. So all things are actually processes. Resonance is a specific type of motion, characterized by synchronized oscillation between two states.
An interesting phenomenon occurs when different vibrating processes come into proximity: they will often start vibrating together at the same frequency. They “sync up,” sometimes in ways that can seem mysterious, and allow for richer and faster information and energy flows (Figure 1 offers a schematic). Examining this phenomenon leads to potentially deep insights about the nature of consciousness, not only in the human/mammalian context but also at a deeper ontological level.
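The spontaneous “syncing up” described here is classically illustrated by the Kuramoto model of coupled oscillators, a standard toy model from the dynamics literature (and not the authors’ GRT formalism). A minimal sketch with arbitrary parameters: above a critical coupling strength, oscillators with different natural frequencies phase-lock.

```python
# Illustrative only: the standard Kuramoto model of coupled oscillators, a
# generic toy for "syncing up"; not the authors' GRT formalism.
import numpy as np

rng = np.random.default_rng(0)
n, coupling, dt, steps = 100, 2.0, 0.01, 5000
omega = rng.normal(1.0, 0.3, n)         # natural frequencies differ per oscillator
theta = rng.uniform(0, 2 * np.pi, n)    # random initial phases

for _ in range(steps):
    order = np.mean(np.exp(1j * theta))  # mean-field order parameter
    # Each oscillator is pulled toward the population's average phase.
    theta += dt * (omega + coupling * np.abs(order)
                   * np.sin(np.angle(order) - theta))

# |order| near 0 means incoherence; near 1, the population has synchronized.
print(f"order parameter r = {abs(np.mean(np.exp(1j * theta))):.2f}")
```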

Susanne Schilling

and 9 more

Jessica Mead

and 6 more

The construct of wellbeing has been criticised as a neoliberal construction of western individualism that ignores wider systemic issues, including the increasing burden of chronic disease, widening inequality, and concerns over environmental degradation and anthropogenic climate change. While these criticisms overlook recent developments, there remains a need for biopsychosocial models that extend theoretical grounding beyond individual wellbeing, incorporating overlapping contextual issues relating to community and environment. Our first GENIAL model \cite{Kemp_2017} provided a more expansive view of pathways to longevity in the context of individual health and wellbeing, emphasising bidirectional links to positive social ties and the impact of sociocultural factors. In this paper, we build on these ideas and propose GENIAL 2.0, focusing on intersecting individual-community-environmental contributions to health and wellbeing, and laying out an evidence-based theoretical framework on which future research and therapeutic innovations could be based. We suggest that our transdisciplinary model of wellbeing - focusing on individual, community and environmental contributions to personal wellbeing - will help to move the research field forward. In reconceptualising wellbeing, GENIAL 2.0 bridges the gap between psychological science and population health systems, and presents opportunities for enhancing the health and wellbeing of people living with chronic conditions. Implications for future generations, including the very survival of our species, are discussed.

Mark Ferris

and 14 more

Introduction

Consistent with World Health Organization (WHO) advice [1], UK Infection Prevention and Control guidance recommends that healthcare workers (HCWs) caring for patients with coronavirus disease 2019 (COVID-19) should use fluid-resistant surgical masks type IIR (FRSMs) as respiratory protective equipment (RPE), unless aerosol generating procedures (AGPs) are being undertaken or are likely, when a filtering face piece 3 (FFP3) respirator should be used [2]. In a recent update, an FFP3 respirator is recommended if “an unacceptable risk of transmission remains following rigorous application of the hierarchy of control” [3]. Conversely, guidance from the Centers for Disease Control and Prevention (CDC) recommends that HCWs caring for patients with COVID-19 should use an N95 or higher-level respirator [4]. WHO guidance suggests that a respirator, such as FFP3, may be used for HCWs in the absence of AGPs if availability or cost is not an issue [1].

A recent systematic review undertaken for Public Health England (PHE) concluded that: “patients with SARS-CoV-2 infection who are breathing, talking or coughing generate both respiratory droplets and aerosols, but FRSM (and where required, eye protection) are considered to provide adequate staff protection” [5]. Nevertheless, FFP3 respirators are more effective in preventing aerosol transmission than FRSMs, and observational data suggest that they may improve protection for HCWs [6]. It has therefore been suggested that respirators should be considered as a means of affording the best available protection [7], and some organisations have decided to provide FFP3 (or equivalent) respirators to HCWs caring for COVID-19 patients, despite a lack of mandate from local or national guidelines [8].

Data from the HCW testing programme at Cambridge University Hospitals NHS Foundation Trust (CUHNFT) during the first wave of the UK severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic indicated a higher incidence of infection amongst HCWs caring for patients with COVID-19, compared with those who did not [9]. Subsequent studies have confirmed this observation [10, 11]. This disparity persisted at CUHNFT in December 2020, despite control measures consistent with PHE guidance and audits indicating good compliance. The CUHNFT infection control committee therefore implemented a change of RPE for staff on “red” (COVID-19) wards from FRSMs to FFP3 respirators. In this study, we analyse the incidence of SARS-CoV-2 infection in HCWs before and after this transition.

How it works

Upload or create your research work
You can upload Word, PDF, and LaTeX files, as well as data, code, Jupyter Notebooks, videos, and figures. Or start a document from scratch.
Disseminate your research rapidly
Post your work as a preprint. A Digital Object Identifier (DOI) makes your research citeable and discoverable immediately.
Get published in a refereed journal
Track the status of your paper as it goes through peer review. When published, your preprint automatically links to the publisher version.
Learn More

Most recent documents

Joshua H Viers

and 5 more

The San Joaquin Valley of California is a paradox. It is highly productive agriculturally, but highly vulnerable to hydroclimatic shocks. A principal means of improving systemwide water management flexibility is strategic land fallowing. To date, however, land fallowing is sporadic, ephemeral, and unpredictable. Understanding patterns and drivers of land fallowing would allow resource managers to better plan for water allocation to cities, farms, and ecosystems. The inability to accurately anticipate the distribution of fallow land hampers the optimization of resource use, thereby affecting both the agricultural economy and regional water management. Thus, forecasting fallow land would allow for improved agricultural productivity and water delivery, particularly to vulnerable communities and ecosystems. To address this issue, this study compares state-of-the-art machine learning models with deep learning methods using a rich dataset that includes satellite images, weather patterns, soil properties, historical land use information, and other pertinent geospatial data. We employ an ensemble of machine learning algorithms, i.e., Random Forest (RF), Extreme Gradient Boosting (XGBoost), and Support Vector Machines (SVM), for forecasting fallow land (a minimal sketch of such an ensemble appears below). Additionally, we investigate the use of convolutional and recurrent neural networks (such as ConvLSTM, CNN-LSTM, and GAN) to take advantage of the spatial and temporal relationships contained in the data. Our preliminary findings suggest that deep learning models, with their capacity to capture intricate spatial and temporal correlations within the data, may perform better than more conventional machine learning techniques. As a result, these deep learning models could help water managers, policymakers, and researchers make more precise forecasts about the distribution of fallow land, leading to better use of water resources and agricultural output. The results of this study can not only offer insights for California water managers and policymakers, but also enable practical plans for more optimal resource allocation, sustainable agricultural land use, and long-term water security. Our approach may also be applicable to other agricultural regions facing similar challenges globally.
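As referenced above, here is a minimal sketch of the RF + XGBoost + SVM ensemble, assuming scikit-learn and xgboost are installed; `X` and `y` are random placeholders for the study’s per-field geospatial features and fallow/not-fallow labels, and no real data or tuning is implied.

```python
# A minimal sketch of a soft-voting RF + XGBoost + SVM ensemble. X and y are
# random stand-ins for the study's feature matrix and fallow labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))            # stand-in feature matrix
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # stand-in fallow labels

ensemble = VotingClassifier(
    estimators=[
        ("rf",  RandomForestClassifier(n_estimators=200, random_state=0)),
        ("xgb", XGBClassifier(n_estimators=200, eval_metric="logloss")),
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
    ],
    voting="soft",   # average predicted probabilities across the three models
)
print(cross_val_score(ensemble, X, y, cv=3).mean())
```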

Evan Krell

and 5 more

Atmospheric AI modeling is increasingly reliant on complex machine learning (ML) techniques and high-dimensional gridded inputs to develop models that achieve high predictive skill. Complex deep learning architectures such as convolutional neural networks and transformers are trained to model highly non-linear atmospheric phenomena such as coastal fog [1], tornadoes [2], and severe hail [3]. The input data is typically gridded spatial data composed of multiple channels of satellite imagery, numerical weather prediction output, reanalysis products, etc. In many studies, the use of complex architectures and high-dimensional inputs was shown to substantially outperform simpler alternatives.

A major challenge when using complex ML techniques is that it is very difficult to understand how the trained model works. The complexity of the model obfuscates the relationship between the input and prediction. It is often of interest to understand a model’s decision-making process. By exposing the model’s behavior, users could verify that the model has learned physically realistic predictive patterns. This information can be used to calibrate trust in the model. The model may have also learned novel patterns within the data that could be used to gain new insights into the atmospheric process. Extracting learned patterns could be used to generate hypotheses for scientific discovery.

The rapid adoption of complex ML models and the need to understand how they work has led to the development of a broad class of techniques called eXplainable Artificial Intelligence (XAI). These methods probe the models in various ways to reveal insights into how they work. Correlations among input features can make it challenging to produce meaningful explanations. The gridded spatial data common in atmospheric modeling applications typically have extensive correlation. Spatial autocorrelation is present among the cells of each spatial grid, but autocorrelation may also exist across the gridded data volume due to spatial or temporal relationships between adjacent channels. In addition, there may be correlations between distant locations due to teleconnections between them.

Correlated input features may cause high variance among the trained models. If grid cells are highly correlated, then the target function that the network is attempting to learn is ill-defined, and an infinite number of models can be generated that achieve approximately equal performance. Even assuming a perfect XAI method exists, the attribution reflects only the patterns learned by a given model. It is arbitrary which of the correlated features are used by a given model. This can lead to a misleading understanding of the actual relationship between the input features and the target.

A potential solution is to group the correlated features before applying XAI. Attribution can be assigned to each group rather than to individual cells. In this case, all the correlated cells are permuted at the same time to analyze their collective impact on the output. The purpose is to reveal the contribution of each group of related cells toward the model output. Ideally, the explanations are insensitive to the random choice among correlated features learned by the model. Without grouping, the user can be misled to consider a feature as unrelated to the target because of the presence of correlated features. With grouping, the explanations should better reveal the learned patterns. Grouping features based on correlation can be challenging.
The correlation rarely equals one, and the strength of the correlation influences the variance among trained models. Calculating the correlation can be difficult because of partial correlations and fuzzy, continuous boundaries. The choice of groups can greatly influence the explanations.

Another challenge is that it is not straightforward to assess the quantitative accuracy of an XAI technique, because there is rarely a ground-truth explanation to compare to. If we knew the attribution, we would not need XAI methods. Synthetic benchmarks for analyzing XAI have been proposed as a solution [4]. It is possible to define a non-linear function such that the contribution of each grid cell’s value to the function output can be derived. This attribution map represents the ground truth for comparison with the output of XAI methods that are applied to a model that very closely approximates the hand-crafted function.

In this research, we develop a set of benchmarks to investigate the influence of correlated features on the variation in XAI outputs for a set of trained models. We then explore how features can be grouped to reduce the explanation variance so that users have improved insight into the learned patterns. First, we create a set of very simple mathematical demonstrations that precisely demonstrate the influence of correlated features and how grouping features provides a solution. Using insights from these experiments, we develop a tool for detecting when correlated features are likely to cause misleading explanations. We then create a set of more realistic benchmarks based on atmospheric modeling problems such as sea surface temperature and coastal fog prediction. By defining benchmarks with known ground-truth explanations, we can analyze various techniques for grouping the grid cells based on their correlations. Based on our findings, we offer recommendations for strategies to group correlated data so that users can better leverage XAI results toward model development and scientific insights.

[1] Kamangir, H., Collins, W., Tissot, P., King, S. A., Dinh, H. T. H., Durham, N., & Rizzo, J. (2021). FogNet: A multiscale 3D CNN with double-branch dense block and attention mechanism for fog prediction. Machine Learning with Applications, 5, 100038.
[2] Lagerquist, R. (2020). Using Deep Learning to Improve Prediction and Understanding of High-impact Weather.
[3] Gagne II, D. J., Haupt, S. E., Nychka, D. W., & Thompson, G. (2019). Interpretable deep learning for spatial analysis of severe hailstorms. Monthly Weather Review, 147(8), 2827-2845.
[4] Mamalakis, A., Ebert-Uphoff, I., & Barnes, E. A. (2022). Neural network attribution methods for problems in geoscience: A novel synthetic benchmark dataset. Environmental Data Science, 1, e8.
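The grouping idea described in this abstract can be made concrete with a group-wise variant of permutation importance. A minimal sketch, assuming a fitted `model` with a `predict` method, arrays `X` and `y`, a `groups` mapping of group names to column indices, and a `score_fn` such as scikit-learn’s `accuracy_score`; all names are placeholders, not the benchmark code itself.

```python
# A minimal sketch of group-wise permutation importance: every grid cell in a
# correlated group is permuted with the same row shuffle, so attribution goes
# to the group rather than to an arbitrary member of it.
import numpy as np

def grouped_permutation_importance(model, X, y, groups, score_fn,
                                   n_repeats=10, seed=0):
    rng = np.random.default_rng(seed)
    baseline = score_fn(y, model.predict(X))
    importances = {}
    for name, cols in groups.items():
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            perm = rng.permutation(len(X))
            Xp[:, cols] = X[perm][:, cols]   # shuffle the whole group jointly
            drops.append(baseline - score_fn(y, model.predict(Xp)))
        importances[name] = float(np.mean(drops))   # mean score drop per group
    return importances
```

Without the grouping, each repeat would permute one cell at a time, and the model could simply lean on a correlated neighbour, understating that cell’s importance.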

Yijing Chen

and 24 more

Browse more recent preprints

Powerful features of Authorea

Under Review
Communities
Collections
Learn More
Journals connected to Under Review
Ecology and Evolution
Allergy
Clinical Case Reports
Land Degradation & Development
Mathematical Methods in the Applied Sciences
Biotechnology Journal
Plant, Cell & Environment
International Journal of Quantum Chemistry
PROTEINS: Structure, Function, and Bioinformatics
All IET journals
All AGU journals
All Wiley journals
READ ABOUT UNDER REVIEW
Featured Collection
READ ABOUT COLLECTIONS
Featured communities
Explore More Communities

Other benefits of Authorea

Multidisciplinary

A repository for any field of research, from Anthropology to Zoology

Comments

Discuss your preprints with your collaborators and the scientific community

Interactive Figures

Not just PDFs. You can publish d3.js and Plot.ly graphs, data, code, and Jupyter notebooks.

Documents recently accepted in scholarly journals

Tanja Kalic

and 21 more

Background: Recent studies indicated that fish-allergic patients may safely consume certain fish species. Multiplex IgE testing facilitates the identification of species tolerated by individual patients. Methods: Sera were collected from 263 fish-allergic patients from Austria, China, Denmark, Luxembourg, Norway and Spain. Specific IgE (sIgE) to parvalbumins (PVs) from 10 fish species, along with IgE to 7 raw and 6 heated fish extracts, was quantified using a research version of the ALEX 2 assay. IgE signatures of individual patients and patient groups were analyzed using SPSS and R. Results: sIgE to alpha-PV from ray, a cartilaginous fish, was not detected in 78% of the patients, while up to 41% of the patients, depending on their country of origin, tested negative for at least one beta-PV. sIgE values were highest for mackerel and tuna PVs (>10 kUA/L) and significantly lower for cod (4.9 kUA/L) and sole PVs (2.55 kUA/L). 17% of the patients, although negative for PVs, tested positive for the respective fish extracts. Based on the absence of IgE to PVs and extracts, up to 21% of the patients were identified as potentially tolerating one or more bony fish. Up to 90% of the patients tested negative for ray. The probability of negativity to one fish based on negativity to others was calculated. Negativity to tuna and mackerel emerged as a good marker of negativity to additional bony fish. Conclusion: Measuring sIgE to PVs and extracts from evolutionarily distant fish species indicates bony and cartilaginous fish species for tolerance-confirming food challenges.
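The conditional “probability of negativity” analysis described in the Results reduces to a simple conditional proportion. A minimal sketch, assuming pandas; the DataFrame `neg` of boolean per-patient negativity flags and its column names are illustrative, not the study’s data.

```python
# A minimal sketch: P(negative to `target` | negative to `given`), computed
# from a hypothetical patients x species boolean DataFrame (True = negative).
import pandas as pd

def p_neg_given_neg(neg: pd.DataFrame, target: str, given: str) -> float:
    subset = neg[neg[given]]          # patients who tested negative to `given`
    return subset[target].mean()      # fraction of those also negative to `target`

# Illustrative usage with made-up data, not the study's measurements:
neg = pd.DataFrame({"tuna": [True, True, False, True],
                    "cod":  [True, True, False, False]})
print(p_neg_given_neg(neg, target="cod", given="tuna"))  # 2/3 here
```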

Kévin Spinicci

and 3 more

Hypoxia Inducible Factor (HIF), the main actor in the cell response to hypoxia, represents a potential target in cancer therapy. HIF is involved in many biological processes such as cell proliferation, survival, apoptosis, angiogenesis, iron metabolism and glucose metabolism. This protein regulates the expression of Lactate Dehydrogenase (LDH) and Pyruvate Dehydrogenase (PDH), both essential for determining how pyruvate is used in aerobic and anaerobic pathways. HIF upregulates LDH, increasing the conversion of pyruvate into lactate, which leads to higher secretion of lactic acid by the cell and reduced pH in the microenvironment. HIF indirectly downregulates PDH, decreasing the conversion of pyruvate into Acetyl Coenzyme A, which leads to reduced usage of the Tricarboxylic Acid (TCA) cycle in aerobic pathways. Upregulation of HIF may therefore promote the use of anaerobic pathways for energy production even in normal extracellular oxygen conditions; higher use of glycolysis in normal oxygen conditions is called the Warburg effect. In this paper, we focus on HIF variations during tumour growth and study, through a mathematical model, their impact on the two key metabolic genes PDH and LDH, to investigate HIF's role in the emergence of the Warburg effect. Mathematical equations describing the enzyme regulation pathways were solved for each cell of the tumour, represented in an agent-based model, to best capture the spatio-temporal oxygen variations during tumour development caused by cell consumption and reduced diffusion inside the tumour. Simulation results show that reduced HIF degradation in normoxia can induce higher lactic acid production. The Warburg effect emerges after the first period of hypoxia, before oxygen conditions return to a normal level. The results also show that targeting the upregulation of LDH and the downregulation of PDH could be relevant in therapy.
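The regulatory logic described in this abstract (oxygen-dependent HIF degradation, HIF-driven LDH upregulation, indirect PDH repression) can be caricatured with a toy ODE system. A minimal sketch with hypothetical rate constants, assuming SciPy; this is not the paper’s calibrated model.

```python
# Illustrative only: a toy ODE of the regulation described above. HIF decays
# faster when oxygen is high; LDH production rises with HIF; PDH production
# is repressed by HIF. All rate constants are hypothetical.
from scipy.integrate import solve_ivp

def rhs(t, state, oxygen):
    hif, ldh, pdh = state
    d_hif = 1.0 - (0.5 + 5.0 * oxygen) * hif       # O2-dependent HIF degradation
    d_ldh = 0.8 * hif - 0.5 * ldh                  # HIF upregulates LDH
    d_pdh = 0.8 / (1.0 + 4.0 * hif) - 0.5 * pdh    # HIF indirectly represses PDH
    return [d_hif, d_ldh, d_pdh]

for o2 in (1.0, 0.1):                              # normoxia vs hypoxia
    sol = solve_ivp(rhs, (0, 50), [0.2, 1.0, 1.0], args=(o2,))
    hif, ldh, pdh = sol.y[:, -1]
    print(f"O2={o2}: HIF={hif:.2f} LDH={ldh:.2f} PDH={pdh:.2f}")
```

Running this toy reproduces the qualitative picture: at low oxygen, HIF rises, LDH rises (more lactate), and PDH falls (less TCA-cycle flux).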

Heba Saber

and 1 more

Cristina Teixeira

and 2 more

Objective: To estimate time trends in the frequency of severe perineal tears (SPT) in Portugal and their relationship with episiotomy. Design: Nationwide register-based study using the national inpatient database. Setting: All Portuguese public hospitals. Population: All women with a singleton vaginal delivery between 2000 and 2015. Methods: Time-trend analysis using joinpoint regression models was performed to identify time trends in the prevalence of SPT and of risk factors, including episiotomy. Poisson regression models were fitted to assess the association between episiotomy and SPT. Main Outcome Measures: Annual percentage change (APC) with 95% confidence interval (95% CI) in the prevalence of SPT and its risk factors; adjusted relative risk (RR) with respective 95% CI. Results: Of 908,889 singleton vaginal deliveries, 20.6% were instrumental deliveries, 76.7% involved episiotomy, and 0.56% were complicated by SPT. SPT decreased among women with non-instrumental deliveries and no episiotomy from 2009 onwards (1.3% to 0.7%), whereas SPT kept increasing in women with episiotomy for both non-instrumental (0.1% in 2000 to 0.4% in 2015) and instrumental deliveries (0.7% in 2005 to 2.3% in 2015). Episiotomy was associated with a decrease in SPT, with adjusted RR varying between 2000 and 2015 from 0.18 (95% CI: 0.13-0.25) to 0.59 (95% CI: 0.44-0.79) for non-instrumental deliveries and from 0.45 (95% CI: 0.25-0.81) to 0.50 (95% CI: 0.40-0.72) for instrumental deliveries. Conclusions: The episiotomy rate could safely decrease further, as the main factor driving SPT rates seems to be an increase in awareness and reporting of SPT, particularly among women who underwent an episiotomy.
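Adjusted relative risks of the kind reported above are commonly estimated with a Poisson GLM and robust standard errors on one-row-per-delivery data. A minimal sketch, assuming statsmodels; the DataFrame is simulated and its variable names and rates are illustrative only, not the study’s registry data.

```python
# A minimal sketch of an RR estimate: Poisson GLM with robust (HC0) standard
# errors on simulated one-row-per-delivery data. Illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 50_000
df = pd.DataFrame({
    "episiotomy":   rng.binomial(1, 0.77, n),
    "instrumental": rng.binomial(1, 0.21, n),
})
df["spt"] = rng.binomial(1, 0.0056, n)   # ~0.56% overall SPT rate, as reported

model = smf.glm("spt ~ episiotomy + instrumental", data=df,
                family=sm.families.Poisson()).fit(cov_type="HC0")
print(np.exp(model.params["episiotomy"]))  # adjusted relative risk for episiotomy
```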

Mariem Gdoura

and 8 more

Introduction: SARS-CoV-2 serology testing serves multiple purposes, provided an efficient test is chosen. We evaluated and compared four different commercial serology tests, three of which have Food and Drug Administration (FDA) approval. Our goal was to provide new data to help guide the interpretation and choice of serological tests. Methods: Four commercial tests were evaluated: Cobas® Roche® (total anti-N antibodies), VIDAS® Biomerieux® (IgM and IgG anti-RBD antibodies), Mindray® (IgM and IgG anti-N and anti-RBD antibodies) and Access® Beckman Coulter® (IgG anti-RBD antibodies). We tested a positive panel (n=72 sera) obtained from confirmed COVID-19 patients and a negative panel (n=119) of pre-pandemic sera. We determined the analytical performance of each test and drew ROC curves to assess the manufacturers' thresholds. Results: A large range of variability between the tests was found. The Mindray® IgG and Cobas® tests showed the best overall sensitivity, 79.2% (95% CI: 67.9-87.8). Cobas® showed the best sensitivity after D14: 85.4% (95% CI: 72.2-93.9). The best specificity was noted for the Cobas®, VIDAS® IgG and Access® IgG tests (100%; 95% CI: 96.9-100). Access® had the lowest sensitivity, even after D14 (55.5%; 95% CI: 43.4-67.3). The VIDAS® IgM and Mindray® IgM tests showed the lowest specificity and sensitivity rates. Overall, only 43 out of 72 sera gave concordant results (59.7%). Cut-offs retained for significantly better sensitivity and accuracy, without significantly altering specificity, were 0.87 for VIDAS® IgM (p=0.01), 0.55 for VIDAS® IgG (p=0.05) and 0.14 for Access® (p<10⁻⁴). Conclusion: Although these tests are FDA approved, each laboratory should perform its own evaluation of commercial tests. Test variability raises concerns that seroprevalence estimates may vary significantly depending on the serology test used.
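The threshold re-assessment described in the Methods follows a standard ROC workflow. A minimal sketch, assuming scikit-learn; only the panel sizes (72 positive, 119 negative sera) come from the text, the assay scores are simulated stand-ins, and Youden's J is one common cut-off criterion rather than necessarily the authors' choice.

```python
# A minimal sketch of cut-off re-assessment via ROC analysis. Panel sizes come
# from the abstract; scores are simulated stand-ins for assay signals.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
truth  = np.r_[np.ones(72), np.zeros(119)]
scores = np.r_[rng.normal(1.2, 0.8, 72), rng.normal(0.2, 0.3, 119)]

fpr, tpr, thresholds = roc_curve(truth, scores)
j = tpr - fpr                          # Youden's J at each candidate threshold
best = np.argmax(j)
print(f"cut-off={thresholds[best]:.2f}, "
      f"sensitivity={tpr[best]:.2f}, specificity={1 - fpr[best]:.2f}")
```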

Browse more published preprints

Featured templates
Featured and interactive
Journals with direct submission
Explore All Templates