AUTHOREA

Preprints

Explore 11,744 preprints on the Authorea Preprint Repository

A preprint on Authorea can be a complete scientific manuscript submitted to a journal, an essay, a whitepaper, or a blog post. Preprints on Authorea can contain datasets, code, figures, interactive visualizations and computational notebooks.
Read more about preprints.

Heritage Connector: A Machine Learning Framework for Building Linked Open Data from M...
Kalyan Dutia
John Stack

and 1 more

January 06, 2021
As with almost all data, museum collection catalogues are largely unstructured, variable in consistency and overwhelmingly composed of thin records. The form of these catalogues means that the potential for new forms of research, access and scholarly enquiry that range across multiple collections and related datasets remains dormant. In the project Heritage Connector: Transforming text into data to extract meaning and make connections, we are applying a battery of digital techniques to connect similar, identical and related items within and across collections and other publications. In this paper we describe a framework to create a Linked Open Data knowledge graph (KG) from digital museum catalogues, connect entities within this graph to Wikidata, and create new connections in this graph from text. We focus on the use of machine learning to create these links at scale with a small amount of labelled data, on a mid-range laptop or a small cloud virtual machine. We publish open-source software providing tools to perform the tasks of KG creation, entity matching and named entity recognition under these constraints.
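The entity-matching step described above can be illustrated with a toy sketch (this is not the Heritage Connector code, which uses machine-learning classifiers and richer features; the QIDs and labels below are placeholders): rank candidate Wikidata labels for a catalogue record by string similarity, using only the Python standard library.

from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Normalized string similarity in [0, 1]; a real matcher would also use
    # dates, object types, and learned features rather than labels alone.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def rank_candidates(record_label: str, candidates: dict) -> list:
    # Return (placeholder QID, score) pairs, best match first.
    scored = [(qid, similarity(record_label, label)) for qid, label in candidates.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    candidates = {"Q-PLACEHOLDER-1": "Rocket (locomotive)", "Q-PLACEHOLDER-2": "Rocket"}
    print(rank_candidates("Rocket locomotive, 1829", candidates))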
Thinking outside the cavity: effusion lymphoma primary to bone marrow
Sean Gu
Zenggang Pan

and 2 more

January 06, 2021
Primary effusion lymphoma (PEL) is a distinct disease entity of large B-cell lymphomas most often occurring in immunocompromised patients. We present a rare case of extracavitary PEL primary to the bone marrow in an HIV-positive patient.
Historical surveys reveal a long-term decline in muskrat populations
Carrie Sadowski
Jeff Bowman

and 1 more

January 06, 2021
The muskrat (Ondatra zibethicus) is an iconic species in Canada, valued for both its fur and its integral role in wetland ecosystems, and widely regarded for its perseverance. However, the resilience of this semi-aquatic mammal seems to be in question now as increasing evidence points to widespread population declines. Recent analyses of harvest data across North America suggest a reduction in their numbers, but this has not been widely corroborated by population surveys. In this study we replicated historic muskrat house count surveys at two large Great Lakes coastal wetlands and present confirmation that declines in muskrat harvest correspond to actual declines in muskrat abundance. At the Point Pelee National Park marsh and the Matchedash Bay-Gray Marsh wetland we found that mean muskrat house counts declined by 93% and 91% respectively between historic surveys 40-50 years ago and contemporary surveys over the past five years. The factors responsible for these dramatic declines remain unclear but there may be a relationship with changes in the habitat quality of these wetlands that have occurred over the same time frame. Not only is the loss of muskrats an issue for the resulting loss of the wetland ecosystem services they provide, but it may be an indication of broader marsh ecosystem degradation. As such, a scarcity of muskrats should be considered a red flag for the state of biodiversity in our wetlands. Continued surveys and ongoing research are needed to shed more light on the current status of muskrat populations and their marsh habitats across their native range. Keywords: Fur harvest; Muskrat; Ondatra; Population decline; Typha; Wetlands
Trustworthy computational evidence through transparency and reproducibility
Lorena A. Barba

January 06, 2021
Many high-performance computing applications are of high consequence to society. Global climate modeling is a historic example of this. In 2020, the societal issue of greatest concern, the still-raging COVID-19 pandemic, saw a legion of computational scientists turning their endeavors to new research projects in this direction. Applications of such high consequence highlight the need for building trustworthy computational models. Emphasizing transparency and reproducibility has helped us build more trust in computational findings. In the context of supercomputing, however, we may ask: how do we trust results from computations that cannot be repeated? Access to supercomputers is limited, computing allocations are finite (and competitive), and machines are decommissioned after a few years. In this context, we might ask how reproducibility can be ensured, certified even, without exercising the original digital artifacts used to obtain new scientific results. This is often the situation in HPC. It is compounded now with greater adoption of machine learning techniques, which can be opaque. The ACM in 2017 issued a Statement on Algorithmic Transparency and Accountability, targeting algorithmic decision-making using data models \cite{council2017}. Among its seven principles, it calls for data provenance, auditability, validation and testing. These principles can be applied not only to data models, but to HPC in general. I want to discuss the next steps for reproducibility: how we may adapt our practice to achieve what I call unimpeachable provenance, and full auditability and accountability of scientific evidence produced via computation.
An invited talk at SC20
I was invited to speak at SC20 about my work and insights on transparency and reproducibility in the context of HPC. The session's theme was Responsible Application of HPC, and the title of my talk was "Trustworthy computational evidence through transparency and reproducibility." At the previous SC, I had the distinction of serving as Reproducibility Chair, leading an expansion of the initiative, which was placed under the Technical Program that year. We moved to make Artifact Description appendices required for all SC papers, created a template and an author kit for the preparation of the appendices, and introduced three new Technical Program tracks in support of the initiative. These are: the Artifact Description & Evaluation Appendices track, with an innovative double-open constructive review process; the Reproducibility Challenge track; and the Journal Special Issue track, for managing the publication of select papers on the reproducibility benchmarks of the Student Cluster Competition. This year, the initiative was augmented to address issues of transparency, in addition to reproducibility, and a community sentiment study was launched to assess the impact of the effort, six years in, and canvass the community's outlook on various aspects of it.
Allow me to thank here Mike Heroux, Reproducibility Chair for SC in 2017 and 2018; Michela Taufer, SC19 General Chair, who put her trust in me to inherit the role from Mike; and Beth Plale, the SC20 Transparency and Reproducibility Chair. I had countless inspiring and supportive conversations with Mike and Michela about the topic during the many months of planning for SC19, and more productive conversations with Beth during the transition to her leadership.
Mike, Michela and I have served on other committees and working groups together, in particular, the group that met in July 2017 at the National Science Foundation (convened by Almadena Chtchelkanova) for the Workshop on Reproducibility Taxonomies for Computing and Computational Science. My presentation at that event condensed an inventory of uses of various terms like reproducibility and replication, across many fields of science \cite{barba2017}. I then wrote the review article "Terminologies for Reproducible Research," and posted it on arXiv \cite{barba2018}. It informed our workshop's report, which came out a few months later as a Sandia technical report \cite{taufer2018}. In it, we highlighted that the fields of computational and computing sciences provided two opposing definitions of the terms reproducible and replicable, representing an obstacle to progress in this sphere.
The Association for Computing Machinery (ACM), representing computer science and industry professionals, had recently established a reproducibility initiative, and adopted definitions diametrically opposite to those used in the computational sciences for more than two decades. In addition to raising awareness about the contradiction, we proposed a path to a compatible taxonomy. Compatibility is needed here because the computational sciences—astronomy, physics, epidemiology, biochemistry and others that use computing as a tool for discovery—and computing sciences (where algorithms, systems, software, and computers are the focus of study) have community overlap and often intersect in the venues of publication. The SC conference series is one example. Given the historical precedence and wider adoption of the definitions of reproducibility and replicability used in computational sciences, our Sandia report recommended that the ACM definitions be reversed. Several ACM-affiliated conferences were already using the artifact review and badging system (approved in 2016), so this was no modest suggestion. The report, however, was successful in raising awareness of the incompatible definitions, and the desirability of addressing them.
A direct outcome of the Sandia report was a proposal to the National Information Standards Organization (NISO) for a Recommended Practice Toward a Compatible Taxonomy, Definitions, and Recognition Badging Scheme for Reproducibility in the Computational and Computing Sciences. NISO is accredited by the American National Standards Institute (ANSI) to develop, maintain, and publish consensus-based standards for information management. The organization has more than 70 members; publishers, information aggregators, libraries and other content providers use its standards. I co-chaired this particular working group, with Gerry Grenier from IEEE and Wayne Graves from ACM; Mike Heroux was also a member. The goal of the NISO Reproducibility Badging and Definitions Working Group was to develop a Recommended Practice document, a step before development of a standard. As part of our joint work, we prepared a letter addressed to the ACM Publications Board, delivered in July 2019. It described the context and need for compatible reproducibility definitions and made the concrete request that ACM consider a change. By that time, not only did we have the Sandia report as justification, but the National Academies of Sciences, Engineering and Medicine (NASEM) had just released the report Reproducibility and Replicability in Science \cite{medicine2019}.
It was the product of a long consensus study conducted by 15 experts, including myself, sponsored by the National Science Foundation in response to a Congressional decree. The NASEM report put forth its definitions as:
Reproducibility is obtaining consistent results using the same input data, computational steps, methods and code, and conditions of analysis.
Replicability is obtaining consistent results across studies aimed at answering the same scientific question, each of which has obtained its own data.
The key contradiction with the ACM badging system resides in which term entails using the author-created digital artifacts (e.g., data and code). We stated in the NISO working-group letter that if the ACM definitions of reproducible and replicable could be interchanged, the working group could move forward towards its goal of drafting recommended practices for badging that would lead to wider adoption in other technical societies and publishers. The ACM Publications Board responded positively, and began working through the details of how to make changes to items already published in the Digital Library with the "Results Replicated" badge—about 188 affected items existed at that time. Over the summer of 2020, the ACM applied changes to the published Artifact Review and Badging web pages, and added a version number. From version 1.0, we see a note added that, as a result of discussions with NISO, the ACM was harmonizing its terminologies with those used in the broader scientific research community.
All this background serves to draw our attention to the prolonged, thoughtful, and sometimes arduous efforts that have been directed at charting paths for adoption and giving structure to reproducibility and replicability in our research communities. Let us move now to why and how the HPC community might move forward.
Insights on transparent, reproducible HPC research
Deployed barely over a year ago, the NSF-funded Frontera system at the Texas Advanced Computing Center (TACC) came in as the 8th most powerful supercomputer in the world, and the fastest on a university campus. Up to 80% of the available time on the system is allocated through the NSF Petascale Computing Resource Allocation program. The latest round of Frontera allocations (as of this writing) was just announced on October 25, 2020. I read through the fact sheet on the 15 newly announced allocations, to get a sense of the types of projects in this portfolio. Four projects are machine-learning or AI-focused, the same number as those in astronomy and astrophysics, and one more than those in weather or climate modeling. Other projects are single instances spanning volcanology/mantle mechanics, molecular dynamics simulations of ion channels, quantum physics in materials science, and one engineering project in fluid-structure interactions. One could gather these HPC projects into four groups:
Astronomy and astrophysics are mature fields that in general have high community expectations of openness and reproducibility. As I'll highlight below, however, even these communities with mature practices benefit from checks of reproducibility that uncover areas of improvement.
The projects tackling weather and climate modeling are candidates for being considered of high consequence to society. One example from the Frontera allocations concerns the interaction of aerosols caused by industrial activity with clouds, which can end up composed of smaller droplets, and become more reflective, resulting in a cooling effect on climate.
Global climate models tend to overestimate the radiative forcing, potentially underestimating global warming: why? This is a question of great consequence for science-informed policy, in a subject that is already under elevated scrutiny from the public. Another project in this cluster deals with real-time high-resolution ensemble forecasts of high-impact winter weather events. I submit that high standards of transparency, meticulous provenance capture, and investments of time and effort in reproducibility and quality assurance are justified in these projects.
Four of the winning projects are applying techniques from machine learning to various areas of science. In one case, the researchers seek to bridge the gap in the trade-off between accuracy of prediction and model interpretability, to make ML more applicable in clinical and public health settings. This is clearly also an application of high consequence, but in addition all the projects in this subset face the particular transparency challenges of ML techniques, requiring new approaches to provenance capture and transparent reporting.
The rest of the projects are classic high-performance computational science applications, such as materials science, geophysics, and fluid mechanics. Reproducible-research practices vary broadly in these settings, but I feel confident saying that all or nearly all those efforts would benefit from prospective data management, better software engineering, and more automated workflows. And their communities would grow stronger with more open sharing.
The question I have is: how could the merit review of these projects nudge researchers towards greater transparency and reproducibility? Maybe that is a question for later, and a question to start with is how support teams at cyberinfrastructure facilities could work with researchers to facilitate their adoption of better practices in this vein. I'll revisit these questions later.
I also looked at the 2019 Blue Waters Annual Report, released on September 15, 2020, with highlights from a multitude of research projects that benefitted from computing allocations on the system. Blue Waters went into full service in 2013 and has provided over 35 billion core-hour equivalents to researchers across the nation. The highlighted research projects fall into seven disciplinary categories, and include 32 projects in space science, 20 in geoscience, 45 in physics and engineering, and many more. I want to highlight just one out of the many dozens of projects featured in the Blue Waters Annual Report, for the following reason. I did a word search on the PDF for "Zenodo," and that project was the only one listing Zenodo entries in the "Publications & Data Sets" section that ends each project feature. One other project (in the domain of astrophysics) mentions that data is available through the project website and in Zenodo, but doesn't list any data sets in the report. Zenodo is an open-access repository funded by the European Union's Framework Programs for Research, and operated by CERN. Some of the world's top experts in running large-scale research data infrastructure are at CERN, and Zenodo is hosted on top of infrastructure built in service of the largest high-energy physics laboratory in the world. Zenodo hosts any kind of data, under any license type (including closed-access).
It has become one of the most used archives for open sharing of research objects, including software.
The project I want to highlight is "Molten-salt reactors and their fuel cycles," led by Prof. Kathryn Huff at UIUC. I've known Katy since 2014, and she and I share many perspectives on computational science, including a strong commitment to open-source software. This project deals with modeling and simulation of nuclear reactors and fuel cycles, combining multiple physics and multiple scales, with the goal of improving the design of nuclear reactors in terms of performance and safety. As part of the research enabled by Blue Waters, the team developed two software packages: Moltres, described as a first-of-its-kind finite-element code for simulating the transient neutronics and thermal hydraulics in a liquid-fueled molten-salt reactor design; and SaltProc, a Python tool for fuel salt reprocessing simulation. The references listed in the project highlight include research articles in the Annals of Nuclear Energy, as well as the Zenodo deposits for both codes, and a publication about Moltres in the Journal of Open Source Software, JOSS. (As one of the founding editors of JOSS, I'm very pleased.) It is possible, of course, that other projects in the Blue Waters portfolio have also made software archives in Zenodo or published their software in JOSS, but they did not mention it in this report and did not cite the artifacts. Clearly, the research context of the project I highlighted is of high consequence: nuclear reactor design. The practices of this research group show a high standard of transparency that should be the norm in such fields. Beyond transparency, the publication of the software in JOSS ensures that it was subject to peer review and that it satisfies standards of quality. JOSS reviewers install the software, run tests, and comment on usability and documentation, leading to quality improvements.
Next, I want to highlight the work of a group that includes CiSE editors Michela Taufer and Ewa Deelman, posted last month on arXiv \cite{e2020}. The work sought to directly reproduce the analysis that led to the 2016 discovery of gravitational waves, using the data and codes that the LIGO collaboration had made available to the scientific community. The data had previously been re-analyzed by independent teams using different codes, leading to replication of the findings, but no attempt had yet been made at reproducing the original results. In this paper, the authors report on challenges they faced during the reproduction effort, even with availability of data and code supplementing the original publication. A first challenge was the lack of a single public repository with all the information needed to reproduce the result. The team had the cooperation of one of the original LIGO team members, who had access to unpublished notes that ended up being necessary in the process of iteratively filling in the gaps of missing public information. Other highlights of the reproduction exercise include: the original publication did not document the precise version of the code used in the analysis; the script used to make the final figure was not released publicly (but one co-author gave access to it privately); and the original documented workflow queried proprietary servers to access data, which needed to be modified to run with the public data instead.
In the end, the result—the statistical significance of the gravitational-wave detection from a black-hole merger—was reproduced, but not independently of the original team, as one researcher is a co-author in both publications. The message here is that even a field that is mature in its standards of transparency and reproducibility needs checks to ensure that these practices are sufficient or can be improved.
Science policy trends
The National Academies study on Reproducibility and Replicability in Science was commissioned by the National Science Foundation under Congressional mandate, with the charge coming from the Chair of the Science, Space, and Technology Committee. NASEM reports and convening activities have a range of impacts on policy and practice, and often guide the direction of federal programs. NSF is in the process of developing its agency response to the report, and we can certainly expect to hear more in the future about requirements and guidance for researchers seeking funding.
The recommendations in the NASEM report are directed at all the various stakeholders: researchers, journals and conferences, professional societies, academic institutions and national laboratories, and funding agencies. Recommendation 6-9, in particular, prompts funders to ask that grant applications discuss how they will assess and report uncertainties, and how the proposed work will address reproducibility and/or replicability issues. It also recommends that funders incorporate reproducibility and replicability in the merit-review criteria of grant proposals. Combined with related trends urging more transparency and public access to the fruits of government-funded research, we need to be aware of the shifting science-policy environment.
One more time, I have a reason to thank Mike Heroux, who took time for a video call with me as I prepared my SC20 invited talk. In his position as Senior Scientist at Sandia, one-fifth of his time is spent in service to the lab's activities, and this includes serving on the review committee of the internal Laboratory Directed Research & Development (LDRD) grants. As it is an internal program, the Calls for Proposals are not available publicly, but Mike told me that they now contain specific language asking proposers to include statements on how the project will address transparency and reproducibility. These aspects are discussed in the proposal review and are a factor in the decision-making. As community expectations grow, it could happen that, between two proposals equally ranked on the science portion, the tie-break comes from one of them better addressing reproducibility. Already some teams at Sandia are performing at a high level, e.g., they produce an Artifact Description appendix for every publication they submit, regardless of the conference or journal requirements.
We don't know if or when NSF might add similar stipulations to general grant proposal guidelines, asking researchers to describe transparency and reproducibility in the project narrative. One place where we see the agency start responding to shifting expectations about open sharing of research objects is the section on results from prior funding. NSF currently requires here a listing of publications from prior awards, and "evidence of research products and their availability, including …data [and] software."
I want to again thank Beth Plale, who took time to meet with me over video and sent me follow-up materials to use in preparing my SC20 talk.
In March 2020, NSF issued a "Dear Colleague Letter" on Open Science for Research Data, with Beth then acting as the public access program director. The DCL says that NSF is expanding its Public Access Repository (NSF PAR) to accept metadata records, leading to data discovery and access. It requires research data to be deposited in an archival service and assigned a Digital Object Identifier (DOI), a global and persistent link to the object on the web. A grant proposal's Data Management Plan should state the anticipated archive to be used, and include any associated cost in the budget. Notice this line: "Data reporting will initially be voluntary." This implies that it will later be mandatory! The DCL invited proposals aimed at growing community readiness to advance open science. At the same time, the Office of Science and Technology Policy (OSTP) issued a Request for Information early this year asking what Federal agencies could do to make the results from the research they fund publicly accessible. The OSTP sub-committee on open science is very active. An interesting and comprehensive response to the OSTP RFI comes from the MIT Libraries. It recommends (among other things):
Policies that default to open sharing for data and code, with opt-out exceptions available [for special cases]…
Providing incentives for sharing of data and code, including supporting credentialing and peer-review; and encouraging open licensing.
Recognizing data and code as "legitimate, citable products of research" and providing incentives and support for systems of data sharing and citation…
The MIT Libraries response addresses various other themes like responsible business models for open access journals, and federal support for vital infrastructure needed to make open access to research results more efficient and widespread. It also recommends that Federal agencies provide incentives for documenting and raising the quality of data and code, and also "promote, support, and require effective data practices, such as persistent identifiers for data, and efficient means for creating auditable and machine readable data management plans."
To boot, the National Institutes of Health (NIH) just announced on October 29 a new policy on data management and sharing. It requires researchers to plan prospectively for managing and sharing scientific data openly, saying: "we aim to shift the culture of research to make data sharing commonplace and unexceptional."
Another setting where we could imagine expectations to discuss reproducibility and open research objects is proposals for allocation of computing time. For this section, I need to thank John West, Director of Strategic Initiatives at the Texas Advanced Computing Center (and CiSE Associate EiC), who took time for a video call with me on this topic. We bounced ideas about how cyberinfrastructure providers might play a role in growing adoption of reproducibility practices. Currently, the NSF science proposal and the computing allocation proposal are awarded separately. The Allocation Submission Guidelines discuss review criteria, which include: intellectual merit (demonstrated by the NSF science award), methodology (models, software, analysis methods), research plan and resource request, and efficient use of the computational resources. For the most part, researchers have to show that their application scales to the size of the system they are requesting time on.
Interestingly, the allocation award is not tied to performance, and researchers are not asked to show that their codes are optimized, only that they scale and that the research question can feasibly be answered in the allocated time. The responsible stewardship of the supercomputing system is provided for via a close collaboration between the researchers and the members of the supercomputing facility. Codes are instrumented under the hood with low-overhead collection of system-wide performance data (in the UT facility, with TACC-Stats) and a web interface for reports.
I see three opportunities here: 1) workflow-management and/or system monitoring could be extended to also supply automated provenance capture (a minimal sketch of such capture appears at the end of this piece); 2) the expert staff at the facility could broaden their support to researchers to include advice and training in transparency and reproducibility matters; and 3) cyberinfrastructure facilities could expand their training initiatives to include essential skills for reproducible research. John floated other ideas, like the possibility that some projects be offered a bump on their allocations (say, 5% or 10%) to engage in R&R activities; or, more drastic perhaps, that projects may not be awarded allocations over a certain threshold unless they show commitment and a level of maturity in reproducibility.
Next steps for HPC
The SC Transparency and Reproducibility Initiative is one of the innovative, early efforts to gradually raise expectations and educate a large community about how to address reproducibility and why it matters. Over six years, we have built community awareness and buy-in. This year's community sentiment study shows clear progress: 90% of the respondents are aware of the issues around reproducibility, and only 15% thought the concerns are exaggerated. Importantly, researchers report that they are consulting the artifact appendices of technical papers, signaling impact. As a community, we are better prepared to adapt to rising expectations from funders, publishers, and readers.
The pandemic crisis has unleashed a tide of actions to increase access and share results: the COVID-19 Open Research Dataset (CORD-19) is an example \cite{al2020}; the COVID-19 Molecular Structure and Therapeutics Hub at MolSSI is another. Facing a global challenge, we as a society are strengthened by facilitating immediate public access to data, code, and published results. This point has been made by many in recent months, but perhaps most eloquently by Rommie Amaro and Adrian Mulholland in their Community Letter Regarding Sharing Biomolecular Simulation Data for COVID-19—signed by more than a hundred researchers from around the world \cite{j2020}. It says: "There is an urgent need to share our methods, models, and results openly and quickly to test findings, ensure reproducibility, test significance, eliminate dead-ends, and accelerate discovery." Then it follows with several commitments: to make results available quickly via pre-prints; to make available input files, model-building and analysis scripts (e.g., Jupyter notebooks), and data necessary to reproduce the results; to use open data-sharing platforms to make results available as quickly as possible; to share algorithms and methods in order to accelerate reuse and innovation; and to apply permissive open-source licensing strategies.
Interestingly, these commitments are reminiscent of the pledges I made in my Reproducibility PI Manifesto \cite{barba2012} eight years ago!
One thing the pandemic instantly provided is a strong incentive to participate in open science and attend to reproducibility. The question is how much newly adopted practices will persist once the incentive of a world crisis is removed.
I've examined here several issues of incentives for transparent and reproducible research. But social epistemologists of science know that so-called Mertonian norms (for sharing widely the results of research) are supported by both economic and ethical factors—incentives and norms—in close interrelation. Social norms require a predominant normative expectation (for example, sharing of food in a given situation and culture). In the case of open sharing of research results, those expectations are not yet predominant, due to researchers' sensitivity to credit incentives. Heesen \cite{heesen2017} concludes: "Give sufficient credit for whatever one would like to see shared ... and scientists will indeed start sharing it."
In HPC settings, where we can hardly ever reproduce results (due to machine access, cost, and effort), a vigorous alignment with the goals of transparency and reproducibility will develop a blend of incentives and norms, will consider especially the applications of high consequence to society, and will support researchers with infrastructure (human and cyber). Over time, we will arrive at a level of maturity to achieve the goal of trustworthy computational evidence, not by actually exercising the open research objects (artifacts) shared by authors (data and code), but by a research process that ensures unimpeachable provenance.
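To make the idea of automated provenance capture concrete, here is a minimal sketch (my own illustration, not a tool referenced in the talk; file names and fields are hypothetical) of a run wrapper that records the code version, input checksums, and environment alongside a simulation's outputs.

import hashlib
import json
import platform
import subprocess
from datetime import datetime, timezone
from pathlib import Path

def sha256(path: Path) -> str:
    # Checksum of an input file, so the exact inputs can be audited later.
    return hashlib.sha256(path.read_bytes()).hexdigest()

def git_commit() -> str:
    # Version of the code that produced the results; "unknown" outside a git repo.
    try:
        out = subprocess.run(["git", "rev-parse", "HEAD"], capture_output=True, text=True)
        return out.stdout.strip() or "unknown"
    except OSError:
        return "unknown"

def capture_provenance(input_files, out_path="provenance.json"):
    # Write a small, machine-readable provenance record next to the run outputs.
    record = {
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "hostname": platform.node(),
        "python_version": platform.python_version(),
        "git_commit": git_commit(),
        "inputs": {f: sha256(Path(f)) for f in input_files if Path(f).exists()},
    }
    Path(out_path).write_text(json.dumps(record, indent=2))
    return record

if __name__ == "__main__":
    capture_provenance(["config.yaml"])  # hypothetical input file name

A real HPC workflow manager would gather far more (scheduler job IDs, module environments, compiler flags), but even this level of automatic record-keeping moves toward the auditability discussed above.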
Geographic mosaics of interactions via heterospecific pollen transfer may contribute...
Gerardo Arceo-Gomez

January 06, 2021
Studies that aim to understand the processes that generate and organize plant diversity in nature have a long history in ecology. Among these, pollinator-mediated plant-plant interactions that occur by altering pollinator floral preferences have been at the forefront of this field. Current evidence however indicates that plants can interact directly via heterospecific pollen (HP) transfer, that these interactions are ubiquitous, and that they can have strong fitness effects with implications for floral evolution, speciation and community assembly. Hence, interest in understanding their role in the diversification and organization of plant communities is rapidly rising. The existence of geographic mosaics of species interactions and their role in shaping patterns of diversity is also well recognized. However, after 40 years of research, the importance of geographic mosaics in HP transfer intensity and effects remains poorly known, leaving their potential role in shaping patterns of diversity at local and global scales largely unexplored. Here, I develop a conceptual framework and summarize existing evidence for the ecological and evolutionary consequences of geographic mosaics in HP transfer interactions, and outline future directions in this field.
The Effects of Spatial Configurations of Simulated Shrubs on Wind-proof Effectiveness
Xia Pan
Zhenyi Wang

and 5 more

January 06, 2021
Maximizing the benefits of windbreaks requires a thorough understanding of the physical interaction between the wind and the barrier. In this experiment, a profiling set of Pitot tubes was used to measure the airflow field and wind velocity around simulated shrubs in a wind tunnel, and the effects of shrub form configuration and row spacing on wind-proof effectiveness were studied in depth. We obtained the following results: the weakening effect of the hemisphere-shaped and broom-shaped shrubs on wind velocity was mainly reflected below 2 cm (near the root) and at 6-14 cm (the middle-upper part), respectively, while the wind-proof effect of the spindle-shaped shrubs across the canopy (0.2-14 cm height) was the best. In addition, the simulated shrubs under 26.25 cm had the best protective effect on wind velocity. Moreover, the designed windbreaks with Nitraria tangutorum reduced wind velocity more effectively within the windbreak than behind it. In a wind-control system, hemisphere-shaped windbreaks should be applied as near-surface barriers, while broom-shaped and spindle-shaped windbreaks can be used as shelter forest. The results offer theoretical guidelines on how to arrange windbreaks to prevent wind erosion in the most convenient and efficient ways.
Comparing headwater stream thermal sensitivity across two contrasting lithologies in...
Austin Wissler
Catalina Segura

and 2 more

January 06, 2021
Understanding drivers of thermal regimes in headwater streams is critical for a comprehensive understanding of freshwater ecological condition and habitat resilience to disturbance, and to inform sustainable forest management policies and decisions. However, stream temperatures may vary depending on characteristics of the stream, catchment, or region. To improve our knowledge of the key drivers of stream thermal regime, we collected stream and air temperature data along eight headwater streams in two regions with distinct lithology, climate, and riparian vegetation. Five streams were in the Northern California Coast Range at the Caspar Creek Experimental Watershed Study, which is characterized by permeable sandstone lithology. Three streams were in the Cascade Range at the LaTour Demonstration State Forest, which is characterized by fractured and resistant basalt lithology. We instrumented each stream with 12 stream temperature and four air temperature sensors during summer 2018. Our objectives were to compare stream thermal regimes and thermal sensitivity—slope of the linear regression relationship between daily stream and air temperature—within and between both study regions. Mean daily stream temperatures were ~4.7 °C warmer in the Coast Range but were less variable (SD = 0.7 °C) compared to the Cascade Range (SD = 2.3 °C). Median thermal sensitivity was 0.33 °C °C-1 in the Coast Range and 0.23 °C °C-1 in the Cascade Range. We posit that the volcanic lithology underlying the Cascade streams likely supported discrete groundwater discharge locations, which dampened thermal sensitivity. At locations of apparent groundwater discharge in these streams, median stream temperatures rapidly decreased by 2.0 °C, 3.6 °C, and 7.0 °C relative to adjacent locations, approximately 70–90 meters upstream. In contrast, thin friable soils in the Coast Range likely contributed baseflow from shallow subsurface sources, which was more sensitive to air temperature and generally warmed downstream (up to 2.1 °C km-1). Our study revealed distinct longitudinal thermal regimes in streams draining contrasting lithology, suggesting that streams in these different regions may respond differentially to forest disturbances or climate change.
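Since thermal sensitivity is defined here as the slope of the daily stream-versus-air temperature regression, the calculation reduces to a one-line linear fit; a minimal sketch with made-up numbers (not the study's data) follows.

import numpy as np

# Daily mean temperatures (degC); values are illustrative only.
air_c = np.array([12.0, 15.0, 18.0, 21.0, 24.0])
stream_c = np.array([10.1, 10.9, 11.8, 12.5, 13.4])

# Thermal sensitivity = regression slope of stream temperature on air temperature.
slope, intercept = np.polyfit(air_c, stream_c, deg=1)
print(f"thermal sensitivity = {slope:.2f} degC per degC")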
Emergency aortic valve replacement complicated by unsuspected pheochromocytoma
Rihito Tamaki
Manabu Yamasaki

and 5 more

January 06, 2021
A 53-year-old male undergoing emergency aortic valve replacement for infective endocarditis developed a hypertensive crisis early during the operation. Because a pheochromocytoma was suspected, intravenous phentolamine was immediately administered, after which the procedure was completed as scheduled. Although quite rare, a pheochromocytoma can be encountered during emergency open heart surgery; thus, early recognition of abnormal blood pressure changes and appropriate management are important. Here, we present the details of blood pressure control, achieved mainly with phentolamine, in this case to demonstrate effective management of a hypertensive crisis caused by a pheochromocytoma during emergency cardiac surgery.
Contemporary evolution of the viral-sensing TLR3 gene in an isolated vertebrate popul...
Charli Davies
Martin Taylor

and 6 more

January 06, 2021
Understanding where and how genetic variation is maintained within populations is important from an evolutionary and conservation perspective. Signatures of past selection suggest that pathogen-mediated balancing selection is a key driver of immunogenetic variation, but studies tracking contemporary evolution are needed to help resolve the evolutionary forces and mechanisms at play. Previous work in a bottlenecked population of Seychelles warblers (Acrocephalus sechellensis) shows that functional variation has been maintained at the viral-sensing Toll-like receptor 3 (TLR3) gene. Here, we characterise evolution at this TLR3 locus over a 25-year period within the original remnant population of the Seychelles warbler, and in four other derived, contained populations. Results show a significant and consistent temporal decline in the frequency of the TLR3C allele in the original population, and similar declines in the TLR3C allele frequency occurred in all the derived populations. Individuals (of both sexes) with the TLR3CC genotype had lower survival, and males - but not females - that carried the TLR3C allele had significantly lower lifetime reproductive success than those with only the TLR3A allele. These results indicate that positive selection, caused by an as yet unknown agent, is driving TLR3 evolution in the Seychelles warbler. No evidence of heterozygote advantage was detected. However, whether the positive selection observed is part of a longer-term pattern of balancing selection (through fluctuating selection or rare-allele advantage) cannot be resolved without tracking the TLR3C allele in these populations over an extended period of time.
The blood sucking on human by Placobdella costata (O. F. Müller, 1846) (Hirudinida: G...
Joanna Cichocka
Aleksander Bielecki

and 8 more

January 06, 2021
Abstract: 1. In our paper, four events of blood sucking on humans by Placobdella costata are described. 2. Human blood was sucked by both adult and juvenile specimens of P. costata. 3. The feeding strategies of juveniles under parental care are presented. 4. New data on the body form of juvenile specimens are presented. 5. Information on the potential role of mammals in species dispersion and on the habitat preferences of these leeches is also considered.
Novel use of Ketotifen as a cardio-protective agent in patients undergoing anthracycl...
hosny elewa
Naser elberay

and 4 more

January 06, 2021
Objective: The present study aimed to investigate the possible cardioprotective effects of ketotifen and to assess its activity as an iron-chelating agent in patients receiving anthracyclines for the treatment of breast cancer. Patients & Methods: This was a randomized, prospective, controlled clinical trial. 111 eligible patients with breast cancer (age range, 30-60 years) were scheduled to receive anthracycline chemotherapy. The patients were divided into two groups: patients assigned to the ketotifen group (n=56) received ketotifen 1 mg three times daily for six consecutive cycles of treatment, and patients assigned to the control group (n=55) received no ketotifen. An echocardiogram was recorded for each patient twice, at baseline and at the end of the study, and blood samples were collected from all patients. Results: The findings showed a statistically significant reduction in the mean serum levels of common cardiotoxicity-associated biomarkers in the ketotifen group compared with the control group (P ≤ 0.05). The mean total iron-binding capacity was significantly elevated in the ketotifen group (P ≤ 0.001). There was a direct correlation between the mean serum levels of iron and those of lactate dehydrogenase (LDH) (r = +0.79). On the other hand, there were indirect correlations between mean serum LDH levels and both the ejection fraction percentage and the total iron-binding capacity (r = -0.69 and -0.697, respectively). Conclusion: Oral administration of ketotifen appears to be efficient and safe as a novel cardioprotective agent for the prevention of anthracycline-induced cardiotoxicity. Additionally, ketotifen may have a beneficial effect in iron-overload-inducing diseases such as COVID-19.
Lymphopenia and lung complications in patients with coronavirus disease-2019 (COVID-1...
Ehsan Zaboli
Hadi Majidi

and 7 more

January 06, 2021
Background: A rapid outbreak of the novel coronavirus, COVID-19, made it a global pandemic. This study focused on the possible association between lymphopenia, computed tomography (CT) scan features, and COVID-19 patient mortality. Method: The clinical data of 596 COVID-19 patients were collected from February 2020 to September 2020. The patients' serological surveys and CT scan features were retrospectively explored. Results: The median age of the patients was 56.7±16.4 years. Lung involvement was more than 50% in 214 COVID-19 patients (35.9%). The average blood lymphocyte percentage was 20.35±10.16. The levels of C-reactive protein (CRP), erythrocyte sedimentation rate (ESR), and platelet-to-lymphocyte ratio (PLR) may not indicate the severity and prognosis of COVID-19. Patients with severe lung involvement and lymphopenia had significantly increased odds of death (odds ratio [OR], 9.24; 95% confidence interval [95% CI], 4.32-19.78). These results indicate that lymphopenia <20% along with pulmonary involvement >50% imposes a multiplicative effect on the risk of mortality. The in-hospital mortality rate of this group was significantly higher than that of other hospitalized COVID-19 cases, and they also experienced a significantly prolonged hospital stay (P=0.00). Conclusion: A lymphocyte count of less than 20% and chest CT scan findings showing more than 50% involvement might be related to patient mortality, and could act as laboratory and clinical indicators of disease severity and mortality.
Same virus, different course: The relationship between monocyte chemoattractant prote...
Ferhan Kerget
Buğra Kerget

and 5 more

January 06, 2021
Objective: To date, over 7 million people have been infected in the COVID-19 pandemic caused by the novel coronavirus SARS-CoV-2 which emerged in Wuhan, China in December 2019. This study examined the relationships between serum monocyte chemoattractant protein-1 (MCP-1) and surfactant protein-A (SP-A) levels and the clinical course and prognosis of COVID-19. Method: The study included a total of 108 subjects. Those in the patient group (n=88) were diagnosed with COVID-19 using real-time PCR analysis of nasopharyngeal swab samples and treated in the Atatürk University Pulmonary Diseases and the City Hospital Infectious Diseases department between March 24 and April 15. The control group (n=20) included asymptomatic healthcare workers whose real-time PCR results during routine COVID-19 screening in our hospital were negative. Results: The COVID-19 patient group had significantly higher MCP-1 and SP-A levels compared to the control group (p=0.001, p=0.001). Patients who developed macrophage activation syndrome had significantly higher MCP-1 and SP-A levels than those who did not both at admission (p=0.001, p=0.001) and on day 5 of treatment (p=0.05, p=0.04). Similarly, MCP-1 and SP-A levels were significantly higher in patients who developed acute respiratory distress syndrome compared to those who did not at both time points (p=0.001 for all). Both parameters were significantly higher in nonsurviving COVID-19 patients compared to survivors (p=0.001 for both). Conclusion: MCP-1 and SP-A are on opposing sides of the inflammatory balance, and SP-A may be a pneumoprotein of importance in the presentation, course, prognosis, and possibly the treatment of COVID-19 in the future.
Diel timing of nest predation changes across season in a subtropical shorebird
Martin Sládeček
Kateřina Brynychová

and 10 more

January 06, 2021
Predation is the most common cause of nest failure in birds. While nest predation is relatively well studied in general, our knowledge is unevenly distributed across the globe and across taxa, with, for example, limited information on shorebirds breeding in the sub-tropics. Importantly, we know fairly little about the timing of predation within a day and across the season. Here, we followed 499 nests of red-wattled lapwings (Vanellus indicus), a ground-nesting shorebird, to estimate the nest predation rate, and continuously monitored 231 of these nests for a total of 2,951 days to reveal how the timing of predation changes over the day and season in a sub-tropical desert. We found that 324 nests hatched, 77 nests were predated, 38 failed for other reasons, and 60 had an unknown fate. The daily predation rate was 0.97% (95% CrI: 0.77% – 1.2%), which over a 30-day incubation period translates into a ~25% chance of a nest being predated. Such a predation rate is low compared to most other species. Predation events were distributed evenly across day and night, with a tendency for increased predation around sunrise. Predation was also distributed evenly across the season, although night predation was more common later in the season, perhaps because predators reduce their activity during daylight to avoid extreme heat. Indeed, nests were never predated when mid-day ground temperatures exceeded 45°C. Whether the activity pattern of predators indeed changes across the breeding season, and whether the described predation patterns hold for other populations, species and geographical regions, awaits future investigation.
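The step from the daily rate to the per-nest probability assumes independent daily survival over the 30-day incubation period:

P(\mathrm{predated}) = 1 - (1 - 0.0097)^{30} \approx 0.25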
Array modified PdAg/Al2O3 catalyst for selective acetylene hydrogenation: Kinetics an...
Chenglin Miao
Luoyu Cai

and 7 more

January 06, 2021
Because the low surface area of high-temperature-calcined alumina limits the dispersion of active metals, an in-situ growth method is applied to fabricate alumina-array-modified spherical alumina. Taking the modified alumina as a support, a highly dispersed PdAg catalyst for selective acetylene hydrogenation is synthesized, which exhibits remarkably enhanced intrinsic activity. Moreover, when the acetylene conversion reaches 90%, the ethylene selectivity remains at 89%. The preferred selectivity is attributed to more isolated Pd sites and high electronic density, which facilitate the desorption of the resulting ethylene. More importantly, the modified catalyst exhibits good structural stability and resistance to carbon deposition. On the one hand, the decreased heat-production rate over the active sites reduces the accumulation of reaction heat, thereby avoiding the formation of hot spots over the catalyst. On the other hand, the outer opening pore structure of the modified alumina benefits heat transfer.
Boosting selective hydrogenation through hydrogen spillover on supported-metal cataly...
Sai Zhang
Zhaoming Xia

and 5 more

January 06, 2021
Highly efficient hydrogenation of unsaturated substrates that adsorb strongly on metals, at low temperatures, is a long-term pursuit. However, due to the scaling relationship of high binding energies on metals, poor activity and/or selectivity is frequently observed. Herein, we describe a strategy of hydrogen spillover that breaks this scaling relationship to enable high-performance hydrogenation at low temperatures by constructing dual active sites in supported-metal catalysts. Hydrogen and reactants are selectively activated on the metal and on second active sites on the support, respectively. Hydrogenation then occurs on the second active sites via hydrogen spillover from the metal to the support. Easy desorption of surface-bound products readily regenerates the active sites. Guided by this design, for cinnamaldehyde hydrogenation, PtCo alloys (for H2 dissociation) supported on hydroxyl-abundant CoBOx (for aldehyde activation) delivered a high turnover frequency of 2479 h-1 (two orders of magnitude over PtCo/C) and 94.5% selectivity to cinnamyl alcohol at room temperature.
High-throughput computational screening of porous polymer networks for natural gas sw...
Xuanjun Wu
Yujing Wu

and 5 more

January 06, 2021
17,846 PPNs with diamond-like topology were computationally screened to identify optimal adsorbents for the removal of H2S and CO2 from humid natural gas, based on a combination of molecular simulation and machine learning algorithms. The top-performing PPNs with the highest adsorption performance scores (APS) were identified based on their adsorption capacities and selectivities for H2S and CO2. The strong affinity between water molecules and the framework atoms has a significant impact on the adsorption selectivity for the acid gases. We propose two main design paths for high-performing PPNs (LCD ≤ 4.648 Å, Vf ≤ 0.035, PLD ≤ 3.889 Å; or 4.648 Å ≤ LCD ≤ 5.959 Å, ρ ≤ 837 kg·m-3). We also found that an artificial neural network (ANN) could accurately predict the APS of PPNs. N-rich organic linkers and high isosteric heats of adsorption for H2S and CO2 are the main factors that could enhance natural gas sweetening performance.
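As an illustration of the ANN step (a sketch on synthetic data, not the authors' model; only the descriptor names come from the abstract, and the hyperparameters are arbitrary), a multilayer perceptron can be fit to map structural descriptors such as LCD, PLD, void fraction, density, and isosteric heat to an APS value.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Synthetic stand-in for the simulated PPN dataset.
# Columns: LCD, PLD, Vf, density, Qst (all scaled to [0, 1]).
X = rng.uniform(size=(1000, 5))
# Fake APS with a known linear relationship plus noise, for demonstration only.
y = X @ np.array([0.5, 0.3, -0.4, -0.2, 0.8]) + 0.05 * rng.normal(size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out set:", round(model.score(X_test, y_test), 3))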
Accurate Prediction of Deprotonation and pH value of Acids in Aqueous Solutions over...
Bong-Seop Lee
Shiang-Tai Lin

and 1 more

January 06, 2021
The pKa of an acid is important for determining the dissociation and thermodynamic properties of solutions containing it. However, the value of pKa is typically determined at the dilute limit and cannot be used to describe the properties of the solution at high concentrations. In this work, we propose an approach to determine the concentration-independent equilibrium constant Keq based on pKa and predicted activity coefficients. The Keq determined in this way is applied to predict the degree of dissociation over the whole concentration range for weak to strong acids. The pH of aqueous acid solutions is predicted over the whole concentration range, showing good agreement with experiments. Based on this approach, we found that the vapor pressures of aqueous acid solutions strongly depend on the degree of dissociation of the acids. The proposed model provides useful insights that link the macroscopic properties of aqueous acid solutions to the microscopic dissociation phenomena over the whole concentration range.
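The relation such an approach builds on can be written, for a monoprotic acid HA dissociating to H+ and A- (standard notation, which may differ from the authors'), as

K_{eq} = \frac{a_{\mathrm{H^+}}\, a_{\mathrm{A^-}}}{a_{\mathrm{HA}}} = \frac{\gamma_{\mathrm{H^+}}\,\gamma_{\mathrm{A^-}}}{\gamma_{\mathrm{HA}}} \cdot \frac{[\mathrm{H^+}][\mathrm{A^-}]}{[\mathrm{HA}]}, \qquad K_{eq} = 10^{-\mathrm{p}K_a}

where the activity coefficients γ approach unity at the dilute limit where pKa is measured, so the activity-based Keq stays valid at all concentrations once the activity coefficients are predicted.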
DEM--CFD modeling and simulations of hydrodynamic characteristics and flow resistance...
Yaping Li
Le Xie

and 4 more

January 06, 2021
The ability to predict void fraction, pressure drop, and flow resistance coefficient in fixed-bed reactors is significant to their optimal design. In this study, the discrete element method (DEM) is combined with computational fluid dynamics (CFD) to simulate the hydrodynamic characteristics of fixed-beds. A realistic random packing structure for fixed-beds with spherical particles was generated via the DEM method and then meshed using Ansys ICEM software for the CFD simulation. A grid independency study was performed to select appropriate grid model parameters. A large set of numerical experiments was conducted to investigate the hydrodynamic characteristics with respect to different inlet velocities and particle sizes, and the simulated pressure drop data were used to calculate the flow resistance coefficient. The output flow resistance coefficients agreed well with those calculated by the classical models in laminar and turbulent flow regimes, thereby indicating the accuracy and advantage of the proposed DEM–CFD approach.
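For context, the classical packed-bed correlation typically used for such comparisons is the Ergun equation (an assumption on my part; the abstract does not name the specific models), which combines a viscous and an inertial term:

\frac{\Delta P}{L} = 150\,\frac{(1-\varepsilon)^2}{\varepsilon^3}\,\frac{\mu\, u}{d_p^2} + 1.75\,\frac{1-\varepsilon}{\varepsilon^3}\,\frac{\rho\, u^2}{d_p}

where ε is the bed void fraction, u the superficial velocity, d_p the particle diameter, μ the fluid viscosity, and ρ the fluid density.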
Traceable Surveillance and Genetic Diversity Analysis of Coronaviruses in Poultry fro...
Yang Li
Qingye Zhuang

and 18 more

January 06, 2021
Coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), was first reported in Wuhan, China, and rapidly spread worldwide. This newly emerging pathogen is highly transmissible and can cause fatal disease. More than 35 million cases had been confirmed and the case fatality rate was about 2.9% as of October 9, 2020. However, the original and intermediate hosts of SARS-CoV-2 remain unknown. Here, a total of 3160 poultry samples collected from 14 provinces of China between September and December 2019 were tested for the purpose of traceable surveillance for SARS-CoV-2 infection. The results indicated that all samples were SARS-CoV-2 negative, while a total of 593 avian coronaviruses were detected, including 485 avian infectious bronchitis viruses, 72 duck coronaviruses and 36 pigeon coronaviruses. The positive rates of avian infectious bronchitis virus, duck coronavirus, and pigeon coronavirus were 15.35%, 2.28% and 1.14%, respectively. Our surveillance demonstrates the diversity of avian coronaviruses in China, with higher prevalence recognized in some regions. The possibility of SARS-CoV-2 originating from the known avian-origin coronaviruses can be preliminarily ruled out. More surveillance and research on avian coronaviruses should be strengthened for a better understanding of the diversity, distribution, cross-species transmission and clinical significance of these viruses.
Spatial Analysis of COVID-19 Risk Based on Different Lockdown Strategies - a Case Stu...
Weijia Wang

January 06, 2021
1. Introduction
In the United States, the number of cases of COVID-19 is continuously increasing. As of October 28, 2020, according to the Johns Hopkins Coronavirus Resource Center, there were 8,856,413 confirmed cases in the United States, and 72,183 cases were reported in one day. After a few months of lockdown starting in March, most states are reopening, including retail stores, restaurants, and recreation. For college students, COVID-19 is affecting everyday student life and residence life.
According to the CDC, seasonal influenza viruses are expected during the late fall and peak between December and February [1]. There are some explanations for why flu season usually occurs in the winter. First, people spend more time indoors, which increases the chance of closer contact with others who might be carrying the virus. Students, for example, may prefer using public transportation, such as buses, instead of walking to class. Second, during the short days of winter, people may run low on Vitamin D, which weakens the immune system [2]. UConn is located in the northeast of the United States, where the temperature is low during the fall and winter. Students and the university need to be prepared to preclude a new wave from spreading. Typically, international students and out-of-state students are more vulnerable to infection due to limited access to testing [3].
The government of Connecticut announced that phase 2 of the reopening policy began on June 17, allowing up to 50% capacity indoors with 6 feet of spacing for restaurants, personal services, libraries, and indoor recreation, and up to 25% capacity, capped at 100 people, for indoor religious gatherings. Phase 3 began on October 8: restaurants, personal services, and libraries were allowed up to 75% capacity indoors, and indoor and outdoor religious gatherings up to 50% capacity. However, due to the increasing number of COVID-19 cases in Connecticut, the state government updated the reopening rules with phase 2.1, which started on November 6. Phase 2.1 is slightly different from phase 2, in that restaurants can accommodate up to 50% capacity, while personal services and libraries can accommodate up to 75% [6].
This article focuses on a local scale, the UConn main campus. This matters because college campuses are densely populated places where infection can spread easily. From a student's perspective, building spatial models of campus areas is necessary and helps us create a safe community. This study focuses on building a mathematical model, the Susceptible-Infected-Recovered (SIR) model, and estimating the infection rate and recovery rate at the University of Connecticut (UConn) Storrs. The model generates the number of cases from August 16, when students who live on campus check in, to September 7. After finding the parameters using the SIR model, we use Agent-Based Modeling (ABM) to simulate different scenarios to predict and evaluate the risks of different places on campus. UConn, located in Storrs, has approximately 5,000 students living on campus. Such a population increases the chances of interaction between students in public places such as academic buildings, dining halls, grocery stores, residential halls, and apartments. Before the semester began, UConn had already announced reopening policies. Most classes moved online or to distance learning to prevent the spread of the disease. In-person classes require students to wear a mask and maintain at least six feet of physical distance from others.
Dining halls are switching to take-out and limited dining models. However, for those students who live in residential halls, even though UConn policy requires one person per dorm room, they are still sharing bathrooms. For those who live in apartments or off-campus, students have approximately one to four roommates, which increases the chance of infection. Our primary goal is to extend the SIR model into the spatial form and using QGIS and NetLogo to visualize the spreading. Because the covid-19 disease varies a great deal with places, we consider leveraging this when we estimate covid information for policy-makers to make lockdown or reopening business strategies. We extend the traditional mathematical SIR model into a spatially-explicit model to simulate the spatial dynamics of covid-19 over discrete-time and across discrete space at the Uconn Storrs campus. The spatially-explicit models may provide useful insights into the epidemiological characteristics of the disease and identification of disease hotspots across the campus, thus can inform and guide policy-makers for targeted interventions and targeted reopening the business in specific locations of the campus. This paper focuses on a specific area, rather than a state or a country, with a smaller population size. We are using the data to predict the cases and infection rates in the next few months, evaluating each building’s risk and ranking the score with a higher chance of getting infected. Based on the policies that have been implemented at UConn, we also make some suggestions to the university about forestalling the new wave coming in winter. 2. Data and MethodologyTo simulate the spreading of epidemics, we are building the SIR model. The SIR model was first introduced by Kermack and McKendrick by separating people into three different categories: susceptible (S), infected (I), and recovered (R) [4]. In this case, the population in Storrs is susceptible (S). Individuals who get infected move from susceptible stage to infected stage (I). Eventually, people who were removed from the infected status recovered (R). The SIR model using the parameters β, the infection rate, and γ, the recovery rate, can be presented by the ordinary differential equation (ODE). 
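For reference, the standard Kermack-McKendrick formulation consistent with the definitions above is (normalization by the total population N is a common convention assumed here, not a detail taken from the paper):

dS/dt = -βSI/N
dI/dt = βSI/N - γI
dR/dt = γI

where N = S + I + R is the total population, β is the infection rate, and γ is the recovery rate. A minimal numerical sketch of integrating such a system is given below; it uses SciPy's odeint, and the population size, initial conditions, and parameter values are illustrative placeholders rather than the study's estimates.

import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma, N):
    """Right-hand side of the standard SIR ODE system."""
    S, I, R = y
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I
    dR = gamma * I
    return dS, dI, dR

# Illustrative placeholder values only (not the study's fitted parameters):
N = 5000                 # approximate on-campus population at UConn Storrs
I0, R0 = 1, 0            # one initial case, no one yet recovered
S0 = N - I0 - R0
beta, gamma = 0.3, 0.1   # hypothetical infection and recovery rates

t = np.linspace(0, 22, 23)   # roughly Aug 16 to Sep 7, daily steps
S, I, R = odeint(sir, (S0, I0, R0), t, args=(beta, gamma, N)).T
print(f"Peak infections in this toy run: {I.max():.0f}")

Values of β and γ fitted to campus case data in this way could then feed an agent-based simulation (for example, in NetLogo) to explore building-level risk, along the lines the authors describe.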
The role of echocardiography in predicting technical problems and complications of tr...
Dorota Nowosielecka
Wojciech Jacheć

Dorota Nowosielecka

and 5 more

January 05, 2021
Introduction Transesophageal echocardiography (TEE) is a useful tool in the preoperative observation of patients undergoing transvenous lead extraction (TLE) due to complications associated with implanted devices. Echocardiographic phenomena may determine the safety of the procedure. Methods and Results Data from 936 transesophageal examinations performed at a high-volume center in patients awaiting TLE from 2015 to 2019 were assessed. TEE revealed a total of 1156 phenomena associated with the implanted leads in 697 (64.85%) patients, including asymptomatic masses on endocardial leads (AMEL) (58.65%), vegetations (12.73%), fibrous tissue binding the lead to the vein or heart wall (33.76%), lead-to-lead binding sites (18.38%), excess lead loops (19.34%), intramural penetration of the lead tip (16.13%), and lead-dependent tricuspid dysfunction (LDTD) (6.41%). Risk factors for technical difficulties during TLE in multivariate analysis were fibrous tissue binding the lead to the atrial wall (OR=1.738; p<0.05) or to the right ventricular wall (OR=2.167; p<0.001), lead-to-lead binding sites (OR=1.628; p<0.01), and excess lead loops (OR=1.488; p<0.05). Lead-to-lead binding sites increased the probability of major complications (OR=3.034; p<0.05). Presence of fibrous tissue binding the lead to the superior vena cava (OR=0.296; p<0.05), right atrial wall (OR=323; p<0.05), and right ventricular wall (OR=0.297; p<0.05) reduced the probability of complete procedural success, whereas fibrous tissue binding the lead to the tricuspid apparatus decreased the probability of clinical success (OR=0.307; p<0.05). Conclusions Careful preoperative TEE evaluation of the consequences of extended lead implant duration (enhanced fibrotic response) increases the probability of predicting the level of difficulty of TLE procedures, their efficacy, and the risk of major complications.
COMPARISON OF THE OUTCOMES AFTER ENDOSCOPIC VEIN HARVESTING VERSUS OPEN VEIN HARVESTI...
Afnan ALMalki
Ahmed Arifi

Afnan ALMalki

and 1 more

January 05, 2021
Minimally invasive endoscopic vein harvesting (EVH) was first reported in 1996 as an alternative to open vein harvesting (OVH), making coronary artery bypass surgery a less invasive procedure. Shortly after its introduction, it became the standard of care for conduit harvesting. Compared with the conventional technique, the incidence of site infections, wound dehiscence, and delayed healing, as well as the duration of hospitalization and postoperative pain, were markedly reduced. However, the long-term outcomes, safety, and graft patency remain uncertain. Herein is an extensive literature review discussing the outcomes following endoscopic vein harvesting for coronary artery bypass grafting (CABG), as well as its advantages and disadvantages.
Postnatal cardiovascular morbidity following preterm pre-eclampsia: an observational...
Laura Ormesher
Suzanne Higson

Laura Ormesher

and 8 more

January 05, 2021
Objective To explore the nature of postnatal cardiovascular morbidity following pregnancies complicated by preterm pre-eclampsia and identify associations between pregnancy characteristics and postnatal cardiovascular function. Design Observational sub-study of a single-centre feasibility randomised double-blind placebo-controlled trial. Setting Tertiary maternity hospital, UK. Population Women with preterm pre-eclampsia, delivering <37 weeks. Methods Eligible women underwent echocardiography, arteriography and blood pressure monitoring <3 days, 6 weeks and 6 months postpartum. Correlations between pregnancy and cardiovascular characteristics were assessed using Spearman’s correlation. Main Outcome Measure Prevalence of cardiovascular dysfunction and remodelling 6 months following preterm pre-eclampsia. Results Forty-four women completed the study. At 6 months, 27 (61%) had diastolic dysfunction, 33 (75%) had raised total vascular resistance (TVR) and 18 (41%) had left ventricular remodelling. Sixteen (46%) women had de novo hypertension by 6 months and only 2 (5%) women had a completely normal echocardiogram. Echocardiography did not change significantly from 6 weeks to 6 months. Earlier gestation at delivery and lower birthweight centile were associated with worse 6-month diastolic dysfunction (E/E’: rho=-0.39, p=0.001 & rho=-0.42, p=0.005) and TVR (rho=-0.34, p=0.02 & rho=-0.37, p=0.01). Conclusions Preterm pre-eclampsia is associated with persistent cardiovascular morbidity 6 months postpartum in the majority of women. These cardiovascular changes have significant implications for long-term cardiovascular health. The graded severity of diastolic dysfunction and TVR with worsening pre-eclampsia phenotype suggests a dose effect. However, the mechanistic link remains uncertain. Funding Medical Research Council (MR/R001693/1). Registration https://www.clinicaltrials.gov; NCT03466333. Key words Pre-eclampsia: clinical research; radiological imaging: ultrasound; medical disorders in pregnancy.