This paper proposes a novel algorithm that accurately predicts market trends and trading entry points for US 30-year Treasury bonds using a hybrid of 1-Dimensional Convolutional Neural Network (1DCNN), Long Short-Term Memory (LSTM), and XGBoost algorithms. We compared the performance of various 1DCNN- and LSTM-based strategies and found that existing state-of-the-art LSTM methods perform excellently on market movement prediction, but their effectiveness with respect to trading entry points and market perturbations has not been studied thoroughly. We demonstrate experimentally that our proposed 1DCNN-BiLSTM-XGBoost algorithm, combined with a moving-average crossover, effectively mitigates noise and market perturbations, yielding high accuracy in spotting trading entry points and trend signals for US 30-year Treasury bonds. Our experimental study shows that the proposed approach achieves an average Root Mean Squared Error of 0.0001% and an R-squared of 100%, making it a promising method for predicting market trends and trading entry points.
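A minimal sketch of the neural stage of such a hybrid is shown below, assuming a 30-step lookback window and 5 input features; layer widths, the window length, and the XGBoost stacking step are illustrative assumptions, not the authors' published configuration.

```python
# Hedged sketch of a 1DCNN-BiLSTM regressor; sizes are assumptions.
import numpy as np
from tensorflow.keras import layers, models

WINDOW, N_FEATURES = 30, 5  # assumed lookback and feature count

model = models.Sequential([
    layers.Input(shape=(WINDOW, N_FEATURES)),
    layers.Conv1D(64, kernel_size=3, activation="relu"),  # local pattern extraction
    layers.MaxPooling1D(2),
    layers.Bidirectional(layers.LSTM(32)),                # temporal context, both directions
    layers.Dense(1),                                      # next-step price/trend target
])
model.compile(optimizer="adam", loss="mse")

# In the hybrid scheme, this network's outputs (or penultimate features)
# would feed an XGBoost regressor; only the neural stage is shown here.
X = np.random.rand(100, WINDOW, N_FEATURES).astype("float32")
y = np.random.rand(100, 1).astype("float32")
model.fit(X, y, epochs=2, verbose=0)
```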
The paper proposes an alternative approach to improving the performance of image retrieval: a framework based on machine learning and semantic retrieval. In the preprocessing phase, objects in the image are segmented using Graph-cut, and the feature vectors of the objects present in the image, together with their visual relationships, are extracted using R-CNN. The feature vectors, visual relationships, and their symbolic labels are stored in KD-Tree data structures, which are later used to predict the labels of objects and visual relationships. To facilitate semantic querying, the images are described with the RDF data model, and an ontology is created for the annotated symbolic labels. For each query image, after extracting its feature vectors, the KD-Tree is used to classify the objects and predict their relationships. A SPARQL query is then built to retrieve a set of similar images; it consists of triple statements describing the previously predicted objects and relationships. Evaluation of the framework on the MS-COCO and Flickr datasets achieved precision scores of 0.9218 and 0.9370, respectively.
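The query stage can be illustrated with the sketch below: nearest-neighbour label prediction via a KD-Tree, followed by assembling a SPARQL query from the predicted triples. Feature dimensions, the toy label set, and the namespace/predicate names are assumptions, not the paper's actual ontology.

```python
# Hedged sketch of KD-Tree label prediction + SPARQL query assembly.
import numpy as np
from scipy.spatial import cKDTree

# Stored object feature vectors and their symbolic labels (toy data).
features = np.random.rand(50, 128)
labels = (["person", "dog", "bicycle"] * 17)[:50]
tree = cKDTree(features)

def predict_label(query_vec):
    _, idx = tree.query(query_vec, k=1)  # closest stored feature vector
    return labels[idx]

# Suppose two objects and one relationship were predicted for a query image.
subj = predict_label(np.random.rand(128))
obj = predict_label(np.random.rand(128))
rel = "rides"  # assumed predicted visual relationship

# Triple patterns describing the predicted scene; matching images are similar.
sparql = f"""
SELECT ?image WHERE {{
  ?image :contains ?a , ?b .
  ?a a :{subj} . ?b a :{obj} .
  ?a :{rel} ?b .
}}"""
print(sparql)
```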
Purpose: The number of poly-medicated patients, especially those over 65, has increased. Multiple drug use and inappropriate prescribing increase drug-drug interactions, adverse drug reactions, morbidity, and mortality. This issue has been addressed with several clinical decision support system (CDSS) alerts, but health professionals have not adopted these systems because of their poor alert quality and incomplete databases. Methods: Recent research shows growing interest in using text mining via Natural Language Processing (NLP) to extract drug-drug interactions from unstructured data sources to support clinical prescribing decisions. NLP text mining and the training of machine learning classifiers for drug relation extraction were used in this work. Results: In this context, the proposed solution enables the development of a system that extracts drug-drug interactions from unstructured data sources. The system produces structured information that can be inserted into a database containing information acquired from three different data sources. Conclusion: The architecture outlined for the drug-drug interaction extraction system is capable of receiving unstructured text, identifying drug entities sentence by sentence, and determining whether or not there are interactions between them.
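The pipeline shape described in the conclusion can be sketched as below; the drug lexicon and the trivial rule standing in for the trained interaction classifier are placeholders, not the paper's actual components.

```python
# Hedged sketch: sentence splitting, dictionary-based drug spotting,
# and pairwise interaction classification. Lexicon and rule are toys.
import itertools
import re

DRUG_LEXICON = {"warfarin", "aspirin", "ibuprofen"}  # assumed toy lexicon

def find_drugs(sentence):
    return [tok for tok in re.findall(r"[A-Za-z]+", sentence.lower())
            if tok in DRUG_LEXICON]

def classify_interaction(drug_a, drug_b, sentence):
    # Placeholder for the trained ML classifier; a trivial rule stands in.
    return "interacts" if "increase" in sentence.lower() else "no-interaction"

text = ("Warfarin may increase bleeding risk when taken with aspirin. "
        "Ibuprofen was administered separately.")
for sentence in text.split(". "):
    drugs = find_drugs(sentence)
    for a, b in itertools.combinations(drugs, 2):
        print(a, b, classify_interaction(a, b, sentence))
```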
Online Social Networks (OSNs) have grown exponentially in the last few years owing to their real-life applications such as marketing, recommendation systems, and social awareness campaigns. One of the most important research areas in this field is Influence Maximization (IM), which concerns finding methods to maximize the spread of information (or influence) across a social network. Previous IM works have focused on using a pre-defined edge propagation probability or on using the Hurst exponent (H) to identify which nodes to activate; H is calculated from the self-similarity of the time series depicting a user's (node's) past temporal interaction behaviour. In this work, we propose a Time Series Characteristic based Hurst-based Diffusion Model (TSC-HDM), which calculates the Hurst exponent (H) according to the stationary or non-stationary character of the time series. The model selects a handful of seed nodes and activates each seed node's inactive successors only if H > 0.5, continuing until no further successor nodes can be activated. The proposed model was tested on four datasets - UC Irvine messages, Email EU-Core, Math Overflow, and the Linux Kernel mailing list - and compared against four other Influence Maximization models - Independent Cascade (IC), Weighted Cascade (WC), Trivalency (TV), and Hurst-based Influence Maximization (HBIM). Our model achieves up to 590% higher expected influence spread than the other models, and attains 344% better average influence spread than other state-of-the-art models.
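The H > 0.5 activation rule can be illustrated with a standard rescaled-range (R/S) Hurst estimator, as in the sketch below; TSC-HDM's stationarity test and seed selection are omitted, and the estimator is a textbook one rather than the authors' exact procedure.

```python
# Hedged sketch: R/S estimation of the Hurst exponent and the H > 0.5 rule.
import numpy as np

def hurst_rs(series, min_chunk=8):
    series = np.asarray(series, dtype=float)
    sizes, rs = [], []
    n = min_chunk
    while n <= len(series) // 2:
        chunks = [series[i:i + n] for i in range(0, len(series) - n + 1, n)]
        vals = []
        for c in chunks:
            dev = np.cumsum(c - c.mean())      # cumulative mean-adjusted deviations
            s = c.std()
            if s > 0:
                vals.append((dev.max() - dev.min()) / s)  # rescaled range R/S
        if vals:
            sizes.append(n)
            rs.append(np.mean(vals))
        n *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)  # H = slope of log-log fit
    return slope

interactions = np.random.randn(512).cumsum()  # toy temporal interaction series
H = hurst_rs(interactions)
print(f"H = {H:.3f}, activate successor: {H > 0.5}")
```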
The prediction of news popularity is of substantial importance to the digital advertising community for selecting and engaging users. Traditional approaches are based on empirical data collected through surveys, with statistical measures applied to prove a hypothesis. However, predicting news popularity from statistical measures applied to past data is highly questionable. Therefore, in this paper, we predict news popularity using machine learning classification models and deep residual neural network models. Articles usually consist of textual content and, in many cases, images. Although an appropriate amount of textual data is clearly required to extract features and build models, image data also helps in gaining useful information. In this paper, we present a novel multimodal online news popularity prediction model based on ensemble learning. This work serves as a guide to extensive feature engineering, feature extraction, feature selection, and effective modeling for creating a robust news popularity prediction model. Three kinds of features – meta features, text features, and image features – are used to design an influential and robust model. The Root Mean Squared Logarithmic Error (RMSLE) performance measure is used to validate the outcome of the proposed model. Further, the most important features are identified to verify the model's dependence on text and image features.
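For reference, RMSLE is the standard log-scale error metric; the sketch below uses the usual log1p form, which assumes non-negative targets (e.g. popularity measured as share counts, which is our assumption about the setup).

```python
# RMSLE: root mean squared error on log1p-transformed values.
import numpy as np

def rmsle(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.sqrt(np.mean((np.log1p(y_pred) - np.log1p(y_true)) ** 2))

# Toy example: actual vs. predicted popularity (e.g. share counts).
print(rmsle([100, 2500, 40], [120, 2000, 35]))
```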
The repair process for devices is an important part of the business of many original equipment manufacturers. Spare-parts consumption during the repair process is driven by the defects found when inspecting the devices, and these parts account for a large share of the repair costs. In previous work we proposed a data-driven method for Supply Chain Control Tower solutions to support the automatic checking of spare-parts consumption in the repair process. In this paper, we continue our investigation of this multi-label classification problem and explore alternatives in the learning-to-rank approach, simulating the passage of time by training with increasing amounts of data and comparing hundreds of Machine Learning models to provide an automatic check on spare-parts consumption. We investigate the effects of different training-set sizes, retraining intervals, models, and hyper-parameter search using Bayesian Optimization. The results show that we were able to improve the trained models and achieve a higher mean NDCG@20 score of 86% when ranking the expected parts. Focusing on the most recent data, we achieve an NDCG@20 score of 90% while marking just 4% of the consumed parts for use in alert generation.
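NDCG@20, the ranking metric reported above, is shown below for a single ranked list; the binary relevance encoding (1 = part actually consumed) is our assumption about how the evaluation is set up.

```python
# NDCG@k for one ranked list of candidate spare parts.
import numpy as np

def ndcg_at_k(relevance, k=20):
    rel = np.asarray(relevance, dtype=float)
    dcg = np.sum(rel[:k] / np.log2(np.arange(2, min(k, rel.size) + 2)))
    ideal = np.sort(rel)[::-1][:k]                       # best possible ordering
    idcg = np.sum(ideal / np.log2(np.arange(2, ideal.size + 2)))
    return dcg / idcg if idcg > 0 else 0.0

# Ranked candidates; 1 marks parts that were actually consumed.
print(ndcg_at_k([1, 0, 1, 1, 0, 0, 1], k=20))
```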
Developing analytical systems imposes several challenges related not only to the amount and heterogeneity of the data involved but also to the constant need to readapt and evolve in the face of new business challenges. Data is a determining factor in the success of analytical and decision-making applications, and its nature, availability, and quality are crucial aspects in planning and structuring the processes that populate analytical systems. Today's users are more demanding, requiring adaptable and flexible analytical applications, which imposes serious challenges on the design and development of ETL systems: they must ensure flexible and robust data-populating services, operate 24/7, and manage and process large volumes of data. ETL processes should therefore be designed and implemented using innovative, up-to-date approaches backed by evidence from real applications. In this paper, we present a service-oriented implementation for ETL design and development. We mapped and implemented some of the most conventional ETL processes in a service-oriented architecture to demonstrate the applicability and benefits that this kind of approach brings to ETL system development.
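The sketch below shows one way a conventional ETL process can be recast service-style: each stage sits behind a uniform contract so stages can be composed, replaced, or scaled independently. The interface and stage names are illustrative assumptions, not the paper's actual architecture.

```python
# Hedged sketch of a service-oriented ETL pipeline; names are illustrative.
from typing import Callable, Iterable, List

class ETLService:
    """One ETL stage behind a service-style contract."""
    def __init__(self, name: str, handler: Callable[[dict], dict]):
        self.name, self.handler = name, handler

    def invoke(self, record: dict) -> dict:
        return self.handler(record)

def run_pipeline(records: Iterable[dict], services: List[ETLService]):
    for record in records:
        for svc in services:  # each stage could be a remote call in practice
            record = svc.invoke(record)
        yield record

extract = ETLService("extract", lambda r: {**r, "raw": r["payload"].strip()})
transform = ETLService("transform", lambda r: {**r, "clean": r["raw"].upper()})
load = ETLService("load", lambda r: {**r, "loaded": True})

rows = [{"payload": "  order 42 "}]
print(list(run_pipeline(rows, [extract, transform, load])))
```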
Background: Owing to highly coarse chromatin, the multi-dimensionality of histology images, and irregularity of shape, size, texture, and appearance, nuclei extraction is challenging. To address these complexities, this paper investigates a deep learning algorithm called the stacked sparse autoencoder. Methods and Material: This paper focuses on detecting epithelial regions and extracting high-level features to segment the patches based on the nuclei and to classify the biomarkers associated with the nuclei patches. We used 653,400 microscopic image patches from 363 patients sourced from the BreakHis database, of which 490,050 prominent image patches containing only nuclei were utilized for biomarker classification (essentially eliminating the non-nuclei patches from the 363 whole-slide images (WSIs)). The non-nuclei patches were eliminated because of the imbalanced class distribution. Results: The classifier finally classifies whether the nuclei detected from the features are benign, malignant, or normal with an accuracy of 99.73%; early prediction is then performed by extracting and classifying the biomarkers HER2 and ER. The overall classification rate for HER2 and ER is 97.52%. Conclusion: Samples with staining intensity above 23% were classified as HER2-positive, and those with total nuclei counts in the range 150-1000 were termed ER-positive. On this basis, 40 HER2-positive and 25 ER-positive patients were detected out of 363. From these observations, it is concluded that 25-40 patients are at risk of breast cancer in the next 5 years owing to a cell proliferation rate of 7000.
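A hedged sketch of a stacked sparse autoencoder of the kind the abstract names is given below; the patch size, layer widths, and sparsity weight are assumptions, and the L1 activity penalty is a common stand-in for the KL-divergence sparsity constraint.

```python
# Hedged sketch of a stacked sparse autoencoder for nuclei-patch features.
import numpy as np
from tensorflow.keras import layers, models, regularizers

PATCH = 32 * 32 * 3  # assumed flattened RGB patch size

inp = layers.Input(shape=(PATCH,))
h1 = layers.Dense(512, activation="sigmoid",
                  activity_regularizer=regularizers.l1(1e-4))(inp)  # sparsity penalty
h2 = layers.Dense(128, activation="sigmoid",
                  activity_regularizer=regularizers.l1(1e-4))(h1)   # stacked layer
out = layers.Dense(PATCH, activation="sigmoid")(h2)                 # reconstruction

autoencoder = models.Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")

X = np.random.rand(64, PATCH).astype("float32")
autoencoder.fit(X, X, epochs=1, verbose=0)

# The 128-d code from h2 would feed the downstream
# benign/malignant/normal classifier described in the abstract.
encoder = models.Model(inp, h2)
codes = encoder.predict(X, verbose=0)
```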
The growing use of technology and social media has resulted in the emergence of digital influencers, a new profession capable of changing the mentalities and behaviours of those who follow them. This study aims to better understand the potential impact digital influencers may have on the Portuguese population's purchase behaviour and patterns, and for this purpose seven hypotheses were formulated. An online questionnaire was conducted to test these theoretical assumptions and collected data from 175 respondents, of which 129 valid answers were considered. It was possible to conclude that purchase intention does not necessarily translate into a purchase action. It was also concluded that the relationship between social network use and the purchase of products/services recommended by influencers is significant only for Instagram, and that individuals' generation is not significantly linked to purchasing a product/service recommended by influencers. Furthermore, a small percentage of respondents identified themselves as impulsive shoppers and named Instagram as their favourite social network. The results also show that the influencer's opinion was ranked as the last factor considered in the purchase decision process. Additionally, there is a weak negative association between purchasing a product/service recommended by influencers and sponsorship disclosure or remunerated partnership, which decrease credibility and discourage purchasing.
Analysis of human emotions from multimodal data for making critical decisions is an emerging area of research. The evolution of deep learning algorithms has improved the potential for extracting value from multimodal data; however, these algorithms often do not explain how certain outputs are produced from the data. This study focuses on the risks of using black-box deep learning models for critical tasks such as emotion recognition and argues that human-understandable interpretations of these models are extremely important. It utilizes one of the largest multimodal datasets available, CMU-MOSEI. Many researchers have used the pre-extracted features provided by the CMU Multimodal SDK with black-box deep learning models, making it difficult to interpret the contribution of individual features. This study examines the implications of individual features from the audio, video, and text modalities in context-aware multimodal emotion recognition, and describes the process of curating reduced-feature models using the GradientSHAP XAI method. These reduced models, built from the most highly contributing features, achieve comparable and even better results than their corresponding all-feature models as well as the baseline model GraphMFN, showing that careful selection of significant features can improve model robustness and performance and, in turn, make the model trustworthy.
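The attribution step can be sketched with Captum's GradientShap, as below; the tiny classifier and random tensors stand in for the actual multimodal model, so the feature dimension, class count, and target are assumptions.

```python
# Hedged sketch: GradientSHAP attributions for ranking input features.
import torch
from torch import nn
from captum.attr import GradientShap

# Toy stand-in for the emotion model (assumed 74 features, 6 classes).
model = nn.Sequential(nn.Linear(74, 32), nn.ReLU(), nn.Linear(32, 6))
model.eval()

inputs = torch.randn(8, 74)     # batch of feature vectors
baselines = torch.zeros(2, 74)  # reference distribution for SHAP

gs = GradientShap(model)
attributions = gs.attribute(inputs, baselines=baselines, target=0, n_samples=20)

# Rank features by mean |attribution| to pick a reduced feature set.
importance = attributions.abs().mean(dim=0)
top_features = torch.argsort(importance, descending=True)[:10]
print(top_features)
```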
Competitive Intelligence allows an organization to keep up with market trends and foresee business opportunities. This practice is mainly performed by analysts scanning a myriad of dispersed and unstructured sources for any piece of valuable information. Here we present MapIntel, a system for acquiring intelligence from vast collections of text data by representing each document as a multidimensional vector that captures its semantics. The system is designed to handle complex Natural Language queries and visual exploration of the corpus, potentially aiding overburdened analysts in finding meaningful insights to support decision-making. The system's search module uses a retriever and re-ranker engine that first finds the nearest neighbours of the query embedding and then sifts the results through a cross-encoder model that identifies the most relevant documents. The browsing, or visualization, module also leverages the embeddings, projecting them onto two dimensions while preserving the multidimensional landscape; the result is a map where semantically related documents form topical clusters, which we capture using topic modeling. This map aims to provide a fast overview of the corpus while allowing more detailed exploration and an interactive information-encountering process. We evaluate the system and its components on the 20 newsgroups dataset, using the semantic document labels provided, and demonstrate the superiority of Transformer-based components. Finally, we present a prototype of the system in Python and show how some of its features can be used to acquire intelligence from a news article corpus collected over a period of 8 months.
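The retriever/re-ranker pattern can be sketched with the sentence-transformers library, as below; the checkpoint names are common public models standing in for whatever MapIntel actually deploys, and the toy corpus is illustrative.

```python
# Hedged sketch: bi-encoder retrieval followed by cross-encoder re-ranking.
from sentence_transformers import CrossEncoder, SentenceTransformer, util

docs = ["Fed raises interest rates", "New GPU architecture announced",
        "Tech startup secures funding", "Bond yields climb on inflation data"]

retriever = SentenceTransformer("all-MiniLM-L6-v2")
doc_emb = retriever.encode(docs, convert_to_tensor=True)

query = "central bank monetary policy"
query_emb = retriever.encode(query, convert_to_tensor=True)
hits = util.semantic_search(query_emb, doc_emb, top_k=3)[0]  # nearest neighbours

# Sift the candidates through a cross-encoder relevance model.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
pairs = [(query, docs[h["corpus_id"]]) for h in hits]
scores = reranker.predict(pairs)
for (q, d), s in sorted(zip(pairs, scores), key=lambda x: -x[1]):
    print(f"{s:.3f}  {d}")
```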