Advances in deep learning have greatly improved structure prediction of molecules. However, many macroscopic observations that are important for real-world applications are not functions of a single molecular structure but rather determined from the equilibrium distribution of structures. Conventional methods for obtaining these distributions, such as molecular dynamics simulation, are computationally expensive and often intractable. Here we introduce a deep learning framework, called Distributional Graphormer (DiG), in an attempt to predict the equilibrium distribution of molecular systems. Inspired by the annealing process in thermodynamics, DiG uses deep neural networks to transform a simple distribution towards the equilibrium distribution, conditioned on a descriptor of a molecular system such as a chemical graph or a protein sequence. This framework enables the efficient generation of diverse conformations and provides estimations of state densities, orders of magnitude faster than conventional methods. We demonstrate applications of DiG on several molecular tasks, including protein conformation sampling, ligand structure sampling, catalyst–adsorbate sampling and property-guided structure generation. DiG presents a substantial advancement in methodology for statistically understanding molecular systems, opening up new research opportunities in the molecular sciences. Methods for predicting molecular structures have so far focused on only the most probable conformation, but molecular structures are dynamic and can change when performing their biological functions, for example. Zheng et al. use a graph transformer approach to learn the equilibrium distribution of molecular systems and show that this can be helpful for a number of downstream tasks, including protein structure prediction, ligand docking and molecular design.
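The core idea of transforming a simple distribution towards an equilibrium (Boltzmann) distribution can be illustrated with a toy that has nothing to do with DiG's actual architecture: a one-dimensional overdamped Langevin sampler, a minimal sketch assuming a harmonic potential U(x) = x²/2 whose equilibrium is a standard normal. All names and parameters here are illustrative.

```python
import math
import random

def langevin_sample(grad_u, n_samples=1000, n_steps=300, dt=0.01, seed=0):
    """Draw approximate samples from p(x) proportional to exp(-U(x)) by
    simulating overdamped Langevin dynamics, starting from a simple
    (uniform) distribution and relaxing it towards equilibrium."""
    rng = random.Random(seed)
    xs = [rng.uniform(-3, 3) for _ in range(n_samples)]  # simple initial distribution
    for _ in range(n_steps):
        # Euler-Maruyama step: drift down the energy gradient plus noise.
        xs = [x - grad_u(x) * dt + math.sqrt(2 * dt) * rng.gauss(0, 1) for x in xs]
    return xs

# Harmonic potential U(x) = x^2 / 2, so grad U(x) = x and the
# equilibrium distribution is a standard normal (mean 0, variance 1).
samples = langevin_sample(lambda x: x)
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
```

After enough steps the empirical mean and variance approach 0 and 1, showing how a generic starting distribution is annealed into the target one.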
Today, deep learning-based side-channel analysis is a widely researched topic, with numerous results indicating the advantages of such an approach. Indeed, the ability to break protected implementations without requiring complex feature selection has made deep learning a preferred option for profiling side-channel analysis. Still, this does not mean it is trivial to mount a successful deep learning-based side-channel attack. One of the biggest challenges is to find optimal hyperparameters for neural networks that result in powerful side-channel attacks. This work proposes an automated way of tuning deep learning hyperparameters based on Bayesian optimization. We build a custom framework denoted AutoSCA supporting machine learning and side-channel metrics. Our experimental analysis shows that our framework performs well regardless of the dataset, leakage model, or neural network type. We find several neural network architectures outperforming state-of-the-art attacks. Finally, although not usually considered a powerful option, we observe that neural networks obtained via random search can perform well, indicating that the publicly available datasets are relatively easy to break.
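The random-search baseline the abstract mentions can be sketched in a few lines. This is a minimal illustration, not AutoSCA's implementation: the search space and the `mock_score` objective (a stand-in for training a network and measuring a side-channel metric such as guessing entropy) are hypothetical.

```python
import random

# Hypothetical search space for an MLP-style profiling model.
SPACE = {
    "layers": [2, 3, 4, 5],
    "units": [64, 128, 256, 512],
    "learning_rate": [1e-4, 5e-4, 1e-3, 5e-3],
}

def sample_config(rng):
    """Draw one random hyperparameter configuration from the space."""
    return {k: rng.choice(v) for k, v in SPACE.items()}

def mock_score(cfg):
    """Stand-in for training a network and evaluating an attack metric;
    lower is better in this toy setup."""
    return (abs(cfg["layers"] - 4)
            + abs(cfg["units"] - 256) / 64
            + cfg["learning_rate"] * 100)

def random_search(n_trials=50, seed=1):
    """Keep the best-scoring configuration seen over n_trials random draws."""
    rng = random.Random(seed)
    best_cfg, best = None, float("inf")
    for _ in range(n_trials):
        cfg = sample_config(rng)
        s = mock_score(cfg)
        if s < best:
            best_cfg, best = cfg, s
    return best_cfg, best
```

Bayesian optimization replaces the blind `sample_config` draw with a surrogate model that proposes promising configurations based on the scores observed so far; the outer loop is otherwise the same.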
Ionizable lipid nanoparticles (LNPs) are seeing widespread use in mRNA delivery, notably in SARS-CoV-2 mRNA vaccines. However, the expansion of mRNA therapies beyond COVID-19 is impeded by the absence of LNPs tailored for diverse cell types. In this study, we present the AI-Guided Ionizable Lipid Engineering (AGILE) platform, a synergistic combination of deep learning and combinatorial chemistry. AGILE streamlines ionizable lipid development with efficient library design, in silico lipid screening via deep neural networks, and adaptability to diverse cell lines. Using AGILE, we rapidly design, synthesize, and evaluate ionizable lipids for mRNA delivery, selecting from a vast library. Intriguingly, AGILE reveals cell-specific preferences for ionizable lipids, indicating the need for tailoring to achieve optimal delivery to different cell types. These findings highlight AGILE’s potential to expedite the development of customized LNPs and address the complex needs of mRNA delivery in clinical practice, thereby broadening the scope and efficacy of mRNA therapies. In this work, the authors develop a platform called AGILE, or AI-Guided Ionizable Lipid Engineering, which streamlines ionizable lipid development. Using AGILE, they rapidly design, synthesize, and evaluate ionizable lipids for mRNA delivery.
Social media is widely used to evaluate products and services, but analysing vast numbers of comments is time-consuming. Researchers use sentiment analysis via natural language processing, conventionally evaluating methods and results through literature reviews and assessments. Our approach diverges by offering a thorough analytical perspective with critical analysis, research findings, identified gaps, limitations, challenges, and future prospects specific to recent deep learning-based sentiment analysis. Furthermore, we provide an in-depth investigation into sentiment analysis, categorizing prevalent data, pre-processing methods, text representations, learning models, and applications. We conduct a thorough evaluation of recent advances in deep learning architectures, assessing their pros and cons. Additionally, we offer a meticulous analysis of deep learning methodologies, integrating insights on applied tools, strengths, weaknesses, performance results, research gaps, and a detailed feature-based examination. Furthermore, we present a thorough discussion of the challenges, drawbacks, and factors contributing to the successful enhancement of accuracy within the realm of sentiment analysis. A critical comparative analysis clearly shows that capsule-based RNN approaches give the best results, with an accuracy of 98.02%, outperforming CNN- and RNN-based models. We implemented various advanced deep-learning models across four benchmarks to identify the top performers. Additionally, we introduced the innovative CRDC (Capsule with Deep CNN and Bi-structured RNN) model, which demonstrated superior performance compared to other methods. Our proposed approach achieved remarkable accuracy across different databases: IMDB (88.15%), Toxic (98.28%), CrowdFlower (92.34%), and ER (95.48%). Hence, this method holds promise for automated sentiment analysis and potential deployment.
Given the large volume of remote sensing images collected daily, automatic object detection and segmentation have been a consistent need in Earth observation (EO). However, objects of interest vary in shape, size, appearance, and reflecting properties. This is reflected not only by the fact that these objects exhibit differences due to their geographical diversity but also by the fact that they appear differently in images collected from different sensors (optical and radar) and platforms (satellite, aerial, and unmanned aerial vehicles (UAV)). Although a plethora of object detection methods exists in the area of remote sensing, given the very fast development of prevalent deep learning methods, recent systematic updates on object detection methods are still lacking. In this paper, we aim to provide an update that informs researchers about the recent development of object detection methods and their close sibling in the deep learning era, instance segmentation. The methods covered will address data at different scales and modalities, such as optical images, synthetic aperture radar (SAR) images, and digital surface models (DSM). Specific emphasis will be placed on approaches addressing data and label limitations in this deep learning era. Further, we survey examples of remote sensing applications that benefited from automatic object detection and discuss future trends of automatic object detection in EO.
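A building block shared by virtually all the detection methods surveyed above is intersection-over-union (IoU), used both to match predicted boxes to ground truth and inside non-maximum suppression. A minimal sketch for axis-aligned boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2),
    with (x1, y1) the top-left and (x2, y2) the bottom-right corner."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle: max of the left/top edges, min of the right/bottom.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

Note that rotated or polygonal boxes, common in aerial imagery, need a polygon-intersection variant of the same idea.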
Many wireless sensors are placed ad hoc to form a wireless sensor network (WSN) that monitors physical and environmental conditions. The system consists of base stations and nodes. The WSN's base station connects to the Internet, facilitating data sharing. These networks cooperatively transfer data to the base station while monitoring factors like sound, pressure, and temperature. The collected data undergo processing, analysis, storage, and mining. This study employs additional optimization and a deep learning approach to identify and isolate a rogue node among the busiest nodes based on various criteria. The deep learning model calculates probabilities using a sum-rule weighted method for request forwarding, reply forwarding, and data dropping. This ensures high throughput and minimizes processing time in the planned tasks. Packet loss rates have decreased, with delay-related hyper metrics dropping from 70 to 42 ms, and the percentage of missing packets has been reduced nearly threefold, from 23% to 8%. The adoption of deep learning eliminates hostile node behaviour, mitigating potential network failures.
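The sum-rule combination over request forwarding, reply forwarding, and data dropping can be sketched as a weighted score per node. This is an illustrative toy, not the paper's model: the weights, field names, and example statistics are all hypothetical, and the deep learning model would produce the per-behaviour probabilities rather than raw ratios.

```python
# Hypothetical weights; the actual weighting scheme is learned/tuned in the paper.
WEIGHTS = {"request_fwd": 0.4, "reply_fwd": 0.4, "data_drop": 0.2}

def trust_score(stats):
    """Sum-rule combination: weighted sum of forwarding ratios minus the
    weighted data-drop ratio. Higher means more trustworthy."""
    return (WEIGHTS["request_fwd"] * stats["request_fwd"]
            + WEIGHTS["reply_fwd"] * stats["reply_fwd"]
            - WEIGHTS["data_drop"] * stats["data_drop"])

def find_rogue(nodes):
    """Flag the node with the lowest combined trust score as the rogue candidate."""
    return min(nodes, key=lambda n: trust_score(nodes[n]))

# Illustrative per-node behaviour statistics (ratios in [0, 1]).
nodes = {
    "n1": {"request_fwd": 0.95, "reply_fwd": 0.93, "data_drop": 0.02},
    "n2": {"request_fwd": 0.40, "reply_fwd": 0.35, "data_drop": 0.60},  # suspicious
    "n3": {"request_fwd": 0.90, "reply_fwd": 0.88, "data_drop": 0.05},
}
```

Isolating the flagged node (removing it from routing tables) then prevents it from dropping traffic at the busiest points of the network.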
Forecasting solar power production accurately is critical for effectively planning and managing renewable energy systems. This paper introduces and investigates novel hybrid deep learning models for solar power forecasting using time series data. The research analyzes the efficacy of various models for capturing the complex patterns present in solar power data. In this study, all possible combinations of convolutional neural network (CNN), long short-term memory (LSTM), and transformer (TF) models are evaluated. These hybrid models are also compared with the single CNN, LSTM, and TF models under different optimizers. Three evaluation metrics are employed for performance analysis. Results show that the CNN–LSTM–TF hybrid model outperforms the other models, with a mean absolute error (MAE) of 0.551% when using the Nadam optimizer. However, the TF–LSTM model has relatively low performance, with an MAE of 16.17%, highlighting the difficulties in making reliable predictions of solar power. This result provides valuable insights for optimizing and planning renewable energy systems, highlighting the significance of selecting appropriate models and optimizers for accurate solar power forecasting. To our knowledge, this is the first comprehensive study of hybrid models for solar power forecasting that also incorporates transformer networks.
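Two pieces of the forecasting pipeline described above are model-agnostic and easy to sketch: turning a time series into supervised (window, target) pairs that CNN/LSTM/transformer hybrids consume, and the MAE metric used to compare them. Function names and window sizes here are illustrative.

```python
def make_windows(series, n_in, n_out=1):
    """Turn a univariate series into (input window, target) pairs for
    supervised forecasting: each window of n_in past values predicts
    the next n_out values."""
    pairs = []
    for i in range(len(series) - n_in - n_out + 1):
        pairs.append((series[i:i + n_in], series[i + n_in:i + n_in + n_out]))
    return pairs

def mae(y_true, y_pred):
    """Mean absolute error, the metric reported for the forecasting models."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
```

The hybrid models differ only in what sits between window and target (convolutional layers, recurrent layers, attention blocks, or combinations); the data preparation and evaluation stay the same.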
Declining water quality contributes to the crisis in freshwater biodiversity. The interactions between water quality indicators, and the correlations between these variables and taxonomic groupings, affect biodiversity in intricate ways. However, since only a few kinds of Internet of Things (IoT) sensors are commercially available, many chemical and biological measurements still require laboratory analysis. Recent progress in deep learning and the IoT enables real-time monitoring of water quality, thereby contributing to biodiversity preservation. This paper presents a thorough examination of the scientific literature on the water quality factors that have a significant influence on the diversity of freshwater ecosystems. Ten of the most crucial water quality criteria were selected. The connections between the measurable and valuable aspects of the IoT are assessed using a Generalized Regression-based Neural Networks (G-RNN) framework and a multi-variational polynomial regression framework. These models are trained on historical water quality monitoring data. The predicted findings in an urbanized river were validated using a combination of traditional field water testing, in-lab studies, and the created IoT-based water quality management system. The G-RNN effectively differentiates abnormal increases in variables from typical scenarios. The evaluation coefficients of the degree-8 model are 0.87, 0.73, 0.89, and 0.79 for NO3-N, BOD5, PO4, and NH3-N, respectively. The suggested methods and prototypes were verified against laboratory findings to assess their efficacy and effectiveness. The overall efficacy was deemed suitable, with most forecasting errors smaller than 0.3 mg/L. This validation offers valuable insights into the use of IoT methods in pollutant discharge monitoring and other water quality regulation applications, specifically for freshwater biodiversity preservation.
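A generalized regression neural network in the Specht sense reduces to a kernel-weighted average of training targets, which makes the idea easy to sketch. This is a one-dimensional toy stand-in for the paper's G-RNN; the calibration data (sensor reading vs. NO3-N concentration) and the bandwidth sigma are illustrative, not taken from the study.

```python
import math

def grnn_predict(x, train_x, train_y, sigma=0.5):
    """Generalized regression neural network prediction: a Gaussian
    kernel-weighted average of training targets. Points near x dominate."""
    weights = [math.exp(-((x - xi) ** 2) / (2 * sigma ** 2)) for xi in train_x]
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, train_y)) / total

# Toy calibration: raw sensor reading -> NO3-N concentration (mg/L), illustrative.
readings = [0.0, 1.0, 2.0, 3.0, 4.0]
no3 = [0.1, 0.4, 0.9, 1.6, 2.5]
```

Because the prediction is a smooth interpolation of observed data, abnormal sensor spikes stand out as large deviations between the measured value and the G-RNN estimate, which is how anomaly flagging works in this setup.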
Within the scope of this research, we put forward a technique for accurately identifying agricultural leaf pathologies with the aid of deep learning algorithms and transfer learning technology. We use pre-trained models such as VGG19, MobileNet, InceptionV3, and EfficientNetB0, alongside a simple CNN, to build an application for crop disease classification. By examining metrics such as Accuracy, Precision, Recall, and F1 score for crop leaf image classification, we observe how each model performs. Our paper shows that artificial intelligence is highly useful for automatic disease detection tasks and that transfer learning (as a method for reusing existing knowledge in a new application) is also beneficial. The contribution of this work to the development of reliable disease prevention systems in production fits into precision agriculture and sustainable farming practice. Future research could target areas such as dataset balance and improved model interpretability, which in turn will increase the success of these strategies in agricultural contexts.
Evaluating pharmacokinetic properties of small molecules is considered a key feature in most drug development and high-throughput screening processes. Generally, pharmacokinetics, which represent the fate of drugs in the human body, are described from four perspectives: absorption, distribution, metabolism and excretion—all of which are closely related to a fifth perspective, toxicity (ADMET). Since obtaining ADMET data from in vitro, in vivo or pre-clinical stages is time consuming and expensive, many efforts have been made to predict ADMET properties via computational approaches. However, the majority of available methods are limited in their ability to provide pharmacokinetics and toxicity for diverse targets, ensure good overall accuracy, and offer ease of use, interpretability and extensibility for further optimizations. Here, we introduce Deep-PK, a deep learning-based pharmacokinetic and toxicity prediction, analysis and optimization platform. We applied graph neural networks and graph-based signatures as a graph-level feature to yield the best predictive performance across 73 endpoints, including 64 ADMET and 9 general properties. With these powerful models, Deep-PK supports molecular optimization and interpretation, aiding users in optimizing and understanding pharmacokinetics and toxicity for given input molecules. Deep-PK is freely available at https://biosig.lab.uq.edu.au/deeppk/.
This article explores the integration of automation and deep learning in modern manufacturing to address critical challenges such as redundancy, defects, vibration analysis, and material strength. As manufacturing processes evolve, the need for more sophisticated methods to optimize production efficiency and product quality becomes paramount. Automation, coupled with deep learning techniques, offers powerful tools for enhancing manufacturing processes. These technologies enable predictive maintenance, reducing downtime by identifying potential equipment failures before they occur. Furthermore, deep learning algorithms can analyse complex data sets to detect defects in products with greater accuracy and speed than traditional methods. Vibration analysis, a key aspect of predictive maintenance, benefits from automated systems that monitor and diagnose issues in real-time, preventing costly disruptions. Additionally, deep learning models can assess material strength and predict potential failures, ensuring that products meet rigorous safety and quality standards. The synergy between automation and deep learning not only streamlines manufacturing processes but also enhances the ability to adapt to changing conditions, thereby minimizing operational inefficiencies. This article highlights the transformative impact of these technologies on the manufacturing industry, illustrating their potential through case studies and practical examples. By addressing key challenges such as redundancy and defects, automation and deep learning contribute to the creation of more reliable, efficient, and resilient manufacturing systems. The insights provided in this study underscore the importance of continued innovation in integrating these technologies to maintain a competitive edge in the rapidly evolving manufacturing landscape.
The incidence and mortality rates of cardiovascular disease worldwide are a major concern in the healthcare industry. Precise prediction of cardiovascular disease is essential, and the use of machine learning and deep learning can aid decision-making and enhance predictive ability. The goal of this paper is to introduce a model for precise cardiovascular disease prediction by combining machine learning and deep learning. Two public heart disease classification datasets, with 70,000 and 1190 records, as well as a locally collected dataset with 600 records, were used in our experiments. The proposed model employed CNN and LSTM as representatives of deep learning models, and KNN and XGB as representatives of machine learning models. After each classifier predicted the output class, majority voting was used as an ensemble learner to determine the final output class. The proposed model obtained the highest classification performance on all evaluation metrics across all datasets, demonstrating its suitability and reliability for forecasting the probability of cardiovascular disease.
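The majority-voting ensemble step described above is simple to sketch: each base classifier (CNN, LSTM, KNN, XGB in the paper) emits a class label, and the most frequent label wins. The function below is a generic illustration, not the authors' code.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine class predictions from several classifiers by majority vote;
    with equal counts, the label seen first in the input wins."""
    return Counter(predictions).most_common(1)[0][0]

# Four base classifiers voting on one patient record (labels illustrative).
votes = ["disease", "healthy", "disease", "disease"]
final = majority_vote(votes)
```

With four voters a 2-2 tie is possible; real systems usually break ties with predicted probabilities or a fixed classifier priority rather than input order.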
Cyberbullying is a serious problem in online communication. It is important to find effective ways to detect cyberbullying content to make online environments safer. In this paper, we investigated the identification of cyberbullying content in the Bangla and Chittagonian languages, which are both low-resource languages, with the latter being an extremely low-resource language. In the study, we used both traditional baseline machine learning methods and a wide suite of deep learning methods, especially focusing on hybrid networks and transformer-based multilingual models. For the data, we collected over 5000 text samples in both Bangla and Chittagonian from social media. Krippendorff’s alpha and Cohen’s kappa were used to measure the reliability of the dataset annotations. Traditional machine learning methods used in this research achieved accuracies ranging from 0.63 to 0.711, with SVM emerging as the top performer. Furthermore, employing ensemble models such as Bagging with 0.70 accuracy, Boosting with 0.69 accuracy, and Voting with 0.72 accuracy yielded promising results. In contrast, deep learning models, notably CNN, achieved accuracies ranging from 0.69 to 0.811, thus outperforming traditional ML approaches, with CNN exhibiting the highest accuracy. We also proposed a series of hybrid network-based models, including BiLSTM+GRU with an accuracy of 0.799, CNN+LSTM with 0.801 accuracy, CNN+BiLSTM with 0.78 accuracy, and CNN+GRU with 0.804 accuracy. Notably, the most complex model, (CNN+LSTM)+BiLSTM, attained an accuracy of 0.82, thus showcasing the efficacy of hybrid architectures. Furthermore, we explored transformer-based models, such as XLM-RoBERTa with 0.841 accuracy, Bangla BERT with 0.822 accuracy, Multilingual BERT with 0.821 accuracy, BERT with 0.82 accuracy, and Bangla ELECTRA with 0.785 accuracy, which showed significantly enhanced accuracy levels.
Our analysis demonstrates that deep learning methods can be highly effective in addressing the pervasive issue of cyberbullying in several different linguistic contexts. We show that transformer models can efficiently circumvent the language dependence problem that plagues conventional transfer learning methods. Our findings suggest that hybrid approaches and transformer-based embeddings can effectively tackle the problem of cyberbullying across online platforms.
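The annotation-reliability check mentioned above (Cohen's kappa) corrects raw agreement between two annotators for the agreement expected by chance. A minimal two-rater sketch, with labels and data purely illustrative:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two annotators' label lists:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of items the two raters label identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    p_e = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    if p_e == 1:
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# Toy binary annotations (1 = cyberbullying, 0 = not).
kappa = cohens_kappa([0, 0, 1, 1], [0, 1, 1, 1])
```

Krippendorff's alpha generalizes the same chance-corrected idea to more raters, missing labels, and non-nominal data, which is why the study reports both.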