3260 papers • 126 benchmarks • 313 datasets
Mapping an input graph to a sequence of vectors.
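The core of the task can be illustrated with a toy sketch, assuming a hypothetical 4-node graph with one-hot node features: the encoder turns the graph into an ordered sequence of node vectors via one round of neighbor aggregation (a stand-in for a learned GNN layer, not any specific paper's model).

```python
import numpy as np

# Hypothetical toy graph: 4 nodes given as an adjacency list,
# with a one-hot feature vector per node.
edges = {0: [1, 2], 1: [3], 2: [3], 3: []}
num_nodes = len(edges)
features = np.eye(num_nodes)  # one-hot node features

def encode(edges, features):
    """Map the graph to a sequence of node vectors by averaging
    each node's features with those of its successors."""
    out = []
    for v, nbrs in edges.items():
        group = [features[v]] + [features[u] for u in nbrs]
        out.append(np.mean(group, axis=0))
    return np.stack(out)  # shape (num_nodes, feature_dim)

node_vectors = encode(edges, features)
print(node_vectors.shape)  # (4, 4)
```

A real graph-to-sequence encoder would stack several learned layers of this kind; the decoder then consumes `node_vectors` as its input sequence.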
(Image credit: Papersgraph)
These leaderboards are used to track progress in graph-to-sequence
Use these libraries to find graph-to-sequence models and implementations
No subtasks available.
This work presents a novel training procedure that mitigates two challenges of this setting, the relatively limited amount of labeled data and the non-sequential nature of AMR graphs, and presents strong evidence that sequence-based AMR models are robust to ordering variations in graph-to-sequence conversions.
This work introduces a novel, general, end-to-end graph-to-sequence neural encoder-decoder model that maps an input graph to a sequence of vectors and uses an attention-based LSTM method to decode the target sequence from these vectors.
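The attention-based decoding step described above can be sketched minimally: at each step the decoder state attends over the encoder's node vectors and reads back a context vector. This is a generic dot-product-attention sketch with made-up dimensions, not the paper's exact formulation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(decoder_state, node_vectors):
    """One attention read: score each node vector against the
    decoder state, normalize, and return the weighted sum."""
    scores = node_vectors @ decoder_state   # dot-product scores
    weights = softmax(scores)               # attention distribution
    context = weights @ node_vectors        # context vector
    return context, weights

rng = np.random.default_rng(0)
node_vectors = rng.normal(size=(5, 8))   # 5 nodes, hidden dim 8
decoder_state = rng.normal(size=8)
context, weights = attend(decoder_state, node_vectors)
print(context.shape)  # (8,)
```

In a full model the context vector is concatenated with the decoder state to predict the next output token.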
This work proposes a new model that encodes the full structural information contained in the graph. It couples the recently proposed Gated Graph Neural Networks with an input transformation that allows nodes and edges to have their own hidden representations, while tackling the parameter-explosion problem present in previous work.
This paper proposes an alternative encoder based on graph convolutional networks that directly exploits the input structure and reports results on two graph-to-sequence datasets that empirically show the benefits of explicitly encoding the input graph structure.
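A graph convolutional encoder of the kind referenced above can be sketched as a single propagation step, H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W), in the style of Kipf and Welling; the toy graph and dimensions below are illustrative assumptions, not from the paper.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: add self-loops, symmetrically
    normalize the adjacency, propagate features, apply ReLU."""
    A_hat = A + np.eye(A.shape[0])           # self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy undirected path graph: 0 - 1 - 2.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
rng = np.random.default_rng(1)
H = rng.normal(size=(3, 4))   # input node features
W = rng.normal(size=(4, 2))   # layer weights
H_out = gcn_layer(A, H, W)
print(H_out.shape)  # (3, 2)
```

Stacking such layers lets each node embedding absorb structure from progressively larger neighborhoods before decoding.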
The extent to which reentrancies (nodes with multiple parents) affect AMR-to-text generation is investigated by comparing graph encoders to tree encoders, where reentrancies are not preserved.
Results show that the physics-informed neural network can learn the correction to the original fatigue model due to corrosion, and that predictions are accurate enough for ranking damage across airplanes in the fleet (which can be used to prioritize inspection).
Unexpected main bearing failure on a wind turbine causes unwanted maintenance and increased operation costs (mainly due to crane, parts, labor, and production loss). Unfortunately, historical data indicates that failure can happen far earlier than the component design lives. Root cause analysis investigations have pointed to problems inherent in manufacturing as the major contributor, as well as issues related to event loads (e.g., startups, shutdowns, and emergency stops), extreme environmental conditions, and maintenance practices, among others. Altogether, the multiple failure modes and contributors make modeling the remaining useful life of main bearings a very daunting task. In this paper, we present a novel physics-informed neural network modeling approach for main bearing fatigue. The proposed approach is fully hybrid and designed to merge physics-informed and data-driven layers within deep neural networks. The result is a cumulative damage model where the physics-informed layers are used to model the relatively well-understood physics (L10 fatigue life) and the data-driven layers account for the hard-to-model components (i.e., grease degradation).
This paper first proposes using a syntactic graph to represent three types of syntactic information, i.e., word order, dependency, and constituency features; it then employs a graph-to-sequence model to encode the syntactic graph and decode a logical form.
This work introduces a neural graph-to-sequence model, using a novel LSTM structure for directly encoding graph-level semantics, and shows superior results to existing methods in the literature.
This paper proposes a graph-to-sequence model to encode the global structure information into node embeddings that can effectively learn the correlation between the SQL query pattern and its interpretation.