In this article, we introduce PROPHET, an innovative approach to predictive process monitoring based on Heterogeneous Graph Neural Networks. PROPHET is designed to strike a balance between accurate predictions and interpretability, focusing in particular on the next-activity prediction task. To this end, we represent the event traces recorded for different business process executions as heterogeneous graphs within a multi-view learning scheme combined with a heterogeneous graph learning approach. Using heterogeneous Graph Attention Networks (GATs), we achieve high predictive accuracy by encoding the different characteristics of events as distinct node types and by leveraging different types of graph links to express relationships between event characteristics as well as relationships between events. In addition, the use of a GAT model enables the integration of a modified version of the GNN Explainer algorithm, which adds an explainability component to the predictive model. Specifically, the GNN Explainer algorithm is modified to reveal which event characteristics, events, and inter-event relationships most influenced a prediction. Experiments on several benchmark event logs demonstrate the accuracy of PROPHET compared to current state-of-the-art methods, and we draw insights from the explanations recovered through the modified GNN Explainer algorithm.
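To make the graph-based representation concrete, the sketch below shows one possible way to encode a short trace prefix as a heterogeneous graph and to process it with a heterogeneous GAT. It is a minimal illustration only, assuming PyTorch Geometric as the library and using hypothetical node types ("event", "resource") and link types ("follows", "performed_by"); it is not the authors' implementation of PROPHET.

```python
# Illustrative sketch (assumed PyTorch Geometric API): a trace prefix as a
# heterogeneous graph with event/resource nodes, processed by a hetero GAT.
import torch
from torch_geometric.data import HeteroData
from torch_geometric.nn import GATConv, to_hetero

# --- Toy heterogeneous graph for a 3-event prefix ---------------------------
data = HeteroData()
data['event'].x = torch.eye(3)        # 3 events, e.g. one-hot activity features
data['resource'].x = torch.eye(2)     # 2 distinct resources in the prefix
# Control-flow order between consecutive events.
data['event', 'follows', 'event'].edge_index = torch.tensor([[0, 1],
                                                             [1, 2]])
# Each event is linked to the resource that executed it.
data['event', 'performed_by', 'resource'].edge_index = torch.tensor([[0, 1, 2],
                                                                     [0, 1, 1]])

# --- A small GAT encoder lifted to the heterogeneous setting ----------------
class GATEncoder(torch.nn.Module):
    def __init__(self, hidden=16, out=8):
        super().__init__()
        # add_self_loops=False is needed for bipartite (cross-type) edges.
        self.conv1 = GATConv((-1, -1), hidden, add_self_loops=False)
        self.conv2 = GATConv((-1, -1), out, add_self_loops=False)

    def forward(self, x, edge_index):
        x = self.conv1(x, edge_index).relu()
        return self.conv2(x, edge_index)

model = to_hetero(GATEncoder(), data.metadata(), aggr='sum')
out = model(data.x_dict, data.edge_index_dict)   # per-node-type embeddings
print(out['event'].shape)                        # torch.Size([3, 8])
# A readout over the 'event' embeddings would feed a classifier over the
# possible next activities.
```

For the explainability component, an analogous (again assumed, not the paper's) starting point would be the GNNExplainer implementation in torch_geometric.explain, which learns node-feature and edge masks whose weights indicate which event attributes and inter-event links most influenced a given prediction.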