Statistical modeling of multivariate time-series data poses significant challenges due to their high dimensionality and complex inter-variable relationships. Reliable forecasting or anomaly detection on such datasets requires capturing these relationships both within and between features. While traditional deep learning architectures capture temporal non-linear patterns within individual features well, they are less effective at explicitly modeling inter-variable relationships structured as graphs, a capability where Graph Neural Networks (GNNs) excel. Inspired by the success of GNNs, the Graph Deviation Network (GDN) was originally proposed for anomaly detection on industrial multivariate time-series data. After proving its merits in experiments on real-world data, GDN gained significant popularity in the research community, with the claim that it learns the hidden graph structure of any multivariate time-series dataset. Various modifications to GDN have been proposed over the years, but essentially all of them keep its Graph Structure Learning (GSL) module intact. However, until now, this module has never been rigorously evaluated. This work scrutinizes the contribution of the GSL module. Our experiments reveal that the graph learned by GSL is largely ineffective, and that the overall performance of GDN stems almost entirely from the downstream Graph Attention Network (GAT) module. We hope our findings will garner attention for further development of the GSL module, whose fidelity can improve the performance of GDN variants. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
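To make the evaluated component concrete: GDN's GSL module learns an embedding vector per variable (sensor) and connects each variable to its top-k most cosine-similar peers, yielding a sparse directed graph that the downstream attention module consumes. The following is a minimal NumPy sketch of that top-k similarity graph construction; the function name and the random embeddings are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gsl_topk_graph(embeddings: np.ndarray, k: int) -> np.ndarray:
    """Sketch of GDN-style graph structure learning: keep, for each node,
    the k most cosine-similar other nodes as its incoming neighbours."""
    n = embeddings.shape[0]
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = unit @ unit.T                 # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)      # exclude self-edges
    adj = np.zeros((n, n), dtype=int)
    for i in range(n):
        topk = np.argsort(sim[i])[-k:]  # indices of the k most similar nodes
        adj[i, topk] = 1
    return adj

# Example: 5 sensors with random 8-dim embeddings, 2 neighbours each
rng = np.random.default_rng(0)
A = gsl_topk_graph(rng.normal(size=(5, 8)), k=2)
print(A.sum(axis=1))  # every row keeps exactly k = 2 edges
```

In the full model these embeddings are trained jointly with the forecasting loss; the ablation question raised above is whether this learned adjacency actually outperforms simpler or fixed graphs once the GAT module is held constant.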