Abstract
Modern network environments such as 5G cores, industrial IoT deployments, and spine-leaf data centers offer rich hierarchical structure and multi-layer telemetry, yet deep learning models applied to these settings remain black-box predictors that ignore domain logic and struggle with scarce data. We introduce LogiK-Net, a neurosymbolic framework that bridges data and knowledge by decoupling the learning process into (i) a forward-discovery module based on Kolmogorov-Arnold Networks (KANs) that yields interpretable edge activations for feature pruning and rule mining, and (ii) a backward-validation module that employs differentiable first-order network logic to enforce domain axioms together with the rules mined on the fly. This modular design allows practitioners to swap in richer feature extractors or stricter logical rule sets as needed, scaling smoothly from supervised traffic classification to unsupervised, open-world network management. Extensive experiments on reliable feature pruning, IoT threat detection, and topology discovery demonstrate LogiK-Net's generality, interpretability, and reliability, outperforming standard neural network baselines employed in network analysis.