SflLLM: Efficient Split Federated Learning for Large Language Model over Wireless Networks
Conference proceeding

Kai Zhao, Chen Zhu, Mingzhe Chen, Chongwen Huang, Zhaohui Yang and Zhaoyang Zhang
IEEE Global Communications Conference (Online), pp. 1835-1840
2025-12-08

Abstract

Fine-tuning large language models (LLMs) in a distributed manner over edge devices with limited communication and computational resources presents substantial challenges in wireless networks. To tackle these issues, this paper proposes a novel split federated learning framework tailored to LLMs (SflLLM), which integrates split federated learning with parameter-efficient fine-tuning. By employing model partitioning and low-rank adaptation (LoRA), SflLLM significantly reduces the computational load on edge devices. Moreover, the introduction of a federated server not only enables parallel training but also enhances privacy preservation. To accommodate the heterogeneous communication conditions and diverse computational capacities of edge devices, while accounting for the influence of LoRA rank selection on model convergence and training overhead, we formulate a joint optimization problem that simultaneously optimizes subchannel allocation, power control, model split point selection, and LoRA rank configuration, with the objective of minimizing the overall training latency. An alternating optimization algorithm is developed to solve the problem efficiently and accelerate training. Simulation results demonstrate that, compared with conventional methods, the proposed resource allocation scheme and adaptive LoRA rank selection strategy significantly reduce training latency.

Keywords: Adaptation models, Computational modeling, Convergence, Federated learning, LoRA, Resource management, Servers, Training, Wireless networks, Optimization
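As a concrete illustration of the parameter-efficient fine-tuning the abstract relies on, the following is a minimal PyTorch-style sketch of a LoRA-adapted linear layer. It is not code from the paper: the class name LoRALinear, the rank and alpha defaults, and the initialization choices are illustrative assumptions; only the rank-r adapter matrices are trainable, which is what keeps the device-side computation and the exchanged updates small.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a trainable rank-r update:
    y = W x + (alpha / r) * B (A x)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                 # pretrained weights stay frozen
        self.lora_A = nn.Linear(base.in_features, rank, bias=False)
        self.lora_B = nn.Linear(rank, base.out_features, bias=False)
        nn.init.normal_(self.lora_A.weight, std=0.02)
        nn.init.zeros_(self.lora_B.weight)          # adapter starts as a zero update
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * self.lora_B(self.lora_A(x))

A larger rank enlarges the trainable adapter (and the update a device must upload), while a smaller rank can slow convergence; this trade-off is why the paper treats the LoRA rank as an optimization variable rather than a fixed hyperparameter.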

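The joint design described above can also be given a compact shape. The LaTeX sketch below is an assumed formulation, not the paper's exact one: c_{k,n} is a binary subchannel assignment, p_k a transmit power, s the split point, r the LoRA rank, T_k(.) the per-round latency of device k, and N(r) the assumed number of rounds to reach a target accuracy at rank r, capturing the rank-convergence coupling the abstract mentions.

\min_{\mathbf{c},\,\mathbf{p},\,s,\,r} \; N(r) \cdot \max_{k \in \mathcal{K}} T_k(\mathbf{c}, \mathbf{p}, s, r)
\quad \text{s.t.} \quad \sum_{k \in \mathcal{K}} c_{k,n} \le 1 \;\; \forall n, \qquad 0 \le p_k \le p_k^{\max} \;\; \forall k.

An alternating scheme of the kind the abstract describes would fix (s, r) and solve the resource-allocation subproblem in (c, p), then fix (c, p) and search the small discrete space of split points and ranks, iterating until the latency stops improving.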