Federated Reinforcement-Learning-based Action Prediction for Scheduling Optimizations in Client-Server-based Online Games

A client-server-based architecture depends on frequent and reliable updates between the clients and the server. Updates that are too frequent can lead to network congestion, while too few result in a sub-optimal experience for the players. Finding a good balance and maximizing the amount of work not done is crucial. Our proposed idea is to use a machine-learning model, trained and adapted per player and/or per map in the game, to predict the movement of each player from their current position and to return a confidence value for how likely it is that this movement will actually occur. If the confidence is high enough, there is no real need for state propagation; the local client can instead rely on the deterministic prediction of the physics engine. This prediction would be calculated on both the client and the server side and should yield exactly the same results. If the server notices that the prediction is actually inaccurate, it notifies the clients, which then reconcile their state and replay it to recover the correct state. In this way, the number of updates in highly predictable scenarios (such as a jump from one platform to another, which is rarely interrupted) can be reduced, which relieves the network.
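
The following is a minimal sketch of the confidence-gated scheduling idea described above. All names here (PlayerState, PredictionModel, CONFIDENCE_THRESHOLD, the send/notify callbacks) are hypothetical illustrations, not part of any existing engine or library; the toy model simply stands in for the per-player/per-map model the proposal describes.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

# Assumed cut-off for "predictable enough to skip the update" (illustrative value).
CONFIDENCE_THRESHOLD = 0.95


@dataclass(frozen=True)
class PlayerState:
    x: float
    y: float


class PredictionModel:
    """Stand-in for the per-player / per-map model: predicts the next state
    and reports how confident it is that this movement will actually occur."""

    def predict(self, state: PlayerState) -> Tuple[PlayerState, float]:
        # Toy rule: assume the player keeps moving one unit to the right.
        # A real model would be trained (and federated) per player / per map.
        return PlayerState(state.x + 1.0, state.y), 0.97


def client_tick(model: PredictionModel,
                state: PlayerState,
                actual_next: PlayerState,
                send_update: Callable[[PlayerState], None]) -> PlayerState:
    """Client side: only propagate the state when the model is not confident."""
    predicted, confidence = model.predict(state)
    if confidence >= CONFIDENCE_THRESHOLD:
        # Client and server run the same deterministic physics step,
        # so no message is sent while the prediction is trusted.
        return predicted
    send_update(actual_next)
    return actual_next


def server_tick(model: PredictionModel,
                state: PlayerState,
                authoritative_next: PlayerState,
                notify_clients: Callable[[PlayerState], None]) -> PlayerState:
    """Server side: detect mispredictions and trigger reconciliation."""
    predicted, confidence = model.predict(state)
    if confidence >= CONFIDENCE_THRESHOLD and predicted != authoritative_next:
        # The confident prediction turned out wrong: clients must reconcile
        # and replay from the authoritative state.
        notify_clients(authoritative_next)
    return authoritative_next


if __name__ == "__main__":
    model = PredictionModel()
    state = PlayerState(0.0, 0.0)
    # The player actually moved as predicted, so no update is sent.
    client_tick(model, state, PlayerState(1.0, 0.0),
                send_update=lambda s: print("update sent:", s))
    # The player deviated (e.g. the jump was interrupted), so the server
    # notifies the clients to reconcile and replay.
    server_tick(model, state, PlayerState(0.5, -1.0),
                notify_clients=lambda s: print("reconcile to:", s))
```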
