I am an Assistant Professor in the College of Big Data and Internet at Shenzhen Technology University (SZTU). Previously, I was a senior researcher at Sangfor SRI Lab, Shenzhen, where I worked on optimizing complex networked systems using optimization and machine learning techniques. Before that, I was a postdoc at Shenzhen International Graduate School, Tsinghua University, advised by Prof. Shutao Xia. I received my PhD and bachelor's degrees from the School of CCST, Jilin University, where I was advised by Prof. Liang Hu. During 2017-2020, I was a visiting PhD student at Shenzhen International Graduate School, Tsinghua University, advised by Prof. Zhi Wang.
Email  /  Google Scholar  /  DBLP
I am leading the SINX (SecurIty and Network + X) Group and am looking for self-motivated students to work with me at SZTU. Please feel free to drop me an email with your CV.
Email: jiangjingyan@sztu.edu.cn
My current research focuses on edge intelligence: training and inference of deep models on the edge, and using machine learning techniques to optimize edge computing.
We tackle the dual challenge of the computation-bandwidth trade-off and cost-effectiveness by proposing A2, an efficient joint Adaptive-model and Adaptive-data deep learning serving solution across geo-distributed datacenters. Inspired by the insight that there is a trade-off between computational cost and bandwidth cost in achieving the same accuracy, we design a real-time inference serving framework that selectively places different "versions" of deep learning models at different geolocations and schedules different data sample versions to be sent to those model versions for inference. We deployed A2 on Amazon EC2; experiments show that A2 achieves a 30%-50% serving cost reduction under the same latency and accuracy requirements compared to baselines.
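The core selection step can be illustrated with a toy sketch (not the A2 implementation; all model names, costs, latencies, and accuracies below are hypothetical): pick the cheapest (model version, data version) combination that still satisfies the latency and accuracy targets.

```python
# Toy sketch of joint model-version / data-version selection.
# Each entry: (compute_cost, bandwidth_cost, latency_ms, accuracy).
# All numbers are illustrative, not measured values.
candidates = {
    ("resnet50", "full_res"):   (5.0, 4.0, 120, 0.92),
    ("resnet50", "downscaled"): (5.0, 1.0, 100, 0.88),
    ("resnet18", "full_res"):   (2.0, 4.0,  80, 0.89),
    ("resnet18", "downscaled"): (2.0, 1.0,  60, 0.83),
}

def select(latency_slo, accuracy_slo):
    """Return the cheapest feasible (model, data) pair, or None."""
    feasible = [
        (compute + bw, pair)
        for pair, (compute, bw, lat, acc) in candidates.items()
        if lat <= latency_slo and acc >= accuracy_slo
    ]
    return min(feasible)[1] if feasible else None

print(select(latency_slo=90, accuracy_slo=0.85))  # only one pair is feasible here
```

Tightening either SLO shrinks the feasible set, which is exactly where the computation-bandwidth trade-off shows up: a cheaper model may force full-resolution data (more bandwidth), and vice versa.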
We leverage DRL with a distillation module to improve learning efficiency for edge computing under partial observation. We formulate the deadline-aware offloading problem as a decentralized partially observable Markov decision process (Dec-POMDP) with distillation, called fast decentralized reinforcement distillation (Fast-DRD). Compared with naive policy distillation, Fast-DRD's two-stage distillation dramatically reduces the amount of exchanged data, cutting learning time and data interaction cost by nearly 90%. In a complex environment of heterogeneous users with partial observation, offloading models learned decentrally by Fast-DRD still maintain offloading efficiency.
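The distillation building block underneath can be sketched in a few lines (this is generic policy distillation, not the Fast-DRD two-stage protocol; the distributions and learning rate are illustrative): a student policy is nudged toward a teacher's action distribution by minimizing the KL divergence between them.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl(p, q):
    """KL(p || q) between two discrete action distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def distill_step(student_logits, teacher_probs, lr=0.5):
    """One gradient step on KL(teacher || student) w.r.t. student logits.
    With a softmax output, the gradient is (student_probs - teacher_probs)."""
    probs = softmax(student_logits)
    return [l - lr * (p - t) for l, p, t in zip(student_logits, probs, teacher_probs)]

teacher = [0.7, 0.2, 0.1]   # teacher's action distribution (illustrative)
logits = [0.0, 0.0, 0.0]    # untrained student starts uniform
for _ in range(200):
    logits = distill_step(logits, teacher)
print(kl(teacher, softmax(logits)))  # shrinks toward 0 as the student matches
```

In a decentralized setting, exchanging such compact action distributions (rather than raw trajectories or full models) is what makes distillation attractive for reducing data interaction cost.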
The initial version of BACombo was accepted by the IJCAI'19 FL workshop; it was the first attempt at segmented aggregation for FL. In the current version, we extend the worker selection scheme.
To avoid network congestion in the centralized parameter-server architecture adopted by today's FL systems, we explore a decentralized FL solution called BACombo. Based on the insight that the peer-to-peer bandwidth is much smaller than a worker's maximum network capacity, BACombo fully utilizes the available bandwidth by saturating the network with segmented gossip aggregation. Experiments show that BACombo significantly reduces training time (by up to 18×) while maintaining good convergence performance.
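The segmented gossip idea can be sketched with a toy simulation (worker counts, segment sizes, and the plain pairwise averaging below are illustrative, not the BACombo implementation): each worker splits its model into segments and pulls each segment from a different random peer, so traffic is spread across many peer-to-peer links instead of funneling through one server.

```python
import random

NUM_WORKERS, NUM_SEGMENTS, SEG_LEN = 4, 3, 2

def gossip_round(models):
    """Each worker fetches every segment from a random peer and averages."""
    new_models = []
    for me in range(NUM_WORKERS):
        model = []
        for seg in range(NUM_SEGMENTS):
            peer = random.choice([w for w in range(NUM_WORKERS) if w != me])
            lo, hi = seg * SEG_LEN, (seg + 1) * SEG_LEN
            # average my segment with the peer's copy of the same segment
            model += [(a + b) / 2
                      for a, b in zip(models[me][lo:hi], models[peer][lo:hi])]
        new_models.append(model)
    return new_models

random.seed(0)
# worker w starts with a model filled with the value w
models = [[float(w)] * (NUM_SEGMENTS * SEG_LEN) for w in range(NUM_WORKERS)]
for _ in range(20):
    models = gossip_round(models)
# after enough rounds, all workers' models agree to within a small spread
```

Because different segments travel over independent peer links in the same round, the aggregate transfer can saturate a worker's full network capacity even when any single peer-to-peer link is slow.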