Multi-Task Distributed Learning Using Vision Transformer With Random Patch Permutation

Cited 1 time in Web of Science; cited 0 times in Scopus
The widespread application of artificial intelligence in health research is currently hampered by limitations in data availability. Distributed learning methods such as federated learning (FL) and split learning (SL) have been introduced to solve this problem, as well as data management and ownership issues, each with its own strengths and weaknesses. The recently proposed federated split task-agnostic (FeSTA) learning attempts to reconcile the distinct merits of FL and SL by enabling multi-task collaboration between participants through a Vision Transformer (ViT) architecture, but it suffers from high communication overhead. To address this, we present a multi-task distributed learning method using a ViT with random patch permutation, dubbed p-FeSTA. Instead of using a CNN-based head as in FeSTA, p-FeSTA adopts a simple patch embedder with random permutation, improving multi-task learning performance without sacrificing privacy. Experimental results confirm that the proposed method significantly enhances the benefit of multi-task collaboration, communication efficiency, and privacy preservation, shedding light on practical multi-task distributed learning in the field of medical imaging.
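The client-side idea in the abstract, embedding image patches and randomly permuting their order before sharing them with the server, can be sketched as follows. This is a minimal NumPy illustration under assumed shapes and function names, not the authors' implementation; the permutation obscures the spatial arrangement of patches while preserving the set of embedded tokens.

```python
import numpy as np

def extract_patches(image, patch_size):
    """Split a square H x W image into non-overlapping patches,
    flattening each patch into a row vector."""
    H, W = image.shape
    p = patch_size
    return (image.reshape(H // p, p, W // p, p)
                 .transpose(0, 2, 1, 3)
                 .reshape(-1, p * p))

def embed_and_permute(patches, W_embed, rng):
    """Linearly embed patches, then randomly permute their order
    (the privacy-motivated step described in the abstract)."""
    tokens = patches @ W_embed          # (num_patches, embed_dim)
    perm = rng.permutation(tokens.shape[0])
    return tokens[perm], perm

# Illustrative usage with a random 32x32 "image" and 8x8 patches.
rng = np.random.default_rng(0)
image = rng.standard_normal((32, 32))
patches = extract_patches(image, 8)          # 16 patches of dim 64
W_embed = rng.standard_normal((64, 128))     # hypothetical embedding matrix
tokens, perm = embed_and_permute(patches, W_embed, rng)
```

Because transformer self-attention without positional encoding is permutation-equivariant, the server-side ViT body can process the shuffled tokens without the client revealing patch positions.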
Publisher
IEEE (Institute of Electrical and Electronics Engineers)
Issue Date
2023-07
Language
English
Article Type
Article
Citation

IEEE TRANSACTIONS ON MEDICAL IMAGING, v.42, no.7, pp.2091 - 2105

ISSN
0278-0062
DOI
10.1109/TMI.2022.3218783
URI
http://hdl.handle.net/10203/311227
Appears in Collection
AI-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
