dc.contributor.author | Damle, Sankarshan
dc.contributor.author | Rokvic, Ljubomir
dc.contributor.author | Bhamidi, Venugopal
dc.contributor.author | Padala, Manisha
dc.contributor.author | Faltings, Boi
dc.coverage.spatial | United States of America
dc.date.accessioned | 2025-07-11T08:30:50Z
dc.date.available | 2025-07-11T08:30:50Z
dc.date.issued | 2025-06
dc.identifier.citation | Damle, Sankarshan; Rokvic, Ljubomir; Bhamidi, Venugopal; Padala, Manisha and Faltings, Boi, "LoRA-FL: a low-rank adversarial attack for compromising group fairness in federated learning", OpenReview, University of Massachusetts, URL: https://openreview.net/pdf?id=cUp9yvJdPG, Jun. 2025.
dc.identifier.uri | https://openreview.net/pdf?id=cUp9yvJdPG
dc.identifier.uri | https://repository.iitgn.ac.in/handle/123456789/11620
dc.description.abstract | Federated Learning (FL) enables collaborative model training without sharing raw data, but agent distributions can induce unfair outcomes across sensitive groups. Existing fairness attacks often degrade accuracy or are blocked by robust aggregators like KRUM. We propose LoRA-FL: a stealthy adversarial attack that uses low-rank adapters to inject bias while closely mimicking benign updates. By operating in a compact parameter subspace, LoRA-FL evades standard defenses without harming accuracy. On standard fairness benchmarks (Adult, Bank, Dutch), LoRA-FL reduces fairness metrics (DP, EO) by over 40% with only 10–20% adversarial agents, revealing a critical vulnerability in FL's fairness-security landscape. Our code base is available at: https://github.com/sankarshandamle/LoRA-FL.
dc.description.statementofresponsibility | by Sankarshan Damle, Ljubomir Rokvic, Venugopal Bhamidi, Manisha Padala and Boi Faltings
dc.language.iso | en_US
dc.publisher | University of Massachusetts
dc.title | LoRA-FL: a low-rank adversarial attack for compromising group fairness in federated learning
dc.type | Article
dc.relation.journal | OpenReview
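
A minimal sketch of the low-rank adapter mechanism the abstract describes: an adversarial update perturbs a weight matrix W by the product of two thin factors B and A, so the perturbation has rank at most r and stays close in norm to a benign update. The shapes, rank, initialization scale, and numpy-based code below are illustrative assumptions, not the authors' implementation (see the linked repository for that).

import numpy as np

# Illustrative sketch only: all shapes and scales here are assumptions,
# not values from the LoRA-FL paper or repository.
rng = np.random.default_rng(0)

d, k, r = 64, 32, 4                     # layer dims; adapter rank r << min(d, k)
W = rng.standard_normal((d, k))         # benign local model weights

B = rng.standard_normal((d, r)) * 0.01  # low-rank factors with small init,
A = rng.standard_normal((r, k)) * 0.01  # keeping the update near the benign one

W_adv = W + B @ A                       # adversarial update confined to a rank-r subspace

delta = W_adv - W
print("perturbation rank:", np.linalg.matrix_rank(delta))           # at most r
print("relative norm:", np.linalg.norm(delta) / np.linalg.norm(W))  # small vs. W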