Towards practical model fairness for trustworthy and safe AI

As artificial intelligence (AI) has an increasing societal impact, developing fair AI becomes important to avoid adopting or even amplifying social biases and discrimination. Although many techniques for training fair models have been proposed, most face significant limitations that make them challenging to apply in practice. To address these challenges, this thesis provides fundamental solutions for 1) lowering the technical barriers of fair AI development and 2) achieving high fairness even when the training and test data contain errors or change over time, and thus lays the foundation for practical and trustworthy AI. Furthermore, we aim to extend our techniques to mitigate ethical concerns associated with foundation models, which have recently been adopted in many applications at an explosive pace, and to suggest new opportunities for making foundation models fair and safe to use.
Advisors
황의종
Description
Korea Advanced Institute of Science and Technology (KAIST), School of Electrical Engineering
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2024
Identifier
325007
Language
eng
Description

Doctoral dissertation - Korea Advanced Institute of Science and Technology, School of Electrical Engineering, 2024.2, [xi, 183 p.]

Keywords

Trustworthy AI; AI Safety; Model Fairness; Model Robustness

URI
http://hdl.handle.net/10203/322188
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1100094&flag=dissertation
Appears in Collection
EE-Theses_Ph.D.(박사논문)
Files in This Item
There are no files associated with this item.
