My research focuses on designing trustworthy AI/ML systems, from theory to implementation, with practical applications in computer security, computer vision, natural language processing, and cyber-physical systems, including robotics.
In particular, I am currently interested in the following questions, but I am very open to broader topics.
How can we mitigate the hallucination problem of LLMs?
LLMs confidently generate wrong information, which undermines trust in LLMs as a knowledge base. How can we mitigate this? One way could be to leverage conformal prediction to measure uncertainty as a basis for trust (e.g., [arXiv23]); a minimal sketch appears after the related work below. What other possibilities are there?
Keywords: uncertainty quantification, conformal prediction, LLMs
Related Work: ICLR20, AISTATS20, ICLR21, ICLR22, arXiv22, NeurIPS22, Security23, arXiv23
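As a rough illustration of the conformal prediction idea mentioned above, here is a minimal split conformal sketch. It assumes some scoring model that assigns a probability to each candidate answer of a query; the function names (calibrate, prediction_set) and the numbers are illustrative, not an existing implementation.

import numpy as np

# Minimal split conformal prediction sketch (illustrative names; assumes a model
# that assigns a probability to each candidate answer of a query).
def calibrate(cal_probs_true, alpha=0.1):
    # Nonconformity score = 1 - probability assigned to the true answer.
    scores = 1.0 - np.asarray(cal_probs_true, dtype=float)
    n = len(scores)
    # Finite-sample-corrected quantile level for marginal coverage >= 1 - alpha.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, q_level, method="higher")

def prediction_set(candidate_probs, threshold):
    # Keep every candidate whose nonconformity score is within the threshold;
    # a large or empty set signals high uncertainty, i.e., low trust.
    scores = 1.0 - np.asarray(candidate_probs, dtype=float)
    return np.flatnonzero(scores <= threshold)

tau = calibrate([0.9, 0.8, 0.95, 0.7, 0.85], alpha=0.1)
print(prediction_set([0.6, 0.3, 0.05, 0.05], tau))

Under exchangeability of calibration and test data, the returned set contains the true answer with probability at least 1 - alpha; the hallucination question above is partly about what to do when this set is large or empty, or when exchangeability fails.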
Can we discover and unlearn security and privacy issues in LLMs?
The power of LLMs and their use in daily life raise concerns about security and privacy (e.g., vulnerable code generation and privacy leakage). How severe are these issues? And how can we unlearn them in LLMs? One common unlearning baseline is sketched below.
Keywords: LLMs, prompt tuning, machine unlearning
Related Work: CVPR23
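For concreteness, one widely used unlearning baseline is gradient ascent on a "forget" set, regularized by ordinary fine-tuning on a "retain" set. The sketch below assumes a HuggingFace-style causal LM whose forward pass returns a loss; the function and variable names are hypothetical, and this is only a starting point rather than a committed method.

import torch

# One common unlearning baseline (illustrative): ascend the loss on forget data
# (e.g., prompts eliciting private or vulnerable outputs) while keeping the loss
# low on retain data. Assumes a HuggingFace-style model whose forward pass
# returns an object with a .loss attribute.
def unlearn_step(model, forget_batch, retain_batch, optimizer, lam=1.0):
    optimizer.zero_grad()
    forget_loss = model(**forget_batch).loss
    retain_loss = model(**retain_batch).loss
    # Negative sign on the forget loss = gradient ascent on the forget set.
    (-forget_loss + lam * retain_loss).backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
    optimizer.step()
    return forget_loss.item(), retain_loss.item()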
Can we leverage trustworthy LLMs to mitigate system security issues?
Finding vulnerabilities in source or binary code is a long-standing and never-ending problem. Recent advances in LLMs may provide clues for pushing the performance of current code vulnerability discovery further. Can we leverage trustworthy LLMs for vulnerability discovery?
Keywords: LLMs, vulnerability analysis
Related Work: arXiv22
Can we rigorously learn and quantify the uncertainty of AI models, e.g., Large Language Models (LLMs), price predictors, or drones, under distribution shift and adversarial manipulation?
Quantified uncertainty in an AI model's predictions provides a basis for trusting those predictions. To rigorously quantify uncertainty, we have mainly leveraged learning theory and conformal prediction; a sketch of a shift-aware conformal variant follows the related work below.
Keywords: uncertainty quantification, learning theory, distribution shift, adversarial learning, conformal prediction, secure conformal prediction for security, LLMs
Related Work: ICLR20, AISTATS20, ICLR21, ICLR22, arXiv22, NeurIPS22, Security23, arXiv23
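To illustrate what "under distribution shift" can mean concretely, the sketch below adapts the split conformal threshold with likelihood-ratio weights, in the spirit of weighted conformal prediction for covariate shift. The density-ratio weights are assumed to come from some separate estimator, and all names and numbers are illustrative.

import numpy as np

# Weighted split conformal threshold under covariate shift (illustrative sketch).
# cal_weights[i] and test_weight approximate p_test(x) / p_cal(x), e.g., from a
# separately trained density-ratio estimator (assumed given).
def weighted_threshold(cal_scores, cal_weights, test_weight, alpha=0.1):
    scores = np.append(np.asarray(cal_scores, dtype=float), np.inf)
    weights = np.append(np.asarray(cal_weights, dtype=float), float(test_weight))
    weights = weights / weights.sum()
    order = np.argsort(scores)
    cum = np.cumsum(weights[order])
    # Smallest score whose cumulative weight reaches 1 - alpha (weighted quantile).
    idx = np.searchsorted(cum, 1.0 - alpha)
    return scores[order][idx]

tau = weighted_threshold(cal_scores=[0.1, 0.3, 0.2, 0.5],
                         cal_weights=[1.0, 0.8, 1.2, 0.9],
                         test_weight=1.1, alpha=0.1)
print(tau)  # candidates with nonconformity score <= tau form the prediction set

When all weights are equal, this reduces to the standard unweighted threshold; handling adversarial manipulation rigorously requires stronger tools than this sketch.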