<ahref="/projects/project1/" style="font-size: 1.5em; font-weight: bold; color: #b44695;">Trustworthy and Responsible AI
</a>
<br>
<p>Trustworthy and Responsible AI is about building AI systems that operate reliably and ethically, in line with societal values and expectations. Ensuring that these systems behave as expected under varied conditions is essential to minimize the risk of unintended outcomes. Equally important is controlling how AI models are used: they should adhere to clear guidelines, stay within their intended purposes, and not be misapplied in ways that raise ethical or legal concerns. By integrating these principles, AI systems can be developed and deployed in a manner that is both reliable and responsible.
<p>Privacy-preserving machine learning addresses the critical need to protect sensitive data in machine learning applications. As models are increasingly deployed in sensitive areas like healthcare and finance, threats such as membership inference, where an attacker can determine if specific data was used in training, and gradient inversion attacks, which can reconstruct input data from model gradients, pose serious risks. Additionally, model extraction attacks can replicate a model's functionality, compromising both data privacy and intellectual property. Privacy-preserving techniques aim to mitigate these risks, ensuring that the benefits of machine learning are realized without sacrificing privacy.
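<p>As a concrete illustration of the membership inference threat mentioned above, the sketch below mounts a simple loss-threshold attack against a deliberately overfit classifier. The synthetic dataset, target model, and threshold rule are illustrative assumptions for this sketch, not a method from any specific TrustLab project.

```python
# Loss-threshold membership inference sketch: members of the training set
# tend to have lower loss under an overfit model than unseen points do.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_member, y_member = X[:1000], y[:1000]    # training data ("members")
X_outside, y_outside = X[1000:], y[1000:]  # held-out data ("non-members")

# Deliberately overfit target model: unpruned trees memorize the training set.
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_member, y_member)

def per_example_loss(model, X, y):
    """Cross-entropy loss of each example under the target model."""
    probs = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(probs, 1e-12, None))

loss_member = per_example_loss(model, X_member, y_member)
loss_outside = per_example_loss(model, X_outside, y_outside)

# Attack: guess "member" whenever the loss falls below a threshold
# calibrated on outsider losses; memorized points leak their membership.
threshold = np.median(loss_outside)
tpr = (loss_member < threshold).mean()   # members correctly flagged
fpr = (loss_outside < threshold).mean()  # non-members wrongly flagged
print(f"attack TPR={tpr:.2f}, FPR={fpr:.2f} (0.50/0.50 would be chance)")
```

<p>Privacy-preserving techniques such as differentially private training aim to shrink exactly this gap between member and non-member loss.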
<ahref="/projects/project4/" style="font-size: 1.5em; font-weight: bold; color: #4682B4;">AI for Software Engineering</a>
<br>
<p>TrustLab has conducted in-depth, systematic research primarily on Web-based collaboration platforms, Deep Learning libraries, and emerging third-party applications integrated with LLMs. Our rigorous defect testing focuses on how these systems behave in complex scenarios such as permission invocation, memory consumption, numerical computation, data transmission, and secure API calls. This research not only uncovers high-risk vulnerabilities hidden within these systems but also yields concrete recommendations for improvement, giving developers and engineers practical guidance for optimizing system design and strengthening software security and robustness.
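<p>To make the defect testing above concrete, here is a minimal differential-testing sketch: it feeds randomized inputs to two implementations of the same operator and flags numerical divergence, the kind of computational-error scenario such testing probes. The softmax operator, input scales, and tolerance are illustrative assumptions, not TrustLab's actual test harness.

```python
# Differential testing sketch: compare a naive and a numerically stable
# implementation of softmax on random inputs of increasing magnitude.
import numpy as np

def softmax_naive(x):
    """Textbook softmax; exp() overflows for large inputs."""
    e = np.exp(x)
    return e / e.sum()

def softmax_stable(x):
    """Stable softmax: shifting by the max before exp() avoids overflow."""
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
for trial in range(1000):
    # Sample inputs across several magnitudes to probe edge cases.
    scale = rng.choice([1.0, 1e2, 1e3])
    x = rng.normal(scale=scale, size=8)
    out_a, out_b = softmax_naive(x), softmax_stable(x)
    if np.isnan(out_a).any() or not np.allclose(out_a, out_b, atol=1e-6):
        print(f"trial {trial}: divergence at scale {scale:.0e} "
              f"(naive produced NaN: {np.isnan(out_a).any()})")
        break
```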