TrustLab's primary objective is to develop theories, techniques, and systems that can ensure the trustworthiness of complex software and intelligence systems.
It strives to enhance security, robustness and reliability of software and intelligence systems, and make them more trustworthy for the individuals and organizations that rely on them.</p>
<ahref="/projects/project1/" style="font-size: 1.5em; font-weight: bold; color: #b44695;">Trustworthy and Responsible AI
35
+
<ahref="/projects/project1/" style="font-size: 1.5em; font-weight: bold; color: #b44695;">Trustworthy and Responsible ML
</a>
<br>
<p>Trustworthy and Responsible ML is about building ML systems that operate reliably and ethically, in line with societal values and expectations. Ensuring that these systems behave as expected under varied conditions is essential to minimizing the risk of unintended outcomes. Equally important is controlling how ML models are used: they should adhere to clear guidelines, stay within their intended purposes, and not be applied in ways that raise ethical or legal concerns. By integrating these principles, ML systems can be developed and deployed in a manner that is both reliable and responsible.
<ahref="/projects/project4/" style="font-size: 1.5em; font-weight: bold; color: #4682B4;">AI for Software Engineering</a>
57
+
<ahref="/projects/project4/" style="font-size: 1.5em; font-weight: bold; color: #4682B4;">ML for Software Engineering</a>
<br>
<p>TrustLab has conducted in-depth, systematic research primarily on web-based collaboration platforms, deep learning libraries, and emerging third-party applications integrated with LLMs. Our defect testing has focused in particular on how these systems behave in complex scenarios involving permission invocation, memory consumption, computational errors, data transmission, and secure API calls. This research not only uncovers high-risk vulnerabilities hidden within these systems but also yields concrete recommendations for improvement, giving developers and engineers practical guidance for optimizing system design and strengthening software security and robustness.
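To make the kind of defect testing described above concrete, here is a minimal sketch of one possible oracle for catching computational errors in a deep learning library: it runs the same operation in float32 and float64 and flags results whose relative deviation exceeds a tolerance. The choice of PyTorch, the helper name `differential_check`, the example op, and the tolerance are illustrative assumptions, not TrustLab's actual test harness.

```python
# Minimal sketch of a precision-differential oracle for computational errors
# in a deep learning library (PyTorch is used here purely as an illustration).
# The idea: run the same op at low and high precision and flag trials whose
# relative deviation exceeds a tolerance.
import numpy as np
import torch

def differential_check(op, shape=(64, 64), trials=10, rtol=1e-3, seed=0):
    """Compare float32 results of `op` against a float64 reference."""
    rng = np.random.default_rng(seed)
    failures = []
    for i in range(trials):
        x64 = rng.standard_normal(shape)          # high-precision input
        x32 = x64.astype(np.float32)              # low-precision copy
        ref = op(torch.from_numpy(x64)).numpy()   # float64 reference result
        out = op(torch.from_numpy(x32)).numpy()   # float32 result under test
        err = np.max(np.abs(out - ref) / (np.abs(ref) + 1e-12))
        if err > rtol:
            failures.append((i, float(err)))      # record suspicious trial
    return failures

if __name__ == "__main__":
    # Example: probe a numerically sensitive composite op (softmax of exp).
    suspicious = differential_check(lambda t: torch.softmax(torch.exp(t), dim=-1))
    print("trials exceeding tolerance:", suspicious)
```

In practice, a testing campaign would combine such numerical oracles with checks for memory consumption, permission use, and API misuse, but this sketch shows the basic differential pattern.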