
Commit 0b576a2

project overview update
1 parent 42a0bf2 commit 0b576a2

File tree

2 files changed (+11, -7 lines)


static/covers/trustlab.svg

Lines changed: 1 addition & 1 deletion

templates/projects.html

Lines changed: 10 additions & 6 deletions
@@ -4,9 +4,13 @@
 
 
 <section class="section container">
-<h1 class="title is-3" style="text-align: center; font-size: 2.5em;">TrustLab Projects</h1>
-<hr>
-<img src="https://raw.githubusercontent.com/UQ-Trust-Lab/UQ-Trust-Lab.github.io/master/static/covers/trustlab.svg" alt="" width="80%">
+<!-- <h1 class="title is-3" style="text-align: left; font-size: 2.5em;">TrustLab Projects (SEC4AI, AI4SEC)</h1> -->
+<h1 class="title is-3" style="text-align: left; font-size: 2.5em;">
+TrustLab Projects (<span style="color: #b44695;">Security4AI</span>, <span style="color: #4682B4;">AI4Security</span>)
+</h1>
+<div style="text-align: center; margin-top: -3em; margin-bottom: -3em;">
+<img src="https://raw.githubusercontent.com/UQ-Trust-Lab/UQ-Trust-Lab.github.io/master/static/covers/trustlab.svg" alt="" width="100%">
+</div>
 <br><br>
 <!-- <p style="background-color: #E6E6FA; padding: 10px;">
 Hi ALL, <br>
@@ -27,14 +31,14 @@ <h1 class="title is-3" style="text-align: center; font-size: 2.5em;">TrustLab Pr
 <br>
 Zihan
 </p> -->
-<a href="/projects/project1/" style="font-size: 1.5em; font-weight: bold; color: #4682B4;">Trustworthy and Responsible AI
+<a href="/projects/project1/" style="font-size: 1.5em; font-weight: bold; color: #b44695;">Trustworthy and Responsible AI
 </a>
 <br>
 <p>Trustworthy and Responsible AI is about building AI systems that operate reliably and ethically, aligning with societal values and expectations. Ensuring that these systems behave as expected under various conditions is essential to minimize the risk of unintended outcomes. Equally important is controlling how AI models are used, ensuring they adhere to specific guidelines and are not misapplied. Moreover, it's crucial to maintain a clear focus on the intended purposes of these models, preventing their use in ways that could lead to ethical or legal concerns. By integrating these principles, AI systems can be developed and deployed in a manner that is both reliable and responsible.
 </p>
 <br>
 
-<a href="/projects/project2/" style="font-size: 1.5em; font-weight: bold; color: #4682B4;">Privacy-preserving Machine Learning</a>
+<a href="/projects/project2/" style="font-size: 1.5em; font-weight: bold; color: #b44695;">Privacy-preserving Machine Learning</a>
 <br>
 <p>Privacy-preserving machine learning addresses the critical need to protect sensitive data in machine learning applications. As models are increasingly deployed in sensitive areas like healthcare and finance, threats such as membership inference, where an attacker can determine if specific data was used in training, and gradient inversion attacks, which can reconstruct input data from model gradients, pose serious risks. Additionally, model extraction attacks can replicate a model's functionality, compromising both data privacy and intellectual property. Privacy-preserving techniques aim to mitigate these risks, ensuring that the benefits of machine learning are realized without sacrificing privacy.
 </p>
@@ -51,7 +55,7 @@ <h1 class="title is-3" style="text-align: center; font-size: 2.5em;">TrustLab Pr
 
 <a href="/projects/project4/" style="font-size: 1.5em; font-weight: bold; color: #4682B4;">AI for Software Engineering</a>
 <br>
-<p>Defect testing has always been a critical and highly focused topic in the field of software engineering. TrustLab has conducted in-depth and systematic research primarily on Web-based collaboration platforms, Deep Learning libraries, and the emerging third-party applications integrated with LLMs. Our rigorous defect testing has particularly focused on these platforms' performance in complex scenarios such as permission invocation, memory consumption, computational errors, data transmission, and secure API calls. Our research not only uncovers high-risk vulnerabilities hidden within these systems but also provides specific recommendations for improvement. These insights serve as valuable guidance for developers and engineers, helping them optimize system design and enhance software security and robustness.
+<p>TrustLab has conducted in-depth and systematic research primarily on Web-based collaboration platforms, Deep Learning libraries, and the emerging third-party applications integrated with LLMs. Our rigorous defect testing has particularly focused on these platforms' performance in complex scenarios such as permission invocation, memory consumption, computational errors, data transmission, and secure API calls. Our research not only uncovers high-risk vulnerabilities hidden within these systems but also provides specific recommendations for improvement. These insights serve as valuable guidance for developers and engineers, helping them optimize system design and enhance software security and robustness.
 </p>
 <br>
 </section>
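For context on the layout change in the first hunk: the commit wraps the banner image in a `div` with negative vertical margins, which pulls the heading and the following content closer to the SVG to compensate for whitespace inside the image itself. A minimal standalone sketch of the technique (the `-3em` values and `width="100%"` mirror the committed markup; the file path and `alt` text here are illustrative):

```html
<!-- Negative vertical margins collapse the empty space around a wide banner image.
     They shift layout only; the image is not cropped, so if the SVG's internal
     padding changes, these values need re-tuning. -->
<div style="text-align: center; margin-top: -3em; margin-bottom: -3em;">
  <img src="/static/covers/trustlab.svg" alt="TrustLab banner" width="100%">
</div>
```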
