Commit 1c08e99 ("Update readme")
1 parent db16da9

File tree: 4 files changed (+27, -22 lines)

README.md (12 additions, 12 deletions)

```diff
@@ -24,17 +24,17 @@ lk app create --template agent-starter-swift --sandbox <token_server_sandbox_id>
 Then, build and run the app from Xcode by opening `VoiceAgent.xcodeproj`. You may need to adjust your app signing settings to run the app on your device.
 
 > [!NOTE]
-> To setup without the LiveKit CLI, clone the repository and then either create a `VoiceAgent/.env.xcconfig` with a `LIVEKIT_SANDBOX_ID` (if using a [Sandbox Token Server](https://cloud.livekit.io/projects/p_/sandbox/templates/token-server)), or open `TokenService.swift` and add your [manually generated](#token-generation) URL and token.
+> To setup without the LiveKit CLI, clone the repository and then either create a `VoiceAgent/.env.xcconfig` with a `LIVEKIT_SANDBOX_ID` (if using a [Sandbox Token Server](https://cloud.livekit.io/projects/p_/sandbox/templates/token-server)), or modify `VoiceAgent/VoiceAgentApp.swift` to replace the `SandboxTokenSource` with a custom token source implementation.
 
 ## Feature overview
 
 This starter app has support for a number of features of the agents framework, and is configurable to easily enable or disable them in code based on your needs as you adapt this template to your own use case.
 
 ### Text, video, and voice input
 
-This app supports text, video, and/or voice input according to the needs of your agent. To update the features enabled in the app, edit `VoiceAgent/VoiceAgentApp.swift` and update `AgentFeatures.current` to include or exclude the features you need.
+This app supports text, video, and/or voice input according to the needs of your agent. To update the features enabled in the app, edit `VoiceAgent/VoiceAgentApp.swift` and update `Features` to include or exclude the features you need.
 
-By default, only voice and text input are enabled.
+By default, all features (voice, video, and text input) are enabled.
 
 Available input types:
 - `.voice`: Allows the user to speak to the agent using their microphone. **Requires microphone permissions.**
@@ -43,23 +43,23 @@ Available input types:
 
 If you have trouble with screensharing, refer to [the docs](https://docs.livekit.io/home/client/tracks/screenshare/) for more setup instructions.
 
-### Preconnect audio buffer
+### Session
+
+The app is built on top of two main observable components from the [LiveKit Swift SDK](https://github.com/livekit/client-sdk-swift):
+- `Session` object to connect to the LiveKit infrastructure, interact with the `Agent`, its local state, and send/receive text messages.
+- `LocalMedia` object to manage the local media tracks (audio, video, screen sharing) and their lifecycle.
 
-This app uses `withPreConnectAudio` to capture and buffer audio before the room connection completes. This allows the connection to appear "instant" from the user's perspective and makes your app more responsive. To disable this feature, remove the call to `withPreConnectAudio` as below:
+### Preconnect audio buffer
 
-- Location: `VoiceAgent/App/AppViewModel.swift`, `connectWithVoice()`
-- To disable preconnect buffering but keep voice:
-  - Replace the `withPreConnectAudio { ... }` block with a standard `room.connect` call and enable the microphone after connect, for example:
-    - Connect with `connectOptions: .init(enableMicrophone: true)` without wrapping in `withPreConnectAudio`, or
-    - Connect with microphone disabled and call `room.localParticipant.setMicrophone(enabled: true)` after connection.
+This app enables `preConnectAudio` by default to capture and buffer audio before the room connection completes. This allows the connection to appear "instant" from the user's perspective and makes your app more responsive. To disable this feature, set `preConnectAudio` to `false` in `SessionOptions` when creating the `Session`.
 
 ### Virtual avatar support
 
-If your agent publishes a [virtual avatar](https://docs.livekit.io/agents/integrations/avatar/), this app will automatically render the avatars camera feed in `AgentParticipantView` when available.
+If your agent publishes a [virtual avatar](https://docs.livekit.io/agents/integrations/avatar/), this app will automatically render the avatar's camera feed in `AgentView` when available.
 
 ## Token generation in production
 
-In a production environment, you will be responsible for developing a solution to [generate tokens for your users](https://docs.livekit.io/home/server/generating-tokens/) which is integrated with your authentication solution. You should disable your sandbox token server and modify `TokenService.swift` to use your own token server.
+In a production environment, you will be responsible for developing a solution to [generate tokens for your users](https://docs.livekit.io/home/server/generating-tokens/) which is integrated with your authentication solution. You should replace your `SandboxTokenSource` with an `EndpointTokenSource` or your own `TokenSourceConfigurable` implementation. Additionally, you can use the `.cached()` extension to cache valid tokens and avoid unnecessary token requests.
 
 ## Running on Simulator
```
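Taken together, the updated README's session setup can be sketched as follows. This is a sketch only: `Session`, `SandboxTokenSource`, `.cached()`, and `SessionOptions` all appear in this commit, but the `preConnectAudio:` parameter label is inferred from the README wording and should be verified against the LiveKit Swift SDK.

```swift
import LiveKit

// Sketch: a Session with preconnect audio buffering turned off.
// `preConnectAudio:` is inferred from the README text, not verified API.
let session = Session(
    tokenSource: SandboxTokenSource(id: "<your_sandbox_id>").cached(),
    options: SessionOptions(preConnectAudio: false)
)
```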

VoiceAgent/Chat/ChatView.swift (3 additions, 0 deletions)

```diff
@@ -2,9 +2,12 @@ import LiveKitComponents
 import SwiftUI
 
 struct ChatView: View {
+    @EnvironmentObject private var session: Session
+
     var body: some View {
         ChatScrollView(messageBuilder: message)
             .padding(.horizontal)
+            .animation(.default, value: session.messages)
     }
 
     @ViewBuilder
```
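The `.animation(_:value:)` modifier added above is standard SwiftUI (iOS 15+): it animates any layout change that occurs when the observed `Equatable` value changes, so new chat messages slide in rather than popping. A minimal self-contained illustration of the same pattern, using plain SwiftUI types instead of LiveKit's `Session`:

```swift
import SwiftUI

// Each appended string animates into place because the container's
// layout changes are keyed to the `items` array via .animation(_:value:).
struct MessageList: View {
    @State private var items: [String] = []

    var body: some View {
        VStack {
            ForEach(items, id: \.self) { Text($0) }
            Button("Add") { items.append("Message \(items.count + 1)") }
        }
        .animation(.default, value: items)
    }
}
```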

VoiceAgent/ControlBar/ControlBar.swift (3 additions, 3 deletions)

```diff
@@ -18,17 +18,17 @@ struct ControlBar: View {
     var body: some View {
         HStack(spacing: .zero) {
             biggerSpacer()
-            if AppFeatures.voice {
+            if VoiceAgentApp.Features.voice {
                 audioControls()
                 flexibleSpacer()
             }
-            if AppFeatures.video {
+            if VoiceAgentApp.Features.video {
                 videoControls()
                 flexibleSpacer()
                 screenShareButton()
                 flexibleSpacer()
             }
-            if AppFeatures.text {
+            if VoiceAgentApp.Features.text {
                 textInputButton()
                 flexibleSpacer()
             }
```
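The `VoiceAgentApp.Features` flags checked above are plain static booleans, so adapting the template to, say, a voice-and-text-only agent is a one-line change (flag names taken from this commit):

```swift
// In VoiceAgent/VoiceAgentApp.swift: with `video` set to false, ControlBar
// skips videoControls() and screenShareButton() entirely.
enum Features {
    static let voice = true
    static let video = false
    static let text = true
}
```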

VoiceAgent/VoiceAgentApp.swift (9 additions, 7 deletions)

```diff
@@ -1,21 +1,23 @@
 import LiveKit
 import SwiftUI
 
-enum AppFeatures {
-    static let voice = true
-    static let video = true
-    static let text = true
-}
-
 @main
 struct VoiceAgentApp: App {
+    /// Enable or disable input modes in the app based on the supported agent features.
+    enum Features {
+        static let voice = true
+        static let video = true
+        static let text = true
+    }
+
     // To use the LiveKit Cloud sandbox (development only)
     // - Enable your sandbox here https://cloud.livekit.io/projects/p_/sandbox/templates/token-server
     // - Create .env.xcconfig with your LIVEKIT_SANDBOX_ID
     private static let sandboxID = Bundle.main.object(forInfoDictionaryKey: "LiveKitSandboxId") as! String
 
+    /// For production use, replace the `SandboxTokenSource` with an `EndpointTokenSource` or your own `TokenSourceConfigurable` implementation.
     private let session = Session(
-        tokenSource: SandboxTokenSource(id: Self.sandboxID),
+        tokenSource: SandboxTokenSource(id: Self.sandboxID).cached(),
         options: SessionOptions(room: Room(roomOptions: RoomOptions(defaultScreenShareCaptureOptions: ScreenShareCaptureOptions(useBroadcastExtension: true))))
     )
```
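For the production path the new doc comment describes, the sandbox source would be swapped out where the `Session` is created. The following is a hypothetical sketch: `EndpointTokenSource` and `.cached()` are named in this commit, but the initializer arguments and the URL shown here are assumptions, not verified SDK API.

```swift
// Hypothetical: replace SandboxTokenSource with a production endpoint source.
// Verify EndpointTokenSource's actual initializer in the LiveKit Swift SDK;
// https://example.com/token is a placeholder for your own token endpoint.
private let session = Session(
    tokenSource: EndpointTokenSource(url: URL(string: "https://example.com/token")!).cached(),
    options: SessionOptions()
)
```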
