
Acknowledgements & Sound

The IAiConversationService provides four acknowledgement modes that control how the AI delivers responses — from silent text-only to full text-to-speech.

```csharp
aiService.Acknowledgement = AiAcknowledgement.Full;
```
| Mode | Behavior |
| --- | --- |
| `None` | No audio feedback. Responses are text-only via the `AiResponded` event. |
| `AudioBlip` | Short sound effects play at state transitions (thinking, responding, ok, cancel, error). No text-to-speech. |
| `LessWordy` | Text-to-speech reads the response aloud. A system prompt is added requesting concise responses. |
| `Full` | Text-to-speech reads the full, unmodified response aloud. |
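To illustrate the `LessWordy` behavior described above, here is a minimal sketch of how a brevity instruction could be prepended to a system prompt. `PromptHelper` and `BuildSystemPrompt` are hypothetical names for illustration only; the library applies its own prompt internally.

```csharp
using System;

// Hypothetical sketch (not the library's implementation): LessWordy
// prepends a brevity instruction, while the other modes leave the
// system prompt untouched.
Console.WriteLine(PromptHelper.BuildSystemPrompt(
    AiAcknowledgement.LessWordy, "You are a helpful assistant."));

enum AiAcknowledgement { None, AudioBlip, LessWordy, Full }

static class PromptHelper
{
    public static string BuildSystemPrompt(AiAcknowledgement mode, string basePrompt) =>
        mode == AiAcknowledgement.LessWordy
            ? "Keep your answers short and conversational. " + basePrompt
            : basePrompt;
}
```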

When Acknowledgement is set to AudioBlip, the service plays sounds at each state transition. Configure sounds with string file names and a resolver callback:

```csharp
// Set the resolver that converts file names to streams
aiService.SoundResolver = name => FileSystem.OpenAppPackageFileAsync(name);

// Assign sound file names
aiService.OkSound = "ok.mp3";
aiService.ThinkSound = "think.mp3";
aiService.RespondingSound = "responding.mp3";
aiService.CancelSound = "cancel.mp3";
aiService.ErrorSound = "error.mp3";
```
| Sound | When it plays |
| --- | --- |
| `ThinkSound` | AI begins processing (state → Thinking) |
| `RespondingSound` | AI starts streaming a response (state → Responding) |
| `OkSound` | Request completes successfully |
| `CancelSound` | Operation is cancelled |
| `ErrorSound` | An error occurs during processing |
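The table above amounts to a lookup from transition to configured file name. Here is a minimal sketch of that mapping, using a hypothetical `BlipEvent` enum and the default file names from the example; the real service keys off its internal state machine rather than exposing such a table.

```csharp
using System;

// Illustrative sketch only: which sound property fires for each
// AudioBlip transition. BlipEvent is an assumed name, not a library type.
Console.WriteLine(SoundTable.FileFor(BlipEvent.Think));

enum BlipEvent { Think, Responding, Ok, Cancel, Error }

static class SoundTable
{
    public static string FileFor(BlipEvent evt) => evt switch
    {
        BlipEvent.Think      => "think.mp3",      // ThinkSound
        BlipEvent.Responding => "responding.mp3", // RespondingSound
        BlipEvent.Ok         => "ok.mp3",         // OkSound
        BlipEvent.Cancel     => "cancel.mp3",     // CancelSound
        _                    => "error.mp3",      // ErrorSound
    };
}
```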

The SoundResolver property is a Func<string, Task<Stream>>? that the service calls with the sound file name to get a playable audio stream. This keeps the library platform-agnostic — you provide the resolver appropriate for your platform:

MAUI:

```csharp
aiService.SoundResolver = name => FileSystem.OpenAppPackageFileAsync(name);
```

Blazor / ASP.NET:

```csharp
var env = app.Services.GetRequiredService<IWebHostEnvironment>();
aiService.SoundResolver = name =>
    Task.FromResult<Stream>(File.OpenRead(Path.Combine(env.WebRootPath, "sounds", name)));
```
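If blips fire often, you may want to avoid re-reading the file on every transition. Below is a hedged sketch of a caching wrapper — `CachedResolver` is an illustrative helper, not part of the library — that reads each file once through the inner resolver and then serves fresh read-only `MemoryStream`s from a byte cache:

```csharp
using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;

// Demo: the inner resolver is invoked only on the first request for a
// given file name; later requests are served from the cache.
var calls = 0;
Func<string, Task<Stream>> inner = _ =>
{
    calls++;
    return Task.FromResult<Stream>(new MemoryStream(new byte[] { 1, 2, 3 }));
};
var resolver = CachedResolver.Wrap(inner);
await resolver("ok.mp3");
await resolver("ok.mp3");
Console.WriteLine($"Underlying reads: {calls}"); // second call hits the cache

static class CachedResolver
{
    static readonly ConcurrentDictionary<string, byte[]> Cache = new();

    public static Func<string, Task<Stream>> Wrap(Func<string, Task<Stream>> inner) =>
        async name =>
        {
            if (!Cache.TryGetValue(name, out var bytes))
            {
                // First request: drain the inner stream into a buffer once.
                await using var source = await inner(name);
                using var buffer = new MemoryStream();
                await source.CopyToAsync(buffer);
                bytes = Cache.GetOrAdd(name, buffer.ToArray());
            }
            // Hand back a fresh read-only stream over the cached bytes.
            return new MemoryStream(bytes, writable: false);
        };
}
```

You could wrap either platform resolver the same way, e.g. `aiService.SoundResolver = CachedResolver.Wrap(name => FileSystem.OpenAppPackageFileAsync(name));`.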

The service exposes its current state and fires events you can observe:

```csharp
// Current state
var state = aiService.Status; // Idle, Listening, Thinking, Responding

// State change event
aiService.StateChanged += () =>
{
    Console.WriteLine($"State: {aiService.Status}");
};

// AI response event
aiService.AiResponded += response =>
{
    Console.WriteLine($"AI said: {response.Message}");
    Console.WriteLine($"At: {response.Timestamp}");
    Console.WriteLine($"Was read aloud: {response.WasReadAloud}");
};
```

The AiResponse record includes WasReadAloud so your UI can decide whether to display a visual notification for responses that were already spoken.
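As a sketch of that decision, assuming a record with the fields shown above (`ShouldShowNotification` is a hypothetical helper, not a library API):

```csharp
using System;

// Sketch: skip the visual notification when the response was already
// spoken aloud. The AiResponse shape mirrors the fields used above.
var spoken = new AiResponse("Done!", DateTimeOffset.Now, WasReadAloud: true);
var silent = new AiResponse("Done!", DateTimeOffset.Now, WasReadAloud: false);
Console.WriteLine(ResponseUi.ShouldShowNotification(spoken)); // False
Console.WriteLine(ResponseUi.ShouldShowNotification(silent)); // True

record AiResponse(string Message, DateTimeOffset Timestamp, bool WasReadAloud);

static class ResponseUi
{
    // Show a toast only when the user has not already heard the response.
    public static bool ShouldShowNotification(AiResponse response) => !response.WasReadAloud;
}
```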