Introducing Shiny.AiConversation — AI Conversations for Your Apps

NuGet package Shiny.AiConversation

Building an AI-powered app today means stitching together a chat client, speech recognition, text-to-speech, audio playback, message persistence, and state management — across platforms, with proper lifecycle handling. That’s a lot of plumbing before you write your first prompt.

Shiny.AiConversation wraps all of that into a single IAiConversationService interface. Text chat, voice chat, hands-free wake word activation, configurable audio feedback, and persistent chat history — registered with one DI call, consumed through one service.


Every AI chat app ends up building the same infrastructure:

  • An authenticated chat client that handles token refresh
  • Speech-to-text so users can talk instead of type
  • Text-to-speech so the AI can respond out loud
  • Sound effects for state transitions (thinking, responding, error)
  • A wake word listener for hands-free mode
  • Message persistence for chat history
  • State management so the UI knows what’s happening
  • Thread safety so nothing blows up

Each of these is a separate library, a separate abstraction, and a separate set of platform quirks. You spend weeks on infrastructure before you ship a single feature.

// Register your chat client in DI
builder.Services.AddChatClient(new OpenAIClient("your-api-key").GetChatClient("gpt-4o").AsIChatClient());
builder.Services.AddShinyAiConversation(opts =>
{
    opts.SetMessageStore<MyMessageStore>(); // optional
});

That’s it. The service registers IAiConversationService with all the wiring — speech services from Shiny.Speech, chat completions from Microsoft.Extensions.AI, audio playback, time provider, and optional message persistence. The default IChatClientProvider resolves IChatClient straight from DI, so for most apps you just register your chat client and go. For advanced scenarios (on-demand auth, token refresh), you can still implement IChatClientProvider directly.
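
Here is what a token-refreshing provider might look like. This is a minimal sketch: the exact IChatClientProvider member shown (GetChatClient) and the ITokenCache helper are illustrative assumptions, not the library's verbatim shape.

// sketch of a custom provider for on-demand auth; GetChatClient's exact
// signature and the ITokenCache type are assumptions for illustration
public class RefreshingChatClientProvider(ITokenCache tokens) : IChatClientProvider
{
    public async Task<IChatClient> GetChatClient(CancellationToken cancellationToken)
    {
        // refresh the access token before handing out a client
        var accessToken = await tokens.GetOrRefreshAsync(cancellationToken);

        return new OpenAIClient(accessToken)
            .GetChatClient("gpt-4o")
            .AsIChatClient();
    }
}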

The simplest path. Send a message, get a streaming response:

aiService.AiResponded += response =>
{
    if (response.Update.Text is { } text)
        Console.Write(text);

    if (response.IsResponseCompleted)
        Console.WriteLine();
};
await aiService.TalkTo("What is .NET MAUI?", cancellationToken);

The service handles the full lifecycle — acquires the chat client, prepends system prompts, streams the response, stores both messages if a message store is configured, fires the event, and manages state transitions throughout.

One method call captures speech and sends it to the AI:

await aiService.ListenAndTalk(cancellationToken);

The service activates speech-to-text, waits for the user to stop speaking, sends the transcribed text through TalkTo(), and optionally reads the response aloud via text-to-speech.
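
In practice you might wire this to a mic button, cancelling any capture already in flight. A quick sketch (the button and field are illustrative):

CancellationTokenSource? listenCts;

micButton.Clicked += async (_, _) =>
{
    // cancel a previous capture before starting a new one
    listenCts?.Cancel();
    listenCts = new CancellationTokenSource();
    await aiService.ListenAndTalk(listenCts.Token);
};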

This is the “Hey Siri” experience:

await aiService.StartWakeWord("Hey Copilot");

The service enters a continuous loop: listen for the wake phrase, capture the utterance that follows, send it to the AI, loop back. The user never touches the screen. Call StopWakeWord() when you’re done.
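
Scoping hands-free mode to a page is a natural fit. A sketch for a MAUI page (StopWakeWord is shown as fire-and-forget here; check the actual signature):

protected override async void OnAppearing()
{
    base.OnAppearing();
    // hands-free for the lifetime of this page
    await aiService.StartWakeWord("Hey Copilot");
}

protected override void OnDisappearing()
{
    base.OnDisappearing();
    aiService.StopWakeWord();
}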

Control how the AI delivers responses:

Mode      | What Happens
None      | Silent — text only, delivered via the AiResponded event
AudioBlip | Short sound effects at each state transition
LessWordy | Text-to-speech with a “be concise” system prompt
Full      | Full text-to-speech of the complete response
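
Selecting a mode is a one-liner. The property and enum names below are assumptions for illustration; the four values are those in the table:

// assumed member names; the mode values come from the table above
aiService.ResponseMode = AiResponseMode.LessWordy;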

Sound effects are driven by string file names and a SoundResolver callback — the library stays platform-agnostic while you provide the stream:

aiService.SoundResolver = name => FileSystem.OpenAppPackageFileAsync(name);
aiService.ThinkSound = "think.mp3";
aiService.OkSound = "ok.mp3";

Register an IMessageStore and every message is automatically persisted. But the interesting part is the AI chat lookup tool — it’s an AITool that lets the AI search its own conversation history:

“What did we talk about yesterday?” “Find the recipe you gave me last week.”

The tool is registered automatically when you call SetMessageStore(). The AI gets search parameters (text, date range, limit) and queries your store directly.
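
For a feel of what a store involves, here is a rough in-memory sketch. The member names (Save, Search) and the StoredChatMessage type are assumptions inferred from the description above, not the real interface:

public class MyMessageStore : IMessageStore
{
    readonly List<StoredChatMessage> messages = new();

    public Task Save(StoredChatMessage message)
    {
        this.messages.Add(message);
        return Task.CompletedTask;
    }

    // the AI lookup tool queries this with text/date-range/limit parameters
    public Task<IReadOnlyList<StoredChatMessage>> Search(
        string? text, DateTimeOffset? from, DateTimeOffset? to, int limit)
    {
        var results = this.messages
            .Where(m => text == null || m.Text.Contains(text, StringComparison.OrdinalIgnoreCase))
            .Where(m => from == null || m.Timestamp >= from)
            .Where(m => to == null || m.Timestamp <= to)
            .Take(limit)
            .ToList();

        return Task.FromResult<IReadOnlyList<StoredChatMessage>>(results);
    }
}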

The service exposes its current state and fires events:

aiService.StatusChanged += state =>
{
    // state: Idle, Listening, Thinking, Responding
    UpdateUI(state);
};

This is what powers the “Aura” visualization in our sample app — a pulsing orb that changes color based on what the AI is doing.

The library doesn’t care which AI you use. By default, it resolves IChatClient from DI — just register one and you’re done. For advanced auth scenarios, implement IChatClientProvider to return any IChatClient from Microsoft.Extensions.AI:

  • OpenAI — new OpenAIClient(apiKey).GetChatClient("gpt-4o").AsIChatClient()
  • GitHub Copilot — OAuth device code flow with Copilot API token exchange
  • Azure OpenAI — Managed identity or API key
  • Ollama — Local model, no auth needed
  • Anything else — If it implements IChatClient, it works

The sample apps include a complete GitHub Copilot implementation with device code flow, token caching, automatic re-authentication, and the custom HTTP headers the Copilot API requires.

The library targets plain net10.0 — no MAUI dependency in the library itself. Shiny.Speech handles the platform abstraction for speech and audio, so the same IAiConversationService works on:

  • MAUI — Android, iOS, Windows, Mac Catalyst
  • Blazor — Server-side and WebAssembly (speech via Web Audio API)

We ship two sample apps that prove it: a full MAUI sample with chat, settings, and an animated aura visualization, plus a Blazor Server sample with the same features translated to Razor components and CSS animations.

The library is built with IsAotCompatible=true. Generic type parameters on SetChatClientProvider<T>() and SetMessageStore<T>() carry [DynamicallyAccessedMembers] attributes so the trimmer knows what to keep. No reflection surprises at runtime.

dotnet add package Shiny.AiConversation

The library is MIT licensed and open source. We’d love to hear what you build with it.

The Feedback Service — One Hook to Rule Them All

NuGet package Shiny.Maui.Controls

Every tap, swipe, and keystroke in your app is an opportunity. An opportunity to confirm the user’s action, guide their attention, or add a layer of polish that separates “functional” from “delightful.” Most apps handle this with scattered HapticFeedback.Default.Perform() calls sprinkled across code-behind files. It works — until you want text-to-speech for accessibility, sound effects for a kiosk app, analytics for product telemetry, or different feedback for different controls. Then you’re threading conditional logic through every view in your app.

Shiny Controls v1.0 ships with IFeedbackService — a single injectable service that every interactive control in the library already calls. You implement it once. Every control uses it automatically.


Every Shiny control that supports feedback has a UseFeedback property (default: true). When a user interaction occurs — a message sent, a pin digit entered, a panel opened — the control calls IFeedbackService.OnRequested() with three things:

public interface IFeedbackService
{
    void OnRequested(object control, string eventName, object? args = null);
}

  • control — the actual control instance, not a Type. Pattern match directly: control is ChatView, control is SecurityPin.
  • eventName — what happened: "MessageReceived", "DigitEntered", "Opened".
  • args — contextual data. For ChatView, this is the full ChatMessage object. For standard MAUI controls, it’s the native EventArgs. For SecurityPin completion, it’s "LongPress".

The default HapticFeedbackService does what you’d expect — click haptic for most events, long press haptic for completion events. But the real power is in replacing it.

Here’s a real example from our sample app. One service, three behaviors — haptic, text-to-speech for incoming chat messages, and audio cues for PIN entry:

public class MyCustomFeedbackService(
    ITextToSpeechService textToSpeech,
    IAudioManager audioManager
) : HapticFeedbackService
{
    public override async void OnRequested(object control, string eventName, object? args)
    {
        // haptic first — always
        base.OnRequested(control, eventName, args);

        // speak incoming chat messages aloud
        if (control is ChatView && args is ChatMessage { IsFromMe: false } msg)
        {
            await textToSpeech.SpeakAsync(
                $"Message from {msg.SenderId}. {msg.Text}"
            );
        }
        // click and success sounds for PIN entry
        else if (control is SecurityPin)
        {
            var sound = eventName.Equals("completed", StringComparison.OrdinalIgnoreCase)
                ? "pin_success.wav"
                : "pin_click.wav";

            var raw = await FileSystem.OpenAppPackageFileAsync(sound);
            audioManager.CreatePlayer(raw).Play();
        }
    }
}

Register it in one line:

builder.UseShinyControls(cfg =>
{
    cfg.SetCustomFeedback<MyCustomFeedbackService>();
});

Because control is the live instance and args carries typed data, you can make nuanced decisions without parsing strings. The ChatMessage gives you sender, timestamp, text, and image URL. The SecurityPin instance gives you its current value and length. Cast, match, and go.

Shiny’s own controls call IFeedbackService internally. But what about standard MAUI controls — Button, Slider, Entry? The MauiControlFeedbackBuilder hooks them in automatically, with an AOT-compatible, fully pluggable design:

cfg.AddDefaultMauiControlFeedback();

This registers hooks for 12 standard MAUI controls — Button.Clicked, Entry.TextChanged, Slider.ValueChanged, Switch.Toggled, and more. Each hook passes the control instance as control and the native event args as args.

cfg.AddDefaultMauiControlFeedback(x =>
{
    x.Hook<MyCustomControl>(nameof(MyCustomControl.Tapped),
        (c, h) => c.Tapped += h,
        (c, h) => c.Tapped -= h);
});

cfg.AddMauiControlFeedback(x =>
{
    x.Hook<Button>(nameof(Button.Clicked),
        (btn, h) => btn.Clicked += h,
        (btn, h) => btn.Clicked -= h);

    x.Hook<Slider, ValueChangedEventArgs>(nameof(Slider.ValueChanged),
        (s, h) => s.ValueChanged += h,
        (s, h) => s.ValueChanged -= h);
});

Two overloads cover every case:

  • Hook<TControl>(eventName, subscribe, unsubscribe) for plain EventHandler events
  • Hook<TControl, TEventArgs>(eventName, subscribe, unsubscribe) for typed EventHandler<TEventArgs> events

Under the hood, each hook uses a ConditionalWeakTable to track handlers per control instance — no leaks, no dictionaries to manage, proper unsubscription when controls leave the visual tree. Zero reflection, fully AOT-safe.
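
The pattern, in rough form (a sketch of the approach, not the library's actual code):

// ConditionalWeakTable lives in System.Runtime.CompilerServices; entries
// are collected along with their key control, so nothing leaks
static readonly ConditionalWeakTable<Button, EventHandler> Handlers = new();

static void Attach(Button button, IFeedbackService feedback)
{
    EventHandler handler = (sender, args) =>
        feedback.OnRequested(button, nameof(Button.Clicked), args);

    Handlers.Add(button, handler);
    button.Clicked += handler;
}

static void Detach(Button button)
{
    if (Handlers.TryGetValue(button, out var handler))
    {
        button.Clicked -= handler; // proper unsubscription
        Handlers.Remove(button);
    }
}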

Every Shiny control fires feedback through this system. Here’s the full event catalog:

Control         | Events
ChatView        | MessageSent, MessageReceived, MessageTapped (all pass ChatMessage), AttachImage
SecurityPin     | DigitEntered, Completed
FloatingPanel   | Opened, Closed, DetentChanged
ImageViewer     | Opened, Closed, DoubleTapped
ImageEditor     | ToolModeChanged, Undo, Redo, Rotate, Reset, CropApplied, Saved
Fab / FabMenu   | Clicked, Toggled
Scheduler       | DaySelected, EventSelected, TimeSlotSelected
TableView Cells | Tapped
Toast           | Show

Any control’s feedback can be suppressed per-instance with UseFeedback="False".
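
The same opt-out from code, for a control built without XAML:

// per-instance opt-out; equivalent to UseFeedback="False" in XAML
var pin = new SecurityPin { UseFeedback = false };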

Most feedback systems are either too simple (a global haptic toggle) or too complex (per-control event subscriptions scattered across your app). IFeedbackService sits in the sweet spot:

  1. One service, all controls. Implement once, every control calls it.
  2. Instance, not type. You get the actual control, not typeof(Button). Inspect properties, check state, make decisions.
  3. Typed args, not strings. ChatMessage, ValueChangedEventArgs, ToggledEventArgs — not "the message text".
  4. Pluggable hooks, not hardcoded events. Add your own controls to the system with three lambdas.
  5. AOT-safe. No reflection, no expressions, no Delegate.CreateDelegate. Just generics and delegates.

Whether you’re building an accessible app that speaks every incoming message, a kiosk that plays sound effects, or just want consistent haptic feedback across your entire UI — IFeedbackService is one implementation away.

Check out the full documentation and the sample app for a working demo with TTS and audio integration.

Turn Any Interface Into an AI Tool — Shiny DI 3.0

What if every service interface you already have could become an AI tool with a single attribute? Shiny Extensions DI 3.0 makes that happen — no adapter classes, no hand-rolled schemas, no registration boilerplate. Mark your interface with [Tool], add [Description] to the methods that matter, and the source generator handles the rest.

You’ve built your services. Clean interfaces, proper DI registration, everything wired up. Now someone asks you to expose a few of those operations as AI tools for an LLM agent. Suddenly you’re writing AIFunction subclasses by hand — one per operation — each with a constructor that takes the service, a metadata property with hand-written parameter schemas, and an InvokeCoreAsync override that extracts arguments from a dictionary and forwards them to your service method.

For one or two tools, it’s fine. For ten or twenty, it’s tedious. And every time you change a method signature, you have to remember to update the corresponding tool class. The schema drifts, the argument parsing breaks, and the bugs only show up when the LLM calls the tool at runtime.

[Tool]
[Description("Manages customer orders")]
public interface IOrderService
{
    [Description("Places a new order for a customer")]
    Task<OrderResult> PlaceOrderAsync(
        [Description("The customer identifier")] Guid customerId,
        [Description("The product SKU")] string sku,
        [Description("Number of units to order")] int quantity
    );

    [Description("Cancels an existing order")]
    Task CancelOrderAsync(
        [Description("The order to cancel")] Guid orderId,
        [Description("Reason for cancellation")] string reason
    );

    // No [Description] — not exposed as a tool
    Task<List<Order>> GetInternalAuditLogAsync();
}

That’s it. The source generator produces a fully typed AIFunction subclass for each described method, wires up the parameter metadata, and generates a registration extension — all at compile time.

For PlaceOrderAsync above, the generator emits a class like this:

public class IOrderServicePlaceOrderAsyncAITool : AIFunction
{
    private readonly IOrderService _service;

    private static readonly AIFunctionMetadata _metadata =
        new AIFunctionMetadata("IOrderServicePlaceOrderAsync")
        {
            Description = "Places a new order for a customer",
            Parameters = new AIFunctionParameterMetadata[]
            {
                new("customerId")
                {
                    Description = "The customer identifier",
                    ParameterType = typeof(Guid),
                    IsRequired = true
                },
                new("sku")
                {
                    Description = "The product SKU",
                    ParameterType = typeof(string),
                    IsRequired = true
                },
                new("quantity")
                {
                    Description = "Number of units to order",
                    ParameterType = typeof(int),
                    IsRequired = true
                }
            }
        };

    public Guid CustomerId { get; set; }
    public string Sku { get; set; }
    public int Quantity { get; set; }

    public IOrderServicePlaceOrderAsyncAITool(IOrderService service)
    {
        _service = service;
    }

    public override AIFunctionMetadata Metadata => _metadata;

    protected override async Task<object?> InvokeCoreAsync(
        IEnumerable<KeyValuePair<string, object?>>? arguments,
        CancellationToken cancellationToken)
    {
        // argument extraction and service call
        return await _service.PlaceOrderAsync(
            this.CustomerId, this.Sku, this.Quantity);
    }
}

A second class is generated for CancelOrderAsync. The GetInternalAuditLogAsync method is skipped because it has no [Description].

All generated tools are registered with a single call:

services.AddGeneratedAITools();

This registers each tool as Transient<AITool, GeneratedToolClass>. You can then resolve all tools and pass them to any IChatClient:

var tools = serviceProvider.GetServices<AITool>().ToList();
var options = new ChatOptions { Tools = tools };
var response = await chatClient.GetResponseAsync(messages, options);

The AI tool code is only generated when Microsoft.Extensions.AI is referenced in your project. If you don’t reference it, the [Tool] attribute still compiles (it’s just an attribute), but no AIFunction classes or registration code are emitted. This means existing projects that add the DI package won’t get unexpected dependencies.

The generated InvokeCoreAsync handles the JsonElement-vs-already-deserialized argument problem that trips up most hand-written AI tools. For every standard type, the generator emits a direct JsonElement accessor:

Type                         | Extraction                              | Reflection-free
string                       | GetString()                             | Yes
int, long, short, byte       | GetInt32(), GetInt64(), etc.            | Yes
bool                         | GetBoolean()                            | Yes
double, float, decimal       | GetDouble(), GetSingle(), GetDecimal()  | Yes
Guid                         | GetGuid()                               | Yes
DateTime                     | GetDateTime()                           | Yes
DateTimeOffset               | GetDateTimeOffset()                     | Yes
DateOnly, TimeOnly, TimeSpan | Parse(GetString())                      | Yes
Enums                        | Enum.Parse<T>(GetString())              | Yes
Complex types                | JsonSerializer.Deserialize<T>()         | Needs JsonSerializerContext

If the argument arrives as a JsonElement (common when the framework hasn’t pre-deserialized), the correct accessor is used. If it arrives already typed (some frameworks do this), a direct cast is used. Both paths are handled with a single is JsonElement check — no try/catch, no Convert.ChangeType.
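
In sketch form, the generated extraction for the quantity parameter looks something like this (illustrative, not the verbatim generated code):

// argumentMap: the tool arguments materialized into a dictionary
object? raw = argumentMap["quantity"];
int quantity = raw is JsonElement el
    ? el.GetInt32()  // arrived as raw JSON
    : (int)raw!;     // arrived already deserialized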

If your service method accepts a CancellationToken, the generator does the right thing automatically:

[Description("Searches products")]
Task<List<Product>> SearchAsync(
[Description("Search query")] string query,
CancellationToken cancellationToken // not exposed as a tool parameter
);

The CancellationToken is excluded from the tool’s parameter metadata and properties. In InvokeCoreAsync, it’s passed through from the framework’s cancellation token — not extracted from the argument dictionary.

Only methods with [Description] become tools. This gives you fine-grained control over what’s exposed to the LLM. Internal methods, admin operations, or anything you don’t want an AI agent calling — just don’t add the attribute.

The [Tool] attribute goes on interfaces, while [Singleton] / [Scoped] / [Transient] go on implementation classes — same as before. You keep using AddGeneratedServices() for your service registrations and add AddGeneratedAITools() alongside it:

services.AddGeneratedServices();
services.AddGeneratedAITools(); // only if M.E.AI is referenced

The two generators are independent. AI tool generation doesn’t affect or depend on your service registrations.

  1. Add [Tool] to the interface
  2. Add [Description] to the interface and the methods you want exposed
  3. Add [Description] to parameters (optional but recommended — it helps the LLM)
  4. Reference Microsoft.Extensions.AI in your project
  5. Call services.AddGeneratedAITools() at startup
  6. Resolve IEnumerable<AITool> and pass to your chat client

Check the DI documentation for the full setup guide and the release notes for the complete changelog.

One Contract, Three Transports — Mediator AI Tooling

What if you could write a single C# record and have it automatically become a fully typed AI tool — with zero adapter code? That’s what Shiny Mediator 6.3 delivers.

Building AI tool calling today means writing repetitive adapter code. You define a JSON schema by hand, parse arguments from the LLM response, validate them, call your business logic, and serialize the result back. If you already have a mediator contract for the same operation, you’re duplicating intent across two representations. Multiply that by every tool your agent needs — ten, twenty, fifty tools — and it becomes a real maintenance problem.

Worse, the schema and the code drift apart. You rename a property in your contract but forget to update the JSON schema. You add a new required parameter but the tool adapter still treats it as optional. The LLM hallucinates a parameter name that used to exist, and your hand-written parser silently swallows the error. These bugs are subtle, hard to test, and only surface at runtime.

In Shiny Mediator, a contract is a plain record that describes an operation:

[Description("Get the current weather forecast for a given city")]
public record GetWeather(
[property: Description("The city name to get weather for")]
string City,
[property: Description("Temperature unit: 'celsius' or 'fahrenheit'")]
string Unit = "celsius"
) : IRequest<WeatherResult>;
public record WeatherResult(string City, double Temperature, string Unit, string Condition);

And a handler implements the logic:

[MediatorSingleton]
public partial class GetWeatherHandler : IRequestHandler<GetWeather, WeatherResult>
{
    public async Task<WeatherResult> Handle(
        GetWeather request, IMediatorContext context, CancellationToken ct)
    {
        // your logic here
    }
}

That’s the only code you write. From here, source generators take over.

Add a [Description] attribute to your contract and set ShinyMediatorGenerateAITools=true in your project:

<PropertyGroup>
    <ShinyMediatorGenerateAITools>true</ShinyMediatorGenerateAITools>
</PropertyGroup>

The source generator produces a fully typed AIFunction subclass compatible with Microsoft.Extensions.AI:

// auto-generated
internal sealed class GetWeatherAIFunction : AIFunction
{
private readonly IMediator _mediator;
private static readonly JsonElement _jsonSchema =
JsonDocument.Parse("""
{
"type": "object",
"properties": {
"city": { "description": "The city name to get weather for", "type": "string" },
"unit": { "description": "Temperature unit", "type": "string", "default": "celsius" }
},
"required": ["city"]
}
""").RootElement.Clone();
public override string Name => "GetWeather";
public override string Description => "Get the current weather forecast for a given city";
public override JsonElement JsonSchema => _jsonSchema;
protected override async ValueTask<object?> InvokeCoreAsync(
AIFunctionArguments arguments, CancellationToken cancellationToken)
{
var json = JsonSerializer.SerializeToElement(arguments);
var contract = new GetWeather(
City: json.GetProperty("city").GetString()!,
Unit: json.TryGetProperty("unit", out var u) && u.ValueKind != JsonValueKind.Null
? u.GetString()! : "celsius"
);
var (_, result) = await _mediator.Request<WeatherResult>(contract, cancellationToken);
return result;
}
}

A registration extension is also generated:

builder.Services.AddShinyMediator(x => x
    .AddMediatorRegistry()
    .AddGeneratedAITools() // registers every [Description] contract as an AITool
);

Then pass the tools to any IChatClient:

var tools = services.GetServices<AITool>().ToList();
var options = new ChatOptions { Tools = tools };
var response = await chatClient.GetResponseAsync(history, options);

Because the generated AI tools dispatch through the mediator pipeline, every middleware you’ve already configured applies to AI tool calls automatically. Logging, validation, authorization, exception handling, caching — all of it fires without any extra wiring.

This is a significant advantage over hand-rolled AIFunction implementations. When you write a tool adapter manually, it typically calls your service layer directly, bypassing cross-cutting concerns. With the mediator approach, an AI tool call follows the same pipeline as a UI-triggered action or an API call. Your audit log captures it. Your validation middleware rejects bad input before the handler runs. Your error handling middleware catches exceptions and returns structured errors the LLM can interpret.

You can even write middleware that targets AI calls specifically — for example, injecting a MediatorContext value that tells the handler the call originated from an LLM, so you can apply tighter authorization or rate limiting for AI-initiated operations.
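
A rough sketch of such a middleware. The middleware shape and the context lookup shown here are assumptions; check the Mediator middleware docs for the exact signatures:

public class AiOriginMiddleware<TRequest, TResult> : IRequestMiddleware<TRequest, TResult>
{
    public async Task<TResult> Process(
        IMediatorContext context,
        Func<Task<TResult>> next,
        CancellationToken cancellationToken)
    {
        // assumed context key, stamped by the AI tool entry point
        if (context.Values.ContainsKey("AI.Origin"))
        {
            // apply tighter authorization / rate limiting for LLM-initiated calls
        }

        return await next();
    }
}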

The real power shows when your agent needs many tools. Instead of maintaining dozens of AIFunction subclasses with hand-written schemas, you just add [Description] to your existing contracts. Every contract with a description attribute becomes a tool at the next build.

Adding a new tool to your agent is the same workflow as adding any new mediator operation:

  1. Define the contract record with [Description]
  2. Implement the handler
  3. Done — the tool is registered automatically

No schema files to maintain. No adapter classes to write. No registration code to update. The source generator handles the JSON schema, argument parsing, DI wiring, and AIFunction implementation.

This also means removing a tool is just deleting the [Description] attribute (or the contract itself). There are no orphaned adapters or stale schema definitions to clean up.

Beyond AI: The Same Contract Powers HTTP Too

The same contract-first approach extends beyond AI tooling. Shiny Mediator also generates HTTP clients and ASP.NET endpoints from your contracts — meaning a single record and handler can serve as an AI tool, a typed HTTP client, and a REST endpoint simultaneously. The transports are generated; you write the logic once.

Traditional tool-calling setups require you to maintain parallel definitions:

Layer                 | Without Mediator            | With Mediator
Business logic        | Handler class               | Handler class
AI tool schema        | Manual JSON schema          | Generated from contract
AI tool adapter       | Manual AIFunction subclass  | Generated
Argument parsing      | Manual deserialization      | Generated
DI registration       | Manual for each tool        | Generated
Middleware/validation | Manual per tool             | Automatic via pipeline

With the contract-first approach, adding a new capability to your application — whether it’s exposed as an AI tool, an HTTP endpoint, or both — is one record and one handler.

The generated AIFunction classes are fully Native AOT compatible. Here’s what makes that possible:

No reflection. The generator reads [Description] attributes, property types, nullability, and default values at compile time. It emits direct property access code — json.GetProperty("city").GetString()! — instead of relying on JsonSerializer.Deserialize<T>() or reflection-based binding.

Static JSON schema. The schema is a compile-time constant string parsed once into a JsonElement on first use. There’s no runtime schema construction, no JsonSerializerOptions configuration, and no dynamic type inspection.

Constructor-based hydration. The generated code constructs the contract using its primary constructor with named arguments. No Activator.CreateInstance, no FormatterServices, no property setters via reflection.

Concrete types throughout. Each generated class is a sealed, non-generic concrete type. The DI registrations are explicit AddSingleton<AITool>(sp => new GetWeatherAIFunction(...)) calls — no open generics or service descriptor scanning at runtime.

This means your AI tools work in trimmed, ahead-of-time compiled applications — including .NET MAUI apps targeting iOS and Android — without linker warnings or runtime failures. The same tools that power your cloud API also run on-device in a fully native binary.

The generator handles the full range of C# types in your contracts:

C# Type                     | JSON Schema                      | Notes
string, Guid, Uri, DateTime | "string"                         |
bool                        | "boolean"                        |
int, long, short, byte      | "integer"                        |
float, double, decimal      | "number"                         |
enum                        | "string" with "enum" array       | All values listed for the LLM
T[], IEnumerable<T>         | "array"                          |
Nullable types (T?)         | Omitted from "required"          |
Default values              | Included as "default" in schema  | Fallback used when LLM omits the parameter

ICommand contracts are also supported — the generated tool returns a success message string instead of a typed result.
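
So a fire-and-forget operation stays just as terse (the contract below is illustrative):

// ICommand has no result type, so the generated tool reports success
// back to the LLM as a plain message string
[Description("Queues a promotional push notification for a customer segment")]
public record SendPromoPush(
    [property: Description("The customer segment identifier")] string SegmentId
) : ICommand;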

  1. Add the [Description] attribute to your contracts and their properties
  2. Set <ShinyMediatorGenerateAITools>true</ShinyMediatorGenerateAITools> in your project file
  3. Reference Microsoft.Extensions.AI
  4. Call .AddGeneratedAITools() during mediator setup
  5. Resolve IEnumerable<AITool> from DI and pass to your chat client

Every contract with a [Description] attribute automatically becomes a tool. Add a new contract, and the next build picks it up — no registration changes, no schema files, no adapter classes.

Check out the Sample.CopilotConsole for a working example that wires up AI tools with a chat loop, or browse the Mediator documentation for the full setup guide.