TurboMediator

Vibecoded with Opus

TurboMediator was written entirely through AI-assisted development using Claude Opus. Here's why that's not a problem.

Yes, This Library Is Vibecoded

TurboMediator was built entirely through AI-assisted development — specifically using Claude Opus, Anthropic's most capable model. Every module, every abstraction, every source generator, every pipeline behavior was authored through a collaborative loop between human intent and AI generation.

There was no traditional keyboard-driven coding session. The architecture was described, the constraints were stated, the tradeoffs were discussed — and the code emerged from that conversation.

This is vibecoding: directing software into existence through high-bandwidth dialogue with a model capable of holding an entire codebase's worth of context.


Why That Should Concern You (and Why It Doesn't)

The reasonable reaction to hearing a library is vibecoded is skepticism. AI models hallucinate. They produce plausible-looking code that subtly violates invariants. They miss edge cases that a battle-scarred engineer would catch in seconds. They optimize for code that reads correct, not code that is correct.

These are legitimate concerns. But they apply to the generation process, not the artifact.

The test suite is the equalizer.


The Test Suite

TurboMediator has a comprehensive suite of unit and integration tests covering every layer of the library:

Unit Tests

Every core behavior is tested in isolation:

| Area | Test File |
| --- | --- |
| Commands | CommandTests.cs |
| Queries | QueryTests.cs |
| Requests | RequestTests.cs |
| Notifications | NotificationTests.cs, NotificationPublisherTests.cs |
| Pipeline behaviors | PipelineBehaviorTests.cs |
| Exception handling | ExceptionHandlerTests.cs |
| Streaming | StreamingTests.cs |
| Dependency injection | DependencyInjectionTests.cs |
| Validation | ValidationTests.cs, FluentValidationTests.cs |
| Caching | CachingTests.cs |
| Batching | BatchingTests.cs |
| Rate limiting | RateLimiting/ |
| Resilience | ResilienceTests.cs, Resilience/ |
| Feature flags | FeatureFlagTests.cs |
| Result types | ResultTests.cs |
| Sagas | SagaTests.cs |
| State machines | StateMachineTests.cs |
| Telemetry / Observability | TelemetryTests.cs, Observability/ |
| Distributed locking | DistributedLocking/ |
| Inbox / Outbox | Inbox/, Outbox/ |
| Processors | ProcessorTests.cs |
| Enterprise features | Enterprise/ |
| Scheduling | Scheduling/ |
| CLI | Cli/ |
| Testing utilities | Testing/ |
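As a sketch of the style these files follow, a command test pairs a request with its handler and asserts on the dispatched result. Note that every name below (ICommand, ICommandHandler, ITurboMediator, AddTurboMediator) is an illustrative assumption, not necessarily TurboMediator's actual API:

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Xunit;

// Hypothetical sketch only: ICommand<T>, ICommandHandler<,>, ITurboMediator and
// AddTurboMediator are assumed names, not taken from the library's source.
public sealed record Ping(string Message) : ICommand<string>;

public sealed class PingHandler : ICommandHandler<Ping, string>
{
    public ValueTask<string> Handle(Ping command, CancellationToken ct)
        => ValueTask.FromResult($"Pong: {command.Message}");
}

public class CommandTests
{
    [Fact]
    public async Task Send_routes_command_to_its_registered_handler()
    {
        var services = new ServiceCollection();
        services.AddTurboMediator(); // assumed source-generated registration extension

        var mediator = services.BuildServiceProvider()
                               .GetRequiredService<ITurboMediator>();

        var result = await mediator.Send(new Ping("hello"));

        Assert.Equal("Pong: hello", result);
    }
}
```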

Integration Tests

End-to-end scenarios test real interactions between components, including database-backed stores and external infrastructure:

| Test | What it covers |
| --- | --- |
| FullPipelineIntegrationTests.cs | A complete request flowing through all pipeline behaviors |
| CachingIntegrationTests.cs | Distributed cache read/write/invalidation under a real pipeline |
| TransactionIntegrationTests.cs | Unit-of-work and transactional consistency with EF Core |
| AuditStoreIntegrationTests.cs | Audit trail persistence for commands |
| SagaStoreIntegrationTests.cs | Saga state persistence and resumption |
| InboxStoreIntegrationTests.cs | Inbox idempotency guarantees |
| OutboxStoreIntegrationTests.cs | Outbox reliable delivery semantics |
| RedisDistributedLockIntegrationTests.cs | Distributed locking via Redis |

AOT Compatibility

A dedicated AotCompatibilityTests.cs verifies that the source-generated output is fully compatible with Native AOT — no reflection, no runtime code emission, no trimming surprises.
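Consumers can check the same property in their own applications with the standard .NET Native AOT publish settings. A generic project-file fragment (not copied from this repository):

```xml
<!-- Standard .NET 8 Native AOT publish settings; not taken from this repository. -->
<PropertyGroup>
  <PublishAot>true</PublishAot>
  <!-- Report every trim warning individually instead of one summary per assembly -->
  <TrimmerSingleWarn>false</TrimmerSingleWarn>
</PropertyGroup>
```

Publishing with `dotnet publish -c Release` then surfaces any trimming or reflection warnings at build time rather than as runtime failures.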


Benchmark Results

Numbers don't care who wrote the code. These results were produced by BenchmarkDotNet running against TurboMediator, MediatR, and the Mediator source-generator library (the closest architectural equivalent).

Intel Core i7-10750H, .NET 8.0.23, X64 RyuJIT AVX2
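For context, a BenchmarkDotNet run producing tables of this shape comes from a harness roughly like the following. The benchmark bodies here are placeholders standing in for the real mediator Send calls in the actual benchmark project:

```csharp
using System.Threading.Tasks;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

// Generic BenchmarkDotNet harness shape, not the project's actual benchmark code.
[MemoryDiagnoser] // produces the Allocated and Alloc Ratio columns
public class SendCommandBenchmarks
{
    [Benchmark(Baseline = true)] // the Ratio column is computed against this row
    public Task TurboMediator() => Task.CompletedTask; // placeholder for mediator.Send(...)

    [Benchmark]
    public Task MediatR() => Task.CompletedTask; // placeholder for mediatr.Send(...)
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<SendCommandBenchmarks>();
}
```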

Send Command

| Method | Mean | Ratio | Allocated | Alloc Ratio |
| --- | --- | --- | --- | --- |
| TurboMediator | 55.56 ns | baseline | 24 B | |
| Mediator_SourceGen | 55.91 ns | 1.01x slower | 24 B | 1.00x more |
| MediatR | 219.16 ns | 3.94x slower | 296 B | 12.33x more |

Send Query

| Method | Mean | Ratio | Allocated | Alloc Ratio |
| --- | --- | --- | --- | --- |
| TurboMediator | 77.29 ns | baseline | 48 B | |
| Mediator_SourceGen | 68.36 ns | 1.13x faster | 48 B | 1.00x more |
| MediatR | 235.85 ns | 3.05x slower | 464 B | 9.67x more |

Send Command With Response

| Method | Mean | Ratio | Allocated | Alloc Ratio |
| --- | --- | --- | --- | --- |
| TurboMediator | 72.76 ns | baseline | 48 B | |
| Mediator_SourceGen | 66.50 ns | 1.09x faster | 48 B | 1.00x more |
| MediatR | 248.88 ns | 3.42x slower | 464 B | 9.67x more |

Pipeline Behavior (validation + logging + caching behaviors active)

| Method | Mean | Ratio | Allocated | Alloc Ratio |
| --- | --- | --- | --- | --- |
| TurboMediator | 74.81 ns | baseline | 48 B | |
| Mediator_SourceGen | 78.64 ns | 1.05x slower | 48 B | 1.00x more |
| MediatR | 316.44 ns | 4.23x slower | 656 B | 13.67x more |

Publish Notification

| Method | Mean | Ratio | Allocated | Alloc Ratio |
| --- | --- | --- | --- | --- |
| Mediator_SourceGen | 57.20 ns | 1.48x faster | 24 B | 1.00x more |
| TurboMediator | 84.40 ns | baseline | 24 B | |
| MediatR | 216.04 ns | 2.56x slower | 344 B | 14.33x more |

TurboMediator is roughly 3–4× faster than MediatR on request dispatch (about 2.5× on notification publishing) and allocates 10–14× less memory per operation. It runs at parity with the other source-generator-based implementation. This is what compile-time dispatch with zero reflection produces — and the AI wrote every line of it.
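As an illustration of what compile-time dispatch means in practice, a source generator can emit a direct call per request type instead of a reflection-based handler lookup. The names below are hypothetical sketches, not TurboMediator's actual generated output:

```csharp
using System;

// Hypothetical sketch of compile-time dispatch. None of these names are
// TurboMediator's real generated code; they only illustrate the technique.
public interface ICommand<TResponse> { }

public sealed record CreateOrder(string Sku) : ICommand<Guid>;

public static class CreateOrderHandler
{
    public static Guid Handle(CreateOrder command) => Guid.NewGuid();
}

public static class GeneratedMediator
{
    // The generator emits one strongly typed overload per request type:
    // a direct, inlineable call with no dictionary lookup, no reflection,
    // and no boxing of the request or response.
    public static Guid Send(CreateOrder command)
        => CreateOrderHandler.Handle(command);
}
```

A reflection-based mediator instead resolves the handler at runtime, typically through a Type-keyed service lookup, which is where most of the extra time and allocations in the tables above come from.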


How the Development Process Worked

Vibecoding is not "paste requirements, accept all." The loop looked like this:

  1. Describe the behavior — state the interface contract, the invariants, the failure modes, and the performance constraints in precise terms
  2. Generate — the model produces an implementation against those constraints
  3. Run the tests — the test suite either passes or reveals where the model's mental model diverged from reality
  4. Review and challenge — every non-trivial design decision was questioned: why this abstraction? what does this break? is this the right tradeoff?
  5. Iterate — the loop repeats, refining until the tests pass and the design holds up under scrutiny

No code entered the codebase without passing the test suite. No design decision was accepted without a rationale that survived a challenge. The model generates fast; the human stays critical.


How It Compares

If the vibecoded origin still bothers you, the comparison page sets TurboMediator against MediatR feature by feature — licensing, performance, Native AOT support, pipeline capabilities, and ecosystem coverage.

The argument against vibecoded code is that it might be subtly wrong. The comparison shows what "subtly wrong" would have to overcome: a 4× performance advantage, zero allocations in the hot path, Native AOT compatibility, and 20+ modules — all with a passing test suite.


The Point

Code quality is not a function of who — or what — wrote it. It is a function of how thoroughly it is specified, reviewed, and tested.

Every behavior in this library carries a test. Every edge case that was identified during development was encoded as an assertion. The test suite is the specification made executable.

An AI that produces a correct, well-tested implementation is more useful than a human who produces an untested one. The output is what matters, and the output can be verified.

Found a bug?

Open an issue. That's exactly what open source exists for — regardless of who wrote the code. The repository is public, the tests are the ground truth, and a reproducible bug report is always welcome.

Transparency

This page exists because hiding the development process would be dishonest. Every module was reviewed critically before being accepted — the AI generated, the human decided. If you want to audit a specific design decision, the source is open.
