feat: audio support for google models with integration into playground #2213
RATCHAW wants to merge 13 commits into
Conversation
Accept `input_audio` on chat completions with provider-aware routing, modality-aware billing, and persistence into log/stats. Bad (format, provider) pairs return 400, not 500.
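As a concrete illustration of the request shape (a sketch only: the model ID and base64 payload below are placeholders, not taken from this PR), an `input_audio` part on a chat-completion body might look like:

```typescript
// Hypothetical /v1/chat/completions request body carrying an input_audio part.
// The model name and the base64 audio data are illustrative placeholders.
const body = {
	model: "google/gemini-2.5-flash",
	messages: [
		{
			role: "user",
			content: [
				{ type: "text", text: "Transcribe this clip." },
				{
					type: "input_audio",
					// A (format, provider) pair the target provider cannot handle
					// should yield a 400 from the gateway, not a 500.
					input_audio: { data: "UklGRg...", format: "wav" },
				},
			],
		},
	],
};

console.log(JSON.stringify(body, null, 2));
```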
Walkthrough

This PR implements end-to-end audio input support: message/schema additions, Google audio MIME handling and an explicit UnsupportedAudioFormatError, provider audio capability/pricing metadata, audio-aware routing/filtering, audio token extraction, audio-aware cost calculation and propagation, DB/schema and analytics changes, UI/playground updates, and unit/E2E tests.

Changes: Audio Input Support
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Actionable comments posted: 6
🧹 Nitpick comments (4)
apps/gateway/src/chat/tools/transform-response-to-openai.ts (1)
326-565: 💤 Low value

Forward-compat: only the Google/Anthropic/AWS Bedrock branches forward `audioInputTokens`.

Other provider branches (`inference.net`/`together-ai`/`groq`, `alibaba`, `openai`/`azure`/`mistral`/`novita`, `bytedance`, `xai`, `embercloud`/`zai`, default) still call `buildUsageObject` without forwarding `audioInputTokens`. This is fine for this PR since only Google supports audio, but when adding audio to another provider later, both the relevant `buildUsageObject` and `applyExtendedUsageFields` call sites must also be updated; otherwise `prompt_tokens_details.audio_tokens` will silently fall back to upstream/0 for those providers. No action required now.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@apps/gateway/src/chat/tools/transform-response-to-openai.ts` around lines 326-565: the review points out that only the Google/Anthropic/AWS Bedrock branches currently forward audioInputTokens into buildUsageObject and applyExtendedUsageFields, so to make this forward-compatible, update any other provider branches that construct or mutate transformedResponse (e.g., the inference.net/together-ai/groq branch, alibaba, openai/azure/mistral/novita, bytedance, xai, embercloud/zai, and the default case) to pass audioInputTokens into buildUsageObject and into applyExtendedUsageFields where usage is merged; locate these call sites by searching for buildUsageObject(...) and applyExtendedUsageFields(...), add the audioInputTokens parameter, and propagate it through the corresponding usage/extended-usage logic so prompt_tokens_details.audio_tokens is preserved if those providers later support audio.

packages/actions/src/transform-google-messages.ts (1)
92-109: 💤 Low value

`resolveGoogleAudioMime`'s error target is wrong for unrecognised providers.

When `providerId` is not in either family the code correctly builds the union map (all `GoogleAudioFormat` values will always resolve), but if the dead-code error path at line 103 were somehow reached, the target string would always say "Google AI Studio" regardless of the actual provider: `VERTEX_FAMILY.has(providerId ?? "")` is `false`, so the ternary falls to the `else` branch.

Given that the fallback map contains every value in `GoogleAudioFormat`, this path is presently unreachable. But if new formats are added to `GoogleAudioFormat` without updating the MIME maps, the sentinel message becomes actively misleading.

🛡️ Suggested fix – use the actual provider ID as the target label
```diff
 const target = VERTEX_FAMILY.has(providerId ?? "")
 	? "Vertex AI"
-	: "Google AI Studio";
+	: AI_STUDIO_FAMILY.has(providerId ?? "")
+		? "Google AI Studio"
+		: (providerId ?? "unknown provider");
 throw new UnsupportedAudioFormatError(format, target);
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/actions/src/transform-google-messages.ts` around lines 92-109: the error message in resolveGoogleAudioMime uses a ternary that can incorrectly label the target as "Google AI Studio" for unrecognized providerIds; update the UnsupportedAudioFormatError call to use the actual providerId (or a clear fallback like "unknown provider") as the target instead of the current ternary, referencing resolveGoogleAudioMime, providerId, VERTEX_FAMILY/AI_STUDIO_FAMILY, and UnsupportedAudioFormatError, so the thrown error reflects the real provider when the dead-code path is reached.

apps/gateway/src/audio.e2e.ts (1)
170-173: 💤 Low value

Unreachable `inputPrice` fallback.

`provider.inputAudioPrice` is always defined here: line 55 already filters out any provider where `inputAudioPrice === undefined`. The `?? provider.inputPrice ?? 0` fallback is dead code.

♻️ Proposed cleanup
```diff
-const expected =
-	audioTokens * (provider.inputAudioPrice ?? provider.inputPrice ?? 0);
+const expected = audioTokens * provider.inputAudioPrice;
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@apps/gateway/src/audio.e2e.ts` around lines 170-173: the test calculates expected using a fallback chain that is unreachable because providers with inputAudioPrice === undefined are already filtered; change the expected calculation to use provider.inputAudioPrice directly (e.g., expected = audioTokens * provider.inputAudioPrice), remove the `?? provider.inputPrice ?? 0` fallback, and ensure the audioTokens/audioInputCost assertions (the expect call) remain unchanged; if the compiler complains about potential undefined, use a non-null assertion or refine the provider type where filtered to guarantee inputAudioPrice is defined.

packages/actions/src/transform-google-messages.spec.ts (1)
105-122: 💤 Low value

Consider using the `.rejects` pattern for consistency.

The other error-assertion tests in this file use `await expect(...).rejects.toBeInstanceOf(...)`. The manual try/catch here works but produces a confusing failure message if no error is thrown: the sentinel `throw new Error("expected throw")` gets caught, and the assertion reports `Error` ≠ `UnsupportedAudioFormatError` rather than "expected function to throw".

♻️ Idiomatic rewrite using `.rejects`
```diff
-it("attaches format and providerTarget on the error", async () => {
-	try {
-		await transformGoogleMessages(
-			audioMessages("aiff"),
-			false,
-			20,
-			null,
-			undefined,
-			"quartz",
-		);
-		throw new Error("expected throw");
-	} catch (err) {
-		expect(err).toBeInstanceOf(UnsupportedAudioFormatError);
-		const e = err as UnsupportedAudioFormatError;
-		expect(e.format).toBe("aiff");
-		expect(e.providerTarget).toBe("Vertex AI");
-	}
-});
+it("attaches format and providerTarget on the error", async () => {
+	const promise = transformGoogleMessages(
+		audioMessages("aiff"),
+		false,
+		20,
+		null,
+		undefined,
+		"quartz",
+	);
+	const err = await promise.catch((e: unknown) => e);
+	expect(err).toBeInstanceOf(UnsupportedAudioFormatError);
+	const e = err as UnsupportedAudioFormatError;
+	expect(e.format).toBe("aiff");
+	expect(e.providerTarget).toBe("Vertex AI");
+});
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/actions/src/transform-google-messages.spec.ts` around lines 105-122: replace the manual try/catch sentinel pattern in the test for transformGoogleMessages with the idiomatic Jest .rejects pattern: call expect(transformGoogleMessages(audioMessages("aiff"), false, 20, null, undefined, "quartz")) and use .rejects.toBeInstanceOf(UnsupportedAudioFormatError), then assert on the rejected error's properties (format and providerTarget) via .rejects.toMatchObject or by awaiting the rejection value; update references to UnsupportedAudioFormatError and the audioMessages("aiff")/transformGoogleMessages invocations accordingly so the test fails with a clear "expected promise to reject" message instead of catching the sentinel throw.
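The family/fallback behaviour discussed in the two comments above can be condensed into a small sketch. This is not the PR's actual implementation: the provider IDs, family sets, and format coverage below are assumptions drawn from the review (AI Studio takes `audio/aac`/`audio/aiff`, Vertex takes `audio/x-aac`, an unrecognised provider falls back to the union map, and the error names the real target rather than a hardcoded "Google AI Studio"):

```typescript
// Sketch of a provider-aware audio MIME resolver; names and format lists
// are assumptions, not the real transform-google-messages.ts.
const AI_STUDIO_FAMILY = new Set(["google-ai-studio"]);
const VERTEX_FAMILY = new Set(["quartz", "vertex"]);

const AI_STUDIO_MIME: Record<string, string> = {
	wav: "audio/wav",
	aac: "audio/aac",
	aiff: "audio/aiff",
};
const VERTEX_MIME: Record<string, string> = {
	wav: "audio/wav",
	aac: "audio/x-aac",
};

function resolveGoogleAudioMime(format: string, providerId?: string): string {
	const id = providerId ?? "";
	const map = VERTEX_FAMILY.has(id)
		? VERTEX_MIME
		: AI_STUDIO_FAMILY.has(id)
			? AI_STUDIO_MIME
			: // Unknown provider: union of both maps, so every known format resolves.
				{ ...VERTEX_MIME, ...AI_STUDIO_MIME };
	const mime = map[format];
	if (mime === undefined) {
		// Per the review: label the error with the real target provider,
		// not an unconditional "Google AI Studio".
		const target = VERTEX_FAMILY.has(id)
			? "Vertex AI"
			: AI_STUDIO_FAMILY.has(id)
				? "Google AI Studio"
				: (providerId ?? "unknown provider");
		throw new Error(`Unsupported audio format "${format}" for ${target}`);
	}
	return mime;
}
```

A rejected (format, provider) pair surfaces as a thrown error here, which the gateway can map to a 400 as described in the PR summary.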
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@apps/gateway/src/audio.e2e.ts`:
- Line 114: The test suite currently uses getConcurrentTestOptions() in the
describe call which enables concurrent:true; replace getConcurrentTestOptions()
with getTestOptions() so the suite does not run tests in parallel—update the
describe invocation that references getConcurrentTestOptions() to call
getTestOptions() instead (ensure any import or reference to getTestOptions() is
present and used in place of getConcurrentTestOptions()).
In `@apps/gateway/src/chat/chat.ts`:
- Around line 3759-3760: The cached-response replay paths that call
calculateCosts() are not reconstructing audio token splits, so cached Gemini
audio requests are billed as text-only; update the cache-hit replay logic (the
branches that set audioInputTokens/audioInputCost and later call
calculateCosts()) to rebuild prompt_tokens_details.audio_tokens and any per-part
split (e.g., prompt_tokens_details.audio_tokens.{input,replay} or the structure
calculateCosts() expects) from the cached metadata before invoking
calculateCosts(), ensuring costs.audioInputTokens and costs.audioInputCost are
derived from the reconstructed audio token fields; modify the replay code paths
referenced near the audioInputTokens/audioInputCost assignments (also apply the
same fix at the other occurrence noted) so cached audio requests retain their
audio token/cost data.
- Around line 1811-1823: The auto-selection path is excluding non-Claude auto
candidates for audio requests (because allowedAutoModels contains only Claude),
causing model: "auto" to fall through to the hardcoded claude/anthropic
fallback; to fix, when the original request model === "auto" and hasAudio is
true, relax the allowedAutoModels restriction so you only filter out providers
that explicitly cannot do audio (i.e., keep the provider.audio !== true check
and the audioFormats.every(...) googleProviderSupportsAudioFormat check) but
skip the allowedAutoModels membership test (or expand allowedAutoModels to
include other known audio-capable auto providers like Gemini); update the
selection logic around hasAudio, provider.audio, audioFormats,
googleProviderSupportsAudioFormat and allowedAutoModels so an audio-capable auto
candidate is preserved and the code does not drop into the claude-haiku-4-5 /
anthropic fallback incorrectly.
- Around line 2041-2042: The audio gating (hasAudio and audioFormats) is only
enforced inside the sameProviderMappings.length > 1 branch, allowing
direct/pinned providers or empty eligibleMappings to bypass validation; modify
the provider-selection logic (the code that iterates/filters eligibleMappings
and sameProviderMappings) to always filter a provider's mappings by audio
eligibility using hasAudio and audioFormats before choosing that provider, and
if a provider ends up with zero audio-eligible mappings reject that
provider/request (return an error) instead of falling back to
single-mapping/default-region logic; update the sameProviderMappings handling
and the fallback path so both respect the audio filter and use the filtered
mapping list for selection.
In `@apps/gateway/src/chat/tools/messages-contain-audio.spec.ts`:
- Around line 122-124: In the tests in messages-contain-audio.spec.ts where the
local variable model is checked (currently "if (!model) { continue }"), stop
silently skipping and assert presence instead; replace the continue with an
explicit assertion (e.g., expect(model).toBeDefined() or throw a clear error) so
the test fails if a model ID is missing—apply this change for each occurrence
where model is guarded (the three spots shown) to surface catalog regressions.
In `@packages/models/src/models/google.ts`:
- Line 1822: The Groq mapping for model "gemma2-9b-it" incorrectly sets audio:
true which routes audio input to a text-only model; locate the mapping entry
that references "gemma2-9b-it" in packages/models/src/models/google.ts and
remove the audio: true flag (or explicitly set audio: false/remove audio-related
content types) so this provider/model pair only accepts text inputs and won't be
selected for audio/input_audio chat completions.
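The gating the routing comments above ask for, filtering every candidate mapping by audio capability before any selection or fallback logic runs, can be sketched as follows (the `ProviderMapping` shape and field names here are assumptions, not the real chat.ts types):

```typescript
// Minimal sketch of an always-on audio eligibility filter. The mapping
// shape is hypothetical; only the filtering idea mirrors the review.
interface ProviderMapping {
	providerId: string;
	audio?: boolean;
	supportedAudioFormats?: string[];
}

function filterAudioEligible(
	mappings: ProviderMapping[],
	hasAudio: boolean,
	audioFormats: string[],
): ProviderMapping[] {
	if (!hasAudio) {
		return mappings; // text-only requests are unaffected
	}
	// Keep only mappings that advertise audio AND cover every requested format.
	return mappings.filter(
		(m) =>
			m.audio === true &&
			audioFormats.every((f) => m.supportedAudioFormats?.includes(f) ?? false),
	);
}
```

Applying this before selection means a pinned or single-mapping provider with zero eligible mappings can be rejected up front (a 400) instead of silently falling through to a default-region or text-only mapping.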
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: cbe06e32-bab5-4ab9-a048-4f05afbba3c7
⛔ Files ignored due to path filters (1)
`apps/gateway/src/test-fixtures/test-audio.wav` is excluded by `!**/*.wav`
📒 Files selected for processing (25)
- apps/gateway/src/app.ts
- apps/gateway/src/audio.e2e.ts
- apps/gateway/src/chat/chat.ts
- apps/gateway/src/chat/schemas/completions.ts
- apps/gateway/src/chat/tools/extract-token-usage.ts
- apps/gateway/src/chat/tools/messages-contain-audio.spec.ts
- apps/gateway/src/chat/tools/messages-contain-audio.ts
- apps/gateway/src/chat/tools/parse-provider-response.ts
- apps/gateway/src/chat/tools/transform-response-to-openai.ts
- apps/gateway/src/lib/costs.spec.ts
- apps/gateway/src/lib/costs.ts
- apps/gateway/src/responses/tools/convert-chat-to-responses.ts
- apps/gateway/src/responses/tools/convert-streaming-to-responses.ts
- apps/worker/src/services/global-stats-aggregator.ts
- apps/worker/src/services/project-stats-aggregator.ts
- packages/actions/src/prepare-request-body.ts
- packages/actions/src/transform-google-messages.spec.ts
- packages/actions/src/transform-google-messages.ts
- packages/db/migrations/1778260730_nervous_karnak.sql
- packages/db/migrations/meta/1778260730_snapshot.json
- packages/db/migrations/meta/_journal.json
- packages/db/src/schema.ts
- packages/models/src/models.ts
- packages/models/src/models/google.ts
- packages/models/src/types.ts
```ts
if (!model) {
	continue;
}
```
Don’t silently skip missing model IDs in these assertions.
Lines 122, 148, and 168 currently continue on missing models, which can hide catalog regressions and still pass the suite. Assert model existence instead.
Suggested fix
```diff
-if (!model) {
-	continue;
-}
+expect(model, `Missing model definition for ${id}`).toBeDefined();
+if (!model) {
+	throw new Error(`Missing model definition for ${id}`);
+}
```

Also applies to: 148-150, 168-170
- When the request includes input_audio, only consider provider mappings with audio: true and a supported audio format; return 400 if none qualify.
- Let non-allowlisted models participate in auto-routing when they support audio.
- Record audio_tokens for streaming and cached responses.
- Run the audio e2e suite sequentially.
- Remove the incorrect audio: true flag on Groq gemma2-9b-it.
- Introduced `audioInputCost` and `audioInputTokens` fields in the activity and logs schemas.
- Updated related calculations and API responses to include audio input costs.
- Ensured consistency across various API interfaces for audio input handling.
- Upgraded `@llmgateway/ai-sdk-provider` to version 3.8.0, introducing new audio capabilities.
- Added `audio` field to model provider mapping schema and updated related components to handle audio attachments.
- Enhanced chat UI to support audio messages, including playback controls for audio files.
- Updated internal models and types to reflect audio capabilities across the application.

This update improves the overall functionality and user experience for audio input handling.
…eat/google-audio-input
🧹 Nitpick comments (1)
apps/playground/src/components/playground/chat-ui.tsx (1)
798-809: ⚡ Quick win

Consider setting `accept=""` when neither images nor audio are supported.

When `supportsImages` and `supportsAudio` are both `false`, the `accept` prop is `undefined`, which causes `matchesAccept` in `prompt-input.tsx` to return `true` for all file types. Users could paste files and see them appear in the attachments list, but those files won't be sent with the message (lines 597-608 only attach files when the model supports them). Setting `accept=""` explicitly rejects all files when neither capability is supported, triggering the "No files match the accepted types" error and providing clearer feedback.

📋 Suggested improvement
```diff
 accept={
 	supportsImages && supportsAudio
 		? "image/*,audio/*"
 		: supportsImages
 			? "image/*"
 			: supportsAudio
 				? "audio/*"
-				: undefined
+				: ""
 }
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@apps/playground/src/components/playground/chat-ui.tsx` around lines 798-809: the accept prop currently becomes undefined when supportsImages and supportsAudio are both false, allowing matchesAccept to accept any file; update the accept expression in the JSX (the prop named accept in chat-ui.tsx, keyed on supportsImages/supportsAudio) to return an empty string ("") instead of undefined when neither supportsImages nor supportsAudio is true so files are explicitly rejected; keep multiple and globalDrop as-is (or also set globalDrop to false if you want to prevent drops) and ensure prompt-input.tsx's matchesAccept will then correctly show the "No files match the accepted types" error.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 76f38cf5-034d-413c-8706-a53e666454fa
⛔ Files ignored due to path filters (5)
- `apps/code/src/lib/api/v1.d.ts` is excluded by `!**/v1.d.ts`
- `apps/playground/src/lib/api/v1.d.ts` is excluded by `!**/v1.d.ts`
- `apps/ui/src/lib/api/v1.d.ts` is excluded by `!**/v1.d.ts`
- `ee/admin/src/lib/api/v1.d.ts` is excluded by `!**/v1.d.ts`
- `pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (8)
- apps/api/src/routes/internal-models.ts
- apps/playground/package.json
- apps/playground/src/components/ai-elements/prompt-input.tsx
- apps/playground/src/components/playground/chat-page-client.tsx
- apps/playground/src/components/playground/chat-ui.tsx
- apps/playground/src/lib/fetch-models.ts
- apps/playground/src/lib/mapmodels.ts
- apps/playground/src/lib/types.ts
✅ Files skipped from review due to trivial changes (2)
- apps/playground/package.json
- apps/playground/src/lib/types.ts
- Introduced `audios` field in message schemas across various routes to handle audio attachments.
- Updated validation schemas to ensure audio content can be included in messages.
- Enhanced chat UI and related components to support audio playback and management.
- Added database migration to accommodate the new `audios` column in the message table.

This update improves the functionality of chat messages by allowing users to send and receive audio files, enhancing the overall user experience.
- Introduced a new UI component for audio attachments in the PromptInput, featuring a Music2Icon for visual representation.
- Updated the attachment handling logic to support the audio type, enhancing user interaction with audio files.
- Removed the previous audio playback element to streamline the attachment display.

This update improves the user experience by providing a clearer interface for managing audio attachments.
@coderabbitai resume

✅ Actions performed: Reviews resumed.
Summary
Adds Google Gemini audio input support to the gateway, end-to-end.
- Accepts `input_audio` blocks on `/v1/chat/completions` and routes them to AI Studio or Vertex with the correct provider-specific MIME.
- Reports `audio_input_cost` in the response, and persists it on the log row plus hourly/daily aggregates.
- `model: "auto"` only routes to providers that accept the requested format. Unsupported pairs (e.g. Vertex + `aiff`) return `400`, not `500`.

What's included
Request path
- `chat/schemas/completions.ts` accepts the union of formats supported by either Google variant.
- `transform-google-messages.ts` maps formats to provider-correct MIMEs (AI Studio: `audio/aac`, `audio/aiff`; Vertex: `audio/x-aac` plus Vertex-only formats). Exports `googleProviderSupportsAudioFormat` and `UnsupportedAudioFormatError`.

Routing
- `chat.ts` + `messages-contain-audio.ts` detect audio and formats; `filterEligibleModelProviders` skips providers without `audio: true` and Google providers that can't handle every requested format.
- `app.ts` maps `UnsupportedAudioFormatError` to `400`.

Pricing & billing
- `ProviderModelMapping`: `audio?: boolean`, `inputAudioPrice?: number`, `cachedInputAudioPrice?: number`. 43 chat-capable Gemini mappings flagged `audio: true`; image-only and Veo entries unflagged.
- `extract-token-usage.ts` / `parse-provider-response.ts` pull per-modality counts from `usageMetadata.{promptTokensDetails, cacheTokensDetails}`.
- `costs.ts` partitions cached text vs cached audio and bills cached audio at `cachedInputAudioPrice`.

Response shape
- `transform-response-to-openai.ts` emits `audio_input_cost` in `cost_details`. OpenAPI schema and Responses API mirrors declare it.

Persistence & analytics
- New columns on `log` (`audioInputTokens`, `audioInputCost`) and on the four hourly stats tables (`audioInputCost`). Worker aggregators roll it up.

Playground
- `prompt-input.tsx` accepts audio attachments alongside images, with a `Music2Icon` chip representing each audio file in the attachment row.
- New `audios` column on the `message` table; chat read/write API routes (`chats.ts`, `public-chat-shares.ts`) surface it on every message payload.
- Migrations (including `message.audios`) consolidated into a single file: `1778503042_brave_tomorrow_man.sql`.

Tests
- `audio.e2e.ts` POSTs an audio message to every audio-priced Google provider and asserts `audioTokens × inputAudioPrice` matches the returned cost.

Summary by CodeRabbit
New Features
Bug Fixes
Tests
Chores