Conversation

@necrodome
Collaborator

No description provided.

@charliecreates charliecreates bot requested a review from CharlieHelps December 2, 2025 22:15
@necrodome necrodome marked this pull request as draft December 2, 2025 22:15
@netlify

netlify bot commented Dec 2, 2025

Deploy Preview for fireproof-ai-builder canceled.

Name Link
🔨 Latest commit f166dc4
🔍 Latest deploy log https://app.netlify.com/projects/fireproof-ai-builder/deploys/6931f8a3d05525000825b3b5

@github-actions

github-actions bot commented Dec 2, 2025

🎉 Hosting Preview Deployment

Preview URL: https://pr-717-vibes-hosting-v2.jchris.workers.dev
Worker: pr-717-vibes-hosting-v2
Commit: 343c3d5

Test the preview:

  1. Visit the preview URL above
  2. Test hosting functionality (KV assets, queue operations)
  3. Verify API integrations work correctly
  4. Check domain routing is disabled (workers.dev only)

This preview will be automatically updated on each push to this PR

Contributor

@charliecreates charliecreates bot left a comment


The new OpenRouter image endpoints are generally well-structured, but there are several contract mismatches and behavioral gaps between the public API and what is actually sent upstream (e.g., unused parameters like quality, background, and n, and always-undefined revised_prompt). Error handling currently returns the full upstream error payload to clients, which may be overly permissive, and image-size handling silently falls back to 1:1 for unknown values. The base64 conversion approach is also potentially inefficient for large images. Addressing these points will improve correctness, performance, and clarity of the API surface.

Additional notes (1)
  • Maintainability | hosting/base/endpoints/openrouter-image.ts:165-199
    In the error path, you’re trying to parse any non-OK response as text first and only then as JSON. This can produce misleading details when the response is actually JSON but includes binary or very large payloads, and it duplicates the more robust JSON-vs-text handling used later in editImage. Consider aligning the error-handling strategy across generateImage and editImage for consistency and to avoid edge-case parsing failures.
Summary of Changes

  • Added a new openrouter-image.ts endpoint module implementing OpenRouter-based image generation and editing.
  • Implemented helpers: sizeToAspectRatio, transformResponse, and arrayBufferToBase64, plus core generateImage and editImage functions.
  • Introduced two Hono/Chanfana routes: OpenRouterImageGenerate (JSON body) and OpenRouterImageEdit (multipart/form-data) with OpenAPI schemas.
  • Exported the new endpoints from hosting/base/index.ts and wired them into the main app router in hosting/pkg/src/index.ts under /api/openrouter-image/* paths.

Comment on lines +41 to +60
function sizeToAspectRatio(size: string): string {
  const sizeMap: Record<string, string> = {
    "1024x1024": "1:1",
    "1344x768": "16:9",
    "768x1344": "9:16",
    "1248x832": "3:2",
    "832x1248": "2:3",
    "1184x864": "4:3",
    "864x1184": "3:4",
    "1152x896": "5:4",
    "896x1152": "4:5",
    "1536x672": "21:9",
    // Legacy OpenAI sizes - map to closest aspect ratio
    "1792x1024": "16:9",
    "1024x1792": "9:16",
    "1536x1024": "3:2",
    "1024x1536": "2:3",
    auto: "1:1",
  };
  return sizeMap[size] || "1:1";
}
Contributor


sizeToAspectRatio returns a default of "1:1" when the size is not in the map, but there is no validation or logging when this fallback is used. That can hide client bugs (e.g., typos in size) and silently change the output aspect ratio. For an API surface that aims to emulate OpenAI semantics, it would be safer either to validate against the supported sizes or at least log when falling back so misconfigurations are observable.

Additionally, the mapping of legacy sizes (1792x1024, 1024x1792, 1536x1024, 1024x1536) to aspect ratios is an approximation; if OpenRouter is picky about these, you may want stricter handling instead of quietly coercing to the nearest ratio.

Suggestion

Consider validating size against the known keys and returning a 400 error (or at least warning-log) when an unsupported value is provided, instead of silently defaulting to "1:1". For example:

function sizeToAspectRatio(size: string): string {
  const sizeMap: Record<string, string> = { /* ... */ };
  if (!sizeMap[size]) {
    console.warn(`OpenRouter Image: unsupported size "${size}", falling back to 1:1`);
  }
  return sizeMap[size] || "1:1";
}

Or, for stricter behavior, have the request handlers validate size and respond with a 400 when invalid. Reply with "@CharlieHelps yes please" if you'd like me to add a commit with this change and the associated request validation.
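A minimal sketch of that stricter 400 path, for illustration only: `SUPPORTED_SIZES` and `validateSize` are hypothetical names (the set mirrors the keys of `sizeMap` above), and the error shape is an assumption rather than the endpoint's actual contract.

```typescript
// Hypothetical handler-side validation: reject unknown sizes with a 400
// before sizeToAspectRatio ever sees them.
const SUPPORTED_SIZES = new Set([
  "1024x1024", "1344x768", "768x1344", "1248x832", "832x1248",
  "1184x864", "864x1184", "1152x896", "896x1152", "1536x672",
  "1792x1024", "1024x1792", "1536x1024", "1024x1536", "auto",
]);

function validateSize(size: string): Response | null {
  if (SUPPORTED_SIZES.has(size)) return null; // valid: nothing to report
  return new Response(
    JSON.stringify({ error: `Unsupported size "${size}"` }),
    { status: 400, headers: { "Content-Type": "application/json" } },
  );
}
```

A handler would call `validateSize(data.size)` early and return the response if it is non-null, keeping the silent 1:1 fallback out of the request path entirely.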

Contributor


this seems ok to me

Comment on lines +65 to +103
function transformResponse(openRouterResponse: OpenRouterImageResponse): {
  created: number;
  data: Array<{ b64_json?: string; url?: string; revised_prompt?: string }>;
} {
  const data: Array<{
    b64_json?: string;
    url?: string;
    revised_prompt?: string;
  }> = [];

  if (openRouterResponse.choices && openRouterResponse.choices.length > 0) {
    const message = openRouterResponse.choices[0].message;
    // Images are in message.images array per OpenRouter API docs
    if (message?.images && Array.isArray(message.images)) {
      for (const item of message.images) {
        if (item.type === "image_url" && item.image_url?.url) {
          const url = item.image_url.url;
          // OpenRouter returns base64 data URLs
          if (url.startsWith("data:image/")) {
            // Extract base64 data from data URL
            const base64Match = url.match(/^data:image\/[^;]+;base64,(.+)$/);
            if (base64Match) {
              data.push({ b64_json: base64Match[1] });
            } else {
              data.push({ url });
            }
          } else {
            data.push({ url });
          }
        }
      }
    }
  }

  return {
    created: openRouterResponse.created || Math.floor(Date.now() / 1000),
    data,
  };
}
Contributor


transformResponse currently throws away all but the first choice and does not propagate any text content or metadata that might contain a revised prompt or additional images. If the OpenRouter model returns multiple choices or multiple messages with images, callers expecting OpenAI-like behavior (especially with n > 1) may get fewer images than requested.

Also, revised_prompt is always undefined in the output, so exposing that property in the API schema is misleading unless you intend to extract it from the response later.

Suggestion

If you want n to be respected more faithfully, aggregate images across all choices and message entries (if OpenRouter can return more than one), and consider wiring through any revised prompt information if the provider exposes it. For example, iterate over openRouterResponse.choices and push all qualifying image_url items into data.

If OpenRouter does not provide revised prompts, you might want to drop revised_prompt from the output schema to avoid confusion. Reply with "@CharlieHelps yes please" if you'd like me to add a commit that aggregates across all choices and tidies up the schema.
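A sketch of that aggregation, assuming the response shape shown in the snippet above; `collectImages` is a hypothetical helper, and whether OpenRouter actually returns multiple choices with images is unverified.

```typescript
// Minimal shapes mirroring the fields the snippet above reads.
interface ImageItem {
  type: string;
  image_url?: { url: string };
}
interface Choice {
  message?: { images?: ImageItem[] };
}

// Collect qualifying images across every choice, not just choices[0].
function collectImages(choices: Choice[]): Array<{ b64_json?: string; url?: string }> {
  const data: Array<{ b64_json?: string; url?: string }> = [];
  for (const choice of choices) {
    for (const item of choice.message?.images ?? []) {
      if (item.type !== "image_url" || !item.image_url?.url) continue;
      const url = item.image_url.url;
      // Prefer extracting b64_json from base64 data URLs, as above.
      const match = url.match(/^data:image\/[^;]+;base64,(.+)$/);
      data.push(match ? { b64_json: match[1] } : { url });
    }
  }
  return data;
}
```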

Contributor


multi-image would be cool; maybe we should at least log when we're getting that, so we know it's worth implementing

Comment on lines +105 to +147
// Core function to generate images using OpenRouter API
async function generateImage(
  params: ImageGenerateRequest,
  apiKey: string,
  referer: string,
): Promise<Response> {
  try {
    const {
      prompt,
      model = "google/gemini-2.5-flash-image",
      n = 1,
      size = "auto",
      userId = "anonymous",
    } = params;

    if (!prompt) {
      return new Response(JSON.stringify({ error: "Prompt is required" }), {
        status: 400,
        headers: {
          "Content-Type": "application/json",
          "Access-Control-Allow-Origin": "*",
        },
      });
    }

    const aspectRatio = sizeToAspectRatio(size);

    // Prepare OpenRouter chat completion request with image modality
    const requestBody = {
      model,
      messages: [
        {
          role: "user",
          content: prompt,
        },
      ],
      modalities: ["image", "text"],
      image_config: {
        aspect_ratio: aspectRatio,
      },
      n,
      user: userId,
    };
Contributor


The core generateImage request body drops several user-controllable parameters (quality, background, output_format, output_compression, moderation) that are accepted at the API surface and documented as being passed to the model if supported. This mismatch between the public contract and what is actually sent upstream will be confusing for clients and can be considered a behavioral bug.

If OpenRouter or specific models support these fields (perhaps under different names), they should be forwarded or translated; if not, the public schema and descriptions should be tightened to avoid implying support that isn't there.

Suggestion

Either:

  1. Wire through the optional parameters into requestBody with the correct field names (e.g., include quality, background, etc., under whatever keys OpenRouter expects), or
  2. Remove/adjust these parameters from the public schema and handler construction so that clients are not led to believe they have an effect.

For example, if OpenRouter supports an image_config.quality field, extend image_config accordingly. Reply with "@CharlieHelps yes please" if you'd like me to add a commit that forwards supported options and removes or documents unsupported ones.
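A hedged sketch of option 1: `buildImageConfig` is a hypothetical helper, and whether OpenRouter accepts these `image_config` keys, or under which names, is entirely an assumption; only a few of the surfaced parameters are shown.

```typescript
// Hypothetical subset of the surfaced options.
interface ImageOptions {
  quality?: string;
  output_format?: string;
  output_compression?: number | null;
}

// Build the image_config object, forwarding only options the caller
// actually set ("auto" and null are treated as "use provider default").
function buildImageConfig(
  aspectRatio: string,
  opts: ImageOptions,
): Record<string, unknown> {
  const config: Record<string, unknown> = { aspect_ratio: aspectRatio };
  if (opts.quality && opts.quality !== "auto") config.quality = opts.quality;
  if (opts.output_format) config.format = opts.output_format;
  if (opts.output_compression != null) config.compression = opts.output_compression;
  return config;
}
```

The point is less the exact field names than keeping the public schema and the upstream payload built from one place, so they cannot drift apart silently.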

Contributor


ask Charlie to do this?

Comment on lines +165 to +199
if (!openRouterResponse.ok) {
  let errorData;
  let errorText;

  try {
    errorText = await openRouterResponse.text();
    try {
      errorData = JSON.parse(errorText);
    } catch {
      errorData = { message: errorText };
    }
  } catch (parseError: unknown) {
    console.error(
      `❌ OpenRouter Image: Error parsing error response:`,
      parseError,
    );
    errorData = {
      message: `Failed to parse error response: ${parseError instanceof Error ? parseError.message : String(parseError)}`,
    };
  }

  console.error(`❌ OpenRouter Image: Error generating image:`, errorData);
  return new Response(
    JSON.stringify({
      error: "Failed to generate image",
      details: errorData,
    }),
    {
      status: openRouterResponse.status,
      headers: {
        "Content-Type": "application/json",
        "Access-Control-Allow-Origin": "*",
      },
    },
  );
Contributor


In this echo-back of the OpenRouter error, the entire parsed error object is returned to the client in details. Depending on what OpenRouter includes (e.g., internal IDs, request traces, or potentially sensitive config hints), this might be more information than you want to expose to end users in production.

While detailed errors are helpful during development, it is usually better to log full details server-side and return a sanitized, structured error to clients.

Suggestion

Consider limiting the fields from errorData that are returned to the client (e.g., message, code) and including the full errorData only in server logs. For instance:

console.error("❌ OpenRouter Image: Error generating image:", errorData);
const clientError = typeof errorData === "object" && errorData !== null
  ? { message: (errorData as any).message ?? "Upstream error" }
  : { message: String(errorData) };

return new Response(JSON.stringify({ error: "Failed to generate image", details: clientError }), { /* ... */ });

Reply with "@CharlieHelps yes please" if you'd like me to add a commit that sanitizes the error payloads consistently for both generate and edit flows.

Comment on lines +403 to +410
function arrayBufferToBase64(buffer: ArrayBuffer): string {
  const bytes = new Uint8Array(buffer);
  let binary = "";
  for (let i = 0; i < bytes.byteLength; i++) {
    binary += String.fromCharCode(bytes[i]);
  }
  return btoa(binary);
}
Contributor


The manual ArrayBuffer → base64 conversion using a loop over Uint8Array will be inefficient for large files and may hit call-stack / memory limits for big uploads because btoa expects a string and the concatenation grows in O(n²) time.

On platforms like Cloudflare Workers or modern runtimes you can usually use Buffer.from or btoa(String.fromCharCode(...bytes)) in chunks to avoid building one huge intermediate string.

Suggestion

Refactor arrayBufferToBase64 to avoid quadratic string concatenation. For example, in environments with Buffer:

function arrayBufferToBase64(buffer: ArrayBuffer): string {
  return Buffer.from(buffer).toString("base64");
}

If Buffer is not available, consider chunking the conversion instead of concatenating per byte. Reply with "@CharlieHelps yes please" if you'd like me to add a commit with an optimized, environment-compatible implementation.
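A sketch of that chunked fallback, assuming a runtime where `btoa` is globally available (Cloudflare Workers, Node 16+); the 32 KiB chunk size is an arbitrary choice, picked to stay well within `String.fromCharCode` argument limits.

```typescript
// Chunked ArrayBuffer → base64 conversion for runtimes without Buffer.
// Avoids both per-byte string concatenation and a single huge
// String.fromCharCode(...) call over the whole buffer.
function arrayBufferToBase64Chunked(buffer: ArrayBuffer): string {
  const bytes = new Uint8Array(buffer);
  const CHUNK = 0x8000; // 32 KiB per fromCharCode call
  let binary = "";
  for (let i = 0; i < bytes.length; i += CHUNK) {
    binary += String.fromCharCode(...bytes.subarray(i, i + CHUNK));
  }
  return btoa(binary);
}
```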

Collaborator Author


@CharlieHelps yes please

Contributor

@charliecreates charliecreates bot Dec 4, 2025


Expand this to see my work.
  • Inspected the tail of hosting/base/endpoints/openrouter-image.ts to confirm the editImage handler, error handling, and output_compression/moderation/userId fields are wired through correctly for the new image edit endpoint.
  • Reviewed hosting/base/package.json, hosting/base/tsconfig.json, and hosting/pkg/package.json to understand the Cloudflare Worker runtime setup and validate compatibility of base64/binary helpers used in openrouter-image.ts.
  • Searched the hosting workspace for base64 helpers and verified the new arrayBufferToBase64 in openrouter-image.ts is consistent with existing patterns like base64ToArrayBuffer in hosting/pkg/src/queue-consumer.ts and related data URL handling.
  • Confirmed that the implementation in hosting/base/endpoints/openrouter-image.ts matches the behavior @necrodome approved with "yes please", so no extra code changes are required to satisfy that review comment.
  • Validated the monorepo package.json Node 22+ engine and check/hosting:check scripts that exercise the new OpenRouter image endpoints via build, lint, and tests.
  • Verified in a Node 22 environment that atob/btoa are globally available, keeping the base64 handling patterns around openrouter-image.ts supported without additional polyfills.
  • Searched the repo for openrouter-image references and confirmed hosting/base/endpoints/openrouter-image.ts is currently an isolated endpoint without direct test coverage elsewhere.
  • Refactored arrayBufferToBase64 in hosting/base/endpoints/openrouter-image.ts to use Buffer.from(...).toString("base64") when Buffer exists and a chunked Uint8Array + btoa path otherwise, improving performance and robustness for large buffers.
  • Updated arrayBufferToBase64 in hosting/base/endpoints/openrouter-image.ts to gate Buffer usage behind a Node-like process.versions.node check and improved the no-btoa error message to reflect the expected Node/browser-like runtimes.
  • Re-ran pnpm check from the repo root (format, build, lint, tests, and @vibes.diy/hosting-tests) and confirmed all suites still pass with the latest arrayBufferToBase64 changes, leaving hosting/base/endpoints/openrouter-image.ts as the only modified file in git status.

Contributor


#717 (comment)

I’ve implemented the optimized base64 conversion helper and wired it into the existing edit flow.

Changes
  • Replaced the naive arrayBufferToBase64 implementation with a more efficient, environment-aware version:
    • Uses Buffer.from(...).toString("base64") when running in a Node-like environment (detected via process.versions.node and Buffer), which covers tests and local tooling.
    • Falls back to a chunked Uint8Array → string → btoa path when Buffer is not available, to avoid the previous O(n²) string concatenation pattern and big-argument String.fromCharCode calls on the entire buffer at once.
    • Throws a clear error if neither Buffer nor btoa is available: "arrayBufferToBase64: no base64 encoder available (expected Node Buffer or global btoa)". That makes unsupported runtimes fail loudly instead of silently doing something unexpected.

This keeps Cloudflare Workers (which provide btoa) and the Node/Vitest environment happy, while being much safer for large image uploads.

Verification
# Monorepo format, build, lint, tests, plus hosting unit tests
$ pnpm check
  • pnpm check runs:
    • prettier --write over the repo
    • core-cli tsc (TypeScript build)
    • eslint . plus tsc --noEmit in hosting/pkg
    • Vitest multi-project test suite (vibes.diy, use-vibes, prompts, call-ai, etc.)
    • pnpm --filter @vibes.diy/hosting-tests test (10 test files, 82 tests, all passing)
  • All steps completed successfully after the arrayBufferToBase64 change (no new TypeScript, lint, or test failures).

Self-review note: the internal review tool suggested (a) further loosening the environment detection to use Buffer whenever defined, and (b) adding a pure-JS fallback for runtimes without Buffer or btoa. Given this helper is only used in our Node + Cloudflare Worker paths, I kept the more conservative Node check (process.versions.node + Buffer) and a clear error when neither encoder is available, rather than expanding scope to support unknown runtimes.

Comment on lines +489 to +523
async handle(c: HonoContext) {
  try {
    const data = await c.req.json();

    // Require authentication
    const user = c.get("user");
    if (!user) {
      return c.json(
        {
          error: {
            message:
              "Authentication required. Please log in to use AI features.",
            type: "authentication_error",
            code: 401,
          },
        },
        401,
      );
    }

    const requestBody: ImageGenerateRequest = {
      prompt: data.prompt,
      model: data.model || "google/gemini-2.5-flash-image",
      n: data.n || 1,
      quality: data.quality || "auto",
      size: data.size || "auto",
      background: data.background || "auto",
      output_format: data.output_format || "png",
      output_compression:
        data.output_compression !== undefined
          ? data.output_compression
          : null,
      moderation: data.moderation || "auto",
      userId: user?.userId || "anonymous",
    };
Contributor


In OpenRouterImageGenerate.handle, the request body is accepted as any (await c.req.json()), and the schema declared above is not actually used to perform runtime validation or coercion. That means malformed requests (wrong types, missing required fields beyond prompt) can slip through until deeper logic fails.

Given you already define a detailed Zod schema, you can leverage it for runtime validation to reduce duplicated defaulting logic and prevent inconsistent states.

Suggestion

Use the existing Zod schema (or a shared version of it) to parse and validate the request body, e.g.:

const parsed = this.schema.request.body.content["application/json"].schema.parse(await c.req.json());
const requestBody: ImageGenerateRequest = {
  ...parsed,
  userId: user.userId ?? "anonymous",
};

This avoids manual defaulting and ensures the implementation matches the OpenAPI schema. Reply with "@CharlieHelps yes please" if you'd like me to add a commit that wires the Zod schema into the handler and removes redundant defaulting.

Comment on lines +603 to +629
const requestBody: ImageEditRequest = {
  prompt: "",
  userId: "anonymous",
};

const formData = await c.req.formData();

requestBody.prompt = formData.get("prompt")?.toString() || "";
requestBody.model =
  formData.get("model")?.toString() || "google/gemini-2.5-flash-image";
requestBody.n = parseInt(formData.get("n")?.toString() || "1", 10);
requestBody.quality = formData.get("quality")?.toString() || "auto";
requestBody.size = formData.get("size")?.toString() || "auto";
requestBody.background = formData.get("background")?.toString() || "auto";
requestBody.output_format =
  formData.get("output_format")?.toString() || "png";

const output_compression_str = formData
  .get("output_compression")
  ?.toString();
requestBody.output_compression = output_compression_str
  ? parseInt(output_compression_str, 10)
  : null;

requestBody.moderation = formData.get("moderation")?.toString() || "auto";
requestBody.userId = user?.userId || "anonymous";

Contributor


OpenRouterImageEdit accepts an n parameter and parses it into requestBody.n, but the downstream editImage implementation ignores n entirely. This is a behavioral inconsistency: clients may expect multiple edited images when specifying n, but will always get the model’s default behavior.

If multi-image editing is not supported, this parameter should be removed from the API contract; if it is supported, it should be forwarded to OpenRouter like in generateImage.

Suggestion

Either:

  • Forward n into the request body sent in editImage (i.e., include n in the OpenRouter payload, as generateImage already does), or
  • Remove n from the accepted form fields and any associated documentation so that the API surface accurately reflects what is supported.

Reply with "@CharlieHelps yes please" if you'd like me to add a commit that makes n effective in the edit flow or cleans it out if unsupported.

@charliecreates charliecreates bot removed the request for review from CharlieHelps December 2, 2025 22:18
Contributor

@jchris jchris left a comment


Charlie feedback looks decent

@necrodome necrodome force-pushed the selem/openrouter-image-nano-banana branch from 3994eb2 to ce8e7fb Compare December 4, 2025 21:00
@necrodome
Collaborator Author

@CharlieHelps review

Contributor

@charliecreates charliecreates bot left a comment


The new OpenRouter image endpoints are broadly well-structured, but there are several important contract and behavioral gaps: multiple documented parameters (quality, background, output_format, output_compression, moderation, and n for edits) are accepted yet not actually forwarded to OpenRouter, and transformResponse silently discards images beyond the first choice. Error handling in generateImage also returns the full upstream error payload to clients and uses a less robust parsing approach than editImage, while sizeToAspectRatio and the naive arrayBufferToBase64 implementation introduce hidden behavior and performance risks. Finally, the call-ai client now defaults to a different provider and model without an obvious migration path, which may surprise existing consumers unless treated as an explicit breaking change.

Additional notes (1)
  • Maintainability | call-ai/pkg/image.ts:10-10
    In call-ai/pkg/image.ts, the default model is changed from "gpt-image-1" to "google/gemini-2.5-flash-image" and the endpoint paths are switched to /api/openrouter-image/*. This is a significant behavioral change for existing imageGen callers: they’ll now hit a different backend, with different capabilities and potentially different output characteristics, without any opt-in. If that’s intended, it should be clearly communicated at the package level; if not, you may want a feature flag or a major-version bump to avoid surprising consumers.
Summary of Changes

  • Switched the call-ai image generation client from a generic OpenAI-style endpoint to new OpenRouter-backed image endpoints, updating default model to google/gemini-2.5-flash-image.
  • Updated integration and unit tests in call-ai to expect /api/openrouter-image/* paths and the new default model in the request body.
  • Added a new hosting/base/endpoints/openrouter-image.ts module implementing OpenRouter-based image generation and editing, including helpers (sizeToAspectRatio, transformResponse, arrayBufferToBase64) and Hono/Chanfana routes (OpenRouterImageGenerate, OpenRouterImageEdit).
  • Exported the new OpenRouter image endpoints from hosting/base/index.ts and wired them into the main hosting app router in hosting/pkg/src/index.ts under /api/openrouter-image/generate and /api/openrouter-image/edit.

@necrodome necrodome marked this pull request as ready for review December 5, 2025 15:38
@charliecreates charliecreates bot requested a review from CharlieHelps December 5, 2025 15:38
@necrodome necrodome requested a review from jchris December 5, 2025 15:38

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.


Comment on lines +608 to +612
const formData = await c.req.formData();

requestBody.prompt = formData.get("prompt")?.toString() || "";
requestBody.model =
  formData.get("model")?.toString() || "google/gemini-2.5-flash-image";


P1 Badge Avoid reading multipart body twice in edit handler

OpenRouterImageEdit.handle parses c.req.formData() to populate requestBody (lines 608‑612) and then calls editImage, which also calls c.req.formData() around line 254 to read the uploaded images. Request bodies in the Fetch/Hono pipeline are single-use; the second call will raise “body stream already read” or return an empty form, so every OpenRouter image edit request will fail with a 500 before reaching the OpenRouter API. Reuse the first parsed FormData or pass it into editImage instead of reading the body twice.
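A minimal sketch of the fix described above: parse the multipart body once and hand the parsed FormData down. `handleEdit` and `editImageFromForm` are hypothetical stand-ins for the real handler and a FormData-accepting variant of editImage.

```typescript
// Hypothetical downstream consumer that works from already-parsed
// FormData instead of re-reading the request body.
async function editImageFromForm(prompt: string, form: FormData): Promise<Response> {
  const images = form.getAll("image");
  return new Response(JSON.stringify({ prompt, imageCount: images.length }), {
    headers: { "Content-Type": "application/json" },
  });
}

async function handleEdit(req: Request): Promise<Response> {
  // Fetch-style request bodies are single-use: consume the stream once.
  const form = await req.formData();
  const prompt = form.get("prompt")?.toString() ?? "";
  // Pass the parsed FormData down instead of letting the callee
  // call req.formData() a second time.
  return editImageFromForm(prompt, form);
}
```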


Contributor

@charliecreates charliecreates bot left a comment


The new OpenRouter image endpoints are functionally sound but have several contract mismatches: multiple documented request parameters (quality, background, output_format, output_compression, moderation, and n for edits) are accepted and parsed yet not forwarded to OpenRouter, which will mislead clients. Response handling currently only uses the first choice and never sets revised_prompt, while sizeToAspectRatio silently coerces unknown sizes to 1:1, all of which can hide client or provider behavior. Error handling in generateImage returns the full upstream error object to clients and uses a less robust parsing approach than editImage, and arrayBufferToBase64 is inefficient for large buffers. Finally, the call-ai client’s default model and endpoint have been switched to OpenRouter/Gemini, which is a breaking change for existing consumers unless explicitly versioned and documented.

Additional notes (2)
  • Maintainability | call-ai/pkg/image.ts:10-10
    Switching the default model from "gpt-image-1" to "google/gemini-2.5-flash-image" and changing the default endpoints from /api/openai-image/* to /api/openrouter-image/* is a substantial behavioral change for existing imageGen consumers. Any client relying on OpenAI-style semantics or model behavior will now transparently hit a different provider and model with potentially different output characteristics, without an explicit migration or version bump in call-ai.

Given this is a library function, such a change should generally be treated as a breaking change or at least be clearly communicated and possibly feature-flagged, rather than silently altering default behavior.

  • Maintainability | hosting/pkg/src/index.ts:99-105
    The new OpenRouter image endpoints and the existing OpenAI image endpoints are both registered and exported (ImageGenerate/ImageEdit under /api/openai-image/* and OpenRouterImageGenerate/OpenRouterImageEdit under /api/openrouter-image/*). At the same time, the call-ai client has been switched to only target the OpenRouter routes.

This duality is fine technically, but from an API surface perspective it means you now have two parallel image APIs with subtly different semantics and behavior, while only one is used by the main client. Without clear deprecation or documentation, this can lead to confusion about which one is canonical going forward.

Summary of Changes

  • Switched the call-ai image generation client from the OpenAI-style /api/openai-image/* endpoints and gpt-image-1 default model to new OpenRouter-backed /api/openrouter-image/* endpoints with default model google/gemini-2.5-flash-image.
  • Updated integration and unit tests in call-ai to assert the new endpoint paths and default model in outgoing requests.
  • Added a new hosting/base/endpoints/openrouter-image.ts module implementing OpenRouter image generation and editing, including helpers (sizeToAspectRatio, transformResponse, arrayBufferToBase64) and two Hono/Chanfana routes (OpenRouterImageGenerate, OpenRouterImageEdit).
  • Exported the new OpenRouter image endpoints from hosting/base/index.ts and registered them in the main hosting app router under /api/openrouter-image/generate and /api/openrouter-image/edit.

@charliecreates charliecreates bot removed the request for review from CharlieHelps December 5, 2025 15:42
@necrodome necrodome merged commit d3c46c5 into main Dec 5, 2025
7 checks passed
@github-actions

github-actions bot commented Dec 5, 2025

🧹 Hosting Preview Cleanup

The preview deployment for this PR has been deleted.

Worker: pr-717-vibes-hosting-v2
Status: Cleaned up successfully


Preview resources have been removed from Cloudflare Workers
