
Conversation


@butnarurazvan butnarurazvan commented Dec 4, 2025

Title

Fix handling of token array input decoding for the embeddings endpoint

Relevant issues

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

  • I have added testing in the tests/litellm/ directory. Adding at least 1 test is a hard requirement - see details
  • I have added a screenshot of my new test passing locally
  • My PR passes all unit tests on make test-unit
  • My PR's scope is as isolated as possible, it only solves 1 specific problem

Type

🐛 Bug Fix

Changes

  • Changed from using get_deployment to get_deployment_by_model_group_name, since data["model"] is a model name, not a model ID
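The bullet above can be illustrated with a minimal, self-contained sketch. Note this is a hypothetical mock, not LiteLLM's actual Router class or signatures: it only shows why a lookup keyed on the internal deployment ID misses when the request carries the user-facing model name, while a lookup keyed on the model group name succeeds.

```python
# Hypothetical sketch (not LiteLLM's real classes): requests carry a model
# *name* in data["model"], so lookups keyed on the internal ID return None.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Deployment:
    model_id: str          # internal deployment ID, e.g. "dep-abc123"
    model_group_name: str  # user-facing name, e.g. "gemini-embedding"


class Router:
    def __init__(self, deployments: List[Deployment]) -> None:
        self.deployments = deployments

    def get_deployment(self, model_id: str) -> Optional[Deployment]:
        # Matches only on the internal deployment ID.
        return next((d for d in self.deployments if d.model_id == model_id), None)

    def get_deployment_by_model_group_name(self, name: str) -> Optional[Deployment]:
        # Matches on the user-facing model group name instead.
        return next(
            (d for d in self.deployments if d.model_group_name == name), None
        )


router = Router([Deployment("dep-abc123", "gemini-embedding")])
data = {"model": "gemini-embedding"}  # clients send the name, not the ID

# Old lookup misses because data["model"] is not an ID:
assert router.get_deployment(data["model"]) is None
# Lookup by model group name finds the deployment:
assert router.get_deployment_by_model_group_name(data["model"]) is not None
```

The design point is that the embeddings request body never contains the internal deployment ID, so any ID-keyed lookup silently fails before token decoding can happen.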


vercel bot commented Dec 4, 2025

The latest updates on your projects.

Project | Deployment | Preview | Comments | Updated (UTC)
litellm | Ready | Preview | Comment | Dec 4, 2025 7:13am


CLAassistant commented Dec 4, 2025

CLA assistant check
All committers have signed the CLA.

@butnarurazvan butnarurazvan changed the title from "Fix handling token array input decoding for embeddings" to "fix(embeddings): fix handling token array input decoding for embeddings" on Dec 4, 2025
@butnarurazvan butnarurazvan marked this pull request as draft December 4, 2025 07:32
@butnarurazvan butnarurazvan marked this pull request as ready for review December 4, 2025 07:32
@butnarurazvan
Author

@ishaan-jaff Can you please help with merging this PR?
This issue appeared in the LangChain integration with Gemini embeddings models.
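For context on why this comes up with LangChain: OpenAI-compatible clients may send embeddings input as token arrays (list[int] or list[list[int]]) rather than plain strings, and providers that only accept text need those arrays decoded back first. The sketch below is self-contained and hypothetical: it uses a toy vocabulary in place of a real tokenizer (such as tiktoken), and `normalize_embedding_input` is an illustrative helper name, not a LiteLLM function.

```python
# Toy codec standing in for a real tokenizer. The point is only the shape
# handling: strings pass through, token arrays get decoded to text.
VOCAB = {1: "hello", 2: " ", 3: "world"}


def toy_decode(tokens):
    """Decode a single token array back to a string using the toy vocabulary."""
    return "".join(VOCAB[t] for t in tokens)


def normalize_embedding_input(value):
    """Return text input whether the client sent strings or token arrays.

    Accepted shapes: str, list[str], list[int], list[list[int]].
    """
    if isinstance(value, list) and value and isinstance(value[0], int):
        return toy_decode(value)  # single token array
    if isinstance(value, list) and value and isinstance(value[0], list):
        return [toy_decode(t) for t in value]  # batch of token arrays
    return value  # already text (str or list[str])


assert normalize_embedding_input([1, 2, 3]) == "hello world"
assert normalize_embedding_input([[1], [3]]) == ["hello", "world"]
assert normalize_embedding_input("hello world") == "hello world"
```

This is why the deployment lookup in this PR matters: the decoding step needs the deployment's tokenizer configuration, so a failed lookup means token-array input cannot be decoded at all.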

