
Conversation

@adamantal
Contributor

Description

The service.instance.id identifier is unique to a component (a Lambda execution environment), and exposing this information may have several advantages (a collector-side sketch follows below):

  • due to its manageable cardinality, it is a better candidate for indexing (as opposed to Lambda execution ids, which have much higher cardinality)
  • it helps with debugging caching issues across consecutive Lambda invocations on the same instance, since the opentelemetry-lambda collector is persisted across executions
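As a rough illustration of the idea (not the PR's actual code: the helper name and the uuid-based id are assumptions), a collector-side component could stamp service.instance.id onto incoming log data through the collector's pdata API:

```go
package main

import (
	"fmt"

	"github.com/google/uuid"
	"go.opentelemetry.io/collector/pdata/plog"
)

// addServiceInstanceID is a hypothetical helper: it writes the given id as the
// service.instance.id resource attribute on every ResourceLogs entry, so all
// log records forwarded by this collector instance share one stable,
// low-cardinality identifier.
func addServiceInstanceID(ld plog.Logs, instanceID string) {
	rls := ld.ResourceLogs()
	for i := 0; i < rls.Len(); i++ {
		rls.At(i).Resource().Attributes().PutStr("service.instance.id", instanceID)
	}
}

func main() {
	// One id per collector (extension) process; it survives consecutive
	// invocations because the extension is persisted across executions.
	instanceID := uuid.NewString()

	// Build a tiny Logs payload standing in for data from the Telemetry API.
	ld := plog.NewLogs()
	ld.ResourceLogs().AppendEmpty().
		ScopeLogs().AppendEmpty().
		LogRecords().AppendEmpty().
		Body().SetStr("example log line")

	addServiceInstanceID(ld, instanceID)
	fmt.Println(ld.ResourceLogs().At(0).Resource().Attributes().AsRaw())
}
```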

Testing

Tested live on a Lambda function, on top of #2011:
[Screenshot 2025-11-18 at 15 53 43]

@adamantal requested a review from a team as a code owner on November 18, 2025 14:57
@wpessers added the enhancement and go labels on Nov 26, 2025
@wpessers
Contributor

I'm not entirely sure this change really adds something new. There is already a faas.instance resource attribute, which I believe should be handled by the AWS resource detector (in language-specific layers such as Node.js). It refers to the CloudWatch log stream id, so it has the same cardinality as the proposed service.instance.id.

@wpessers
Contributor

Upon closer inspection, it looks like this may only be supported in Node.js and Java.

@wpessers
Contributor

So in my opinion, this feature is already solved in a different way. I believe OpenTelemetry already supports AWS Lambda resource detection for most languages, which should include faas.instance. This uniquely identifies a Lambda "container" through the log stream id. Some examples below:

@tylerbenson @serkan-ozal any opinion on this?
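For comparison, this is roughly how faas.instance gets its value in instrumented code: as far as I know, the AWS Lambda resource detectors derive it from the runtime's AWS_LAMBDA_LOG_STREAM_NAME environment variable. The sketch below builds the resource by hand in Go just to show that mapping; it is not the detector implementation itself.

```go
package main

import (
	"fmt"
	"os"

	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/sdk/resource"
)

func main() {
	// These environment variables are provided by the Lambda runtime inside
	// every execution environment ("container").
	logStream := os.Getenv("AWS_LAMBDA_LOG_STREAM_NAME")
	fnName := os.Getenv("AWS_LAMBDA_FUNCTION_NAME")

	// faas.instance is the CloudWatch log stream name, stable for the lifetime
	// of one execution environment - the same cardinality as service.instance.id.
	res := resource.NewSchemaless(
		attribute.String("faas.name", fnName),
		attribute.String("faas.instance", logStream),
	)
	fmt.Println(res.Attributes())
}
```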

@adamantal
Contributor (Author)

In our use case the (Python) Lambda code itself is not instrumented with OpenTelemetry, but the opentelemetry-collector-lambda is running alongside the execution (as a Lambda extension). The logs we receive lack the attribute you referenced above.

This PR adds the ability to enrich the logs on the collector side, so we don't have to instrument the Python code just to collect logs that would otherwise be available through the Telemetry API anyway.

@adamantal
Contributor (Author)

I'll take a deeper look at the attributes using the debug exporter to see all the metadata; I might have missed something.

@serkan-ozal
Contributor

What I am unable to see here is: who puts the AWS CloudWatch log stream id into receiver.Settings as the service instance id? cc @adamantal
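As a side note on where that value would live: if I remember the collector internals correctly, the collector service generates service.instance.id itself (a random UUID unless overridden) and exposes it to components through the telemetry resource. The sketch below only shows where a component would read it from; whether the CloudWatch log stream id ever ends up there is exactly the open question in this thread, and the component.TelemetrySettings layout is an assumption on my part.

```go
package main

import (
	"fmt"

	"go.opentelemetry.io/collector/component"
	"go.opentelemetry.io/collector/pdata/pcommon"
)

// logServiceInstanceID prints whatever service.instance.id the collector
// service has put on the component's telemetry resource, if any.
func logServiceInstanceID(set component.TelemetrySettings) {
	if v, ok := set.Resource.Attributes().Get("service.instance.id"); ok {
		fmt.Println("service.instance.id:", v.Str())
	} else {
		fmt.Println("service.instance.id not set on the telemetry resource")
	}
}

func main() {
	// Stand-in settings for this sketch; in a real receiver these come from
	// the receiver.Settings passed to the factory's create function.
	set := component.TelemetrySettings{Resource: pcommon.NewResource()}
	set.Resource.Attributes().PutStr("service.instance.id", "example-instance-id")
	logServiceInstanceID(set)
}
```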
