feat: Add OpenTelemetry instrumentation #525
base: master
Conversation
…hecks from pre-commit, fix taskiq-python#417
| content: "97DC185FE0A2F5B123861F0790FDFB26" # pragma: allowlist secret | ||
| - - meta | ||
| - name: "yandex-verification" | ||
| content: "9b105f7c58cbc920" |
Why is `pragma: allowlist secret` here?
As far as I can see, we already exclude this file from the detect-secrets pre-commit hook. Does some other linter check fail on these lines?
Even with the exclusion in `.pre-commit-config.yaml`, I still see this without the pragmas:
Detect secrets...........................................................Failed
- hook id: detect-secrets
- exit code: 1
ERROR: Potential secrets about to be committed to git repo!
Secret Type: Base64 High Entropy String
Location: docs\README.md:17
Secret Type: Hex High Entropy String
Location: docs\README.md:20
Secret Type: Hex High Entropy String
Location: docs\README.md:23
Possible mitigations:
- For information about putting your secrets in a safer place, please ask in
#security
- Mark false positives with an inline `pragma: allowlist secret` comment
If a secret has already been committed, visit
https://help.github.com/articles/removing-sensitive-data-from-a-repository
That's strange...
I checked on my fork and everything seems to be fine:
(taskiq) git pull
Already up to date.
(taskiq) git st
On branch master
Your branch is up to date with 'origin/master'.
nothing to commit, working tree clean
(taskiq) pre-commit install
pre-commit installed at .git/hooks/pre-commit
(taskiq) pre-commit run detect-secrets --all-files
Detect secrets...........................................................Passed
Are you sure that you ran `poetry install --all-extras && pre-commit install` after the rebase? I don't really know what else could be the problem.
In any case, it's not a blocker to merge this MR. I will deal with it later if we don't find the root cause of this strange pre-commit hook behaviour.
I guess the root cause is me committing from Windows :)
On Mac the exclusions are working as expected.
rolled back the file
taskiq/package.py
Outdated
| @@ -0,0 +1,2 @@ | |||
| # for compatibility with opentelemetry-instrumentation | |||
| _instruments = ("taskiq >= 0.11.19",) | |||
Just wondering: do we need to update the version in this file on every release?
tests/opentelemetry/test_tasks.py
Outdated
| self.assertEqual(result.return_value, {"key": "value"}) | ||
| @pytest.mark.anyio |
I enabled the `anyio_mode = "auto"` flag for pytest, so tests don't need to explicitly say that they are async with the `pytest.mark.anyio` marker.
You can just remove it and the tests should work as they did before :)
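For readers following along, the flag the comment refers to would typically live in the project's pytest configuration; assuming the options are in `pyproject.toml`, it looks like this:

```toml
[tool.pytest.ini_options]
anyio_mode = "auto"
```

With that set, every coroutine test function is collected and run as async automatically, so per-test `@pytest.mark.anyio` markers become redundant.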
done
taskiq/instrumentation.py
Outdated
| --- | ||
| """ | ||
| from __future__ import annotations |
Seems like everything works fine without this import. Maybe we can remove it?
As far as I can see, there are no other places in the taskiq repo where `__future__` is used.
removed this
| broker = InMemoryBroker() | ||
| @broker.on_event(TaskiqEvents.WORKER_STARTUP) | ||
| async def startup(state: TaskiqState) -> None: |
Can you do it in `main`? That way you won't need to create the `on_event` handlers.
| if not span.is_recording(): | ||
| return | ||
| for key in TASKIQ_CONTEXT_ATTRIBUTES: |
Why do we track only particular labels? Let's just iterate over all available labels.
Also, I think we also want to extract args and kwargs and set them on the span, for better tracing. Instead of passing labels, you can pass the TaskiqMessage itself and get everything from it.
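A minimal sketch of what the reviewer seems to be suggesting. The `Message` dataclass here is a hypothetical stand-in for taskiq's `TaskiqMessage`, and the attribute key prefixes are made up for illustration; only the shape of the idea (iterate over all labels, include args and kwargs) comes from the comment:

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class Message:
    """Hypothetical stand-in for taskiq's TaskiqMessage."""

    task_id: str
    task_name: str
    labels: dict[str, str] = field(default_factory=dict)
    args: list[Any] = field(default_factory=list)
    kwargs: dict[str, Any] = field(default_factory=dict)


def attributes_from_message(message: Message) -> dict[str, str]:
    """Flatten the whole message into span attributes.

    Iterates over *all* labels instead of a fixed allowlist, and also
    records args/kwargs (key names are illustrative).
    """
    attrs = {
        "taskiq.task_id": message.task_id,
        "taskiq.task_name": message.task_name,
    }
    for key, value in message.labels.items():
        attrs[f"taskiq.label.{key}"] = str(value)
    for i, arg in enumerate(message.args):
        attrs[f"taskiq.arg.{i}"] = repr(arg)
    for key, value in message.kwargs.items():
        attrs[f"taskiq.kwarg.{key}"] = repr(value)
    return attrs
```

The resulting flat dict could then be applied with `span.set_attributes(...)`, replacing the fixed `TASKIQ_CONTEXT_ATTRIBUTES` loop shown in the diff.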
| ctx = retrieve_context(message, is_publish=True) | ||
| if ctx is None: | ||
| logger.warning("no existing span found for task_id=%s", message.task_id) |
| logger.warning("no existing span found for task_id=%s", message.task_id) | |
| logger.debug("no existing span found for task_id=%s", message.task_id) |
| span.set_attribute(_TASK_NAME_KEY, message.task_name) | ||
| set_attributes_from_context(span, message.labels) | ||
| activation = trace.use_span(span, end_on_exit=True) |
Here's a small question about spans. It seems like you use spans as events.
From my understanding, it makes sense to only have a single span for the task execution, and inside that span we can use events to show what was going on during the execution.
I'm not sure if it's the best way; I need to try it out.
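To make the distinction concrete, here is a tiny stdlib-only sketch of the "one span per execution, events inside it" shape the comment describes. Real code would use OpenTelemetry's trace API (`tracer.start_as_current_span` plus `span.add_event`); the `Span` class and event names below are purely illustrative:

```python
# Illustrative only: models one span for the whole task execution,
# with timeline events recorded inside it instead of extra spans.
import time
from dataclasses import dataclass, field


@dataclass
class Span:
    name: str
    events: list[tuple[str, float]] = field(default_factory=list)

    def add_event(self, name: str) -> None:
        # record the event name with a monotonic timestamp
        self.events.append((name, time.monotonic()))


def execute_task(task_name: str) -> Span:
    span = Span(name=f"taskiq.execute/{task_name}")
    span.add_event("message.received")
    span.add_event("task.started")
    # ... actually run the task here ...
    span.add_event("task.finished")
    return span
```

Compared to opening a new span per step, this keeps the trace tree flat (one node per task run) while still preserving the intra-task timeline.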