Replies: 23 comments 5 replies
-
Try the latest 1.0.7.
-
@hhftechnology I updated earlier today and will monitor for new occurrences and reopen if needed. Thanks!
-
@hhftechnology It happened just a few minutes after I typed the above... :/
2025/08/05 15:53:59 Panic in pollLoop: runtime error: slice bounds out of range [:116588] with capacity 65536
-
Share your compose file, OS version, and Traefik version.
-
@hhftechnology here's my compose (redacted where needed): This is running in Docker 28.3.3 on Debian 12.2.0 with Traefik 3.5.0.
-
Use this as it is. In "target", use the private IP of the VPS/VM.
-
@hhftechnology I had to re-add the "networks: pangolin" lines to frontend/backend (otherwise it wasn't reachable) and had to use the "latest" tag since I couldn't pull the "1.0.7" tag (it throws "manifest unknown"), but I am otherwise now running it as you suggested:
-
@hhftechnology Not sure if this is in any way related to the original issue, but I just had a crash with this exception (using the above compose):
-
@hhftechnology I am able to reproduce this crash, sort of. At the time the crash occurred I was navigating around Jellyseerr, so I went back and did some more navigating around Jellyseerr while viewing the frontend page, and after maybe 20-30 seconds of loading different things in Jellyseerr (which was of course causing a LOT of URL hits to spin through the live logging) it hit this same crash again. My assumption is that this is an issue with updating the frontend while a lot of rapid logging activity is occurring. For example, loading just the home (Discover) page of Jellyseerr results in about 150 log entries (most of that is image loading). Edit: I cannot reproduce any crash if I'm not actively viewing the frontend while loading/refreshing Jellyseerr.
-
Currently I am running the dashboard with 12000K entries. No crash on v1.0.7.
-
@hhftechnology I'm not referring to the total number of accumulated requests; I'm referring to a high frequency of requests in a short time. This app is running at 15K accumulated requests at the moment with no issue... but when I bring up the frontend page, view the live log, and then open Jellyseerr in another browser, that burst of 100-150 updates to the live log is what I see at the point the crash occurs, and that's something I can consistently reproduce. Let me know if you have any questions or need more info, or if you'd like a separate bug filed for this new crash scenario (I haven't seen the original panic errors since deploying the updated compose you suggested).
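The reproduction above points at the live-log broadcast path: a burst of entries arriving while a dashboard viewer is connected. As a rough sketch only (not the dashboard's actual code; `hub`, `addViewer`, and `broadcast` are hypothetical names), one common Go pattern for making that path burst-tolerant is a non-blocking fan-out, where each connected viewer gets a buffered channel and updates are dropped rather than letting the ingest loop block:

```go
package main

import (
	"fmt"
	"sync"
)

// Hypothetical sketch of a burst-tolerant live-log fan-out; hub, addViewer,
// and broadcast are illustrative names, not the dashboard's actual code.
type hub struct {
	mu      sync.Mutex
	clients map[chan string]struct{} // one buffered channel per dashboard viewer
}

func newHub() *hub { return &hub{clients: make(map[chan string]struct{})} }

func (h *hub) addViewer() chan string {
	ch := make(chan string, 256) // buffer absorbs short bursts of log lines
	h.mu.Lock()
	h.clients[ch] = struct{}{}
	h.mu.Unlock()
	return ch
}

func (h *hub) broadcast(line string) {
	h.mu.Lock()
	defer h.mu.Unlock()
	for ch := range h.clients {
		select {
		case ch <- line: // deliver while the viewer keeps up
		default: // drop this update rather than block the ingest loop
		}
	}
}

func main() {
	h := newHub()
	viewer := h.addViewer()
	for i := 0; i < 300; i++ { // simulate a burst larger than the buffer
		h.broadcast(fmt.Sprintf("request %d", i))
	}
	fmt.Println("buffered for viewer:", len(viewer)) // capped at 256, no stall
}
```

The trade-off is that a slow viewer may miss some live-log lines during a burst, but ingest and the other viewers are never stalled.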
-
@hhftechnology Just had another occurrence of the original issue:
2025/08/07 14:59:42 Panic in watchLoop: runtime error: slice bounds out of range [:112715] with capacity 65536
15,532 requests accumulated with an uptime of a little over 12 hours.
-
Log ingest is only 1000; I have now reduced it to 500.
-
@hhftechnology Will you be updating the pre-built images with these latest changes? I'm using the ghcr.io/hhftechnology/traefik-log-dashboard-backend and ghcr.io/hhftechnology/traefik-log-dashboard-frontend images at the moment, but it looks like they haven't received the latest updates. I still see the original issue (panic error, no more updating) about once or twice a day, and I can still reproduce the crash when loading Jellyseerr while viewing the frontend.
-
@hhftechnology I switched to using the OTLP config earlier today and haven't had any occurrences of the runtime error or the backend crash when refreshing Jellyseerr since then (yay!). However, I'm no longer getting any size information in the logs, so the "Data Transmitted" section in the header is always "0 B". I've been looking at how to get this added via changes to the Traefik config.yml, but so far I'm not seeing it... any idea how to get sizes added to the OTLP output? Thanks!
Edit: Unfortunately, I was able to reproduce the "fatal error: concurrent map iteration and map write" backend crash while scrolling through the "Movies" section of Jellyseerr (where it's loading tons of images very fast)... this is with the sampleRate set to "1.0" (I had started out with "0.1" and increased in stages to see how it performed). I'll set sampleRate back down a bit and see if the crash still occurs.
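The "fatal error: concurrent map iteration and map write" above is the Go runtime detecting a map being iterated by one goroutine (for example, while building a stats snapshot to push to the frontend) at the same moment another goroutine writes to it. A minimal sketch of the usual remedy, assuming a shared counters map guarded by a `sync.RWMutex` (the `statsStore` type and its methods are illustrative, not the project's actual code):

```go
package main

import (
	"fmt"
	"sync"
)

// statsStore is a hypothetical stand-in for per-service counters that an
// ingest goroutine updates while a frontend handler iterates them.
type statsStore struct {
	mu     sync.RWMutex
	counts map[string]int
}

func newStatsStore() *statsStore { return &statsStore{counts: make(map[string]int)} }

func (s *statsStore) record(service string) {
	s.mu.Lock()
	s.counts[service]++
	s.mu.Unlock()
}

// snapshot copies the map under a read lock so callers can iterate freely.
func (s *statsStore) snapshot() map[string]int {
	s.mu.RLock()
	defer s.mu.RUnlock()
	out := make(map[string]int, len(s.counts))
	for k, v := range s.counts {
		out[k] = v
	}
	return out
}

func main() {
	s := newStatsStore()
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ { // writers and readers running concurrently
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				s.record("jellyseerr")
				_ = s.snapshot()
			}
		}()
	}
	wg.Wait()
	fmt.Println(s.snapshot()) // no "concurrent map iteration and map write"
}
```

Copying under the read lock in `snapshot` means callers can range over the result without holding any lock while new log entries keep arriving.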
-
Hey, my traefik + crowdsec setup handles about 20 services, and it takes about 5-10 minutes to get past 2,000 entries. In another issue posted here it was recommended to set the "sample rate" to 0.1 as a workaround so it doesn't crash as much, but I don't know where to set it. Is this an issue you will and can fix in the future? Your dashboard is by far the nicest-looking dashboard I've found that is still lightweight, which is important for me.
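For what it's worth, if the sample rate in question is Traefik's own tracing sample rate (an assumption; it may instead be a dashboard-side setting), it lives in Traefik v3's static configuration alongside the OTLP exporter settings. The endpoint below is a placeholder, and this pairing with the dashboard is assumed rather than taken from the project's documentation:

```yaml
# Traefik v3 static configuration (e.g. traefik.yml). The endpoint is a
# placeholder; check the dashboard's docs for the expected receiver address.
tracing:
  sampleRate: 0.1                # keep roughly 10% of requests
  otlp:
    http:
      endpoint: http://log-dashboard-backend:4318/v1/traces
```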
-
Hello, I have the same issue, using the latest docker-compose base from the GitHub main branch.
-
Using followed by lots of this and more. Not sure how useful it is to post it all here.
-
I can reproduce this too. As reported by everyone else, it happens when a burst of requests comes in.
-
@hhftechnology Maybe we can trust the AI? :)
-
@orlovds @dlhall111 @plakun @ovizii @milindpatel63 V2 should solve all the issues.
-
@hhftechnology Thank you so much, it's running, it's working!
-
@hhftechnology Thanks. Waiting for the documentation fixes for the new version.
-
Since updating to 1.0.6 (including adding resource limits, as recommended) I'm seeing the following errors logged in the backend after a few hours of uptime:
2025/08/04 16:22:50 Panic in pollLoop: runtime error: slice bounds out of range [8700:5800]
2025/08/04 16:22:50 Panic in watchLoop: runtime error: slice bounds out of range [8700:5800]
Once these errors are thrown the dashboard no longer updates, but the frontend/backend services continue to run and the dashboard is still accessible.
Simply restarting the backend service restores normal operations.
Note: I'm using Beszel for Docker resource monitoring and don't see any unusual resource usage from the backend service (or the frontend) in the timeframe when the runtime error is logged. Memory and CPU usage are consistent before and after the error is logged.
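For context on what those panic strings mean: `slice bounds out of range [:116588] with capacity 65536` is Go reporting a reslice past the capacity of a fixed 64 KiB buffer, and `[8700:5800]` is a reslice whose start exceeds its end; both typically happen when offsets computed in one place are applied to a buffer managed in another. A minimal sketch, assuming a tail-style reader over the access log, of the kind of bounds clamping that prevents this panic (`readChunk` and the file path are hypothetical, not the project's actual code):

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// readChunk is a hypothetical tail-style reader: it reads from offset into a
// fixed 64 KiB buffer and clamps every slice bound to what was actually read,
// so a length computed elsewhere can never exceed the buffer's capacity.
func readChunk(f *os.File, offset int64, want int) ([]byte, int64, error) {
	buf := make([]byte, 64*1024) // fixed-size buffer, like the 65536-cap slice in the panic
	n, err := f.ReadAt(buf, offset)
	if err != nil && err != io.EOF {
		return nil, offset, err
	}
	if want > n { // clamp instead of slicing past what was read
		want = n
	}
	if want < 0 {
		want = 0
	}
	return buf[:want], offset + int64(want), nil
}

func main() {
	f, err := os.Open("/var/log/traefik/access.log") // path is illustrative
	if err != nil {
		fmt.Println("open:", err)
		return
	}
	defer f.Close()

	// Asking for more bytes than the buffer holds is safe: the bound is clamped.
	chunk, next, err := readChunk(f, 0, 116588)
	fmt.Println(len(chunk), next, err)
}
```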