
Conversation

@HatefulMoron

Data requests now happen in the Go backend. This commit adds the following:

o Authentication code, translated from Ecosuite
o HTTP client for using the authentication code
o Datum serialization/deserialization
o Query code for list/reading queries using CBOR

This commit also removes the code on the client which accomplished the same tasks.

I didn't add any e2e tests with Playwright because they seem to be out of date to begin with; let me know if I should do that.

Signed-off-by: Thomas Passmore <thomas@ecosuite.io>

msqr commented Nov 25, 2025

Awesome sauce, thank you! I will have a look and test locally.

@msqr msqr self-assigned this Nov 25, 2025

@msqr msqr left a comment


Fantastic start! I added some initial comments for discussion...

// Aggregate format: [startTs, endTs] - use endTs if available, else startTs
isAggregate = true
if len(tsArray) >= 2 && tsArray[1] != nil {
	ts, _ = toInt64(tsArray[1])

I think the datum timestamp should always be the first element of the array here. Perhaps consider whether the second element could make its way into the return value of this function, and then show up as an additional endTimestamp (or similar) output field so the front end could use that value.

I confirmed by referencing a Java example of parsing the streaming response here, where the "start" timestamp is always used as the datum timestamp, and the "end" timestamp ends up as an extra field endTimestamp on the datum. In the example, the ts variable is the first array element value, and tsEnd the second.

I think always using the "start" value makes sense logically as well, imagining a bar chart where the bar would span the start - end time values. This also keeps it aligned with the "aggregate timestamps represent the start of the time period" approach used in SolarNetwork.
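
Roughly what I have in mind, as a sketch only against the helpers already in this function (datum and the endTimestamp field name are just placeholders):

// Sketch: tsArray and toInt64 are the surrounding helpers; "datum" and
// "endTimestamp" are placeholder names for the output map and extra field.
ts, _ = toInt64(tsArray[0]) // always use the "start" value as the datum timestamp
if len(tsArray) >= 2 && tsArray[1] != nil {
	endTs, _ := toInt64(tsArray[1])
	datum["endTimestamp"] = endTs // expose the aggregate period end to the front end
}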


if isAggregate {
	if arr, ok := row[rowIdx].([]interface{}); ok && len(arr) > 0 {
		val, _ := toFloat64(arr[0])

@msqr msqr Nov 25, 2025


Could the aggregate statistic values be included as output fields? The non-stream SN API does this by adding properties with these instantaneous stat suffixes at the additional arr indexes:

  1. _count (not actually generated by SN API, but could be included here)
  2. _min
  3. _max

and these accumulating stat suffixes:

  1. _start
  2. _end

So for an instantaneous watts property you'd get watts, watts_min, and watts_max output fields. For an accumulating wattHours property you'd get wattHours, wattHours_start, and wattHours_end output fields.

These would be super helpful in some dashboards!
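
For example, something along these lines (a sketch only; arr and toFloat64 come from this loop, while propName and out are placeholders for the property name and however the datum fields are collected):

// Sketch: index layout follows the non-stream API convention described above.
instantaneousSuffixes := []string{"", "_count", "_min", "_max"}
for i, suffix := range instantaneousSuffixes {
	if i < len(arr) && arr[i] != nil {
		v, _ := toFloat64(arr[i])
		out[propName+suffix] = v
	}
}
// accumulating properties would use []string{"", "_start", "_end"} instead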

	return n.Float64()
case cbor.Tag:
	// CBOR decimal fraction (tag 4): mantissa * 10^exponent
	if n.Number == 4 {

@msqr msqr Nov 25, 2025


There are 2 other tags I know the SN API can return:

  • Tag 2 for a "bignum"
  • Tag 3 for a "negative bignum" (same section 2.4.2 as previous)

Do you think you could add support for those as well? Here's an example in TypeScript from SolarFlux that handles these two (and tag 4 as well).
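
Something along these lines could cover them (a sketch only; the (float64, error) signature is a placeholder that should be adjusted to match the existing helper, and it needs fmt, math/big, and the cbor import):

// Sketch: convert a bignum tag (2 or 3) to float64; the tag content is a
// big-endian byte string, and tag 3 encodes -1 - n per the CBOR spec.
func bignumTagToFloat64(t cbor.Tag) (float64, error) {
	b, ok := t.Content.([]byte)
	if !ok {
		return 0, fmt.Errorf("unexpected bignum content %T", t.Content)
	}
	v := new(big.Int).SetBytes(b)
	if t.Number == 3 {
		v.Neg(v)
		v.Sub(v, big.NewInt(1))
	}
	f, _ := new(big.Float).SetInt(v).Float64()
	return f, nil
}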


I had another thought: I was surprised to see CBOR handling in this file... as opposed to stream.go, where the CBOR is unmarshalled. I see that the CBOR library supports registering tag decoders that I think would be automatically applied during unmarshalling... do you think it makes sense to implement tag decoders for these 3 tags and register them as a TagSet, so all CBOR decoding is handled there?
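
For tag 4 that could look roughly like this (a sketch only, assuming github.com/fxamacker/cbor/v2; the package, type, and function names are placeholders, and the mantissa is simplified to int64 even though it could itself be a bignum). I believe the library already decodes tags 2 and 3 to big.Int natively, so they may not need registering at all:

// Rough sketch of registering a tag-4 (decimal fraction) decoder as a TagSet.
package sndata

import (
	"reflect"

	"github.com/fxamacker/cbor/v2"
)

// decimalFraction maps the tag-4 content array [exponent, mantissa] onto a
// struct via the library's "toarray" struct tag.
type decimalFraction struct {
	_        struct{} `cbor:",toarray"`
	Exponent int64
	Mantissa int64 // simplification: the mantissa can itself be a bignum
}

// newStreamDecMode returns a DecMode with the decimal fraction tag registered.
func newStreamDecMode() (cbor.DecMode, error) {
	tags := cbor.NewTagSet()
	if err := tags.Add(
		cbor.TagOptions{EncTag: cbor.EncTagRequired, DecTag: cbor.DecTagRequired},
		reflect.TypeOf(decimalFraction{}),
		4, // decimal fraction
	); err != nil {
		return nil, err
	}
	return cbor.DecOptions{}.DecModeWithTags(tags)
}

If that works the way I think it does, stream.go could just unmarshal with that DecMode and the per-value tag handling in this file could probably go away.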


Is this code actually used by the backend? I couldn't tell... and thought the code in stream.go was what was really used...
