
Commit 88240a3

Committed by craig[bot] (merging work by rafiss, andy-kimball, yuzefovich, mgartner)
155133: scbuild: allow multiple column renames r=rafiss a=rafiss

Please review each commit individually. Informs #148340

### schemaexpr: add isComputed to ColumnLookupFn

### scbuild: move function definitions to helpers.go

### schemaexpr: add iterColsWithLookupFn

This function allows the computed column validation to be used from the declarative schema changer -- it uses a lookup function to access columns rather than referring to the table descriptor directly.

### scbuild: use ValidateComputedColumnExpressionWithLookup function

This validates the computed column by referencing the builder state rather than the table descriptor. The table descriptor would not reflect the schema changes that are being staged by the declarative schema changer.

### scplan: relax constraint for cleaning up non-index-backed constraints

This rule was matching all non-index-backed constraints, but it only needs to apply to UNIQUE WITHOUT INDEX constraints.

### scplan: tighten rule for transient check constraints

This rule was only needed for the ALTER COLUMN TYPE op, so the rule is updated to apply only in that case.

### scbuild: allow multiple column renames

This includes new dependency rules: one so that we fully remove the old name before allowing a new column to take on that name, and another to make sure that a name is available before we create a computed column expression that uses that name.

Release note: None

---

156557: sql: update MaxBatchSize comments r=yuzefovich a=andy-kimball

The comments for MaxBatchSize incorrectly state that it limits the number of KV batch entries. It actually limits the number of SQL-level batch rows, and it includes only primary index rows, not secondary index rows. This commit updates all the comments to reflect what the code is actually doing.

Epic: none
Release note: None

---

156584: copy: fix vectorized auto commit behavior r=yuzefovich a=yuzefovich

When we implemented the vectorized INSERT that supports COPY in some cases, we missed one condition for auto-committing the txn that is present in the regular `tableWriterBase` path. Namely, we need to check whether the deadline that might be set on the txn has expired; if it has, we shouldn't auto-commit and should leave it up to the connExecutor (which will try to refresh the deadline). The impact of the bug is that if COPY took longer than 40s (controlled via `server.sqlliveness.ttl`), we'd often hit the txn retry error and propagate it to the client.

Fixes: #155300.

Release note (bug fix): Previously, the "atomic" COPY command (controlled via `copy_from_atomic_enabled`, which is `true` by default) could encounter RETRY_COMMIT_DEADLINE_EXCEEDED txn errors if the whole command took 1 minute or more. This happened only when the vectorized engine was used for COPY and is now fixed.

---

156654: sql: add optimizer_use_max_frequency_selectivity session setting r=mgartner a=mgartner

The `optimizer_use_max_frequency_selectivity` session setting has been added. It is enabled by default. Disabling it reverts the selectivity improvements added in #151409.

Release note: None

---

156673: build: update `MungeTestXML` and `MergeTestXMLs` behavior r=rail a=rickystewart

The `test.xml` produced by `rules_go` changed with the latest upgrade; this change restores the previous behavior.

Epic: DEVINF-1477
Closes #156597
Release note: none

---

Co-authored-by: Rafi Shamim <rafi@cockroachlabs.com>
Co-authored-by: Andrew Kimball <andyk@cockroachlabs.com>
Co-authored-by: Yahor Yuzefovich <yahor@cockroachlabs.com>
Co-authored-by: Marcus Gartner <marcus@cockroachlabs.com>
Co-authored-by: Ricky Stewart <ricky@cockroachlabs.com>
6 parents: 0a25cb8 + 886079f + ca7cf8d + 35eae84 + 418521e + 8e6b393

File tree

73 files changed: +3082 −421 lines changed


pkg/build/util/util.go

Lines changed: 91 additions & 44 deletions
```diff
@@ -8,7 +8,6 @@
 package util
 
 import (
-	"cmp"
 	"encoding/xml"
 	"fmt"
 	"io"
@@ -30,8 +29,13 @@ type TestSuites struct {
 type testSuite struct {
 	XMLName   xml.Name    `xml:"testsuite"`
 	TestCases []*testCase `xml:"testcase"`
-	Attrs     []xml.Attr  `xml:",any,attr"`
+	Errors    int         `xml:"errors,attr"`
+	Failures  int         `xml:"failures,attr"`
+	Skipped   int         `xml:"skipped,attr"`
+	Tests     int         `xml:"tests,attr"`
+	Time      float64     `xml:"time,attr"`
 	Name      string      `xml:"name,attr"`
+	Timestamp string      `xml:"timestamp,attr"`
 }
 
 type testCase struct {
@@ -41,11 +45,12 @@ type testCase struct {
 	// this isn't Java so there isn't a classname) and excluding it causes
 	// the TeamCity UI to display the same data in a slightly more coherent
 	// and usable way.
-	Name    string      `xml:"name,attr"`
-	Time    string      `xml:"time,attr"`
-	Failure *XMLMessage `xml:"failure,omitempty"`
-	Error   *XMLMessage `xml:"error,omitempty"`
-	Skipped *XMLMessage `xml:"skipped,omitempty"`
+	Classname string      `xml:"classname,attr"`
+	Name      string      `xml:"name,attr"`
+	Time      string      `xml:"time,attr"`
+	Failure   *XMLMessage `xml:"failure,omitempty"`
+	Error     *XMLMessage `xml:"error,omitempty"`
+	Skipped   *XMLMessage `xml:"skipped,omitempty"`
 }
 
 // XMLMessage is a catch-all structure containing details about a test
@@ -122,7 +127,6 @@ func OutputsOfGenrule(target, xmlQueryOutput string) ([]string, error) {
 // it to the output file. TeamCity kind of knows how to interpret the schema,
 // but the schema isn't *exactly* what it's expecting. By munging the XML's
 // here we ensure that the TC test view is as useful as possible.
-// Helper function meant to be used with maybeStageArtifact.
 func MungeTestXML(srcContent []byte, outFile io.Writer) error {
 	// Parse the XML into a TestSuites struct.
 	suites := TestSuites{}
@@ -135,59 +139,102 @@ func MungeTestXML(srcContent []byte, outFile io.Writer) error {
 	if err != nil {
 		return err
 	}
-	// If test.xml is empty, this will be an empty object. We do want to
-	// emit something, however.
-	if len(suites.Suites) == 0 {
-		return writeToFile(&testSuite{}, outFile)
+	var res testSuite
+	for _, suite := range suites.Suites {
+		res.XMLName = suite.XMLName
+		res.TestCases = append(res.TestCases, suite.TestCases...)
+		if res.Name == "" {
+			i := strings.LastIndexByte(suite.Name, '.')
+			if i < 0 {
+				i = len(suite.Name)
+			}
+			res.Name = suite.Name[0:i]
+		}
+		res.Errors += suite.Errors
+		res.Failures += suite.Failures
+		res.Skipped += suite.Skipped
+		res.Tests += suite.Tests
+		res.Time += suite.Time
+		if res.Timestamp == "" {
+			res.Timestamp = suite.Timestamp
+		} else {
+			res.Timestamp = min(res.Timestamp, suite.Timestamp)
+		}
 	}
-	// We only want the first test suite in the list of suites.
-	return writeToFile(&suites.Suites[0], outFile)
+	return writeToFile(res, outFile)
 }
 
-// MergeTestXMLs merges the given list of test suites into a single test suite,
-// then writes the serialized XML to the given outFile. The prefix is passed
-// to xml.Unmarshal. Note that this function might modify the passed-in
-// TestSuites in-place.
+// MergeTestXMLs merges the given list of test suites into a single object,
+// then writes the serialized XML to the given outFile.
 func MergeTestXMLs(suitesToMerge []TestSuites, outFile io.Writer) error {
 	if len(suitesToMerge) == 0 {
 		return fmt.Errorf("expected at least one test suite")
 	}
-	var resultSuites TestSuites
-	resultSuites.Suites = append(resultSuites.Suites, testSuite{})
-	resultSuite := &resultSuites.Suites[0]
-	resultSuite.Name = suitesToMerge[0].Suites[0].Name
-	resultSuite.Attrs = suitesToMerge[0].Suites[0].Attrs
-	cases := make(map[string]*testCase)
+	type suiteAndCaseMap struct {
+		suite testSuite
+		cases map[string]*testCase
+	}
+	suitesMap := make(map[string]*suiteAndCaseMap)
 	for _, suites := range suitesToMerge {
-		for _, testCase := range suites.Suites[0].TestCases {
-			oldCase, ok := cases[testCase.Name]
+		for _, suite := range suites.Suites {
+			oldSuite, ok := suitesMap[suite.Name]
 			if !ok {
-				cases[testCase.Name] = testCase
+				cases := make(map[string]*testCase)
+				for _, testCase := range suite.TestCases {
+					cases[testCase.Name] = testCase
+				}
+				suitesMap[suite.Name] = &suiteAndCaseMap{
+					suite: suite,
+					cases: cases,
+				}
 				continue
 			}
-			if testCase.Failure != nil {
-				if oldCase.Failure == nil {
-					oldCase.Failure = testCase.Failure
-				} else {
-					oldCase.Failure.Contents = oldCase.Failure.Contents + "\n" + testCase.Failure.Contents
+			for _, testCase := range suite.TestCases {
+				oldCase, ok := oldSuite.cases[testCase.Name]
+				if !ok {
+					oldSuite.cases[testCase.Name] = testCase
+					continue
 				}
-			}
-			if testCase.Error != nil {
-				if oldCase.Error == nil {
-					oldCase.Error = testCase.Error
-				} else {
-					oldCase.Error.Contents = oldCase.Error.Contents + "\n" + testCase.Error.Contents
+				if testCase.Failure != nil {
+					if oldCase.Failure == nil {
+						oldCase.Failure = testCase.Failure
+						oldSuite.suite.Failures += 1
+					} else {
+						oldCase.Failure.Contents = oldCase.Failure.Contents + "\n" + testCase.Failure.Contents
+					}
+				}
+				if testCase.Error != nil {
+					if oldCase.Error == nil {
+						oldCase.Error = testCase.Error
+						oldSuite.suite.Errors += 1
+					} else {
+						oldCase.Error.Contents = oldCase.Error.Contents + "\n" + testCase.Error.Contents
+					}
 				}
 			}
 		}
 	}
-	for _, testCase := range cases {
-		resultSuite.TestCases = append(resultSuite.TestCases, testCase)
+	suitesSlice := make([]string, 0, len(suitesMap))
+	for suiteName := range suitesMap {
+		suitesSlice = append(suitesSlice, suiteName)
+	}
+	slices.Sort(suitesSlice)
+	var res TestSuites
+	for _, suiteName := range suitesSlice {
+		suiteAndCases := suitesMap[suiteName]
+		suite := suiteAndCases.suite
+		suite.TestCases = []*testCase{}
+		casesSlice := make([]string, 0, len(suiteAndCases.cases))
+		for caseName := range suiteAndCases.cases {
+			casesSlice = append(casesSlice, caseName)
+		}
		slices.Sort(casesSlice)
+		for _, caseName := range casesSlice {
+			suite.TestCases = append(suite.TestCases, suiteAndCases.cases[caseName])
+		}
+		res.Suites = append(res.Suites, suite)
 	}
-	slices.SortFunc(resultSuite.TestCases, func(a, b *testCase) int {
-		return cmp.Compare(a.Name, b.Name)
-	})
-	return writeToFile(&resultSuites, outFile)
+	return writeToFile(&res, outFile)
 }
 
 func writeToFile(suite interface{}, outFile io.Writer) error {
```
