@@ -0,0 +1,94 @@
# Safe Bulk Record Update with Logging

## Overview
Update multiple records in configurable batches with error handling, progress tracking, and logging, reducing the risk of timeouts and data loss.

## What It Does
- Updates records in configurable batch sizes
- Logs progress for monitoring
- Handles individual record errors without stopping batch
- Processes records in batches (note: batching within a single execution does not by itself prevent transaction timeouts)
- Tracks success/failure counts
- Logs detailed error information

## Use Cases
- Bulk data migrations
- Mass field updates after deployment
- Scheduled bulk corrections
- Data cleanup operations
- Batch status updates across records

## Files
- `bulk_update_with_progress.js` - Background Script for safe bulk updates

## How to Use

### Option 1: Run as Background Script
1. Go to **System Definition > Scripts - Background**
2. Copy code from `bulk_update_with_progress.js`
3. Modify the table name and query filter
4. Execute and monitor logs

### Option 2: Create as Scheduled Job
1. Go to **System Scheduler > Scheduled Jobs**
2. Create new job with the script code
3. Schedule for off-peak hours
4. Logs will be available in System Logs

## Example Usage
```javascript
// Customize these variables:
var TABLE = 'incident';
var FILTER = "priority=1^state=2"; // Your query condition
var BATCH_SIZE = 100;
var FIELD_TO_UPDATE = 'assignment_group'; // Field to update
var NEW_VALUE = '123456789abc'; // New value

// Run the script - it handles everything else
```

## Key Features
- **Batch Processing**: Processes records in fixed-size chunks; note that chunking within a single execution does not by itself prevent transaction timeouts
- **Error Resilience**: Continues on error, logs details
- **Progress Tracking**: Logs every N records updated
- **Flexible**: Works with any table and field
- **Safe**: Won't crash on individual record failures
- **Auditable**: Detailed logging of all operations

## Output Examples
```
[Bulk Update Started] Table: incident | Filter: priority=1
[Progress] Updated 100 records | Success: 95 | Errors: 5
[Progress] Updated 200 records | Success: 192 | Errors: 8
[Bulk Update Complete] Total: 250 | Success: 242 | Errors: 8
[Failed Record] 7af24b9c: User already has assignment
[Failed Record] 8bd35c8d: Invalid assignment group
```

## Performance Notes
- A batch size of 100 is a reasonable starting point for most tables
- Adjust batch size based on available resources
- Run during maintenance windows for large updates
- Monitor system logs during execution
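The batching described above boils down to splitting the list of record ids into fixed-size chunks, flushing each full chunk and then the remainder. A minimal plain-JavaScript sketch of that logic (runnable in any JS engine, no ServiceNow APIs involved; the function name `chunkIds` is illustrative, not part of the script):

```javascript
// Accumulate ids until batchSize is reached, flush, then flush the remainder.
// This mirrors the batching loop in bulk_update_with_progress.js.
function chunkIds(ids, batchSize) {
  var batches = [];
  var current = [];
  for (var i = 0; i < ids.length; i++) {
    current.push(ids[i]);
    if (current.length === batchSize) { // full batch: flush it
      batches.push(current);
      current = [];
    }
  }
  if (current.length > 0) { // partial final batch: flush the remainder
    batches.push(current);
  }
  return batches;
}

// Example: 250 ids with a batch size of 100 -> batches of 100, 100, 50
var ids = [];
for (var n = 0; n < 250; n++) ids.push('id' + n);
var batches = chunkIds(ids, 100);
console.log(batches.length);    // 3
console.log(batches[2].length); // 50
```

This makes the remainder handling explicit: records left over after the last full batch are still processed, which is what the `recordsToProcess.length > 0` check in the script does.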

## Customization
```javascript
// Change batch size for your table size
var BATCH_SIZE = 50; // For smaller batches
var BATCH_SIZE = 200; // For larger tables

// Different field update logic
record.setValue(FIELD_TO_UPDATE, NEW_VALUE);
// Or use gs.getProperty() for configuration
```

## Requirements
- ServiceNow instance
- Access to Background Scripts or Scheduled Jobs
- Write access to target table
- Appropriate table and field permissions

## Related APIs
- [GlideRecord Query API](https://docs.servicenow.com/bundle/sandiego-application-development/page/app-store/dev_apps/concept/c_UsingGlideRecord.html)
- [GlideSystem Logging](https://docs.servicenow.com/bundle/sandiego-application-development/page/app-store/dev_apps/concept/c_SystemLog.html)
- [Best Practices for Bulk Operations](https://docs.servicenow.com/bundle/sandiego-application-development/page/app-store/dev_apps/concept/c_BulkOperations.html)
Contributor
These docs are very out of date and no longer accessible; how did you get these links?

@@ -0,0 +1,60 @@
// Background Script: Safe Bulk Record Update with Progress Tracking
// Purpose: Update multiple records safely with batch processing and error handling

var TABLE = 'incident'; // Change to your table
var FILTER = "priority=1"; // Add your filter conditions
var BATCH_SIZE = 100;
var FIELD_TO_UPDATE = 'state'; // Field to update
var NEW_VALUE = '1'; // Value to set

var successCount = 0;
var errorCount = 0;
var totalProcessed = 0;

gs.log('[Bulk Update Started] Table: ' + TABLE + ' | Filter: ' + FILTER, 'BulkUpdate');
Contributor
gs.log usage is not recommended


try {
var gr = new GlideRecord(TABLE);
gr.addEncodedQuery(FILTER);
gr.query();

var recordsToProcess = [];
while (gr.next()) {
recordsToProcess.push(gr.getUniqueValue());

// Process in batches to prevent timeout
Contributor
As these updates are occurring within the same execution, this does not prevent timeouts or create separate items to process in parallel.

if (recordsToProcess.length === BATCH_SIZE) {
processBatch(recordsToProcess);
recordsToProcess = [];
}
}

// Process remaining records
if (recordsToProcess.length > 0) {
processBatch(recordsToProcess);
}

gs.log('[Bulk Update Complete] Total: ' + totalProcessed + ' | Success: ' + successCount + ' | Errors: ' + errorCount, 'BulkUpdate');

} catch (e) {
gs.error('[Bulk Update Error] ' + e.toString(), 'BulkUpdate');
}

function processBatch(recordIds) {
for (var i = 0; i < recordIds.length; i++) {
try {
var record = new GlideRecord(TABLE);
if (!record.get(recordIds[i])) {
throw new Error('Record not found: ' + recordIds[i]);
}
record.setValue(FIELD_TO_UPDATE, NEW_VALUE);
record.update();
successCount++;
Contributor
You are doing 100 database queries instead of one for all 100 records, and updating one record at a time instead of using updateMultiple(). This is not ideal. Consider iterating through the query results to reduce the queries, and consider updateMultiple() to reduce the database updates.

} catch (error) {
errorCount++;
gs.log('[Failed Record] ' + recordIds[i] + ': ' + error.toString(), 'BulkUpdate');
}
totalProcessed++;
}

// Log progress
gs.log('[Progress] Updated ' + totalProcessed + ' records | Success: ' + successCount + ' | Errors: ' + errorCount, 'BulkUpdate');
}
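The reviewer's suggestion above can be sketched as follows. This is a hedged sketch, not a drop-in replacement: it assumes the standard `GlideRecord.updateMultiple()` API and only runs inside a ServiceNow instance. It trades the per-record error handling and progress counts of the original script for a single query and a single bulk update.

```javascript
// Sketch of the reviewer's suggestion: one query, one bulk update.
// setValue() + updateMultiple() applies the change to all records matching
// the encoded query, instead of one get() and one update() per record.
// Caveat: you lose the per-record try/catch and success/error counts,
// and business rules still fire for each updated record unless disabled.
var bulk = new GlideRecord(TABLE);
bulk.addEncodedQuery(FILTER);
bulk.setValue(FIELD_TO_UPDATE, NEW_VALUE);
bulk.updateMultiple();
```

If per-record logging matters more than raw throughput, an intermediate option is to keep a single `while (gr.next())` loop and call `gr.update()` inside it, which avoids the extra `get()` query per record while preserving individual error handling.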