Map/Reduce scripts are the most powerful script type in NetSuite for processing large volumes of data. They split work across four stages, handle parallelism automatically, and provide built-in error recovery that keeps one failed record from killing an entire batch job. If you have ever hit governance limits on a Scheduled Script or needed to process tens of thousands of records reliably, Map/Reduce is the answer.
What Are Map/Reduce Scripts?
A Map/Reduce script divides bulk processing into four sequential stages: getInputData, map, reduce, and summarize. NetSuite manages concurrency, governance resets, and data serialization between each stage, so you can focus on business logic instead of infrastructure concerns.
The pattern originates from distributed computing (think Google's original MapReduce paper), adapted for NetSuite's server-side execution model. Each stage has its own governance allocation, and NetSuite can run multiple map and reduce invocations in parallel across different processing queues.
When to Use Map/Reduce vs Scheduled Scripts
| Criteria | Map/Reduce | Scheduled Script |
|---|---|---|
| Record volume | Thousands to millions | Hundreds to low thousands |
| Parallelism | Automatic (up to 5 queues) | Single-threaded |
| Error recovery | Per-record isolation | Manual try/catch |
| Governance | Resets per invocation | Shared across entire run |
| Complexity | Higher (4-stage design) | Lower (single entry point) |
| Yield/restart | Automatic | Manual (resubmit via N/task) |
| Debugging | Harder (async stages) | Easier (linear flow) |
Use Map/Reduce when:
- Processing more than a few hundred records
- Individual record failures should not stop the entire job
- You need parallel execution for performance
- The operation is naturally decomposable into key-value pairs
Use Scheduled Scripts when:
- Processing a small, predictable number of records
- The operation is inherently sequential (order matters)
- You need simpler debugging during development
- The logic is straightforward and does not benefit from parallelism
The Four Stages
Stage 1: getInputData
This stage defines the data set to process. It runs once and returns the full input for the map stage. You can return:
- A search object (most common and most efficient)
- An array of objects
- An object with key-value pairs
- A search result set via search.create().run()
/**
* @NApiVersion 2.1
* @NScriptType MapReduceScript
*/
define(['N/search'], (search) => {
const getInputData = () => {
// Option 1: Return a saved search (best for large data sets)
return search.create({
type: search.Type.SALES_ORDER,
filters: [
['mainline', 'is', 'T'],
'AND',
['status', 'anyof', 'SalesOrd:B'], // Pending Fulfillment
'AND',
['custbody_batch_processed', 'is', 'F']
],
columns: [
search.createColumn({ name: 'entity' }),
search.createColumn({ name: 'tranid' }),
search.createColumn({ name: 'total' }),
search.createColumn({ name: 'email' })
]
});
};
// ... other stages
return { getInputData, map, reduce, summarize };
});
Returning a search object is the most memory-efficient approach. NetSuite streams results from the search directly into the map stage, rather than loading everything into memory at once. This is critical when dealing with data sets that exceed 4,000 results.
// Option 2: Return an array (for custom data sources)
const getInputData = () => {
const records = [];
// Read from a CSV file, external API, or custom logic
records.push({ id: 1, name: 'Item A', action: 'update' });
records.push({ id: 2, name: 'Item B', action: 'delete' });
return records;
};
// Option 3: Return key-value pairs
const getInputData = () => {
return {
'customer_101': { action: 'recalculate', segment: 'enterprise' },
'customer_102': { action: 'recalculate', segment: 'smb' },
'customer_103': { action: 'archive', segment: 'inactive' }
};
};
Stage 2: map
The map stage receives one input entry at a time. Each invocation gets a context object with a key and value (both strings). Your job here is to process or transform the data and optionally write output key-value pairs for the reduce stage.
const map = (context) => {
const searchResult = JSON.parse(context.value);
// Extract what we need
const customerId = searchResult.values.entity.value;
const total = parseFloat(searchResult.values.total);
// Write to reduce stage, grouped by customer
context.write({
key: customerId,
value: JSON.stringify({
orderId: searchResult.id,
total: total,
tranId: searchResult.values.tranid
})
});
};
Key points about the map stage:
- Input is always serialized as strings -- you must JSON.parse() the context.value
- Each map invocation is independent -- NetSuite can run them in parallel
- Governance resets for each invocation -- you get a fresh 1,000 units per map call
- Use context.write() to emit data -- the key determines grouping in reduce
- You can write zero, one, or many key-value pairs per map invocation
- If you skip context.write(), nothing reaches reduce -- useful for filtering
Stage 3: reduce
The reduce stage receives all values that share the same key. This is where you aggregate, consolidate, or perform batch operations on grouped data.
const reduce = (context) => {
const customerId = context.key;
const orders = context.values.map(v => JSON.parse(v));
// Calculate total across all orders for this customer
let customerTotal = 0;
const orderIds = [];
orders.forEach((order) => {
customerTotal += order.total;
orderIds.push(order.tranId);
});
// Update customer record with consolidated data
record.submitFields({
type: record.Type.CUSTOMER,
id: customerId,
values: {
custentity_total_pending: customerTotal,
custentity_pending_orders: orderIds.join(', ')
}
});
// Write summary data for the summarize stage
context.write({
key: customerId,
value: JSON.stringify({
total: customerTotal,
orderCount: orders.length
})
});
};
Key points about the reduce stage:
- context.values is an array of strings -- all values written with the same key in map
- Each reduce invocation handles one key -- all values for that key arrive together
- Governance resets per reduce invocation -- a fresh 5,000 units
- Reduce is optional -- if you do not define it, map output goes directly to summarize
- Multiple reduce invocations run in parallel for different keys
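To illustrate that optional reduce stage, here is a minimal map-only sketch (the flagged field and record IDs are hypothetical); because no reduce is defined, whatever map writes lands directly in summary.output:
/**
 * @NApiVersion 2.1
 * @NScriptType MapReduceScript
 */
define(['N/record', 'N/log'], (record, log) => {
  const getInputData = () => [{ id: 101 }, { id: 102 }, { id: 103 }];

  const map = (context) => {
    const data = JSON.parse(context.value);
    record.submitFields({
      type: record.Type.CUSTOMER,
      id: data.id,
      values: { custentity_flagged: true } // hypothetical checkbox field
    });
    context.write({ key: String(data.id), value: 'done' });
  };

  const summarize = (summary) => {
    let count = 0;
    summary.output.iterator().each(() => {
      count++;
      return true;
    });
    log.audit('Map-only run complete', `${count} customers flagged`);
  };

  // No reduce key: map output flows straight to summarize
  return { getInputData, map, summarize };
});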
Stage 4: summarize
The summarize stage runs once after all map and reduce work is complete. Use it for final reporting, cleanup, and error handling.
const summarize = (summary) => {
// Log overall statistics
log.audit('Map/Reduce Complete', {
dateCreated: summary.dateCreated,
seconds: summary.seconds,
usage: summary.usage,
yields: summary.yields,
concurrency: summary.concurrency
});
// Check for input stage errors
if (summary.inputSummary.error) {
log.error('Input Error', summary.inputSummary.error);
}
// Check for map stage errors
let mapErrorCount = 0;
summary.mapSummary.errors.iterator().each((key, error) => {
log.error(`Map Error - Key: ${key}`, error);
mapErrorCount++;
return true; // continue iterating
});
// Check for reduce stage errors
let reduceErrorCount = 0;
summary.reduceSummary.errors.iterator().each((key, error) => {
log.error(`Reduce Error - Key: ${key}`, error);
reduceErrorCount++;
return true;
});
// Process final output
let totalProcessed = 0;
summary.output.iterator().each((key, value) => {
const data = JSON.parse(value);
totalProcessed++;
return true;
});
log.audit('Processing Summary', {
totalProcessed: totalProcessed,
mapErrors: mapErrorCount,
reduceErrors: reduceErrorCount
});
};
How Parallel Processing Works
NetSuite runs Map/Reduce scripts with automatic parallelism. Here is how it works in practice:
- getInputData runs on a single thread to collect all input
- map invocations are distributed across up to 5 parallel queues (configurable in deployment settings via the "Concurrency Limit" field)
- reduce invocations also run in parallel, one per unique key
- summarize runs once on a single thread after everything else completes
The concurrency level depends on your NetSuite account tier and the deployment configuration. Most accounts support 2-5 concurrent queues. You can check the actual concurrency used in the summary.concurrency property.
getInputData (1 thread)
|
v
map (up to 5 parallel invocations)
|
v
reduce (parallel by key)
|
v
summarize (1 thread)
Each parallel invocation is isolated -- they do not share memory or variables. All communication between stages happens through serialized key-value pairs.
Data Serialization Between Stages
This is one of the most common sources of bugs in Map/Reduce scripts. Every value passed between stages is serialized as a string. You cannot pass objects, arrays, or numbers directly.
// WRONG - objects get converted to "[object Object]"
context.write({ key: 'myKey', value: { id: 123, name: 'Test' } });
// CORRECT - serialize explicitly
context.write({
key: 'myKey',
value: JSON.stringify({ id: 123, name: 'Test' })
});
// Reading in the next stage
const data = JSON.parse(context.value); // map stage
// or
const items = context.values.map(v => JSON.parse(v)); // reduce stage
The key is also a string. If you use a numeric internal ID as a key, it arrives as a string in reduce:
// In map:
context.write({ key: String(customerId), value: JSON.stringify(data) });
// In reduce:
const customerId = parseInt(context.key, 10); // convert back to number
Error Handling and Recovery
One of the biggest advantages of Map/Reduce over Scheduled Scripts is automatic error isolation. If one record fails in the map or reduce stage, NetSuite catches the error, logs it, and continues processing the remaining records.
Handling Individual Record Failures
const map = (context) => {
const data = JSON.parse(context.value);
try {
// Attempt the operation
record.submitFields({
type: record.Type.SALES_ORDER,
id: data.id,
values: { custbody_processed: true }
});
context.write({
key: 'success',
value: JSON.stringify({ id: data.id, status: 'updated' })
});
} catch (e) {
// Log the error but let it propagate
// NetSuite will record it in summary.mapSummary.errors
log.error(`Failed to process record ${data.id}`, e.message);
// Option A: Re-throw to let NetSuite handle it
throw e;
// Option B: Swallow the error and write to a failure key
// context.write({
// key: 'failed',
// value: JSON.stringify({ id: data.id, error: e.message })
// });
}
};
Building a Retry Mechanism
For transient errors (locked records, temporary network issues), you can build retry logic:
const map = (context) => {
const data = JSON.parse(context.value);
const MAX_RETRIES = 3;
for (let attempt = 1; attempt <= MAX_RETRIES; attempt++) {
try {
record.submitFields({
type: record.Type.CUSTOMER,
id: data.id,
values: { custentity_status: 'Processed' }
});
context.write({ key: data.id, value: 'success' });
return; // success, exit
} catch (e) {
log.debug('Retry', `Attempt ${attempt} failed for ${data.id}: ${e.message}`);
if (attempt === MAX_RETRIES) {
// Final attempt failed, let it propagate
throw e;
}
}
}
};
Comprehensive Error Reporting in Summarize
const summarize = (summary) => {
const errors = {
input: null,
map: [],
reduce: []
};
// Capture input errors
if (summary.inputSummary.error) {
errors.input = summary.inputSummary.error;
log.error('INPUT STAGE FAILED', errors.input);
}
// Capture map errors with details
summary.mapSummary.errors.iterator().each((key, error) => {
const errorDetail = JSON.parse(error);
errors.map.push({
key: key,
name: errorDetail.name,
message: errorDetail.message
});
return true;
});
// Capture reduce errors
summary.reduceSummary.errors.iterator().each((key, error) => {
const errorDetail = JSON.parse(error);
errors.reduce.push({
key: key,
name: errorDetail.name,
message: errorDetail.message
});
return true;
});
// Send error report if there were failures
const totalErrors = errors.map.length + errors.reduce.length + (errors.input ? 1 : 0);
if (totalErrors > 0) {
log.error('Processing Errors', JSON.stringify(errors));
// Optionally send an email alert
email.send({
author: -5, // system user
recipients: 'admin@company.com',
subject: `Map/Reduce Errors: ${totalErrors} failures`,
body: `Input errors: ${errors.input ? 1 : 0}\n` +
`Map errors: ${errors.map.length}\n` +
`Reduce errors: ${errors.reduce.length}\n\n` +
`Details:\n${JSON.stringify(errors, null, 2)}`
});
}
// Count successes
let successCount = 0;
summary.output.iterator().each(() => {
successCount++;
return true;
});
log.audit('Final Report', {
processed: successCount,
errors: totalErrors,
runtime: summary.seconds + ' seconds',
governance: summary.usage + ' units'
});
};
Governance Management
Each stage of a Map/Reduce script has its own governance allocation:
| Stage | Governance Units |
|---|---|
| getInputData | 10,000 |
| map (per invocation) | 1,000 |
| reduce (per invocation) | 5,000 |
| summarize | 10,000 |
This is fundamentally different from Scheduled Scripts, which share a single pool of 10,000 units across the entire execution. With Map/Reduce, governance resets for every individual map and reduce call, so total record volume is effectively unbounded -- the only constraint is what a single invocation does within its own allocation.
Monitoring Governance Usage
const map = (context) => {
const startUsage = runtime.getCurrentScript().getRemainingUsage();
// ... do work ...
const endUsage = runtime.getCurrentScript().getRemainingUsage();
log.debug('Governance', `Used ${startUsage - endUsage} units in this map invocation`);
};
Keeping Map Invocations Lean
Since each map call only gets 1,000 units, keep operations minimal:
// GOOD: One operation per map invocation
const map = (context) => {
const data = JSON.parse(context.value);
// Single record update uses ~10 units
record.submitFields({
type: record.Type.ITEM,
id: data.id,
values: { custitem_flag: true }
});
context.write({ key: data.id, value: 'done' });
};
// BAD: Loading full records and doing heavy work in map
const map = (context) => {
const data = JSON.parse(context.value);
// Loading a full record uses ~10 units
const rec = record.load({ type: record.Type.SALES_ORDER, id: data.id });
// Iterating sublists and loading related records eats governance fast
const lineCount = rec.getLineCount({ sublistId: 'item' });
for (let i = 0; i < lineCount; i++) {
const itemId = rec.getSublistValue({ sublistId: 'item', fieldId: 'item', line: i });
const itemRec = record.load({ type: record.Type.ITEM, id: itemId }); // 10 more units each
// ...
}
// You could easily exceed 1,000 units with a large order
};
If you need to do heavy work, move it to the reduce stage (5,000 units per invocation) by using the map stage purely for grouping.
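A sketch of that split (the quantity threshold and custcol field are illustrative): map forwards each order under its own key without touching any records, and reduce does the load, line iteration, and save inside its larger allocation:
// Map: grouping only -- no record operations, negligible governance
const map = (context) => {
  const data = JSON.parse(context.value);
  context.write({ key: String(data.id), value: context.value });
};

// Reduce: one order per invocation, with 5,000 units available
const reduce = (context) => {
  const orderId = context.key;
  const rec = record.load({ type: record.Type.SALES_ORDER, id: orderId });
  const lineCount = rec.getLineCount({ sublistId: 'item' });
  let flaggedLines = 0;
  for (let i = 0; i < lineCount; i++) {
    const qty = rec.getSublistValue({ sublistId: 'item', fieldId: 'quantity', line: i });
    if (parseFloat(qty) > 100) {
      // illustrative line-level checkbox
      rec.setSublistValue({ sublistId: 'item', fieldId: 'custcol_bulk_line', line: i, value: true });
      flaggedLines++;
    }
  }
  if (flaggedLines > 0) {
    rec.save();
  }
  context.write({ key: orderId, value: JSON.stringify({ flaggedLines }) });
};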
Practical Example 1: Mass Updating Records
This script updates a custom field on all active customers based on their order history:
/**
* @NApiVersion 2.1
* @NScriptType MapReduceScript
* @NModuleScope SameAccount
*
* Recalculate customer tier based on 12-month order totals
*/
define(['N/search', 'N/record', 'N/log', 'N/runtime', 'N/email'],
(search, record, log, runtime, email) => {
const getInputData = () => {
return search.create({
type: search.Type.TRANSACTION,
filters: [
['type', 'anyof', 'SalesOrd'],
'AND',
['mainline', 'is', 'T'],
'AND',
['trandate', 'within', 'lastrollingyear'],
'AND',
['status', 'anyof', 'SalesOrd:C', 'SalesOrd:F', 'SalesOrd:G'] // Billed, Fulfilled
],
columns: [
search.createColumn({ name: 'entity', summary: search.Summary.GROUP }),
search.createColumn({ name: 'amount', summary: search.Summary.SUM }),
search.createColumn({ name: 'internalid', summary: search.Summary.COUNT })
]
});
};
const map = (context) => {
const result = JSON.parse(context.value);
const customerId = result.values['GROUP(entity)'].value;
const totalAmount = parseFloat(result.values['SUM(amount)']);
const orderCount = parseInt(result.values['COUNT(internalid)'], 10);
// Determine tier
let tier;
if (totalAmount >= 100000) {
tier = 'Platinum';
} else if (totalAmount >= 50000) {
tier = 'Gold';
} else if (totalAmount >= 10000) {
tier = 'Silver';
} else {
tier = 'Bronze';
}
context.write({
key: customerId,
value: JSON.stringify({ tier, totalAmount, orderCount })
});
};
const reduce = (context) => {
const customerId = context.key;
const data = JSON.parse(context.values[0]); // one value per customer
try {
record.submitFields({
type: record.Type.CUSTOMER,
id: customerId,
values: {
custentity_tier: data.tier,
custentity_annual_total: data.totalAmount,
custentity_order_count: data.orderCount,
custentity_tier_updated: new Date()
}
});
context.write({
key: 'success',
value: JSON.stringify({ id: customerId, tier: data.tier })
});
} catch (e) {
log.error(`Failed to update customer ${customerId}`, e.message);
throw e;
}
};
const summarize = (summary) => {
let successCount = 0;
summary.output.iterator().each(() => {
successCount++;
return true;
});
let errorCount = 0;
summary.reduceSummary.errors.iterator().each((key, error) => {
log.error(`Reduce error for customer ${key}`, error);
errorCount++;
return true;
});
log.audit('Customer Tier Update Complete', {
updated: successCount,
errors: errorCount,
runtime: summary.seconds + 's',
governance: summary.usage
});
};
return { getInputData, map, reduce, summarize };
});
Practical Example 2: Processing CSV Imports
This script reads a CSV file from the File Cabinet and creates or updates records:
/**
* @NApiVersion 2.1
* @NScriptType MapReduceScript
*
* Import product updates from CSV file
*/
define(['N/file', 'N/record', 'N/search', 'N/log', 'N/runtime'],
(file, record, search, log, runtime) => {
const getInputData = () => {
// Get the CSV file ID from script parameter
const scriptObj = runtime.getCurrentScript();
const fileId = scriptObj.getParameter({ name: 'custscript_import_file_id' });
if (!fileId) {
throw new Error('No import file specified');
}
const csvFile = file.load({ id: fileId });
const csvContent = csvFile.getContents();
const lines = csvContent.split('\n');
const headers = lines[0].split(',').map(h => h.trim().toLowerCase());
const records = [];
for (let i = 1; i < lines.length; i++) {
if (!lines[i].trim()) continue; // skip empty lines
const values = lines[i].split(',').map(v => v.trim());
const row = {};
headers.forEach((header, index) => {
row[header] = values[index] || '';
});
row._lineNumber = i + 1; // for error reporting
records.push(row);
}
log.audit('CSV Parsed', `Found ${records.length} rows to process`);
return records;
};
const map = (context) => {
const row = JSON.parse(context.value);
// Validate required fields
if (!row.sku || !row.price) {
log.error('Invalid Row', `Line ${row._lineNumber}: Missing SKU or price`);
return; // skip this row, do not write to reduce
}
// Look up existing item by SKU
const itemSearch = search.create({
type: search.Type.ITEM,
filters: [['itemid', 'is', row.sku]],
columns: ['internalid', 'itemid', 'baseprice']
});
let existingItemId = null;
itemSearch.run().each((result) => {
existingItemId = result.id;
return false; // only need the first match
});
context.write({
key: row.sku,
value: JSON.stringify({
existingId: existingItemId,
sku: row.sku,
name: row.name || '',
price: parseFloat(row.price),
description: row.description || '',
lineNumber: row._lineNumber
})
});
};
const reduce = (context) => {
const sku = context.key;
const data = JSON.parse(context.values[0]);
try {
if (data.existingId) {
// Update existing item
record.submitFields({
type: record.Type.INVENTORY_ITEM,
id: data.existingId,
values: {
baseprice: data.price,
salesdescription: data.description
}
});
log.debug('Updated', `Item ${sku} (ID: ${data.existingId})`);
context.write({ key: 'updated', value: sku });
} else {
// Create new item
const newItem = record.create({
type: record.Type.INVENTORY_ITEM,
isDynamic: true
});
newItem.setValue({ fieldId: 'itemid', value: data.sku });
newItem.setValue({ fieldId: 'displayname', value: data.name });
newItem.setValue({ fieldId: 'baseprice', value: data.price });
newItem.setValue({ fieldId: 'salesdescription', value: data.description });
const newId = newItem.save();
log.debug('Created', `Item ${sku} (new ID: ${newId})`);
context.write({ key: 'created', value: sku });
}
} catch (e) {
log.error(`Failed: ${sku}`, `Line ${data.lineNumber}: ${e.message}`);
throw e;
}
};
const summarize = (summary) => {
let created = 0;
let updated = 0;
summary.output.iterator().each((key, value) => {
if (key === 'created') created++;
if (key === 'updated') updated++;
return true;
});
let errors = 0;
summary.reduceSummary.errors.iterator().each((key, error) => {
errors++;
return true;
});
log.audit('CSV Import Complete', {
created: created,
updated: updated,
errors: errors,
runtime: summary.seconds + 's'
});
};
return { getInputData, map, reduce, summarize };
});
Practical Example 3: Generating Consolidated Reports
This script aggregates invoice data by subsidiary and month, then creates a summary custom record:
/**
* @NApiVersion 2.1
* @NScriptType MapReduceScript
*
* Generate monthly revenue summary by subsidiary
*/
define(['N/search', 'N/record', 'N/log', 'N/format'],
(search, record, log, format) => {
const getInputData = () => {
return search.create({
type: search.Type.INVOICE,
filters: [
['mainline', 'is', 'T'],
'AND',
['trandate', 'within', 'lastmonth'],
'AND',
['status', 'anyof', 'CustInvc:B'] // Open invoices
],
columns: [
search.createColumn({ name: 'subsidiary' }),
search.createColumn({ name: 'trandate' }),
search.createColumn({ name: 'amount' }),
search.createColumn({ name: 'entity' }),
search.createColumn({ name: 'tranid' })
]
});
};
const map = (context) => {
const result = JSON.parse(context.value);
const subsidiaryId = result.values.subsidiary.value;
const subsidiaryName = result.values.subsidiary.text;
const amount = parseFloat(result.values.amount);
const tranDate = result.values.trandate;
// Extract month key for grouping (e.g., "2026-01")
const dateParts = tranDate.split('/');
const monthKey = `${dateParts[2]}-${dateParts[0].padStart(2, '0')}`;
// Group by subsidiary + month
const compositeKey = `${subsidiaryId}__${monthKey}`;
context.write({
key: compositeKey,
value: JSON.stringify({
subsidiaryId: subsidiaryId,
subsidiaryName: subsidiaryName,
month: monthKey,
amount: amount,
invoiceId: result.id,
tranId: result.values.tranid,
customer: result.values.entity.text
})
});
};
const reduce = (context) => {
const invoices = context.values.map(v => JSON.parse(v));
const first = invoices[0];
// Aggregate
let totalRevenue = 0;
let invoiceCount = 0;
const customers = new Set();
invoices.forEach((inv) => {
totalRevenue += inv.amount;
invoiceCount++;
customers.add(inv.customer);
});
// Create summary record
try {
const summaryRec = record.create({
type: 'customrecord_revenue_summary',
isDynamic: true
});
summaryRec.setValue({ fieldId: 'custrecord_rs_subsidiary', value: first.subsidiaryId });
summaryRec.setValue({ fieldId: 'custrecord_rs_month', value: first.month });
summaryRec.setValue({ fieldId: 'custrecord_rs_total_revenue', value: totalRevenue });
summaryRec.setValue({ fieldId: 'custrecord_rs_invoice_count', value: invoiceCount });
summaryRec.setValue({ fieldId: 'custrecord_rs_unique_customers', value: customers.size });
const summaryId = summaryRec.save();
log.debug('Summary Created', `${first.subsidiaryName} - ${first.month}: $${totalRevenue}`);
context.write({
key: first.subsidiaryName,
value: JSON.stringify({
month: first.month,
revenue: totalRevenue,
invoices: invoiceCount,
summaryId: summaryId
})
});
} catch (e) {
log.error('Summary Creation Failed', `${first.subsidiaryName} - ${first.month}: ${e.message}`);
throw e;
}
};
const summarize = (summary) => {
const results = [];
summary.output.iterator().each((key, value) => {
results.push({ subsidiary: key, ...JSON.parse(value) });
return true;
});
log.audit('Revenue Summary Report', {
summariesCreated: results.length,
runtime: summary.seconds + 's',
details: JSON.stringify(results)
});
// Log any errors
summary.reduceSummary.errors.iterator().each((key, error) => {
log.error('Reduce Error', `Key: ${key}, Error: ${error}`);
return true;
});
};
return { getInputData, map, reduce, summarize };
});
Yield and Concurrency Considerations
Understanding Yields
NetSuite automatically yields (pauses and resumes) Map/Reduce scripts to manage server resources. You can see the number of yields in summary.yields. Yields happen between map and reduce invocations -- not in the middle of one.
This means each individual map or reduce function call runs to completion without interruption. But between calls, NetSuite may pause your script, run other scripts, and resume yours later. This has implications:
- Do not rely on timing between invocations
- Do not use global variables to share state between map/reduce calls
- Each invocation is stateless -- treat it as an independent function
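For example, the sketch below shows why a module-level counter is unreliable, and how to get the same total safely by emitting a marker per record and counting it in summarize:
// UNRELIABLE: module-scoped state does not survive yields or parallel queues
let processedCount = 0; // each queue/invocation may see its own copy
const map = (context) => {
  // ... do the per-record work ...
  processedCount++; // summarize will NOT see an accurate total
};

// RELIABLE: emit one marker per record, then count in summarize
const mapStateless = (context) => {
  // ... do the per-record work ...
  context.write({ key: context.key, value: 'processed' });
};
const summarize = (summary) => {
  let processed = 0;
  summary.output.iterator().each(() => {
    processed++;
    return true;
  });
  log.audit('Total processed', processed);
};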
Concurrency Configuration
You can control parallelism through the Script Deployment record:
Deployment Settings:
Concurrency Limit: 1-5 (default varies by account)
When to limit concurrency:
- Your script updates records that other scripts also modify (lock contention)
- You are calling an external API with rate limits
- You need more predictable execution order for debugging
When to maximize concurrency:
- Processing independent records with no shared dependencies
- You need the job to complete as fast as possible
- Each record operation is self-contained
Avoiding Record Lock Contention
When multiple map/reduce invocations try to update the same record simultaneously, you get RCRD_HAS_BEEN_CHANGED errors. Structure your keys to avoid this:
// BAD: All map invocations write to the same parent record
const map = (context) => {
const data = JSON.parse(context.value);
// Multiple parallel invocations all hitting the same customer
record.submitFields({
type: record.Type.CUSTOMER,
id: data.customerId,
values: { custentity_counter: data.count }
});
};
// GOOD: Defer writes to reduce stage, grouped by target record
const map = (context) => {
const data = JSON.parse(context.value);
context.write({
key: data.customerId, // group by customer
value: JSON.stringify({ count: data.count })
});
};
const reduce = (context) => {
// All data for this customer arrives in one reduce call
// No lock contention because only one invocation touches this customer
const customerId = context.key;
const totalCount = context.values.reduce((sum, v) => {
return sum + JSON.parse(v).count;
}, 0);
record.submitFields({
type: record.Type.CUSTOMER,
id: customerId,
values: { custentity_counter: totalCount }
});
};
Debugging and Logging Strategies
Debugging Map/Reduce scripts is harder than other script types because of the asynchronous, multi-stage execution. Here are practical strategies.
Structured Logging
Use consistent log formats that you can search in the Execution Log:
const LOG_PREFIX = 'CUSTOMER_TIER_MR';
const map = (context) => {
const data = JSON.parse(context.value);
log.debug(`${LOG_PREFIX}:MAP`, `Processing record ${data.id}`);
// ... logic ...
log.debug(`${LOG_PREFIX}:MAP`, `Completed record ${data.id}, wrote key: ${data.customerId}`);
};
const reduce = (context) => {
log.debug(`${LOG_PREFIX}:REDUCE`, `Key: ${context.key}, Values: ${context.values.length}`);
// ... logic ...
log.debug(`${LOG_PREFIX}:REDUCE`, `Completed key: ${context.key}`);
};
Testing with Small Data Sets
During development, limit your input data to a handful of records:
const getInputData = () => {
// Note: do not name the local variable "search" -- it would shadow the N/search module
const customerSearch = search.create({
type: search.Type.CUSTOMER,
filters: [
['internalid', 'anyof', ['101', '102', '103']] // specific test records
],
columns: [/* ... */]
});
return customerSearch;
};
Using Script Parameters for Debug Mode
const getInputData = () => {
const scriptObj = runtime.getCurrentScript();
const debugMode = scriptObj.getParameter({ name: 'custscript_debug_mode' });
const filters = [
['isactive', 'is', 'T']
];
// In debug mode, limit to 10 records
if (debugMode) {
log.debug('DEBUG MODE', 'Limiting to 10 records');
const searchObj = search.create({
type: search.Type.CUSTOMER,
filters: filters,
columns: [/* ... */]
});
const results = [];
let count = 0;
searchObj.run().each((result) => {
results.push(result);
count++;
return count < 10;
});
return results;
}
return search.create({
type: search.Type.CUSTOMER,
filters: filters,
columns: [/* ... */]
});
};
Tracking Execution in the UI
After deploying your script:
- Go to Customization > Scripting > Script Deployments
- Find your Map/Reduce deployment
- Click View to see the execution status: Pending, Processing, Complete, or Failed
- Check the Map/Reduce Script Status page for real-time progress (percentage complete, current stage, queue assignments)
- Review the Execution Log tab for your log.debug() and log.audit() entries
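You can also check the same status programmatically from another script (a Suitelet or Scheduled Script, for instance) through the N/task module -- a sketch, with hypothetical script and deployment IDs:
define(['N/task', 'N/log'], (task, log) => {
  // Submit the Map/Reduce deployment and keep the task ID it returns
  const mrTask = task.create({
    taskType: task.TaskType.MAP_REDUCE,
    scriptId: 'customscript_customer_tier_mr',     // hypothetical
    deploymentId: 'customdeploy_customer_tier_mr'  // hypothetical
  });
  const taskId = mrTask.submit();

  // Later (or from a separate script), poll the status
  const status = task.checkStatus({ taskId: taskId });
  log.audit('MR Task Status', {
    status: status.status,           // PENDING, PROCESSING, COMPLETE, FAILED
    stage: status.stage,             // e.g. GET_INPUT, MAP, SHUFFLE, REDUCE, SUMMARIZE
    percentComplete: status.getPercentageCompleted()
  });
});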
Performance Optimization Tips
1. Return Search Objects from getInputData
Do not run a search and push results into an array. Return the search object directly and let NetSuite stream results:
// FAST: NetSuite streams results efficiently
const getInputData = () => {
return search.create({ type: 'salesorder', filters: [...], columns: [...] });
};
// SLOW: Loads all results into memory first
const getInputData = () => {
const results = [];
search.create({ type: 'salesorder', filters: [...], columns: [...] })
.run().each((result) => {
results.push(result);
return true;
});
return results;
};
2. Use submitFields Instead of Load/Save
When you only need to update a few fields, record.submitFields() is significantly faster and uses fewer governance units than loading, modifying, and saving a full record:
// FAST: ~10 governance units
record.submitFields({
type: record.Type.CUSTOMER,
id: customerId,
values: { custentity_tier: 'Gold' }
});
// SLOW: ~20+ governance units
const rec = record.load({ type: record.Type.CUSTOMER, id: customerId });
rec.setValue({ fieldId: 'custentity_tier', value: 'Gold' });
rec.save();
3. Minimize Data Passed Between Stages
Only serialize what you actually need in the next stage. Large payloads slow down serialization and deserialization:
// GOOD: Pass only the IDs and computed values you need
context.write({
key: customerId,
value: JSON.stringify({ orderId: result.id, total: amount })
});
// BAD: Passing entire search result objects
context.write({
key: customerId,
value: JSON.stringify(result) // includes lots of metadata you don't need
});
4. Use the Map Stage for Filtering
If some records do not need processing, filter them in map by simply not calling context.write(). This avoids wasting reduce governance on irrelevant data:
const map = (context) => {
const data = JSON.parse(context.value);
const amount = parseFloat(data.values.amount);
// Skip small amounts
if (amount < 100) return; // no context.write() means this is filtered out
context.write({
key: data.values.entity.value,
value: JSON.stringify({ amount })
});
};
5. Batch Operations in Reduce
If you are creating multiple related records in reduce, consider batching them:
const reduce = (context) => {
const items = context.values.map(v => JSON.parse(v));
// Create a single parent record with all child lines
// instead of creating multiple individual records
const journal = record.create({
type: record.Type.JOURNAL_ENTRY,
isDynamic: true
});
items.forEach((item) => {
journal.selectNewLine({ sublistId: 'line' });
journal.setCurrentSublistValue({
sublistId: 'line',
fieldId: 'account',
value: item.accountId
});
journal.setCurrentSublistValue({
sublistId: 'line',
fieldId: 'debit',
value: item.amount
});
journal.commitLine({ sublistId: 'line' });
});
journal.save(); // one save instead of many
};
6. Script Parameter Configuration
Always use script parameters for configurable values instead of hardcoding:
/**
* Script Parameters:
* - custscript_mr_search_id: Saved search to use as input
* - custscript_mr_batch_size: Number of records per reduce batch
* - custscript_mr_email_on_complete: Send completion email (checkbox)
*/
const getInputData = () => {
const scriptObj = runtime.getCurrentScript();
const searchId = scriptObj.getParameter({ name: 'custscript_mr_search_id' });
return search.load({ id: searchId });
};
This makes your script reusable across different scenarios without code changes.
Deployment Checklist
Before deploying a Map/Reduce script to production:
- Test with a small data set -- use specific internal IDs in your search filters
- Verify error handling -- intentionally cause a failure and confirm the summarize stage reports it
- Check governance usage -- log runtime.getCurrentScript().getRemainingUsage() in map and reduce
- Set appropriate concurrency -- start with 1 queue and increase after confirming no lock contention
- Configure status notifications -- set up email alerts for script failures
- Schedule appropriately -- run during off-peak hours if processing thousands of records
- Add a "processed" flag -- prevent re-processing records on subsequent runs (see the sketch after this list)
- Review the Map/Reduce Script Status page after the first production run
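A sketch of the "processed" flag pattern from the checklist, reusing the custbody_batch_processed field from the first getInputData example and assuming map writes each order under its own internal ID as the key: the input search picks up only unflagged orders, and reduce flips the flag as its final step so a rerun skips them.
const getInputData = () => {
  return search.create({
    type: search.Type.SALES_ORDER,
    filters: [
      ['mainline', 'is', 'T'],
      'AND',
      ['custbody_batch_processed', 'is', 'F'] // only orders not yet processed
    ],
    columns: ['tranid', 'entity', 'total']
  });
};

const reduce = (context) => {
  const orderId = context.key;
  // ... business logic for this order ...
  // Flip the flag last, so a failed invocation stays eligible for the next run
  record.submitFields({
    type: record.Type.SALES_ORDER,
    id: orderId,
    values: { custbody_batch_processed: true }
  });
};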
Next Steps
Now that you understand Map/Reduce scripts, continue building your SuiteScript expertise:
- Scheduled Scripts for simpler batch processing
- Search API in SuiteScript for building input queries
- RESTlets for triggering bulk operations via API
Need help with bulk data processing in NetSuite? Contact our development team for a consultation.