Cursor-Based Pagination in Salesforce Apex: Processing Millions of Records Without Hitting Governor Limits

Key Takeaways:

  • As Salesforce data volumes grow, batch processes that once worked reliably begin to fail—hitting governor limits, slowing down operations, and creating inconsistencies in business-critical data.
    At Sigma Infosolutions, we address this by re-architecting data processing using cursor-based pagination with Batch Apex, enabling controlled, large-scale execution without system failures.
  • Many implementations still rely on processing large datasets in a single transaction, a pattern that breaks beyond platform limits and leads to performance bottlenecks.
    Sigma replaces this with chunked processing using Database.QueryLocator, allowing millions of records to be handled efficiently with predictable performance.
  • A lack of fault tolerance in batch operations often means a single bad record can disrupt entire workflows, reducing reliability at scale.
    To overcome this, we design resilient processing pipelines using partial success handling and structured error tracking, ensuring continuity even when data is imperfect.
  • As systems scale, poor query design and non-bulkified logic start impacting performance and stability.
    Sigma applies bulk processing principles, optimized data access patterns, and proven architectural best practices to ensure Salesforce systems remain stable, efficient, and scalable over time.

Introduction

Salesforce systems rarely fail at the beginning. They fail as they grow.

What starts as a stable setup—handling customer data, workflows, and reporting—gradually begins to show signs of strain. Processing slows down, batch jobs fail intermittently, and data inconsistencies start surfacing across business functions. At this point, the conversation often shifts to platform limits. But in practice, the issue is not Salesforce.
It’s how the system was designed to handle increasing data volumes.

In many cases, the architecture that worked at 10,000 records is still in place at 200,000 or more. And that’s where the breakdown begins. Cursor-based pagination addresses this challenge—not as a workaround, but as a foundational design approach for scalable data processing.

Where Systems Begin to Struggle

As data grows, many Salesforce environments continue to rely on processing models that were never designed for scale.

These systems typically attempt to:

  • Load large datasets into a single transaction
  • Process records sequentially in-memory
  • Execute updates in bulk without isolation

Initially, this works. Over time, it introduces friction:

  • Jobs fail midway, leaving incomplete updates
  • Processing times become unpredictable
  • Teams lose visibility into execution outcomes

The business impact is subtle at first—but compounds quickly. Reporting accuracy drops. Operational effort increases. Confidence in the system starts to erode.

Rethinking Data Processing: A Scalable Approach

The shift required is not incremental—it’s structural. Instead of processing everything at once, scalable systems process data in controlled, sequential segments.

Cursor-based pagination enables this model. Rather than loading the entire dataset, the system:

  • Retrieves a defined subset of records
  • Processes them independently
  • Progresses automatically until completion

This aligns data processing with how Salesforce is designed to operate: within defined limits, but at scale.

What Is Cursor-Based Pagination?

Cursor-based pagination is a technique where data is processed in small, manageable chunks, instead of loading everything at once.

Think of it like reading a large book:

  • Instead of reading all pages at once
  • You read one page at a time, but keep track of where you left off

That “bookmark” is your cursor.
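
In SOQL terms, that bookmark can simply be the Id of the last record read. A minimal sketch of the idea (object, fields, and page size are illustrative):

```apex
// Keyset-style cursor: remember the last Id seen and resume after it.
// Salesforce Ids sort consistently, so "Id > :lastId ORDER BY Id" walks
// the table one page at a time, never re-reading earlier pages.
Id lastId;
List<Contact> page;
do {
    page = (lastId == null)
        ? [SELECT Id, Email FROM Contact ORDER BY Id LIMIT 200]
        : [SELECT Id, Email FROM Contact WHERE Id > :lastId ORDER BY Id LIMIT 200];
    for (Contact c : page) {
        // process one page's worth of records here
        lastId = c.Id;  // advance the bookmark
    }
} while (page.size() == 200);
```

Note that a loop like this inside a single transaction still counts toward the 50,000-row query limit; the sections below show how Batch Apex runs each page in its own transaction so the limits reset per chunk.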

How This Works in Salesforce

Salesforce supports this approach through Batch Apex and Database.QueryLocator, which together enable server-side cursors. From a business perspective, the advantages are clear:

  • Large volumes can be processed without failure
  • Each execution cycle is isolated, reducing systemic risk
  • Processing becomes predictable and measurable

In practice, the pattern is implemented as a Batch Apex class whose start method returns a Database.QueryLocator: the platform opens a server-side cursor and feeds records to the class in chunks.
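A minimal Batch Apex sketch of the pattern (the class name, custom fields, and scoring logic are illustrative, not a prescribed implementation):

```apex
// Minimal Batch Apex sketch: Database.QueryLocator opens a server-side
// cursor, and the platform feeds records to execute() in chunks.
public class AccountScoreBatch implements Database.Batchable<SObject> {

    // start() returns a cursor, not the records themselves -- this is
    // what lifts the 50,000-row SOQL ceiling to 50 million for the job.
    public Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator(
            'SELECT Id, AnnualRevenue FROM Account'
        );
    }

    // Each execute() call is an isolated transaction with fresh limits.
    public void execute(Database.BatchableContext bc, List<Account> scope) {
        for (Account acc : scope) {
            acc.Score__c = computeScore(acc);  // illustrative custom field
        }
        // allOrNone = false: one bad record does not roll back the chunk
        Database.update(scope, false);
    }

    public void finish(Database.BatchableContext bc) {
        // post-processing: notifications, chaining the next job, etc.
    }

    private Decimal computeScore(Account acc) {
        return acc.AnnualRevenue == null ? 0 : acc.AnnualRevenue / 1000;
    }
}
```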

The intent here isn’t the code itself. It’s the pattern: controlled execution, isolation, and continuity at scale.

Governor Limits: Before vs After

Limit        Without Cursor    With Cursor
SOQL Rows    50,000            50,000,000
Heap Size    6 MB              12 MB per batch
DML Rows     10,000            10,000 per batch
CPU Time     10 sec            60 sec per batch
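
The "per batch" figures apply because each chunk runs as its own transaction. Launching the job might look like this (the batch class name and scope size of 200 are illustrative):

```apex
// Each execute() chunk runs in a separate transaction, so the per-batch
// limits above (12 MB heap, 10,000 DML rows, 60 s CPU) reset for every
// group of 200 records the QueryLocator hands to the job.
Id jobId = Database.executeBatch(new AccountScoreBatch(), 200);
System.debug('Queued batch job: ' + jobId);
```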

What Changes When Architecture Evolves

When systems adopt this model, the shift is noticeable. Data operations become stable. Failures no longer cascade across entire jobs. Processing times stop fluctuating unpredictably. Most importantly, teams regain confidence in the system’s ability to handle growth.

This is less about performance optimization and more about operational reliability.

A Typical Transformation Scenario

In a growing Salesforce environment, a business relied heavily on batch processes to maintain account-level insights and reporting accuracy.

As data volumes increased, these processes began failing intermittently. Updates were inconsistent, and operational teams had to intervene frequently to correct issues manually.

The root cause wasn’t the logic; it was the execution design.

By restructuring data processing using a cursor-based approach:

  • Large jobs were broken into controlled execution cycles
  • Failures were isolated instead of impacting entire runs
  • Processing became consistent and easier to monitor
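
The failure-isolation step above can be sketched with Database.update's allOrNone flag plus structured error capture (the Batch_Error__c object and its fields are illustrative; scope is the record list handed to a batch execute call):

```apex
// Partial success: failed rows are reported and logged, not allowed
// to roll back the rest of the chunk.
List<Database.SaveResult> results = Database.update(scope, false);
List<Batch_Error__c> errorLogs = new List<Batch_Error__c>();  // illustrative custom object
for (Integer i = 0; i < results.size(); i++) {
    if (!results[i].isSuccess()) {
        for (Database.Error err : results[i].getErrors()) {
            errorLogs.add(new Batch_Error__c(
                Record_Id__c   = scope[i].Id,
                Message__c     = err.getMessage(),
                Status_Code__c = String.valueOf(err.getStatusCode())
            ));
        }
    }
}
insert errorLogs;  // structured error tracking for later review
```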

Over time, the system stabilized—not through incremental fixes, but through architectural realignment.

Also read the blog: Salesforce Integration: Essential Best Practices for Growth

Where Sigma Fits In

This is the point where many organizations reach a decision threshold.

They can continue patching failures as they arise—or step back and rethink how their Salesforce system is designed to operate at scale. At Sigma, this is where our Salesforce consulting approach is focused.

Instead of treating issues like failed batches or slow processing as isolated problems, we look at the entire data processing layer as a system:

  • How data flows across objects and workflows
  • Where execution patterns break under load
  • How limits are being encountered—and why

From there, the focus shifts to:

  • Redesigning processing models for scale
  • Introducing fault-tolerant execution patterns
  • Improving observability and operational control

The outcome isn’t just better-performing code.
It’s a Salesforce environment that remains stable as the business grows, without constant intervention.

Read our success story: Unifying Multi-Org Operations with Salesforce-to-Salesforce Integration for a specialty finance company

The Architectural Difference

At scale, the difference between stable and unstable systems comes down to intent.

Unstructured systems attempt to process everything at once and eventually fail under pressure. Structured systems are designed to operate within constraints, treating them as deliberate Salesforce architecture patterns. Cursor-based pagination is one example of this shift, but the broader principle matters more: scalable systems are designed, not patched.

When This Becomes Critical

This approach becomes essential when:

  • Data volumes exceed operational comfort zones
  • Business-critical processes depend on batch execution
  • System reliability directly impacts reporting and decisions

At this stage, continuing with existing patterns introduces more risk than value.

Conclusion

Cursor-based pagination is not just a technique; it’s a reflection of how scalable Salesforce systems are built.

Organizations that continue to grow successfully on Salesforce don’t rely on incremental fixes. They ensure that their data processing architecture can support increasing complexity without introducing instability.

That requires a shift:

  • From reactive fixes to proactive design
  • From isolated jobs to structured processing systems
  • From short-term solutions to long-term scalability

Final Thought

If your Salesforce environment is starting to show signs of strain (failed jobs, delayed processing, or inconsistent data), it’s rarely a one-off issue. It’s a signal. And addressing it effectively requires more than optimization. It requires rethinking how the system is designed to handle scale.