Optimized by Frappe core contributors
We've taken 8-hour batch jobs down to under a minute. Not by throwing hardware at the problem – by understanding the Frappe framework at the source level and fixing the actual bottleneck. Our developers contribute to Frappe core, so they know where the framework is fast, where it's slow, and how to work with both.
Get a performance audit →

8 hrs (before optimization) → <1 min (after optimization)
Actual result from a client's batch processing job on ERPNext
Most ERPNext performance issues are code problems, not infrastructure problems. Adding more RAM won't fix an N+1 query that runs 10,000 times per page load.
Query Reports that take 30+ seconds. List views that time out on large datasets. Usually caused by missing indexes, unoptimized SQL, or Python-side filtering that should happen in the query. We profile the actual queries hitting MariaDB, add proper indexes, rewrite the SQL, and push filtering to the database layer. Result: sub-second reports.
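The scan-versus-index difference is easy to see in a query plan. A minimal sketch, using SQLite's `EXPLAIN QUERY PLAN` as a stand-in for MariaDB's `EXPLAIN` (the table and column names here are invented; MariaDB's output format differs, but the before/after distinction is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoice (name TEXT, customer TEXT, total REAL)")
conn.executemany(
    "INSERT INTO invoice VALUES (?, ?, ?)",
    [(f"INV-{i}", f"CUST-{i % 50}", float(i)) for i in range(1000)],
)

query = "SELECT name, total FROM invoice WHERE customer = ?"

# Before: no index on `customer`, so the planner does a full table scan.
before = conn.execute("EXPLAIN QUERY PLAN " + query, ("CUST-7",)).fetchall()
print(before[0][3])   # detail column reads like 'SCAN invoice'

# After: with an index, the planner seeks directly to the matching rows.
conn.execute("CREATE INDEX idx_invoice_customer ON invoice (customer)")
after = conn.execute("EXPLAIN QUERY PLAN " + query, ("CUST-7",)).fetchall()
print(after[0][3])    # now reads like 'SEARCH invoice USING INDEX idx_invoice_customer'
```

On a 30-second report, this one-line index is often the entire fix.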
Payroll processing that takes hours. Bulk invoice generation that blocks the queue. Background jobs that fail silently and retry forever. We restructure batch operations to use efficient bulk queries instead of document-by-document processing, implement proper chunking with progress tracking, and fix retry logic to prevent queue poisoning.
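The chunking pattern above can be sketched in a few lines. This is a generic illustration, not a Frappe API: the `chunked` helper and the progress hook are hypothetical names.

```python
def chunked(items, size):
    """Yield fixed-size slices so each batch can be one bulk query and one commit."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

invoices = [f"INV-{i}" for i in range(2500)]
processed = 0
batches = 0

for batch in chunked(invoices, 500):
    # One bulk INSERT/UPDATE per batch here, instead of 2,500 single-row writes.
    # Committing per batch means a failure loses at most one chunk, not the run.
    processed += len(batch)
    batches += 1
    # A progress hook would report processed / len(invoices) at this point.
```

Per-batch commits are also what keeps a failed job restartable instead of silently retrying from zero.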
The most common performance killer in Frappe customizations: a loop that calls frappe.get_doc() or frappe.get_value() for every row. 1,000 items = 1,000 database queries. We refactor these to batch queries using frappe.get_all() with proper filters. One query instead of a thousand.
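Here is the pattern reduced to its essentials, with SQLite standing in for MariaDB (the `item` table is invented for illustration). The loop version is what `frappe.get_value()` per row does; the batched version is what a single `frappe.get_all()` with an "in" filter compiles to:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item (name TEXT PRIMARY KEY, rate REAL)")
conn.executemany("INSERT INTO item VALUES (?, ?)",
                 [(f"ITEM-{i}", float(i)) for i in range(1000)])

codes = [f"ITEM-{i}" for i in range(1000)]

# N+1 pattern: one round-trip per row -- 1,000 queries for 1,000 items.
rates_slow = {
    c: conn.execute("SELECT rate FROM item WHERE name = ?", (c,)).fetchone()[0]
    for c in codes
}

# Batched pattern: one query fetches every row at once.
placeholders = ",".join("?" * len(codes))
rates_fast = dict(conn.execute(
    f"SELECT name, rate FROM item WHERE name IN ({placeholders})", codes))

assert rates_slow == rates_fast   # same data, three orders of magnitude fewer queries
```

The results are identical; only the number of database round-trips changes.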
Documents with many linked fields, custom scripts that make API calls on load, child tables with thousands of rows. We identify which calls are blocking the render, implement lazy loading for heavy data, add server-side caching for frequently accessed values, and restructure client scripts to load data asynchronously.
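Server-side caching of a frequently read value can be as small as a memoized function. A minimal sketch using the standard library's `functools.lru_cache`; the function name and return values are invented for illustration:

```python
from functools import lru_cache

lookups = {"count": 0}

@lru_cache(maxsize=128)
def company_currency(company):
    """Stand-in for a slow per-form-load database or API lookup."""
    lookups["count"] += 1
    return {"Aerele": "INR"}.get(company, "INR")

# A form rendering 500 child rows asks for the same value 500 times...
for _ in range(500):
    company_currency("Aerele")

# ...but the backing lookup only ran once.
assert lookups["count"] == 1
```

In a real deployment the cache would live in Redis so it survives across workers, but the access pattern is the same.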
ERPNext databases that grow to 50GB+ with years of transaction data. Slow backups, slow queries, growing storage costs. We implement data archival strategies, set up table partitioning where applicable, clean up orphaned records, and optimize table structures – all without losing any business data.
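Orphaned-record cleanup is usually one anti-join delete: child rows whose parent no longer exists. A sketch with SQLite and invented table names (a production run would operate on a verified backup and log what it removes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE parent (name TEXT PRIMARY KEY);
CREATE TABLE child  (name TEXT PRIMARY KEY, parent TEXT);
INSERT INTO parent VALUES ('SO-001');
INSERT INTO child  VALUES ('row1', 'SO-001'), ('row2', 'SO-GONE');
""")

# Delete child rows pointing at a parent that was removed but left them behind.
deleted = conn.execute(
    "DELETE FROM child WHERE parent NOT IN (SELECT name FROM parent)"
).rowcount

assert deleted == 1   # 'row2' was orphaned; 'row1' keeps its valid parent
```

The same anti-join, run as a `SELECT COUNT(*)` first, tells you the blast radius before anything is deleted.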
Frappe uses Redis heavily for caching, queuing, and session management. Misconfigured Redis can cause cache misses that hit the database for every request, queue backlogs that delay background processing, and session issues under high concurrency. We tune Redis configuration, implement proper cache invalidation, and ensure the caching layer is actually caching.
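"Proper cache invalidation" boils down to one rule: a write must evict the cached copy so the next read repopulates it from the database. A minimal in-process sketch of that pattern (Redis replaced by a dict; class and field names are invented):

```python
class WriteThroughCache:
    """Sketch of invalidate-on-write: reads never serve stale data."""

    def __init__(self, backing):
        self.backing = backing   # stand-in for the database
        self.cache = {}
        self.misses = 0

    def get(self, key):
        if key not in self.cache:
            self.misses += 1                      # cache miss: hit the database
            self.cache[key] = self.backing[key]
        return self.cache[key]

    def set(self, key, value):
        self.backing[key] = value
        self.cache.pop(key, None)   # invalidate; next read repopulates

db = {"company": "Aerele"}
cached = WriteThroughCache(db)
cached.get("company"); cached.get("company")     # one miss, then a hit
cached.set("company", "Aerele Technologies")     # write evicts the stale entry
assert cached.get("company") == "Aerele Technologies"
assert cached.misses == 2
```

A misconfigured layer is the opposite failure: every `get` misses, and the "cache" just adds latency in front of the database.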
We don't guess. We measure, identify, fix, and verify.
We enable MariaDB slow query log, Frappe request profiling, and Redis monitoring. We measure baseline performance on the specific operations that are slow. No assumptions – we look at actual query execution plans, actual Python call stacks, actual cache hit rates.
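Baseline measurement itself needs no framework: time the same operation several times and keep the median, so one noisy run doesn't skew the before/after comparison. A stdlib-only sketch (the helper name and the sample workload are invented):

```python
import time
import statistics

def baseline(fn, runs=5):
    """Median wall-clock time of fn over several runs.

    A sketch of the 'measure first' step; a real audit also captures
    query plans, call stacks, and cache hit rates alongside timings.
    """
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

slow_op = lambda: sum(i * i for i in range(100_000))   # stand-in for the slow operation
before = baseline(slow_op)
# ...apply the fix, then re-run baseline() on the same operation and compare.
```

Re-running the identical function under identical conditions is what makes the final before/after numbers honest.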
Usually 80% of the slowness comes from 1-2 specific queries or code paths. We find the biggest offender first. Fixing the top bottleneck often reveals the second one – they compound. We rank by impact, not by how interesting the problem is.
We fix the bottleneck with the smallest change that achieves the result. Add an index, not a rewrite. Cache a value, not refactor the architecture. Only escalate to bigger changes when the small ones aren't sufficient. Every change is tested against the same dataset that was slow.
We measure again. Same operation, same dataset, same conditions. We show the before/after numbers. We document what was changed, why it was slow, and how to prevent the pattern in future development.
When your optimizer has contributed to the framework's query builder, ORM, and caching layer, they know where the performance boundaries are. They know which Frappe methods are O(1) and which are O(n). They know what gets cached and what doesn't. They don't need to reverse-engineer the framework – they helped build it.
That's the difference between someone who profiles your code and someone who profiles your code AND knows the framework internals well enough to suggest a change to how the framework itself handles your use case.
Common causes include: unoptimized database queries in custom reports and scripts, missing database indexes, N+1 query patterns in custom code, large background job queues, inefficient Server Scripts or Client Scripts, bloated database tables without archival, and misconfigured Redis caching. Most ERPNext performance problems are code-level, not infrastructure – adding more RAM rarely helps if the root cause is a query that does a full table scan.
For a focused optimization project, a Development Sprint (₹1,50,000, 3-week cycle) covers profiling, identifying top bottlenecks, implementing fixes, and measuring results. For a smaller investigation, our Hourly model (₹2,500–3,500/hr) works. For ongoing optimization as part of larger development, Dedicated Resource at ₹5,500–8,000/day (quarterly commitment, limited availability).
Most optimizations are additive – adding indexes, adding cache layers, restructuring queries within existing code. We don't change business logic without discussion. Every change is tested against your data and verified before deployment. We explain what we're changing and why before we do it.
Tell us what's slow – reports, batch jobs, form loads, the whole system. We'll profile it and give you an honest assessment of what's fixable and how long it'll take.
hello@aerele.in →