Right-sizing ArcGIS Server: a memory calculator that actually reflects how it works.
You can access the calculators here.
Sizing an ArcGIS Server deployment is one of those tasks that sounds straightforward until you sit down and do it. Esri publishes some general rules of thumb - 4 GB per core, ~3 services per core - but those numbers can feel disconnected from what you actually observe on a running server.
How many services can you really run? What happens when you have a mix of lightweight feature services and heavyweight GP tools? And what does "memory" even mean in the context of ArcSOC processes that might be idle, active, or somewhere in between?
I built a small set of browser-based calculators to make this easier to reason about.
What the calculators do
There are two calculators that cover different planning scenarios.
The Basic calculator is top-down: you enter the number of cores and your deployment type, and it tells you how much memory to provision and how many services you can expect to run. It's useful early in a project - procurement conversations, initial infrastructure requests, back-of-envelope checks. It applies Esri's best practice rules directly and rounds memory up to the nearest 8 GB to align with real DIMM configurations.
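The top-down rules can be sketched in a few lines. This is an illustrative sketch, not the calculator's actual source; function and parameter names are mine, and the constants come from the Esri rules of thumb described above:

```javascript
// Top-down sizing: ~4 GB per core, ~3 services per core,
// memory rounded up to the nearest 8 GB to match real DIMM configurations.
function basicEstimate(cores) {
  const rawMemoryGB = cores * 4;                   // 4 GB per core rule
  const memoryGB = Math.ceil(rawMemoryGB / 8) * 8; // round up to 8 GB boundary
  const services = cores * 3;                      // ~3 services per core rule
  return { memoryGB, services };
}

basicEstimate(6); // { memoryGB: 24, services: 18 }
basicEstimate(9); // { memoryGB: 40, services: 27 } - 36 GB rounded up to 40
```

The rounding step is why a 9-core request comes back as 40 GB rather than 36: odd memory sizes rarely map onto real hardware.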
The Advanced calculator is bottom-up: you start with the number of services you need to run and work backwards to the hardware. It lets you tune the ArcSOC memory per instance, min/max instance counts, the percentage of services that are busy at any given time, and OS and ArcGIS overhead. The result is a recommended core count and memory size grounded in how ArcGIS Server actually behaves.
Both calculators agree at sensible defaults - at 25% concurrency with 300 MB per ArcSOC instance, the Advanced calculator confirms the Basic calculator's 4 GB/core rule of thumb.
The memory vs. cores distinction
One thing that trips people up is that memory and CPU scale differently in ArcGIS Server.
Memory is consumed by every ArcSOC process that is loaded, including idle ones. If you configure min instances = 1, every service has an ArcSOC process sitting in RAM at all times, whether it is serving requests or not. The memory calculation needs to account for all of those loaded instances.
CPU is consumed only by instances that are actively processing a request. An idle ArcSOC process sitting at its minimum instance count uses negligible CPU, so sizing cores to the total number of loaded instances would massively over-provision. Instead, the calculators size cores to the busy instances: busy percentage × services × max instances, divided by four (the instances-per-core figure from Esri Architecture Center testing).
This distinction matters a lot in practice. A server with 100 services at 25% concurrency and min=1 instances might need 16 GB of RAM but only 4–8 cores. Getting this wrong in either direction wastes money.
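The two rules can be sketched side by side. This is a minimal illustration under the assumptions above (min = max = 1 instance, 300 MB per ArcSOC, 25% busy); the names and defaults are mine, not the calculator's actual code:

```javascript
const INSTANCES_PER_CORE = 4; // Esri Architecture Center figure

// Memory is sized to every *loaded* instance, including idle ones.
function memoryMB(services, minInstances, socMB) {
  return services * minInstances * socMB;
}

// Cores are sized only to the *busy* instances.
function coresNeeded(services, maxInstances, busyFraction) {
  const busyInstances = services * maxInstances * busyFraction;
  return Math.ceil(busyInstances / INSTANCES_PER_CORE);
}

memoryMB(100, 1, 300);     // 30000 MB of resident ArcSOCs, before OS/ArcGIS overhead
coresNeeded(100, 1, 0.25); // 7 cores, far fewer than the 25 you would need
                           // if cores were sized to all 100 loaded instances
```

Sizing cores to loaded rather than busy instances is the over-provisioning mistake the distinction guards against.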
Cloud vs. on-premises VMs
The old Esri guidance of +1 GB/core for virtualised deployments was written for on-premises VMware and Hyper-V environments, where the customer manages the hypervisor and needs to leave headroom for it. That overhead is real and still applies to on-prem VMs.
Cloud IaaS is different. Azure and AWS EC2 both provide dedicated, non-overcommitted physical memory to their instances. The hypervisor overhead exists but is absorbed by the provider; you get what you pay for. Applying the +1 GB/core overhead on top of a cloud VM double-counts something you are already paying for. The calculators handle this with a three-way deployment selector: physical, cloud IaaS, or on-premises VM.
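The three-way selector reduces to a simple rule. A sketch, with illustrative names of my own choosing: the legacy +1 GB/core headroom applies only where you run the hypervisor yourself.

```javascript
// Per-deployment virtualisation overhead, in GB.
function virtualisationOverheadGB(deployment, cores) {
  switch (deployment) {
    case "physical":   return 0;         // bare metal, no hypervisor
    case "cloud-iaas": return 0;         // Azure / AWS EC2: provider absorbs it
    case "on-prem-vm": return cores * 1; // VMware / Hyper-V: +1 GB per core
    default: throw new Error(`unknown deployment type: ${deployment}`);
  }
}

virtualisationOverheadGB("on-prem-vm", 8); // 8 GB of extra headroom
virtualisationOverheadGB("cloud-iaas", 8); // 0 - already paid for
```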
Measuring your actual ArcSOC memory
The most important input to the Advanced calculator is ArcSOC memory per instance. This varies widely from around 80 MB for a simple hosted feature service to 900 MB or more for a complex geoprocessing or image service. The default of 300 MB is a reasonable general average, but if you have a running server, you can do much better.
The project includes a PowerShell script, `Get-ArcSOCMemory.ps1`, that scans all running ArcSOC.exe processes and reports working set and peak working set statistics. It also captures ArcGIS Server overhead (ArcGISServer.exe and any java processes) and derives an OS overhead figure from what is left over.
Example script output:

```
Suggested values for Advanced Calculator:
  ArcSOC Memory   : 300 MB
  ArcGIS Overhead : 3200 MB
  OS Overhead     : 6000 MB
```
The suggested values feed directly into the Advanced calculator. Run the script during a representative period of normal load, not during idle or an unusual spike. Peak working set values persist since the last server restart, so they capture the high-water mark even if the server happens to be quiet when you run the script.
The "live dangerously" mode
Both calculators include an optional concurrency mode that replaces the conservative best-practice defaults with an assumption that only a percentage of services are active at any one time - the rest have zero running instances. This allows considerably more services on the same hardware:
For example, here is how concurrency affects the number of services that fit on 4 cores:
- Best practice = 12 services
- 50% concurrent = 32 services
- 25% concurrent = 64 services
- 10% concurrent = 160 services
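The figures above fall out of the same instances-per-core arithmetic. A sketch with illustrative names: 4 cores host 4 × 4 = 16 simultaneously busy instances, and if only a fraction of services is busy at once, the total service count scales up by one over that fraction.

```javascript
const INSTANCES_PER_CORE = 4; // Esri Architecture Center figure

function servicesAtConcurrency(cores, busyFraction) {
  // Math.round guards against floating-point drift (e.g. 16 / 0.1)
  return Math.round((cores * INSTANCES_PER_CORE) / busyFraction);
}

4 * 3;                              // best practice: 12 services (3 per core)
servicesAtConcurrency(4, 0.5);      // 32
servicesAtConcurrency(4, 0.25);     // 64
servicesAtConcurrency(4, 0.10);     // 160
```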
The risk is cold-start latency. When a request arrives for a service with zero instances, ArcGIS Server has to spin up an ArcSOC process from scratch. Depending on the service, this can take anywhere from a few seconds to over a minute. Requests that arrive during that window will queue or fail. This mode is appropriate for services where occasional slow first responses are acceptable - not for anything with strict SLAs or unpredictable bursty load.
What it doesn't model
These calculators are sizing tools, not capacity planning tools. They do not model throughput — requests per second, queue depth, or response time under load. For that, the Esri System Designer or the ArcGIS Server Capacity Planning Tool are more appropriate.
They also model only the dedicated instance pool. From ArcGIS Server 10.9+, new sites use a shared instance pool by default for ArcGIS Pro-published services. Shared pool sizing uses a different approach (pool size is typically set to twice the physical core count) and is outside the scope of these calculators.
Running the calculator
The calculators run entirely in the browser with no dependencies.
Together they are a practical tool for anyone who needs to have a sensible conversation about ArcGIS Server infrastructure without first spending an hour reading white papers.