Please read this docs page as background for this article; it also contains the most up-to-date version of the table below.
Users occasionally ask us how the numbers in our capacity planning table were generated:
| Number of Triples | JVM Heap Memory | Direct Memory | Total System Memory |
|-------------------|-----------------|---------------|----------------------|
| 100 million       | 3G              | 4G            | 8G                   |
| 1 billion         | 8G              | 20G           | 32G                  |
| 10 billion        | 30G             | 80G           | 128G                 |
| 25 billion        | 60G             | 160G          | 256G                 |
| 50 billion        | 80G             | 380G          | 512G                 |
These recommendations are based on a variety of workloads we run over different benchmarks, including but not limited to BSBM, LDBC, and LUBM. The workloads test reads and writes, including concurrent user access. Giving Stardog more memory almost always improves performance, so allocating more memory than specified in this table is generally a good idea. Memory requirements vary based on the dataset and the query complexity, so we recommend testing with your own workload.
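If your dataset falls between the table's rows, one way to get a starting point is to interpolate between the nearest rows. Below is a minimal Python sketch of that idea; the `estimate_memory` helper and the log-linear interpolation heuristic are our own assumptions for illustration, not official Stardog guidance.

```python
# A minimal sketch: estimate memory settings for a triple count by
# log-linear interpolation between rows of the capacity planning table.
# The interpolation heuristic and the estimate_memory() helper are
# assumptions for illustration, not official Stardog guidance.

import bisect
import math

# (triples, heap GB, direct GB, total system GB) from the table above
TABLE = [
    (100e6, 3, 4, 8),
    (1e9, 8, 20, 32),
    (10e9, 30, 80, 128),
    (25e9, 60, 160, 256),
    (50e9, 80, 380, 512),
]


def estimate_memory(triples: float) -> dict:
    """Interpolate heap/direct/system memory (in GB) for a triple count."""
    sizes = [row[0] for row in TABLE]
    if triples <= sizes[0]:
        _, heap, direct, total = TABLE[0]
    elif triples >= sizes[-1]:
        _, heap, direct, total = TABLE[-1]
    else:
        i = bisect.bisect_left(sizes, triples)
        lo, hi = TABLE[i - 1], TABLE[i]
        # Interpolate on log(triples), since the table rows grow geometrically.
        t = (math.log(triples) - math.log(lo[0])) / (math.log(hi[0]) - math.log(lo[0]))
        heap, direct, total = (lo[k] + t * (hi[k] - lo[k]) for k in (1, 2, 3))
    return {"heap_gb": heap, "direct_gb": direct, "system_gb": total}


# 5 billion triples lands between the 1B and 10B rows:
print(estimate_memory(5e9))  # roughly {'heap_gb': 23, 'direct_gb': 62, 'system_gb': 99}
```

In practice you would round the result up to the next table row or to a standard RAM size rather than provisioning fractional gigabytes, and then verify the settings against your own workload.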