This past weekend I finally replaced the last of our DS 1.3 instances (1.3.6.12, to be exact). We did a lot of testing, but what happened in production under a full prod workload was quite surprising.
This 1.3 instance has been in operation for over a decade and has never had any issues with memory usage (it has 16GB total). When we moved the 2.5 instance into production, memory usage quickly rose past 16GB, causing AWS ECS to kill the task. I tried upping the instance type to r6i.xlarge (32GB), and that quickly ran out of memory too. r6i.2xlarge (64GB) also failed with excessive memory consumption. It wasn't until I switched to r6i.4xlarge (128GB) that the instance finally more or less stabilized at approx. 60GB of memory use.
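In case it's useful for the discussion, the per-backend entry/DN cache counters under cn=monitor are where I'd expect a chunk of that ~60GB to show up. Here's a minimal python-ldap sketch for reading them; the URI, the credentials, and the userRoot backend name are placeholders rather than our real values:

#!/usr/bin/env python3
# Read backend cache counters from the ldbm monitor entry.
# Placeholders: URI, bind credentials, and the "userRoot" backend name.
import ldap

URI = "ldap://localhost:389"
BIND_DN = "cn=Directory Manager"
BIND_PW = "secret"
MONITOR_DN = "cn=monitor,cn=userRoot,cn=ldbm database,cn=plugins,cn=config"
ATTRS = [
    "currententrycachesize", "maxentrycachesize",
    "currententrycachecount", "currentdncachesize",
    "maxdncachesize", "currentdncachecount",
]

conn = ldap.initialize(URI)
conn.simple_bind_s(BIND_DN, BIND_PW)
try:
    dn, entry = conn.search_s(MONITOR_DN, ldap.SCOPE_BASE,
                              "(objectClass=*)", ATTRS)[0]
    for name, values in entry.items():
        for value in values:
            print(f"{name}: {value.decode()}")
finally:
    conn.unbind_s()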
Other info:
- This is using bdb, not mdb.
- We tried to keep the cn=config values as close as possible to the original instance (the cache-related attributes most relevant to memory are sketched just after this list).
- Over a year ago we did the same thing with a different production system (moved it to 2.5 bdb), and it typically consumes about 32GB despite having a significantly larger database.
- We used the same Docker image from Docker Hub for both of these systems (2.5.0 B2024.017.0000).
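For reference, the settings that usually dominate slapd memory with bdb are the ldbm dbcache plus the per-backend entry and DN caches. A minimal python-ldap sketch for dumping them (again, the URI, credentials, and the userRoot backend name are placeholders):

#!/usr/bin/env python3
# Dump the ldbm and per-backend cache settings that drive memory use with bdb.
# Placeholders: URI, bind credentials, and the "userRoot" backend name.
import ldap

URI = "ldap://localhost:389"
BIND_DN = "cn=Directory Manager"
BIND_PW = "secret"

TARGETS = {
    # Global ldbm config: db cache size and cache autosizing.
    "cn=config,cn=ldbm database,cn=plugins,cn=config": [
        "nsslapd-dbcachesize", "nsslapd-cache-autosize",
        "nsslapd-cache-autosize-split",
    ],
    # Per-backend entry and DN caches (backend name is an assumption).
    "cn=userRoot,cn=ldbm database,cn=plugins,cn=config": [
        "nsslapd-cachememsize", "nsslapd-cachesize",
        "nsslapd-dn-cachememsize",
    ],
}

conn = ldap.initialize(URI)
conn.simple_bind_s(BIND_DN, BIND_PW)
try:
    for base, attrs in TARGETS.items():
        entry = conn.search_s(base, ldap.SCOPE_BASE, "(objectClass=*)", attrs)[0][1]
        for name, values in entry.items():
            for value in values:
                print(f"{base}: {name}: {value.decode()}")
finally:
    conn.unbind_s()

One thing worth double-checking in particular is nsslapd-cache-autosize: when it's non-zero, the server sizes the db and entry caches from the memory it detects at startup rather than from the explicit size values, which can matter in a container.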
For the system we moved this past weekend, here are some daily stats (a rough sketch of how counts like these can be tallied from the access log follows the table):
Date                          SRCH Events  BIND Events  MOD Events  SRCH/BIND
2026-01-01T00:00:00.000-0700 71422 18013 1136 397%
2026-01-02T00:00:00.000-0700 88233 26273 1958 336%
2026-01-03T00:00:00.000-0700 71724 20275 1512 354%
2026-01-04T00:00:00.000-0700 90487 26763 2271 338%
2026-01-05T00:00:00.000-0700 232190 69743 5602 333%
2026-01-06T00:00:00.000-0700 270592 65322 5752 414%
2026-01-07T00:00:00.000-0700 288077 73869 6021 390%
2026-01-08T00:00:00.000-0700 276662 69352 6309 399%
2026-01-09T00:00:00.000-0700 265886 62109 4992 428%
2026-01-10T00:00:00.000-0700 201912 33331 2528 606%
2026-01-11T00:00:00.000-0700 229512 44090 2956 521%
2026-01-12T00:00:00.000-0700 333711 97047 6494 344%
2026-01-13T00:00:00.000-0700 384455 121332 7049 317%
2026-01-14T00:00:00.000-0700 544805 202567 10667 269%
2026-01-15T00:00:00.000-0700 523023 180011 38875 291%
2026-01-16T00:00:00.000-0700 393932 121466 27357 324%
2026-01-17T00:00:00.000-0700 235071 47104 10557 499%
2026-01-18T00:00:00.000-0700 269432 64199 9329 420%
2026-01-19T00:00:00.000-0700 299010 76743 9937 390%
2026-01-20T00:00:00.000-0700 501148 176427 16488 284%
2026-01-21T00:00:00.000-0700 466164 164206 13574 284%
2026-01-22T00:00:00.000-0700 422490 141143 9041 299%
2026-01-23T00:00:00.000-0700 360027 109641 8832 328%
2026-01-24T00:00:00.000-0700 230624 48358 5385 477%
2026-01-25T00:00:00.000-0700 292855 75711 7428 387%
2026-01-26T00:00:00.000-0700 449129 151908 11426 296%
2026-01-27T00:00:00.000-0700 433417 146937 9902 295%
2026-01-28T00:00:00.000-0700 425190 142981 9832 297%
2026-01-29T00:00:00.000-0700 401043 132635 8418 302%
2026-01-30T00:00:00.000-0700 350886 102997 6616 341%
2026-01-31T00:00:00.000-0700 235397 49611 4578 474%
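For anyone who wants to reproduce that kind of per-day tally from the access log, here's a rough sketch (it assumes the stock access-log line format and just counts SRCH/BIND/MOD per day; logconv.pl is the more complete tool):

#!/usr/bin/env python3
# Count SRCH/BIND/MOD operations per day from a 389-ds access log on stdin.
# Assumes the stock line format, e.g.:
#   [15/Jan/2026:10:23:45.123456789 -0700] conn=42 op=3 SRCH base="..." ...
import re
import sys
from collections import Counter

MONTHS = {m: f"{i:02d}" for i, m in enumerate(
    ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
     "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"], start=1)}
PAT = re.compile(r'^\[(\d{2})/([A-Za-z]{3})/(\d{4}):[^\]]*\] '
                 r'conn=\d+ op=\d+ (SRCH|BIND|MOD)\b')

counts = {}
for line in sys.stdin:
    m = PAT.match(line)
    if not m:
        continue
    day, mon, year, op = m.groups()
    counts.setdefault(f"{year}-{MONTHS[mon]}-{day}", Counter())[op] += 1

for date in sorted(counts):
    c = counts[date]
    ratio = 100 * c["SRCH"] / c["BIND"] if c["BIND"] else 0
    print(f"{date}  SRCH={c['SRCH']}  BIND={c['BIND']}  MOD={c['MOD']}  "
          f"SRCH/BIND={ratio:.0f}%")

Run it with something like: cat /var/log/dirsrv/slapd-*/access* | python3 tally.py (adjust the log path for your layout).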
The system we moved earlier has 10 times as much search traffic as this one, but this one gets up to 4 times as many binds.
Any thoughts on what might be going on here?
Tim