Thank you for the pointer to the defect, Thierry. I appreciate the very quick and informative response. It certainly smells like this is what is affecting us.
Our case is a single connection, through which ~32,000 sequential queries are passed. To work around this, we have re-created a DS11 replica and redirected this job to it. On DS12, the job requires ~30 minutes; on DS11, it completes in ~2 minutes.
(Our DS12 instance is actually running RHDS, so we have opened a Red Hat support case with the details.)
--
Do things because you should, not just because you can.
John Thurston   907-465-8591
John.Thurston@alaska.gov
Department of Administration
State of Alaska
Hi John,
Yes, the description is "mostly" correct. We recently found a corner case [1] where large requests (requiring several poll/read cycles) can show a high wtime even though there was no worker starvation.
Would you provide a sample of the access log showing this issue?
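(For anyone wanting to pull these values out of a log before sharing it: the timing fields can be extracted with a few lines of script. This is a minimal sketch that assumes RESULT lines carry space-separated key=value pairs named etime, wtime, and optime, as described in the design document linked below; the sample line here is fabricated for illustration, not taken from a real server.)

```python
import re

# Matches the timing key=value fields on a 389-ds style RESULT line.
TIMING_RE = re.compile(r"\b(?P<key>etime|wtime|optime)=(?P<val>[\d.]+)")

def parse_timings(line):
    """Return a dict of the etime/wtime/optime floats found on one log line."""
    return {m.group("key"): float(m.group("val"))
            for m in TIMING_RE.finditer(line)}

# Fabricated sample line for illustration only.
sample = ('[12/Sep/2024:01:29:00.123456789 -0800] conn=42 op=7 RESULT '
          'err=0 tag=101 nentries=1 wtime=2.513000000 optime=0.004120000 '
          'etime=2.517500000')

timings = parse_timings(sample)
print(timings)
```

In a healthy case optime should dominate etime; when wtime dominates instead, the operation spent most of its life waiting in the queue rather than being worked on.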
[1] https://github.com/389ds/389-ds-base/issues/6284
Regards,
Thierry
On 9/12/24 01:29, John Thurston wrote:
I have a new instance of 2.4.5, on which I'm seeing a very high* 'wtime' in the access log.
From https://www.port389.org/docs/389ds/design/access-log-new-time-stats-design.html I read:
- wtime - This is the amount of time the operation was waiting in the work queue before being picked up by a worker thread.
Is this still an accurate description of 'wtime' ?
If that is still accurate, I suspect the high values I'm seeing have nothing to do with the version of the software I'm running, and everything to do with the system it is running on. Work has arrived and been queued, but there aren't enough worker threads to service the queue in a timely manner.
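(That queuing picture can be illustrated with a toy worker pool, where "wtime" is simply dequeue time minus enqueue time. This is an illustrative sketch only, not how the server is implemented: with a single worker servicing several operations that arrive at once, each successive operation waits longer in the queue.)

```python
import queue
import threading
import time

work_q = queue.Queue()
results = []

def worker():
    while True:
        enqueued_at, job = work_q.get()
        wtime = time.monotonic() - enqueued_at  # time spent waiting in the queue
        job(wtime)
        work_q.task_done()

# One worker thread servicing the queue; with too few workers, wtime grows.
threading.Thread(target=worker, daemon=True).start()

def job(wtime):
    time.sleep(0.05)          # simulate per-operation work
    results.append(wtime)

for _ in range(4):            # four ops arrive at once on one connection
    work_q.put((time.monotonic(), job))
work_q.join()

print([round(w, 2) for w in results])
```

Each job's work time is ~50 ms, so the fourth job waits roughly three work-times in the queue before a worker ever picks it up, even though nothing about the work itself got slower.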
* 'high' as in 3,000% longer than what I see on a totally different system running 1.4.4
--
Do things because you should, not just because you can.
John Thurston   907-465-8591
John.Thurston@alaska.gov
Department of Administration
State of Alaska