Wednesday, May 26, 2021

[389-users] Monthly internal scheduled task failure resulting in segfault


On 5/26/21 12:12 PM, Nelson Bartley wrote:
Thank you for your assistance, I will be trying it in our test environment. Is there anything I can do to artificially trigger this issue, to confirm it's no longer a problem?

With my suggestion you are turning compaction "off", so it should never run. There is no way to verify it other than the server not crashing.
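
While waiting out a crash-free interval is the only real proof, you can at least confirm the new value took effect, assuming the same ldbm config entry used for the workaround quoted further down in the thread:

    ldapsearch -D "cn=Directory Manager" -W -H ldap://localhost \
        -b "cn=config,cn=ldbm database,cn=plugins,cn=config" \
        -s base nsslapd-db-compactdb-interval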

HTH,

Mark


Cheers
Nelson. 

On Thu, May 27, 2021 at 0:08 Mark Reynolds <mreynolds@redhat.com> wrote:


On 5/26/21 11:05 AM, Nelson Bartley wrote:
I can confirm we do not have replication on these servers.

OK, so you can use the workaround I mentioned, setting the compaction interval to 0, until we get the proper fix released.
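
For reference, a rough sketch of the change, assuming the default ldbm backend config entry and the stock OpenLDAP client tools (the exact DN and tool options can vary between 1.4.x releases; on some builds the attribute lives on the bdb sub-entry instead):

    # compact-off.ldif -- set the compaction interval to 0 so the job never runs
    dn: cn=config,cn=ldbm database,cn=plugins,cn=config
    changetype: modify
    replace: nsslapd-db-compactdb-interval
    nsslapd-db-compactdb-interval: 0

    # Apply it:
    ldapmodify -D "cn=Directory Manager" -W -H ldap://localhost -f compact-off.ldif

    # Or, if your dsconf version supports it:
    dsconf <your-instance> backend config set --compactdb-interval 0

Restarting the instance afterwards is a simple way to be sure the new value is in effect.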

Thanks,

Mark


On Wed, May 26, 2021 at 21:15 Mark Reynolds <mreynolds@redhat.com> wrote:

Hi Nelson,

I'm working on a DB compaction improvement. Currently, DB compaction runs every 30 days, and I found a bug: if you don't have replication set up, the server crashes when trying to compact a changelog that does not exist. This only happens on 389-ds-base-1.4.3 or newer, and only when replication is not configured. Can you confirm whether you are using replication on this server?
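
A quick way to check, assuming the stock client tools: search cn=config for any replica definition, e.g.

    ldapsearch -D "cn=Directory Manager" -W -H ldap://localhost \
        -b "cn=config" "(objectClass=nsds5Replica)" dn

If that returns no entries, replication is not configured on the instance.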

Mark

On 5/26/21 1:42 AM, Nelson Bartley wrote:
Good day,

I previously messaged about this issue, but didn't have a core dump to provide.

Almost exactly one month apart, to the minute, a scheduled task starts in our 389-ds which results in a segfault.

We are currently using Fedora 33, 389 packages 1.4.4.15-1.fc33. This bug also occurred with an earlier package set (I do not remember the version, same FC33). We have now experienced this exact segfault three times, on schedule.

I have attached the cockpit information from the crash to this email.

You can get the coredump from this link: http://gofile.me/4kovq/KZB9Waixm

I was hoping it would be possible to identify which scheduled service is crashing and, if possible, how to disable it temporarily until the actual cause of the crash can be fixed in an updated binary?

Nelson
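
A note for anyone debugging a similar crash: the crashing thread can usually be identified directly from the core with gdb, assuming the matching debuginfo packages are installed. On Fedora, something like:

    # Pull in debug symbols, then open the core against ns-slapd
    dnf debuginfo-install 389-ds-base
    coredumpctl gdb ns-slapd          # or: gdb /usr/sbin/ns-slapd /path/to/coredump
    (gdb) thread apply all bt

The backtrace of the crashing thread shows which internal task was running at the time.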

--
389 Directory Server Development Team
