[OmniOS-discuss] NFS v3 locking broken in latest OmniOS r151012 and updates

Youzhong Yang youzhong at gmail.com
Wed Jan 28 18:36:12 UTC 2015


I would suggest capturing packets to find out whether the 'no locks available'
error is actually returned from the server. If it is, run dtrace on the server
to find out where it returns nlm4_denied_nolocks
<http://src.illumos.org/source/s?defs=nlm4_denied_nolocks&project=illumos-gate>.

http://src.illumos.org/source/xref/illumos-gate/usr/src/uts/common/klm/nlm_service.c#460
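For example, something along these lines on the server would be a starting
point (just a rough sketch, not verified on your box: CLIENT is a placeholder
for one affected client, and the nlm_* probe pattern is only an assumption
based on the function names in the nlm_service.c file linked above):

  # capture NLM traffic between the server and one affected client
  # (nlockmgr is RPC program 100021 in /etc/rpc)
  snoop -o /tmp/nlm.cap host CLIENT and rpc nlockmgr

  # while a client lock attempt hangs, see which NLM service functions fire
  dtrace -n 'fbt::nlm_*:entry { @[probefunc] = count(); }'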

Again, as Dan suggested, it would be better to post this on the illumos-dev list.

On Wed, Jan 28, 2015 at 1:23 PM, Joe Little <jmlittle at gmail.com> wrote:

> I just set it to 1024 and locking still times out.
>
> On Wed, Jan 28, 2015 at 10:13 AM, Youzhong Yang <youzhong at gmail.com>
> wrote:
>
>> Depending on how many active locks your system needs to handle, 80 might
>> be too small.
>>
>> We use a different distro of illumos-gate and set max threads to 1024. So
>> far, so good: we are happy with the open-source nlockmgr, except for the
>> nlockmgr startup issue when the machine reboots.
>>
>>
>>
>> On Wed, Jan 28, 2015 at 1:02 PM, Joe Little <jmlittle at gmail.com> wrote:
>>
>>> Just to answer this question, I had already bumped that up based on some
>>> suggestions on the net:
>>>
>>> root at miele:/root# echo ::svc_pool nlm | mdb -k | grep 'Max threads'
>>> mdb: failed to add kvm_pte_chain walker: walk name already in use
>>> mdb: failed to add kvm_rmap_desc walker: walk name already in use
>>> mdb: failed to add kvm_mmu_page_header walker: walk name already in use
>>> mdb: failed to add kvm_pte_chain walker: walk name already in use
>>> mdb: failed to add kvm_rmap_desc walker: walk name already in use
>>> mdb: failed to add kvm_mmu_page_header walker: walk name already in use
>>> Max threads             = 80
>>>
>>> Still no locking w/ v3.
>>>
>>> On Wed, Jan 28, 2015 at 9:23 AM, Youzhong Yang <youzhong at gmail.com>
>>> wrote:
>>>
>>>> The max threads of nlockmgr is set to 20, I think. Bump up this value and
>>>> you should get rid of the 'no locks available' error.
>>>>
>>>> To confirm the current value:
>>>>
>>>> echo ::svc_pool nlm | mdb -k | grep 'Max threads'
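>>>>
>>>> To change it, something like this should work (just a sketch; I'm assuming
>>>> the lockd thread count is exposed as the sharectl nfs property lockd_servers
>>>> on your build, so check the output of 'sharectl get nfs' first):
>>>>
>>>> # raise the lock manager thread limit, then restart the lock manager
>>>> sharectl set -p lockd_servers=1024 nfs
>>>> svcadm restart svc:/network/nfs/nlockmgr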
>>>>
>>>> On Wed, Jan 28, 2015 at 11:49 AM, Joe Little <jmlittle at gmail.com>
>>>> wrote:
>>>>
>>>>> I recently switched one file server from Nexenta 4 Community (still
>>>>> uses closed NLM I believe) to OmniOS r151012.
>>>>>
>>>>> Immediately, users on various Linux clients started to complain that
>>>>> locking was failing. Most of those clients explicitly set their NFS version
>>>>> to 3. I finally isolated that locking does not fail on NFS v4 and have
>>>>> worked on transitioning where possible. But presently, no NFS v3 client can
>>>>> successfully lock against the OmniOS NFS v3 locking service. I've confirmed
>>>>> using rpcinfo that the locking service is running and present, matching
>>>>> one for one the services from previous OpenSolaris and Illumos variants. One
>>>>> example from a user:
>>>>>
>>>>> $ strace /bin/tcsh
>>>>>
>>>>> [...]
>>>>>
>>>>> open("/home/REDACTED/.history", O_RDWR|O_CREAT, 0600) = 0
>>>>> dup(0)                                  = 1
>>>>> dup(1)                                  = 2
>>>>> dup(2)                                  = 3
>>>>> dup(3)                                  = 4
>>>>> dup(4)                                  = 5
>>>>> dup(5)                                  = 6
>>>>> close(5)                                = 0
>>>>> close(4)                                = 0
>>>>> close(3)                                = 0
>>>>> close(2)                                = 0
>>>>> close(1)                                = 0
>>>>> close(0)                                = 0
>>>>> fcntl(6, F_SETFD, FD_CLOEXEC)           = 0
>>>>> fcntl(6, F_SETLKW, {type=F_WRLCK, whence=SEEK_SET, start=0, len=0})
>>>>>
>>>>> HERE fcntl hangs for 1-2 min and finally returns with "-1 ENOLCK (No locks available)"
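>>>>>
>>>>> (For reference, the rpcinfo check mentioned above was roughly of this form,
>>>>> with SERVER standing in for the OmniOS host:
>>>>>
>>>>> rpcinfo -p SERVER | grep nlockmgr
>>>>>
>>>>> which should list program 100021, versions 1 through 4, over both udp and tcp.)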
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>