[OmniOS-discuss] CIFS ignores TCP Buffer Settings

Mini Trader miniflowtrader at gmail.com
Wed Mar 9 01:20:05 UTC 2016


Running the following DTrace script:

#!/usr/sbin/dtrace -s

#pragma D option quiet

tcp:::send
/ (args[4]->tcp_flags & (TH_SYN|TH_RST|TH_FIN)) == 0 /
{
        @unacked["unacked(bytes)", args[2]->ip_daddr, args[4]->tcp_sport] =
            quantize(args[3]->tcps_snxt - args[3]->tcps_suna);
}

tcp:::receive
/ (args[4]->tcp_flags & (TH_SYN|TH_RST|TH_FIN)) == 0 /
{
        @swnd["SWND(bytes)", args[2]->ip_saddr, args[4]->tcp_dport] =
            quantize((args[4]->tcp_window)*(1 << args[3]->tcps_snd_ws));

}


It shows that window sizes are not going above 64 KB when things are not
working properly.
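For context on why a 64 KB ceiling matters: with no effective window
scaling, TCP throughput is capped at roughly window / RTT. A quick sketch
of that ceiling (the RTT values below are illustrative, not measured):

```python
# Throughput ceiling imposed by a fixed TCP window:
#   max throughput <= window_size / round_trip_time
# A 64 KB window, as observed in the DTrace output above:
window_bytes = 64 * 1024

for rtt_s in (0.001, 0.005, 0.010):  # illustrative RTTs: 1, 5, 10 ms
    max_mbps = window_bytes / rtt_s * 8 / 1e6
    print(f"RTT {rtt_s * 1000:.0f} ms -> ceiling ~{max_mbps:.0f} Mbit/s")
```

So even on a 1 ms LAN, a connection stuck at a 64 KB window cannot exceed
roughly 500 Mbit/s, which would explain a sudden drop from tuned speeds.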

On Tue, Mar 8, 2016 at 7:56 PM, Mini Trader <miniflowtrader at gmail.com>
wrote:

> If it helps.  This doesn't happen on NFS from the exact same client.  How
> do I file a bug?
>
> On Tue, Mar 8, 2016 at 1:51 PM, Mini Trader <miniflowtrader at gmail.com>
> wrote:
>
>> Simple example.
>>
>> 1 Server 1 client.
>>
>> Restart the service and everything is fast.  A few hours later, from the
>> same client (nothing happening concurrently), speed is slow.  Restart the
>> service again, and speed is fast.
>>
>> It's like CIFS starts off fast, then somehow, for whatever reason, if it
>> is not used, the connection for my CIFS drives to the server becomes slow.
>> Also, this only happens when the client is downloading, not when uploading
>> to the server; that is always fast.
>>
>> On Tue, Mar 8, 2016 at 1:42 AM, Jim Klimov <jimklimov at cos.ru> wrote:
>>
>>> On 8 March 2016 at 6:42:13 CET, Mini Trader <miniflowtrader at gmail.com>
>>> wrote:
>>> >Is it possible that CIFS will ignore TCP buffer settings after a while?
>>> >
>>> >I've confirmed my systems max transfer rate using iperf and have tuned
>>> >my
>>> >buffers accordingly. For whatever reason CIFS seems to forget these
>>> >settings after a while as speed drops significantly. Issuing a restart
>>> >of
>>> >the service immediately appears to restore the setting as transfer
>>> >speed
>>> >becomes normal again.
>>> >
>>> >Any ideas why this would happen?
>>> >
>>> >
>>> >------------------------------------------------------------------------
>>> >
>>> >_______________________________________________
>>> >OmniOS-discuss mailing list
>>> >OmniOS-discuss at lists.omniti.com
>>> >http://lists.omniti.com/mailman/listinfo/omnios-discuss
>>>
>>> As a random guess from experience with other network stuff - does the
>>> speed-drop happen on a running connection or new ones too? Do you have
>>> concurrent transfers at this time?
>>>
>>> Some other subsystems (no idea if this one too) use best speeds for new
>>> or recently awakened dormant connections, so short-lived bursts are fast -
>>> at the expense of long-running active bulk transfers (deemed to be bulk
>>> because they run for a long time).
>>>
>>> HTH, Jim
>>> --
>>> Typos courtesy of K-9 Mail on my Samsung Android
>>>
>>
>>
>

