<div dir="ltr"> Thanks, this is the result of the test:<br><br><div>./iscsisvrtop 1 30 >> /tmp/iscsisvrtop.txt<br>more /tmp/iscsisvrtop.txt :<br> <br>Tracing... Please wait.<br>2017 Sep 27 17:01:48 load: 0.22 read_KB: 345 write_KB: 56<br>client ops reads writes nops rd_bw wr_bw ard_sz awr_sz rd_t wr_t nop_t align%<br>1.1.193.247 4 0 0 0 0 0 0 0 0 0 0 0<br>1.1.193.250 105 91 1 0 345 56 3 56 4 756 0 100<br>all 109 91 1 0 345 56 3 56 4 756 0 0<br>2017 Sep 27 17:01:49 load: 0.22 read_KB: 163 write_KB: 41<br>client ops reads writes nops rd_bw wr_bw ard_sz awr_sz rd_t wr_t nop_t align%<br>1.1.193.247 32 26 2 0 117 41 4 20 6 417 0 100<br>1.1.193.250 42 34 0 0 46 0 1 0 7 0 0 0<br>all 74 60 2 0 163 41 2 20 7 417 0 0<br>2017 Sep 27 17:01:50 load: 0.22 read_KB: 499 write_KB: 232<br>client ops reads writes nops rd_bw wr_bw ard_sz awr_sz rd_t wr_t nop_t align%<br>1.1.193.247 45 40 3 0 210 196 5 65 5 763 0 100<br>1.1.193.250 77 65 2 0 288 36 4 18 4 439 0 100<br>all 122 105 5 0 499 232 4 46 4 634 0 0<br>2017 Sep 27 17:01:51 load: 0.22 read_KB: 314 write_KB: 84<br>client ops reads writes nops rd_bw wr_bw ard_sz awr_sz rd_t wr_t nop_t align%<br>1.1.193.247 3 1 0 0 0 0 0 0 2 0 0 0<br>1.1.193.250 100 88 4 0 313 84 3 21 4 396 0 100<br>all 103 89 4 0 314 84 3 21 4 396 0 0<br>2017 Sep 27 17:01:52 load: 0.22 read_KB: 184 write_KB: 104<br>client ops reads writes nops rd_bw wr_bw ard_sz awr_sz rd_t wr_t nop_t align%<br>1.1.193.247 23 17 1 0 59 88 3 88 5 871 0 100<br>1.1.193.250 50 44 1 0 125 16 2 16 8 445 0 100<br>all 73 61 2 0 184 104 3 52 7 658 0 0<br>2017 Sep 27 17:01:53 load: 0.22 read_KB: 250 write_KB: 1920<br>client ops reads writes nops rd_bw wr_bw ard_sz awr_sz rd_t wr_t nop_t align%<br>1.1.193.247 7 6 0 0 12 0 2 0 6 0 0 0<br>1.1.193.250 71 44 16 0 263 1920 5 120 6 2531 0 100<br>all 78 50 16 0 276 1920 5 120 6 2531 0 0<br>2017 Sep 27 17:01:54 load: 0.22 read_KB: 93 write_KB: 0<br>client ops reads writes nops rd_bw wr_bw ard_sz awr_sz rd_t wr_t nop_t align%<br>1.1.193.247 7 0 0 0 0 
0 0 0 0 0 0 0<br>1.1.193.250 38 28 0 0 70 0 2 0 6 0 0 0<br>all 45 28 0 0 70 0 2 0 6 0 0 0<br>2017 Sep 27 17:01:55 load: 0.22 read_KB: 467 write_KB: 156<br>client ops reads writes nops rd_bw wr_bw ard_sz awr_sz rd_t wr_t nop_t align%<br>1.1.193.247 23 21 0 0 23 0 1 0 6 0 0 0<br>1.1.193.250 115 106 4 0 441 156 4 39 5 538 0 100<br>all 138 127 4 0 464 156 3 39 5 538 0 0<br>2017 Sep 27 17:01:56 load: 0.22 read_KB: 485 write_KB: 152<br>client ops reads writes nops rd_bw wr_bw ard_sz awr_sz rd_t wr_t nop_t align%<br>1.1.193.247 16 13 0 0 22 0 1 0 2 0 0 0<br>1.1.193.250 133 119 4 0 462 152 3 38 4 427 0 100<br>all 149 132 4 0 485 152 3 38 3 427 0 0<br>2017 Sep 27 17:01:57 load: 0.22 read_KB: 804 write_KB: 248<br>client ops reads writes nops rd_bw wr_bw ard_sz awr_sz rd_t wr_t nop_t align%<br>1.1.193.247 36 33 1 0 137 104 4 104 6 1064 0 100<br>1.1.193.250 133 131 2 0 667 144 5 72 5 885 0 100<br>all 169 164 3 0 804 248 4 82 5 945 0 0<br>2017 Sep 27 17:01:58 load: 0.22 read_KB: 631 write_KB: 36<br>client ops reads writes nops rd_bw wr_bw ard_sz awr_sz rd_t wr_t nop_t align%<br>1.1.193.247 91 87 0 0 373 0 4 0 2 0 0 0<br>1.1.193.250 93 75 2 0 257 36 3 18 4 252 0 100<br>all 184 162 2 0 631 36 3 18 3 252 0 0<br>2017 Sep 27 17:01:59 load: 0.21 read_KB: 1472 write_KB: 764<br>client ops reads writes nops rd_bw wr_bw ard_sz awr_sz rd_t wr_t nop_t align%<br>1.1.193.250 76 68 6 0 281 636 4 106 4 803 0 100<br>1.1.193.247 265 262 2 0 1191 128 4 64 3 482 0 100<br>all 341 330 8 0 1472 764 4 95 3 723 0 0<br>2017 Sep 27 17:02:00 load: 0.21 read_KB: 3559 write_KB: 376<br>client ops reads writes nops rd_bw wr_bw ard_sz awr_sz rd_t wr_t nop_t align%<br>1.1.193.247 83 82 0 0 270 0 3 0 1 0 0 0<br>1.1.193.250 541 524 8 0 3289 376 6 47 6 359 0 100<br>all 624 606 8 0 3559 376 5 47 5 359 0 0<br>2017 Sep 27 17:02:01 load: 0.21 read_KB: 2079 write_KB: 232<br>client ops reads writes nops rd_bw wr_bw ard_sz awr_sz rd_t wr_t nop_t align%<br>1.1.193.247 120 118 0 0 612 0 5 0 2 0 0 0<br>1.1.193.250 418 416 2 
0 1476 232 3 116 4 765 0 100<br>all 538 534 2 0 2088 232 3 116 4 765 0 0<br>2017 Sep 27 17:02:02 load: 0.21 read_KB: 2123 write_KB: 168<br>client ops reads writes nops rd_bw wr_bw ard_sz awr_sz rd_t wr_t nop_t align%<br>1.1.193.247 84 80 3 0 317 168 3 56 4 292 0 100<br>1.1.193.250 367 366 0 0 1812 0 4 0 6 0 0 0<br>all 451 446 3 0 2129 168 4 56 6 292 0 0<br>2017 Sep 27 17:02:03 load: 0.21 read_KB: 307 write_KB: 484<br>client ops reads writes nops rd_bw wr_bw ard_sz awr_sz rd_t wr_t nop_t align%<br>1.1.193.247 14 14 0 0 19 0 1 0 5 0 0 0<br>1.1.193.250 90 85 5 0 273 484 3 96 1 302 0 100<br>all 104 99 5 0 292 484 2 96 2 302 0 0<br>2017 Sep 27 17:02:04 load: 0.22 read_KB: 298 write_KB: 0<br>client ops reads writes nops rd_bw wr_bw ard_sz awr_sz rd_t wr_t nop_t align%<br>1.1.193.247 10 4 0 0 2 0 0 0 9 0 0 0<br>1.1.193.250 85 70 0 0 296 0 4 0 5 0 0 0<br>all 95 74 0 0 298 0 4 0 5 0 0 0<br>2017 Sep 27 17:02:05 load: 0.22 read_KB: 296 write_KB: 420<br>client ops reads writes nops rd_bw wr_bw ard_sz awr_sz rd_t wr_t nop_t align%<br>1.1.193.247 1 0 0 0 0 0 0 0 0 0 0 0<br>1.1.193.250 86 76 5 0 306 420 4 84 6 739 0 100<br>all 87 76 5 0 306 420 4 84 6 739 0 0<br>2017 Sep 27 17:02:06 load: 0.22 read_KB: 1149 write_KB: 379<br>client ops reads writes nops rd_bw wr_bw ard_sz awr_sz rd_t wr_t nop_t align%<br>1.1.193.247 66 56 3 0 310 75 5 25 4 538 0 100<br>1.1.193.250 182 166 5 0 828 304 4 60 4 581 0 100<br>all 248 222 8 0 1138 379 5 47 4 565 0 0<br>2017 Sep 27 17:02:07 load: 0.23 read_KB: 615 write_KB: 164<br>client ops reads writes nops rd_bw wr_bw ard_sz awr_sz rd_t wr_t nop_t align%<br>1.1.193.247 77 75 2 0 374 28 4 14 4 399 0 100<br>1.1.193.250 89 82 2 0 241 136 2 68 3 266 0 100<br>all 166 157 4 0 615 164 3 41 3 333 0 0<br>2017 Sep 27 17:02:08 load: 0.23 read_KB: 1505 write_KB: 9712<br>client ops reads writes nops rd_bw wr_bw ard_sz awr_sz rd_t wr_t nop_t align%<br>1.1.193.247 9 6 0 0 7 0 1 0 5 0 0 0<br>1.1.193.250 302 166 124 0 1978 14288 11 115 3 2037 0 100<br>all 311 172 124 0 
1985 14288 11 115 11 2037 0 0<br>2017 Sep 27 17:02:09 load: 0.23 read_KB: 4267 write_KB: 61980<br>client ops reads writes nops rd_bw wr_bw ard_sz awr_sz rd_t wr_t nop_t align%<br>1.1.193.247 12 7 0 0 15 0 2 0 4 0 0 0<br>1.1.193.250 644 156 484 0 3772 57404 24 118 1 1728 0 100<br>all 656 163 484 0 3787 57404 23 118 2 1728 0 0<br>2017 Sep 27 17:02:10 load: 0.24 read_KB: 610 write_KB: 48<br>client ops reads writes nops rd_bw wr_bw ard_sz awr_sz rd_t wr_t nop_t align%<br>1.1.193.247 13 10 1 0 95 16 9 16 4 374 0 100<br>1.1.193.250 116 107 1 0 514 32 4 32 6 495 0 100<br>all 129 117 2 0 610 48 5 24 5 435 0 0<br>2017 Sep 27 17:02:11 load: 0.24 read_KB: 684 write_KB: 68<br>client ops reads writes nops rd_bw wr_bw ard_sz awr_sz rd_t wr_t nop_t align%<br>1.1.193.247 26 20 1 0 59 32 2 32 5 545 0 100<br>1.1.193.250 169 158 2 0 624 36 3 18 4 451 0 100<br>all 195 178 3 0 684 68 3 22 4 482 0 0<br>2017 Sep 27 17:02:12 load: 0.24 read_KB: 154 write_KB: 176<br>client ops reads writes nops rd_bw wr_bw ard_sz awr_sz rd_t wr_t nop_t align%<br>1.1.193.247 14 12 1 0 46 96 3 96 5 854 0 100<br>1.1.193.250 43 35 1 0 492 80 14 80 28 947 0 100<br>all 57 47 2 0 538 176 11 88 22 900 0 0<br>2017 Sep 27 17:02:13 load: 0.25 read_KB: 1134 write_KB: 12<br>client ops reads writes nops rd_bw wr_bw ard_sz awr_sz rd_t wr_t nop_t align%<br>1.1.193.250 36 24 1 0 191 12 7 12 16 469 0 100<br>1.1.193.247 122 117 0 0 558 0 4 0 5 0 0 0<br>all 158 141 1 0 750 12 5 12 7 469 0 0<br>2017 Sep 27 17:02:14 load: 0.25 read_KB: 6357 write_KB: 90908<br>client ops reads writes nops rd_bw wr_bw ard_sz awr_sz rd_t wr_t nop_t align%<br>1.1.193.247 5 2 0 0 1 0 0 0 9 0 0 0<br>1.1.193.250 1003 233 762 0 6357 90908 27 119 14 4844 0 100<br>all 1008 235 762 0 6358 90908 27 119 14 4844 0 0<br>2017 Sep 27 17:02:15 load: 0.25 read_KB: 243 write_KB: 0<br>client ops reads writes nops rd_bw wr_bw ard_sz awr_sz rd_t wr_t nop_t align%<br>1.1.193.247 23 18 0 0 37 0 2 0 2 0 0 0<br>1.1.193.250 70 58 0 0 207 0 3 0 4 0 0 0<br>all 93 76 0 0 244 
0 3 0 3 0 0 0<br>2017 Sep 27 17:02:16 load: 0.25 read_KB: 382 write_KB: 16<br>client ops reads writes nops rd_bw wr_bw ard_sz awr_sz rd_t wr_t nop_t align%<br>1.1.193.247 42 38 0 0 191 0 5 0 5 0 0 0<br>1.1.193.250 59 50 1 0 189 16 3 16 8 427 0 100<br>all 101 88 1 0 381 16 4 16 7 427 0 0<br>2017 Sep 27 17:02:17 load: 0.25 read_KB: 23 write_KB: 0<br>client ops reads writes nops rd_bw wr_bw ard_sz awr_sz rd_t wr_t nop_t align%<br>1.1.193.247 6 3 0 0 1 0 0 0 7 0 0 0<br>1.1.193.250 21 13 0 0 21 0 1 0 5 0 0 0<br>all 27 16 0 0 23 0 1 0 6 0 0 0<br><br></div><div>How can I have 2 MB/s of network traffic (verified on the OmniOS filer and also on the KVM host) yet write far more than that to disk?<br><br></div><div><br></div><div class="gmail_extra"><br><div class="gmail_quote">2017-09-27 12:56 GMT+02:00 Artem Penner <span dir="ltr"><<a href="mailto:apenner.it@gmail.com" target="_blank">apenner.it@gmail.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Use <a href="https://github.com/richardelling/tools/blob/master/iscsisvrtop" target="_blank">https://github.com/<wbr>richardelling/tools/blob/<wbr>master/iscsisvrtop</a> to observe iSCSI I/O<div><br></div></div><br><div class="gmail_quote"><div dir="ltr">Wed, 27 Sep 2017 at 11:06, anthony omnios <<a href="mailto:icoomnios@gmail.com" target="_blank">icoomnios@gmail.com</a>>:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div class="h5"><div dir="ltr"><div><div>Hi,<br><br></div>I have a problem: I use many iSCSI zvols (one per VM). Network traffic between the KVM host and the filer is 2 MB/s, but far more than that is written to the disks. 
I use a pool with a separate mirrored ZIL (Intel S3710) and eight 1 TB Samsung 850 EVO SSDs.<br><br> zpool status <br> pool: filervm2<br> state: ONLINE<br> scan: resilvered 406G in 0h22m with 0 errors on Wed Sep 20 15:45:48 2017<br>config:<br><br> NAME STATE READ WRITE CKSUM<br> filervm2 ONLINE 0 0 0<br> mirror-0 ONLINE 0 0 0<br> c7t5002538D41657AAFd0 ONLINE 0 0 0<br> c7t5002538D41F85C0Dd0 ONLINE 0 0 0<br> mirror-2 ONLINE 0 0 0<br> c7t5002538D41CC7105d0 ONLINE 0 0 0<br> c7t5002538D41CC7127d0 ONLINE 0 0 0<br> mirror-3 ONLINE 0 0 0<br> c7t5002538D41CD7F7Ed0 ONLINE 0 0 0<br> c7t5002538D41CD83FDd0 ONLINE 0 0 0<br> mirror-4 ONLINE 0 0 0<br> c7t5002538D41CD7F7Ad0 ONLINE 0 0 0<br> c7t5002538D41CD7F7Dd0 ONLINE 0 0 0<br> logs<br> mirror-1 ONLINE 0 0 0<br> c4t2d0 ONLINE 0 0 0<br> c4t4d0 ONLINE 0 0 0<br><br>I used the correct ashift of 13 for the Samsung 850 EVO.<br>zdb | grep ashift :<br><br>ashift: 13<br>ashift: 13<br>ashift: 13<br>ashift: 13<br>ashift: 13<br><br></div>But a lot is written to the SSDs every 5 seconds (far more than the 2 MB/s of network traffic):<br><br>iostat -xn -d 1 : <br><div><br> r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device<br> 11.0 3067.5 288.3 153457.4 6.8 0.5 2.2 0.2 5 14 filervm2<br> 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 rpool<br> 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c4t0d0<br> 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c4t1d0<br> 0.0 552.6 0.0 17284.0 0.0 0.1 0.0 0.2 0 8 c4t2d0<br> 0.0 552.6 0.0 17284.0 0.0 0.1 0.0 0.2 0 8 c4t4d0<br> 1.0 233.3 48.1 10051.6 0.0 0.0 0.0 0.1 0 3 c7t5002538D41657AAFd0<br> 5.0 250.3 144.2 13207.3 0.0 0.0 0.0 0.1 0 3 c7t5002538D41CC7127d0<br> 2.0 254.3 24.0 13207.3 0.0 0.0 0.0 0.1 0 4 c7t5002538D41CC7105d0<br> 3.0 235.3 72.1 10051.6 0.0 0.0 0.0 0.1 0 3 c7t5002538D41F85C0Dd0<br> 0.0 228.3 0.0 16178.7 0.0 0.0 0.0 0.2 0 4 c7t5002538D41CD83FDd0<br> 0.0 225.3 0.0 16210.7 0.0 0.0 0.0 0.2 0 4 c7t5002538D41CD7F7Ed0<br> 0.0 282.3 0.0 19991.1 0.0 0.0 0.0 0.2 0 5 c7t5002538D41CD7F7Dd0<br> 0.0 280.3 0.0 19871.0 0.0 0.0 0.0 0.2 0 5 
c7t5002538D41CD7F7Ad0<br><br></div><div>I use zvols with a 64K volblocksize; I tried 8K and the problem is the same.<br><br>zfs get all filervm2/hdd-110022a :<br><br>NAME PROPERTY VALUE SOURCE<br>filervm2/hdd-110022a type volume -<br>filervm2/hdd-110022a creation Tue May 16 10:24 2017 -<br>filervm2/hdd-110022a used 5.26G -<br>filervm2/hdd-110022a available 2.90T -<br>filervm2/hdd-110022a referenced 5.24G -<br>filervm2/hdd-110022a compressratio 3.99x -<br>filervm2/hdd-110022a reservation none default<br>filervm2/hdd-110022a volsize 25G local<br>filervm2/hdd-110022a volblocksize 64K -<br>filervm2/hdd-110022a checksum on default<br>filervm2/hdd-110022a compression lz4 local<br>filervm2/hdd-110022a readonly off default<br>filervm2/hdd-110022a copies 1 default<br>filervm2/hdd-110022a refreservation none default<br>filervm2/hdd-110022a primarycache all default<br>filervm2/hdd-110022a secondarycache all default<br>filervm2/hdd-110022a usedbysnapshots 15.4M -<br>filervm2/hdd-110022a usedbydataset 5.24G -<br>filervm2/hdd-110022a usedbychildren 0 -<br>filervm2/hdd-110022a usedbyrefreservation 0 -<br>filervm2/hdd-110022a logbias latency default<br>filervm2/hdd-110022a dedup off default<br>filervm2/hdd-110022a mlslabel none default<br>filervm2/hdd-110022a sync standard local<br>filervm2/hdd-110022a refcompressratio 3.99x -<br>filervm2/hdd-110022a written 216K -<br>filervm2/hdd-110022a logicalused 20.9G -<br>filervm2/hdd-110022a logicalreferenced 20.9G -<br>filervm2/hdd-110022a snapshot_limit none default<br>filervm2/hdd-110022a snapshot_count none default<br>filervm2/hdd-110022a redundant_metadata all default<br><br></div><div>Sorry for my bad English.<br><br></div><div>What could the problem be? Thanks.<br><br></div><div>Best regards,<br><br></div><div>Anthony<br></div><div><br><br></div></div></div></div>
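As a rough cross-check of the iostat numbers above, a minimal awk sketch can total the per-device write bandwidth (the kw/s column, 4th field) from a saved snapshot. The capture file `/tmp/iostat.txt` is a hypothetical example (e.g. one interval of `iostat -xn 1` redirected to a file), not something from the original message:

```shell
# Sum per-device write bandwidth from a saved 'iostat -xn' snapshot.
# Device rows end with the device name (c4t2d0, c7t...), so matching
# $NF against /^c[0-9]/ skips the header and the pool summary rows
# (filervm2, rpool). Field 4 of each device row is kw/s (KB written/s).
awk '$NF ~ /^c[0-9]/ { kw += $4 }
     END { printf "total device kw/s: %.1f (%.1f MB/s)\n", kw, kw/1024 }' /tmp/iostat.txt
```

The resulting disk-side total is then directly comparable with the 2 MB/s seen on the network.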
______________________________<wbr>_________________<br>
OmniOS-discuss mailing list<br>
<a href="mailto:OmniOS-discuss@lists.omniti.com" target="_blank">OmniOS-discuss@lists.omniti.<wbr>com</a><br>
<a href="http://lists.omniti.com/mailman/listinfo/omnios-discuss" rel="noreferrer" target="_blank">http://lists.omniti.com/<wbr>mailman/listinfo/omnios-<wbr>discuss</a><br>
</blockquote></div>
</blockquote></div><br></div></div>
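Since the whole 30-second capture is already in /tmp/iscsisvrtop.txt (see the command at the top of the thread), an awk sketch like the following can average the per-second totals instead of eyeballing the samples. The column positions come from the header line in the output above; treat this as a convenience sketch, not part of iscsisvrtop itself:

```shell
# Average iSCSI-side read/write bandwidth over all sampling intervals.
# On the 'all' summary rows the header reads:
#   client ops reads writes nops rd_bw wr_bw ...
# so $6 is rd_bw and $7 is wr_bw, both in KB/s.
awk '/^all/ { rd += $6; wr += $7; n++ }
     END { if (n) printf "avg rd_bw: %.0f KB/s, avg wr_bw: %.0f KB/s (%d samples)\n", rd/n, wr/n, n }' /tmp/iscsisvrtop.txt
```

Comparing this iSCSI-side average against the disk-side kw/s totals from iostat makes the gap between network traffic and on-disk writes easy to quantify.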