We have a design that uses SPI to communicate with the MM6108. The maximum iperf throughput we can get on channel 44 (8 MHz) is about 5 Mbps, which is lower than we expected. The Linux driver we are using is 1.9.3.
Do you have any speed test results for the SPI interface alone, and what is the maximum we can expect?
On our EKH01 evaluation kits we have tested SPI up to 21 Mbps with iperf.
To test the SPI bus independently of the HaLow link, our 1.12.4 driver (available on GitHub) includes a bus throughput profiler. Compile the driver with CONFIG_MORSE_ENABLE_TEST_MODES=y and pass the test_mode=6 module parameter when inserting morse.ko.
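A minimal invocation might look like the following. The build command assumes a standard out-of-tree kbuild module build, and reading the result from the kernel log is an assumption about how the profiler reports:

```
# Build with the test modes compiled in (assumes an out-of-tree module build).
make -C /lib/modules/$(uname -r)/build M=$(pwd) CONFIG_MORSE_ENABLE_TEST_MODES=y modules

# Load the driver in bus-profiling mode (test_mode=6 from above), then
# check the kernel log for the reported throughput (assumed reporting path).
sudo insmod morse.ko test_mode=6
dmesg | tail
```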
SPI performance can be very dependent on the host processor: DMA configuration, internal peripheral bus contention, clock speeds, and power-save configuration can all impact performance. Are you able to share any more details about your host processor?
Do you have any feedback on expected throughput when using SDIO?
I have gotten the following via SDIO on an IMX8MP board using v1.12.4:
US channel=40/op_class=70, 4 MHz width @ 922 MHz: 7 Mbps
US channel=28/op_class=71, 8 MHz width @ 916 MHz: 13 Mbps
I feel like this is about half of what I should be getting. For my tests, the antennas of the two boards are connected together through a 30 dB attenuator.
I have profiled the bus with test_mode=6, which showed 49176 kbps SDIO throughput.
What is the rule of thumb for expected best-case rates at 2/4/8 MHz channel bandwidths on 802.11ah, and what kind of configuration would you recommend for best throughput?
I can only give you a rough guideline of what to expect, as different hosts behave in all sorts of different ways, and there is an exhausting number of variables that impact throughput.
That being said, for the iMX8 I would expect better than that. Unfortunately we don’t have measurements for an iMX platform. The measurements below are from our Raspberry Pi based platform running 1.12.4, and from memory our MT76x8 platform results were identical. I’ll dig these up shortly and update the post.
All measurements in Mbps.
| Bandwidth | UDP TX | UDP RX | TCP TX | TCP RX |
|-----------|--------|--------|--------|--------|
| 8 MHz     | 23     | 23     | 18     | 18     |
| 4 MHz     | 11     | 11     | 10     | 10     |
| 2 MHz     | 5      | 5      | 4.5    | 4.5    |
Here’s a bit of a “shotgun” approach to try to find what might be causing lower-than-expected throughput.
Can you try adding a little bit more attenuation?
Are the measurements you shared TCP measurements, or UDP?
Is cap-sdio-irq enabled in your device tree for your SDIO bus?
Can you check the coding scheme used while under test? While transmitting, capture and share the rate table at /sys/kernel/debug/ieee80211/*/morse/mmrc_table (see the example after this list).
Let’s start there for now. It might be interesting to also perform a sniffer capture if you have a third device available, but I’ll try not to get too far ahead of myself.
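For that last point, the check is just a debugfs read while iperf traffic is running, e.g.:

```
# Dump the Morse rate control table mid-transfer; the wildcard matches
# whichever phy the morse driver registered.
sudo cat /sys/kernel/debug/ieee80211/*/morse/mmrc_table
```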
> Can you try adding a little bit more attenuation?
You’re right, 30 dB was a bit low; looking at my RSSI, it was pretty high. I increased the attenuation to 90 dB and am still getting an RSSI appropriate for MCS7.
> Are the measurements you shared TCP measurements, or UDP?
My 7 Mbps and 13 Mbps numbers for the 4/8 MHz bandwidths were TCP, but I realized I was not enabling your rate control algorithm. Once I did, my numbers agree much better with yours.
For reference, this is using iperf3 (`iperf3 --client <server>` for TCP and `iperf3 --client <server> --udp --bitrate 25M` for UDP).
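For completeness, the other board just runs the stock listener:

```
# Run on the receiving board before starting the client commands above.
iperf3 --server
```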
> Is cap-sdio-irq enabled in your device tree for your SDIO bus?
That does not appear to make a difference for me. Perhaps the IMX8MM is running fast enough that it doesn’t matter, but it would seem to make sense to have it enabled.
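For anyone else checking this, one quick way to confirm the property is actually present in the running device tree is to look for it under procfs (a sketch; the node path varies by board):

```
# cap-sdio-irq shows up as an empty property file under the SDIO host
# controller's node if it is set in the device tree currently in use.
find /proc/device-tree -name cap-sdio-irq
```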
> Can you check the coding scheme used while under test? While transmitting, capture and share the rate table at /sys/kernel/debug/ieee80211/*/morse/mmrc_table
This suggestion is what led me to realize I was not using your rate control algorithm. I am curious, at a high level, why you needed to replace the default minstrel_ht algorithm; obviously it is working better than minstrel here, but why? Is there any reason someone would not want to use it?
At a glance it looks like it’s not keeping per-station stats, so I’m guessing a big point here is that it’s more lightweight and appropriate for HaLow’s large number of clients. Perhaps it also performs better knowing that 900 MHz signal strengths will not fluctuate as much as 2.4 or 5 GHz, and so it sticks with an MCS rate a bit longer than minstrel_ht? It may also have something to do with MCS10 being very different in 802.11ah, in that it’s actually the slowest (but most reliable at distance) rate. Another obvious benefit is that the MCS rates in mmrc_table make sense, versus the ones in minstrel_ht’s rc_stats.
Could be! It’s always worth checking, as I find it’s a capability often missing by default from many MMC controllers.
There are a few reasons we opted to build our own, but this is one of the primary ones. There are also some differences in the sampling dynamics and in decisions around LGI and SGI. There are non-technical reasons too: using our own rate control algorithm means we can keep it consistent even in non-Linux projects.
Unfortunately I’m no rate control expert, so can only offer that high level explanation!
I’d be interested in seeing the minstrel rc_stats to see which rate it decided to use. For your static configuration, I would have expected it to eventually align on the same best rate. I assume it may have been sampling other rates more aggressively, or had switched to LGI?
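If you still have the minstrel_ht setup around, something like this should dump those stats (the standard mac80211 debugfs location; exact phy and interface names will differ):

```
# minstrel_ht keeps per-station rate statistics under each station's debugfs dir.
sudo cat /sys/kernel/debug/ieee80211/*/netdev:*/stations/*/rc_stats
```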
Glad you resolved your throughput issue though! Those numbers are looking quite good.