The transition to 2.5Gbps Ethernet has not been an easy one for Intel. The company's I225/I226 2.5 GbE Ethernet controllers (codename Foxville), a prevalent choice on Intel platform motherboards for the last few years, have presented a fair share of issues since their introduction, including random network disconnections and stuttering. And while Intel has been working through the issues with multiple revisions of the hardware, the company apparently hasn't hammered out all of the bugs yet, as evidenced by its latest bug mitigation suggestion. In short, Intel is suggesting that users experiencing connection issues on the latest I226-V controller disable some of its energy efficiency features, which appear to be a major contributor to the connection stability issues the I226-V has been seeing.

To mitigate the connection problems on the I226-V Ethernet controller, Intel is advising affected users to disable Energy-Efficient Ethernet (EEE) mode through Windows Device Manager. The same guidance applies to Linux users as well. EEE mode aims to lower power consumption when the Ethernet connection is in an idle state. The issue is that EEE mode seems to activate when an Ethernet connection is in active use, causing it to drop out momentarily.
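For reference, on Linux the same toggle is exposed through ethtool. The short Python sketch below simply wraps the standard ethtool --show-eee and --set-eee commands to check and then disable EEE on a given interface; this is not Intel's official tooling, the default interface name (enp5s0) is only a placeholder that will differ per system, and the change does not persist across reboots.

```python
#!/usr/bin/env python3
# Minimal sketch (not Intel's official tooling): disable Energy-Efficient
# Ethernet (EEE) on a Linux interface by wrapping ethtool. Assumes ethtool is
# installed and the script is run as root; the interface name is a placeholder.
import subprocess
import sys

iface = sys.argv[1] if len(sys.argv) > 1 else "enp5s0"  # placeholder interface name

# Print the current EEE status so the change can be verified afterwards.
subprocess.run(["ethtool", "--show-eee", iface], check=True)

# Disable EEE -- the Linux equivalent of unticking it in Windows Device Manager.
subprocess.run(["ethtool", "--set-eee", iface, "eee", "off"], check=True)

print(f"EEE disabled on {iface} (setting does not persist across reboots).")
```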

And while deactivating EEE does reportedly improve connection stability, it doesn't seem to be a complete fix: Intel has received reports that some users still experience disconnections even with EEE mode disabled. Furthermore, disabling EEE mode forgoes its intended benefits, such as reducing power draw by up to 50% when an Ethernet connection is idling, so it's not a feature that cost-conscious consumers would normally want to give up.

Intel has also released an updated driver set for the I226-V/I225-V family of Ethernet controllers that automatically makes this adjustment. Specifically, the patch deactivates EEE mode for connection speeds above 100 Mbps, but users may have to disable it entirely if the workaround doesn't work with their combination of hardware. MSI and Asus have already deployed the new Ethernet driver for their respective Intel 700-series motherboards, so other vendors shouldn't take long to do the same.

In the interim, Intel will continue investigating the root cause and provide a concrete solution for motherboards with the I226-V Ethernet controller. The Foxville family of Intel Ethernet controllers has a long history of connectivity quirks – going back to the original I225-V in 2019 and E3100 in 2020 – ultimately requiring multiple hardware revisions (B1, B2, & B3 steppings) before finding solutions to many of its issues. As a result, it's not off the table that the I226-V Ethernet controller may suffer the same fate.

Source: Intel (via TPU)

Comments Locked


  • Sivar - Tuesday, March 7, 2023 - link

    I regularly feel constrained by my 25GBit Broadcom NICs, which are starved of PCI Express bandwidth because my home systems lack the PCI Express lanes to saturate the line. 1GBit would feel like a dead slug stuck in frozen molasses when transferring files, though granted I am not a typical non-geek home user.
  • lopri - Saturday, March 11, 2023 - link

    File sizes have gotten a lot bigger and virtualization is not unusual even for home users, many of whom prefer NAS for storage. Gigabit is a need, not a want in 2023.
  • davidedney123 - Tuesday, March 7, 2023 - link

    What on earth is going on with Intel these days? They used to be the absolute high watermark in the industry for quality, reliability, and product execution, but the last few years have just been shambolic.

    The Intel that managed to single-handedly make the entire motherboard industry stop turning out crap and up their game with its retail boards, or made the first SSDs that weren't flaky rubbish, or got WiFi not to be hopeless, feels far, far away from any company that would churn out this nonsense.
  • TheinsanegamerN - Tuesday, March 7, 2023 - link

    A decade of laziness, sloth, and greed has come home to roost. Intel started losing talent years ago and has done little to entice it back.
  • Sivar - Tuesday, March 7, 2023 - link

    Intel has for too long been led by bottom-line-obsessed businessmen with little love of engineering. R&D is expensive. Keeping the best engineers is expensive. Long-term leadership doesn't benefit the next quarterly results.
    I have some confidence that Pat Gelsinger can increase thrust before the whole plane stalls over rough terrain. Pat is a Stanford man and led VMware to become a critical part of the world's datacenter infrastructure.
  • lopri - Saturday, March 11, 2023 - link

    They are still bottom-line obsessed. See: Intel On Demand
  • hechacker1 - Tuesday, March 7, 2023 - link

    It’s unfortunate because I’d like to build a custom SFF router, but most of them include this network chip. Even the pfSense Netgate boxes use this chip, so I don’t know what workaround they are using, or perhaps it’s just slightly unstable and nobody has complained loudly.

    So, I probably have to use PCI-E add-on cards for enterprise stability, or just buy a real 1U server with better chips.

    How can Intel not know what is wrong after two generations? So incompetent.
  • blppt - Tuesday, March 7, 2023 - link

    This is nothing new for Intel. How many years has that flawed Puma6 chipset been out, and they've never managed to fix that epic latency bug?
  • Makaveli - Wednesday, March 8, 2023 - link

    That Puma 6 issue will never be fixed. It's still there in the Puma 7 chipset, but it's not as bad. If you have to use cable internet, make sure your modem is using a Broadcom chipset.
  • abufrejoval - Thursday, March 9, 2023 - link

    I take offence at your first sentence, because if it was difficult for Intel, that means it was far worse for Intel's customers, who might have lost data, time, customers and serious money due to a defective product and a QA that didn't catch it.

    The first issue, of course, was greed and price: for far too long Intel, like many other Ethernet vendors, decided to make >1Gbit Ethernet a luxury item that would require optical cables, transceivers, and massive ASICs with tons of offload gadgetry. They wanted at least Fibre Channel prices, better yet InfiniBand returns, and not just on the NICs but on everything from cables to fabrics and management software.

    So when NBase-T finally came along, it was another Microchannel, x86_64, or ARM HPC moment, where for the longest time Intel management simply refused to invest in a product that was nice and cost-effective for users, because they had long ago decided it was time to follow in IBM's mainframe footsteps and lock customers in with Omnipath/Optane: their 10Gbit kit wasn't competitive in any shape or form, and Gbit was a horse long since beaten to death.

    Teranetics/PLX/Aquantia/Marvell have delivered very cost-efficient NBase-T hardware for many years, but for some shady reason it has never caught on in the market, even when Intel itself put Marvell AQC107 kit into its high-end 9th gen NUCs for lack of a working Intel alternative.

    The AQC113 supports up to 10Gbit speeds with full NBase-T support, including EEE, from a single PCIe 4.0 lane, and no current mainboard or NUC should really be sold with less.

    Yes, 10GBase-T might draw more power for the PHY alone than a modern SoC, but not when you operate it at 1 or 2.5 Gbit/s on battery.

    I believe it's high time for some anti-trust investigation there, and could someone in the meantime please just offer PCIe 4.0 x1-based 10Gbit NICs built around the AQC113?
