I was pretty excited about the switch to Orbi: performance doubled and coverage was great. While it had an app, it also had a full web-based GUI for management with a deep set of configuration options. The fact that the web console was a bit clunky and hard to work with was fine, because I could once again control my network from a laptop.
Then the updates started.
With every firmware update two things were obvious: Netgear did not do enough testing/QA, and their answer to every problem was “do a full factory reset.” I muddled through the first update, and it took me about an hour to get all of the devices back on the network. The satellites were not responding reliably despite being hardwired with an Ethernet backhaul. Eventually everything was back up to speed and I went back to living a normal life.
Then one day the office satellite stopped showing up. While trying to get it back online, I realized that a new firmware update was available. In a moment of stupidity I thought, “I’ll just apply the update to this one satellite and leave the rest of the system updates for later, because I have some important work to do now.” BIG mistake. I manually updated just that one satellite, but instead the system updated everything: both satellites and the router. Now I had zero connectivity and work to get done. After an hour of trying to troubleshoot and stabilize the system, I gave up and put the Amplifi back online so I could get my work done.
Later that evening I put the Orbi back online and tried to figure out what was going on. What I realized was that there was a bug that would not allow two satellites to be online together: as soon as I brought the second one online, the router stopped working. It turns out this is a known bug in the firmware. How could you possibly push an update to a 3-unit system when you know that only 2 units will be operational?
Eventually, through lots of work, I was able to get all 3 units online, but stability was lacking. There are ~34 clients on my network (depending on whether something is in sleep mode at the time). I looked at Orbi’s list of connected clients: 32 devices. OK. Then 17 devices. Then 19. Then 26. Then 31. Then 14. All in the course of less than a minute.
Something was going on and it was frustrating. When I inquired on the Netgear forums, the standard response was “well, as long as all of the devices are working, don’t worry about what is showing up.” But in my mind when something simple appears broken that is a sign that there is even more under the hood that might not be working.
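When I doubted the router’s numbers, an independent count was easy to get. A minimal sketch, assuming an `arp -a`-style output format (the hostnames and addresses below are made up), tallies unique hardware addresses so the router’s fluctuating count can be compared against a stable ground truth:

```python
import re

# Matches a MAC address like a4:5e:60:c1:22:18 (case-insensitive).
MAC_RE = re.compile(r"(?:[0-9a-f]{1,2}:){5}[0-9a-f]{1,2}", re.IGNORECASE)

def count_clients(arp_output: str) -> int:
    """Count unique MAC addresses in an `arp -a`-style dump."""
    macs = {m.group(0).lower() for m in MAC_RE.finditer(arp_output)}
    return len(macs)

# Two hypothetical snapshots taken seconds apart; a real script would
# shell out to `arp -a` (or poll the router) in a loop instead.
snapshot_1 = """\
thermostat.lan (192.168.1.23) at a4:5e:60:c1:22:18 on en0
printer.lan (192.168.1.40) at 00:1b:a9:4f:00:7c on en0
"""
snapshot_2 = snapshot_1 + "laptop.lan (192.168.1.50) at 3c:22:fb:aa:bb:cc on en0\n"

print(count_clients(snapshot_1))  # 2
print(count_clients(snapshot_2))  # 3
```

If a count like this stays flat at ~34 while the router’s own list bounces between 14 and 32, the problem is in the router’s bookkeeping, not the network.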
With all of that data I decided that Netgear is not going to stay in the house. Too sketchy for my needs: when it was performing it was great, but getting it stable (and keeping it stable) was more work than I cared for. I can only imagine the difficulty that the typical consumer would have with this product.
I have been working with mesh systems for quite a while now and have some new observations. I began the journey with Google WiFi, a product that checked all of my boxes and had incredible performance right out of the box. What I quickly realized, however, was that Google’s business model and lack of flexibility became the real issue, and that led me down the path of unplugging it and replacing it with Amplifi.
Everyone is talking about intent-based networking these days, but because there is no great agreement on what it is, many people are trying to twist that around. The latest entrant to this is Cisco, who is trying to co-opt the trend. Where do we go from here?
I have been working with Amplifi tech support to diagnose some dropout issues for some time now. One of the things I was concerned about was a wireless HDMI repeater a few feet away. (Spoiler alert: the repeater had no impact; powering it off completely made no difference.)
The new AMD EPYC processors feature two distinct new modes: power determinism and performance determinism. These modes let a business optimize either for the greatest overall performance or for greater predictability in clock speed.
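To make “predictability in clock speed” concrete, one way to observe it is to sample per-core frequencies over time (on Linux, the `cpu MHz` lines in `/proc/cpuinfo` are one source; the sampling side is only assumed here) and look at the spread. A small illustrative helper with made-up sample data:

```python
from statistics import mean, pstdev

def clock_spread(samples_mhz: list[float]) -> tuple[float, float]:
    """Return (mean, population std dev) of clock-speed samples in MHz.

    A lower standard deviation means steadier clocks, which is roughly
    what a predictability-oriented determinism mode is meant to favor.
    """
    return mean(samples_mhz), pstdev(samples_mhz)

# Hypothetical samples: the first set swings with opportunistic boost,
# the second holds a much steadier frequency.
bursty = [2200.0, 2950.0, 2400.0, 3100.0]
steady = [2500.0, 2510.0, 2495.0, 2505.0]

print(clock_spread(bursty))
print(clock_spread(steady))
```

The trade-off is the point: the bursty profile may deliver a higher mean, while the steady profile delivers run-to-run consistency.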
After a decade of use, the SPEC CPU benchmark is changing at the same time that new CPUs are hitting the market. This will be an important inflection point: the dynamics of how businesses operate are changing, and at the same time the way performance is measured is changing as well.
The new EPYC processor from AMD was designed for critical environments in both the data center and at the edge, as more compute heads that way. With plenty of RAS and availability features, EPYC can tackle the most critical enterprise tasks.
Digital transformation is changing how businesses view their infrastructure, putting them in a position of needing technology to not only solve today’s immediate pressing needs, but also serve them well as they embark on a transformation of their business. Dell EMC understands this and has a new portfolio of PowerEdge servers that can scale to address digital transformation.
I spent a week in Beijing, participating in the OPNFV Summit which is a global event focused on the NFV market, primarily with carriers. I had a lot of engagement with the Chinese carriers and found that there is an amazing amount of collaboration within their ranks.
Back-to-back trips to Boston gave me an opportunity to talk open source and cloud from different perspectives with both Red Hat and OpenStack. There is a lot going on in this space, and the trips showed that while the open source movement has a lot of commonality, what really brings these communities together is the acceptance of opposing viewpoints.