Thoughts on Tech News of Note - 03-20-2026

Telling you about the tech news and helping you figure out what to do with it...

  • NVIDIA GPU Technology Conference 2026
  • We Can Have Better Data Centers
  • The MacBook Neo and the Death of the TriFold

NVIDIA GTC 2026
Most of what NVIDIA revealed at GTC rides along a predictable path for a chip maker: show off the new chip platform, Vera Rubin (Vera is the CPU and Rubin is the GPU), the successor to the Grace Blackwell chips that have been extremely profitable for the company. Blackwell chips generated ~$11B in revenue in their first quarter of availability, the fastest ramp-up ever for NVIDIA. Jensen Huang, CEO of NVIDIA, indicated that he now expects $1T in demand for the chips through 2027, up from his previous estimate of a mere $500B. But progress never stops, so a new line is expected and appropriate, and the Vera Rubin line now incorporates a language processing unit (LPU) from NVIDIA's 2025 acquihire of Groq. Groq 3's LPU is said to enable up to a 35x increase in inference throughput. There's even a Vera Rubin platform designed for deployment in space to improve performance for satellite operations.

Most of what seems to have reverberated throughout the tech news cycle is the debut of DLSS (Deep Learning Super Sampling) 5, an AI-powered graphics renderer that can produce more photo-realistic images for games. I am not a gamer, so when I saw images of game characters before and after DLSS 5, I was legitimately impressed. The characters did indeed look more realistic. However, the backlash from the tech community at large and the gaming community specifically has been overwhelmingly negative. To process this correctly, I had to put on my artist's hat. As a musician and songwriter, I may realize that my music could be improved in many ways. But in the end, I want some measure of control over how my music is improved. Adding in musical elements that weren't there originally might be impressive technically, but it could alter the soul and feel of the music in ways I didn't intend or desire. This is the concern that many graphic and digital artists were expressing. Yes, the characters looked more real, but they'd lost something in the process. All the scaled-up characters ended up having the same basic look and feel. The original edginess or charm of a character was lost in the upscaling, and many said the end result looked more like AI slop than true art. This is a perfectly fair and honest take. For what it is worth, Huang tried to assuage fears a bit by implying that designers and artists will have final control, but that sense of dread probably won't fade until those designers and artists can explore the new capabilities for themselves and see whether the technology enhances their creativity rather than quashing it. I suspect DLSS 5 will still manage to be a benefit to the gaming industry. After all, games have been steadily improving in realism since the very beginning, and that probably won't stop until some games look as good as real life.

NVIDIA has other headwinds that will require adjusting their sails a bit. Huang seems to be savvy at seeing trends early enough to capitalize on them, and right now he is tackling the latest explosion of agentic AI with a new "agentic AI OS" named NemoClaw. NemoClaw allows companies to deploy the popular open-source OpenClaw agentic AI tool, but with security and privacy controls so it can operate more safely in a corporate environment. Huang declared that OpenClaw is the "operating system for personal AI". Personal AI may seem like a misnomer for what is intended to be an enterprise system, but each OpenClaw agent runs on an individual system and takes direction from the entity that installed it. On a personal system, the installer might be OK with the agent having free rein to modify system files and execute code, but in a corporate environment, this could quickly become an existential threat from a security and privacy perspective. So NemoClaw provides a sandboxed environment with specific, definable controls outlining what level of access the agent can wield and what actions it is allowed to take. The complete software stack can be implemented with a single command, so it is optimized for quick deployment. At first glance, it might seem odd that NVIDIA would take these steps to create a system that lets companies establish OpenClaw agents in their environment and then offer that system for free. But don't you worry: NVIDIA is still fundamentally a hardware company, and hardware is where they expect all of this to pay off for them well into the future. NemoClaw is optimized for NVIDIA hardware. It runs on GeForce RTX PCs, RTX Pro Workstations, and DGX Spark/Station systems. Do you want to run the hottest agentic system on earth right now, Big Corporation? Well, make sure you pick up some also-hot new NVIDIA hardware on your way out the door there. You'll need that.
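NVIDIA hasn't published what NemoClaw's "definable controls" actually look like, so the names and structure below are purely illustrative, not NVIDIA's API: a minimal sketch of the allowlist-style policy idea, where an agent's actions and writable paths are denied unless explicitly permitted.

```python
# Hypothetical sketch of an allowlist-style agent policy, the general
# idea behind sandboxing an agent in a corporate environment. None of
# these names come from NVIDIA or NemoClaw; they are illustrative only.
from dataclasses import dataclass, field


@dataclass
class AgentPolicy:
    allowed_actions: set = field(default_factory=set)   # deny by default
    writable_paths: set = field(default_factory=set)    # write allowlist

    def permits(self, action, path=None):
        """Allow an action only if it is allowlisted, and (for actions
        that touch a path) only inside an allowlisted directory."""
        if action not in self.allowed_actions:
            return False
        if path is not None:
            return any(path.startswith(p) for p in self.writable_paths)
        return True


# Corporate deployment: the agent may read and write, but only inside
# a scratch directory, and may never execute code.
policy = AgentPolicy(
    allowed_actions={"read_file", "write_file"},
    writable_paths={"/sandbox/scratch"},
)

policy.permits("write_file", "/sandbox/scratch/notes.txt")  # allowed
policy.permits("write_file", "/etc/passwd")                 # denied: path
policy.permits("execute_code")                              # denied: action
```

The deny-by-default shape is the point: on a personal machine you might allowlist everything, while an enterprise rollout starts from nothing and grants access one action at a time.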

For everyday people, most of what NVIDIA unveiled isn't immediately substantive. Gamers will eventually feel the effects of DLSS 5 one way or another, but the agentic AI wars are really just starting to heat up at the consumer level. Perplexity has launched its Perplexity Computer and Anthropic has had Claude Code and Claude Cowork for some time now. Those products are starting to make their way down to regular people who are willing to pay for a subscription. Google is also starting to add more agentic features to Gemini and those features will start to impact users on mobile even faster than on the PC as Gemini on Android devices like the newest Pixels and the Samsung Galaxy S26 Ultra can complete multi-step tasks for you right on your phone. It remains to be seen whether NVIDIA will also push NemoClaw downstream, but even if they don't, the personal AI agent is the next frontier for all of these AI companies to start pulling more money out of your pocket.

We Can Have Better Data Centers
Microsoft is building a new state-of-the-art data center in Mount Pleasant, Wisconsin (if you are a reader of TheVerge.com or a Vergecast podcast listener, this location will sound very familiar because editor-in-chief Nilay Patel has talked extensively about it; it is near his hometown, and all manner of manufacturing and data center shenanigans have been planned there over the past few years). Another day, another data center, right? The thing that could set this data center apart from others is its planned use of micro-LED technology, dubbed "MOSAIC", that replaces the need for traditional fiber optic cables that use lasers. Microsoft says that the micro-LED system uses thousands of parallel 2Gbps channels instead of the 4 or 8 10-100Gbps channels in high-speed fiber optic cables. The systems are contrasted by their nicknames: "wide and slow" (micro-LED) and "narrow and fast" (fiber optic). Traditional fiber optic cables are sensitive to dust and high temperatures, but micro-LED channels are more heat resistant and durable. The micro-LED channels use imaging fiber (commonly used by the medical industry in endoscopes) instead of glass fiber. Because the micro-LED channels run at slower speeds, they don't need the digital signal processing that fiber optic cables require to correct signal errors, so they can be more efficient and less expensive to operate. The suggested real-world benefit is a 50% reduction in energy consumption with reliability comparable to copper wires. However, because micro-LED channels are effective only at distances up to 50 meters, this technology will not replace fiber optic cables for uses outside of the data center. Micro-LED will be used inside the data center for connecting the GPUs and servers it houses. But Microsoft is also beginning to use hollow core fiber for connections between data centers to address the shortcomings of micro-LED.
Hollow core fibers pass light through air instead of the glass of fiber optic cables. This reduces latency and minimizes signal distortions so data can be transmitted over longer distances. Microsoft took on this technology when it acquired Lumenisity in 2022 and has since deployed nearly 800 miles of hollow core fiber.
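The "wide and slow" versus "narrow and fast" tradeoff comes down to simple arithmetic: enough slow channels in parallel can out-carry a few fast ones. The micro-LED channel count below is my assumption (the article only says "thousands"); the per-channel rates come from the figures above.

```python
# Back-of-the-envelope aggregate bandwidth: "wide and slow" vs
# "narrow and fast". The 2,000-channel figure is an assumption for
# illustration; Microsoft only says "thousands" of 2Gbps channels.

def aggregate_gbps(channels, gbps_per_channel):
    """Total bandwidth of a link built from parallel channels, in Gbps."""
    return channels * gbps_per_channel

micro_led = aggregate_gbps(2000, 2)   # "wide and slow": 2,000 x 2 Gbps
fiber = aggregate_gbps(8, 100)        # "narrow and fast": 8 x 100 Gbps

print(micro_led, fiber)  # 4000 800 -> parallelism wins on raw capacity
```

At these (assumed) counts the wide-and-slow link carries several times the data of the narrow-and-fast one, while each individual channel is slow enough to skip the power-hungry signal processing, which is where the claimed energy savings come from.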

Technology that can reduce the energy consumption of a data center is beneficial not only to the companies building data centers, but to the communities around them. As pressure mounts on companies and government officials to address soaring energy needs and the resulting higher costs, we will need more innovations like these to help make data center buildouts cheaper and more sustainable. Data centers are likely going to get built whether people like it or not. We should start demanding more of our tech companies and push them to minimize the harms to the environment and to the people in the communities where they build.

MacBook Neo and the Death of the TriFold
The Apple MacBook Neo and the Samsung Galaxy Z TriFold are not at all the same thing. The Neo is an inexpensive laptop running macOS that has set the computing corner of the Technorati on fire. Many people are seemingly tripping over themselves to come up with some new hot take on what the Neo will do to Windows laptops, to tablets (iPads), to Chromebooks, to other MacBooks, etc. I'm not here to say that any of these stances are incorrect. I do believe that the Neo will be immensely popular, and I've stated before that Windows laptop manufacturers are going to have to find ways to step up and extol the virtues of touchscreens, pens, and the extraordinarily extensive library of software (much of it free) available for Windows. They will actually have to start making good stuff, and Microsoft will also need to spend more time on Windows from a consumer angle than they probably want to if they don't want to end up trapped in a dying corporate market that has stuck with the platform largely because of software lock-in. As AI programming tools become more advanced and more prevalent, that lock-in is going to fade. I don't think Microsoft is ready for that. But that's not even my point here.

Samsung canceled its ambitious Galaxy Z TriFold this week because it could no longer justify selling it at its $2900 list price, seemingly due to rising hardware component costs, most likely memory. It has also been said that the phone was over-engineered and couldn't be built sustainably at scale even at its luxury price point. As others have pointed out, Samsung makes memory, but it's far more lucrative to sell memory to other big companies than to sell it to itself to make phones for a lower-value consumer market. Samsung is starting to face a similar reality with its cheaper phones as well, as will most other manufacturers over the next several months.
Yet, as many internet stars (hello Juan Carlos Bagnell, aka SomeGadgetGuy; hello Michael Fisher, aka CaptainTwoPhones) have pointed out for years, our phones are very powerful, and with a little effort, many of them can now be our computers, at least in theory. You don't necessarily need a 10" or even an 8" folding phone to achieve this feat. Many modern smartphones support display out via their USB-C ports, and you can plug them into monitors or lapdocks and use them in many of the same ways you'd use a traditional laptop or Chromebook. Manufacturers should be leaning into this more. The MacBook Neo uses an iPhone chip and a mere 8GB of memory to power a computer that can do most of the things people want to do every day. The chips in many modern Android smartphones are just as powerful, and some Android phones and tablets have 16GB of memory or more. Why aren't more manufacturers exposing the benefits of having all your stuff in one place and not having to sync across devices as much because your phone is your PC? Why haven't manufacturers put more effort into making Android devices work better with a mouse and keyboard (I'm typing this on my Samsung Galaxy S25 Ultra connected to a monitor, mouse, and keyboard, and it's maddening sometimes how bad it still is after all these years)? Why is it that some devices still don't scale up to a 4K display properly? The MacBook Neo supports an external 4K display just as you'd expect a MacBook would. Its keyboard and trackpad work almost exactly the same way any other MacBook's would. It's not power and RAM that are holding us back from using our phones to do more. In many cases, it's not even a lack of software. It's mainly the will and effort from manufacturers to showcase what these devices can do and to market those capabilities effectively.
And we also need the accessories market to catch up, because although I happen to have a nice 15.6" 4K OLED touchscreen monitor with its own battery that I can use with a phone over a single cable, I bought it several years ago, and it's still too hard to find phone-friendly accessories like it on the market today. Perhaps this RAM crunch will push companies in directions they should have been pursuing all along.