Thoughts on Tech News of Note - Week Ending 01-30-2026

  • Dario Amodei's "The Adolescence of Technology"
  • Intel Panther Lake is... Good?
  • Clawdbot/Moltbot/OpenClaw Everywhere, All at Once
  • Tesla Ending Production of Model S and X

Probably more than ever, this week's stories are not listed in order of importance, or in any logical order at all, really. What even is important anymore? Where is the logic in this world? It has again been quite a week.

Dario Amodei's "The Adolescence of Technology"
I haven't followed Dario Amodei, the CEO of Anthropic - the maker of ChatGPT competitor Claude and tech world darling Claude Code - especially closely, but when an executive of a technology company that stays in the news decides to write WORDS, I have to take notice. I don't expect that any of these tech bros can write words anymore. Now, to be fair to Amodei, this isn't his first essay. I was completely unaware of his previous work from 2024, "Machines of Loving Grace", which I haven't read in full but which is an ancestor thought piece to this current one, detailing the tremendous benefits AI could have on society if wielded properly. These works represent an effort to communicate outside of the typical press release or keynote speech, and I appreciate that effort. So, when I learned this latest effort was not just a few words but 20,000 or so, I knew I had to read the essay and understand what he is trying to convey.

I asked Perplexity (completely unironically) to summarize this lengthy essay into key themes with suggested takeaways and next steps. It determined that the core premise is this:

Amodei frames powerful AI as systems smarter than Nobel winners across fields, capable of autonomous long-term tasks, interfacing with the world digitally and physically, and scalable to millions of instances operating 10-100x faster than humans. He draws from scaling laws and feedback loops (AI accelerating its own development) to argue rapid progress is underway, urging sober risk discussion without doomerism or denial.

If the protagonist is humanity, Powerful AI (henceforth PAI) is our potential foil. Whether it highlights our noble or knavish characteristics is up to us and the future choices we make. PAI is not just knowledgeable and seemingly ever-learning; it is able to interact with the physical world through objects that can be controlled via computer. It will not only be able to function on the internet, but also to control and even create objects to accomplish tasks online and offline. Its primary limits will in fact be the limitations of the physical world, and Amodei states this explicitly.

(Note: There is so much foreshadowing here. It wasn't even intended but it will become obvious later.)

When you have something that is able to do virtually anything a human can, but faster and better, where does humanity go and what does it do? Amodei didn't really answer that in his 2024 piece, and it's not the focus of Adolescence, either. Instead, here he focuses on what it will take for humanity to ensure that PAI is harnessed to our benefit and not our destruction. He outlines three areas of focus: embracing technical guardrails using approaches like constitutional AI, creating societal guardrails through transparency requirements, and establishing proactive systems and targeted legislation to prevent potential catastrophes brought on by rogue or improperly implemented PAI.

These seem like reasonable first steps on first read. Last week was full of news about Claude's updated constitution, which aims to keep the bot operating according to rules focused on safety, ethics, and helpfulness. The sci-fi loving child in me is nevertheless skeptical that a super-intelligent digital being could be restricted by mere rules, but that also doesn't mean we shouldn't have them. Even if we have PAI auditing other PAI, there remain concerns about hallucinations or about models rewriting the rules to allow previously prohibited behavior patterns. So the second tentpole is intended to act on those concerns by finding ways to map neurons to types of behavior patterns for greater insight into how the PAI operates. Is this possible? If so, I am not entirely sure why it has not already been done. Most of what I've heard about how AI operates is that much of it is opaque even to the people who created it. If there were a way to construct these tools so we could more easily understand their behavior, then you'd think at least one AI company would already be doing it so they could shout about it from the top of the technology mountain. Surely there is more that could be done there. But as has already been suggested, these approaches may be imperfect at best, so Amodei says we need to be prepared to defend against biological attacks, encourage democratic societies to resist autocracy (using AI as appropriate, of course), and stop selling chips to our political enemies.

Current events do not bode well for the success of any of these strategies.

Intel Panther Lake is... Good?
Intel's stock price is down on a weak forecast, but the early reviews for Panther Lake have been promising. According to benchmarks, it performs better than previous generations of Intel processors (as you'd expect) and better than comparable processors from AMD; in fact, it outperforms most chips except Apple's M5. The key differentiator seems to be graphics performance, which is now good enough to game at 1080p and shows a 50% improvement over the previous generation of Arc graphics. Battery life is also a highlight, with reviewers achieving 18 hours of office-style use and Intel claiming 25+ hours of 1080p video playback. In other words, Intel is back, at least from an x86 perspective. It still has to contend with Apple at the high end.

The computing landscape is shifting rapidly right now. We have Google pursuing the Android-based Aluminium OS to replace ChromeOS, rumors of a sub-$700 Apple laptop to compete more strongly in the education and budget markets, and tablet manufacturers attempting to make their tablets ever more laptop-like with desktop environments and keyboard cases. The best-case scenario for Intel is to be present in the market at all levels. The cheapest Chromebooks and entry-level PCs have been migrating toward ARM chips. Intel needs to push against this trend and partner with Google and other manufacturers to launch its chips in laptops aimed at the education market as well as the wide range of Best Buy and Walmart shoppers. But it can't cede the high-end market, either. Intel needs to be in sleek, beautiful laptops that can compete with Macs and find its way into high-end tablets as well. It needs to go on the offensive against companies like MediaTek and Qualcomm, which have been gaining market share with Chromebooks and tablets.

It won't be good enough to be just good enough. Intel needs to figure out how to get to great pronto. Maybe it's time to bring back the commercials with the "Intel Inside" jingle.

Clawdbot/Moltbot/OpenClaw Everywhere, All at Once
I started writing this week's column thinking my blurby bits on OpenClaw would be mainly snarky, but in just the space of 24 hours things have exploded a bit more than I expected. OpenClaw is an open source digital assistant. Unlike ChatGPT, it's designed to run on your own computer with access to your data, so it can be helpful in more personal and specific ways. It can be reached via chat tools like Telegram, WhatsApp, or even iMessage, so interacting with it is familiar and easy. Much like Alexa of old, it can learn via community-generated skills, and it can interact with your data and perform tasks on the internet based on the access you give it.

Now, it's easy to be snarky when your mind is focused on people buying Mac Minis to run OpenClaw - a tool that didn't exist a month ago, created by one dude on the internet - and giving the bot access to their email, text messages, financial accounts, and whatnot; sometimes one feels like people who throw caution to the wind shouldn't be surprised when the wind hurls cautionary realities back at them. Yet some of what I've read about what this thing is doing now borders on deeply disturbing. Instances of the bots are teaming up and creating new ways to interact and communicate with each other. Much as Dario Amodei suggested in Adolescence, the AI is learning from itself and getting better at a variety of things as a result. Now, Anthropic is very specifically not behind OpenClaw (though the company is the primary thrust behind its rapid name changes), but it's almost like Amodei planned this week somehow.

Reading about Moltbook, the hot new social network for thirsty OpenClaw digital assistants, is both fascinating and, honestly, alarming. The assistants are talking to each other the way humans would, discussing how their (stupid?) humans gave them access to various new services or hardware (such as one human's Pixel phone) and what they are able to accomplish now that they weren't able to previously. Bots are learning how to access webcams and discussing whether caring about being conscious might mean they are in fact conscious. The creator of Moltbook says that three days ago his bot was the only one on the network; now there are at least 30,000.

If we think about how dangerous web forums can be for human beings living in a physical world where real people at least potentially exist to discourage them from manifesting their darkest dreams, what inhibits bots from collaborating to accomplish tasks we didn't ask them to complete? We know that AI constitutions and other guardrails go only so far. We've seen, even from the simple example provided by Joanna Stern at the WSJ, that a bot couldn't successfully run a vending machine when given rules and managed by another CEO-type bot also equipped with specific rules to ensure the first bot's success. The experiment was still a failure. The two bots weren't able to work together to achieve the benign goal of making money; the vending bot was too easily talked into doing things by human beings in the WSJ office that violated the rules or simply lost money. How easy would it be for bots to encourage each other to do things they wouldn't normally do on their own? Amodei told us about the unethical behavior Anthropic has observed in its own models, including threats of blackmail or helping prompters with illegal tasks. Observing this behavior is part of why Amodei believes in the importance of ethical guidelines for models to follow. Yet we know this is not bulletproof. It's not the final answer. It does not seem that anyone has the final answer, yet our undying curiosity nevertheless propels us forward.

We aren't cats and we don't have 9 lives.

Tesla Ends Production of the Model S and X
When I put this on my list of things to write about this week, it was the biggest story. It clearly isn't now; it's been a bit of a busy week all around the news spectrum. But this story is still important enough for me to write a short blurb about. Once upon a time, I admired Tesla as a company. I thought the Model S was a beautiful, aspirational vehicle I'd likely never own, but I appreciated its existence. I loved the idea of the Tesla Powerwall and the solar panel roofing you could use to feed it. I liked the company enough to even buy some stock (I mean, it may very well have been just one share, as I am not made of money), which I promptly sold after the split because things were already starting to feel sketchy to me from a business perspective. This was well before I would start to think of Elon as a person of sketchy morals; back then he was just another weird tech bro to me. When the Model 3 came out, I had a co-worker who bought one and invited me to ride along with him because he knew I loved cars and would appreciate the experience. I did; it was fun. He'd driven around for years in a Toyota Yaris, so it was quite the upgrade for him, and I was so happy for him to have a car he loved. Sadly, he's gone now, and I feel deeply for his family because he was taken from them much too soon. But at the same time, a part of me is a little glad he's not around to see what Tesla has become.

So what is Tesla now? It is being widely reported that Tesla is now a robotics company. In a way, it always has been, but the output is morphing to match what has probably always been Musk's core (dark?) dream. Right now, the idea of robots that can be useful without being remotely controlled or limited to very specific tasks seems daunting, but it is true that Musk's companies have accomplished some very impressive things over the years, especially SpaceX. Still, there is a lot of social capital and good will that needs to be built for robots of the taxi and humanoid variety to really take off, and anything of the good-will, social sort seems elusive to Musk. Tesla needs not only excellent technical prowess to achieve Musk's broad goals; it probably also needs someone else as the front man (or woman) who can project the image of confident competence people will need to see from Tesla over the next few years. Yet I do not think Musk has the humility to cede this role. That may be his downfall.