Thoughts on Tech News of Note - Week ending 01-16-2026

  • Gemini is the New Siri
  • ChatGPT Health
  • Everyone is Using Claude Code

(SIGH) I know, I know, all these topics are AI related. That kinda sucks, but let’s go…

Gemini Is the New Siri
The tech news never goes away, but this week felt a bit more muted, perhaps because the buzz and resulting hangover of CES 2026 are finally dying down as people again realize that most of what they saw there may never materialize into actual consumer products. That made it a good week for Apple and Google to announce an agreement for Gemini to be the foundation of the next generation of Siri, Apple's on-device intelligent assistant. I have seen quite a few takes on this agreement, and some of them seem to gloss over or simply miss some key points:

  1. This agreement is non-exclusive.

This has been touted as a big win for Google, and of course it is: it means billions of dollars being fed into Google's piggybank to reinvest in whatever priorities are important to them. And because it is a multi-year agreement, it means stable, forecastable income for the length of the deal. This is indeed very good for Google. It is good for any business to have additional income it can count on from a company it knows has the means and ability to pay, and the coverage of this agreement is very good press for Google.

However, because the agreement is non-exclusive, there is room for one or more other players in this sandbox. Now, I don't currently have any idea how that could play out. The basic premise of the agreement seems to be that Gemini models will be Siri's new backbone. Because privacy is important to Apple, the models will run on Apple's servers and not Google's, which makes it unlikely that other players would provide additional cloud infrastructure directly to Apple. However, much as Siri hands off certain requests to ChatGPT today, there could be a future arrangement where certain new tasks or features are built on top of models from other companies; those companies would then likely need to build out additional resources of their own to handle the increased volume. At the same time, there don't seem to be any indications that Apple is actively courting other companies. My best guess right now is that Apple is being prudent: they will build out the new Siri, see how it performs on devices and with consumers, and then make decisions on future partnerships based on their analysis.

  2. Apple clearly intends for this to be an Apple thing, not a Google thing.

I have seen splashy headlines and YouTube video titles stating that Gemini is the new Siri. Now, in a sense, that is true since we know that Google's Gemini models will power the new Siri. However, I think it is a little disingenuous to simply say Gemini is the new Siri. Apple very much intends for this to be an Apple product with Apple's signature polish on it. As part of the deal, they have the ability to "fine-tune" Siri responses so that they reflect Siri's personality rather than Gemini's. Although the deal has been publicly announced, it feels like Apple really wants this to be a background story. To their end users, Siri will still be Siri, just smarter. It will be very interesting to see how much tailoring they do. I envision many future videos on YouTube where creators pit the new Siri against Gemini to see how differently they respond and whether one retains an edge over the other. Apple will surely be doing those tests, too, long before any consumers get their hands on devices running the new assistant.

  3. OpenAI declined to enter into a similar deal.

This is perhaps one of the most interesting tidbits. It does not appear that Apple immediately plans to sunset or redirect the features that go to ChatGPT today. Why? After all, there don't seem to be many things ChatGPT can do that Gemini can't, and Gemini is increasingly becoming the tool of choice for more people as knowledge of its competence spreads. It would seem that Gemini is really meant to focus on what I'll call "insider knowledge": the information stored on your phone or tablet. Its primary purpose will be helping you with tasks that require more intimate knowledge of your day-to-day life, whereas more general tasks and queries that can be handled completely off-device will likely continue to be handled off-device, despite Gemini's apparent ability to handle those duties as well. It seems that, at least for now, Apple wants clear delineation between secure, private activities that it wants to remain secure and private vs. general intelligence activities that could be done in an app or in the browser. Perhaps it is the cleanest way to keep a clear line of demarcation. This is also probably the biggest opening for additional companies to make deals with Apple à la point #1. And maybe because OpenAI gets to keep what they already had, they decided they didn't want the drama or distraction of trying to be more. The math must not have been math-ing for them. It does seem OpenAI has some math problems to solve.

I find Gemini useful for answering questions and summarizing things for me, and I've been interested in the Google Labs feature where Gmail provides daily summaries based on what is in your inbox. I'm optimistic that the new Siri will be helpful for Apple customers, and hopefully Google adds some new and improved features for its own customers as well, so we all benefit from their learnings and experience as time passes.

ChatGPT Health
Although this was announced and launched on January 7, it seems to have become a story this week, and since I did not write about it last week, I will touch on it now. In one sense, this isn't really a new and scary feature. The announcement noted that it will be able to connect to apps like Peloton, Weight Watchers, and MyFitnessPal to tailor its responses and gain better insight into your health and wellness. Many fitness-minded apps already do something like this through Apple Health, Google Fit, Samsung Health, or Health Connect. I use an app called Welltory that connects to many of these platforms with the intent of serving me better advice about my sleep, exercise, and nutrition. And I am using the beta version of Fitbit with its AI coach, which has a similar intent: to advise me on how to improve my life based on what it knows about my sleep and exercise and what I tell it about my life circumstances. Platforms like Oura and others already allow you to import external health data for additional analysis and assessment. It seems logical that if you're seeking advice from ChatGPT on something related to your health, having more information would produce a better response.

Where some of the fearmongering has settled in is on the idea of uploading your medical records to ChatGPT Health. And honestly, I do think this is an area in which to absolutely pause and consider the various possibilities and permutations. We have seen what can happen when medical information is mistreated by a company, whether through negligence (e.g. hacking) or unfortunate circumstances (e.g. bankruptcy). It is probably not a good idea to upload your medical history to a company that has not outlined a clear and specific plan for how it will keep your information safe over time. It has been made clear that OpenAI is not subject to HIPAA, as it is not a medical company. OpenAI maintains that Health data will not be used to train models and will be kept separate from other ChatGPT data. They have also stated that connections will be made only to apps with the proper levels of security. These are good things to say and do, but once your medical data is out of your hands, you don't really have any control over where it goes. What might be safer would be if people could run ChatGPT Health locally on their machines and not over the internet, but I haven't seen anything that suggests this will ever be an option. So, as it stands today, one might as well be comfortable with their data ending up on the dark web, because although OpenAI may have all the right intentions, there are plenty of looming signs that they aren't safe from future negligence or unfortunate circumstances.

Everyone Is Using Claude Code
Using AI tools to generate apps isn't new. Software companies themselves have been using these tools to generate their own software, so it isn't at all unreasonable to expect that as the tools have become easier to use and more capable, more people would try them out to see what they can create. I have listened to more than one podcast where a host or guest has used a so-called "vibe coding" tool (note that I do not like this term, but it seems entrenched already) to create a graphic, a personal productivity tool, a personal website, or even a game to be played by the hosts of the podcast. Recently, Kevin Roose from the New York Times used a tool to create a fully functioning Mastodon server that he launched with PJ Vogt on PJ's podcast, "Search Engine".

Now, for many of these people, playing with technology is part of their daily work. But I find it encouraging that people with no programming experience are creating tools that work for them and can benefit others. Many old programmers of my generation will balk and scold, saying "they don't know what they've built and they can't maintain it". And while that is true on a purely factual basis, it's also true that the tools can be used with and against each other to find and patch holes in logic and functionality. The tools are at a level where people can build things that are good enough, and maintenance is theoretically an input window away.

Many articles have been written, and will be written, about how AI is destroying and/or changing jobs; some of them are premature because no one really knows anything, and others are just clickbait. But I do think that some aspect of software development is at the precipice of significant change. No, we're not at a place where the average person will sit down and try to vibe code an app. Most people don't even have a thought in their head of what app they would create if pressed to come up with an idea. But some of the things that people do on a regular basis, like creating spreadsheets and presentations, choosing options when making large purchase decisions, or organizing their personal information, can be aided with AI tools. Some of these abilities are already being touted by companies like Microsoft and Google. Apple promised similar abilities with Siri, promises which have not yet been kept but are still out there for future fulfillment, possibly with Google's help. Eventually, that tech-savvy friend or relative will be doing something appealing with an AI tool, and people will have increasing exposure to the possibilities. If nothing else, it should increase the value of teaching logic, because that will be a valuable skill as prompt engineering becomes the way more things get done. And that's all learning programming ever really was anyway.

Learn to use logic, kids.