

On the 2nd of January, I sat down, picked up my iPad, and had a nose at the BBC News website. It’s something I like to get my basic news from, even if I end up then researching further into it. There was a particular article that annoyed me: ‘AI Teachers and Cybernetics: What Could the World Look Like in 2050?’ Classic BBC bumph. Now, the BBC is very good, but one thing it does tend to do is sit in the middle of the road. I’m always amazed when people say the BBC is biased—if anything, the BBC is so middle-of-the-road it’s impossible to be biased. Certainly the news website, anyway. They cannot write an article without somewhere in it putting the alternative point of view, so they never commit to a viewpoint. Now, this is as it should be; it’s a national news website and it should be unbiased. But there are times that it just borders on the ridiculous. Perhaps that’s because when you know about a subject, you can see the ridiculousness. In this case, this particular article wound me up.
My main concern was the fact that the BBC talked about 2050. 2050 is a very long way away—24 years from now. The things they’re reporting may well happen in 24 years, but they’re not really raising the alarm bells about what could be happening in one, two, or three years. AI is at a crazy point. I don’t want to be a doom-monger; I’m optimistic about the future. But as we speak, AI companies are in an arms race to create the most sophisticated AI they can, and the governments supporting them, especially the US, are removing restrictions to let them go ahead.
Imagine if this were the nuclear arms race. Over the years, we’ve been good at getting restrictions in place because people realised heading towards that kind of doom is crazy. I’m not saying AI will turn up and kill us—there are arguments, but let’s not go there. The point is AI is moving so fast that if we paused it now, it’d take five to ten years for businesses to fully catch up with what it can already do. People just aren’t using AI as they should, and as they’re catching up, AI is leaping ahead. That’s exciting, but we need to do it in a human-centric, ethical way. Should AI replace employees or augment them? That’s why at Sea Change AI, I believe AI ethics should be core. If everyone rushes ahead and we lose jobs, we’re shooting ourselves in the foot because eventually no one will have money to buy what we’re selling.
So, in 2026, let’s look at AI with a human-centric, ethics-first lens. And maybe the BBC should be a bit more urgent about the near future rather than just dreaming about 2050.
Not sure where to start? Why not have an informal chat with John about your concerns and needs, to get a sense of where you are and where you should be? Click the button below.
Click here