Every year around this time, well-regarded analyst group Gartner releases its list of top 10 strategic technology trends for the upcoming year.
Last year, among the 10 trends it identified for 2024 was AI-augmented development, and we've certainly spent a lot of time here on ZDNET discussing AI and programming.
Now, just in time for its Orlando gathering of expense-account-wielding senior executives -- the Gartner IT Symposium/Xpo -- Gartner is back with its 10 strategic trends for 2025.
Also: The best AI for coding, and a bunch that failed miserably
When reading these trend prognostications, it's important to put them into context. Gartner isn't saying these are all initiatives your company should be working on, or that you should feel somehow inadequate if your company doesn't have active initiatives in all of these areas.
What it is saying is that these are trends and areas of innovation, activity, opportunity, and concern you should start becoming aware of.
For example, if you're not familiar with the new computing technologies of optical, neuromorphic, and novel accelerators, it might not be a bad idea to learn more about them before we proceed to Gartner's trends, since the analyst firm refers to them as underlying technologies, particularly for energy-efficient and hybrid computing. Here's a quick rundown:
Optical computing: Photons can travel much faster than electrons within typical computing materials. Because electrons frequently collide with the material that carries them, they also generate teeny-tiny bits of friction that add up to a lot of heat. Optical computing uses light (photons) in place of electrical signals in chips, potentially making them much faster and much cooler. This is ideal for any high-performance, compute-intensive task.
Neuromorphic computing: No, the tech industry is not planning on harvesting Spock's brain to drive computing technology. However, the idea of neuromorphic computing is that systems process many tasks in parallel rather than in sequential steps, which is much closer to how the human brain works. This could be helpful in AI and in processing inputs from thousands of sensors.
Novel accelerators: This is another buzzword to describe the special-purpose processing units that have become popular as ways to augment traditional CPUs. The best known of these, of course, is the GPU. Initially popular as a way for gamers to get higher-quality graphics, GPUs have proven to be amazingly capable in crypto and AI calculations. Other custom accelerators, like Google's Tensor Processing Units (its machine-learning engine), are also proving popular.
Most people reading this article aren't going to run out tomorrow and invest in optical computing or any of the other technologies I will be discussing. But keep these technologies (and the ten trends below) in mind as you start to plan your own business' strategic initiatives.
And with that, let's dive into Gartner's 10 trends for 2025.
Agentic AI is corpspeak for AI with agents. I like how Microsoft describes it: "Think of agents as the new apps for an AI-powered world."
Essentially, the idea is that AI will take the lead in some autonomous actions. Keep in mind that autonomous is not the same as automated. We've done "automated" for decades. Automated systems are those that follow specific instructions to perform tasks. Autonomous systems are those that operate independently, learn, make decisions, and adapt.
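The distinction fits in a few lines of Python. This is my own illustrative sketch, not anything from Gartner or Microsoft: an automated system applies a fixed rule forever, while an autonomous one adjusts its own behavior based on feedback.

```python
# Illustrative sketch (hypothetical example, not a real product API):
# "automated" vs. "autonomous" in miniature.

def automated_thermostat(temp_c):
    """Automated: follows one fixed rule, and never changes it."""
    return "heat on" if temp_c < 20 else "heat off"

class AutonomousThermostat:
    """Autonomous: adapts its own setpoint from user feedback."""
    def __init__(self, setpoint=20.0):
        self.setpoint = setpoint

    def decide(self, temp_c):
        # Decisions depend on state the system has learned, not on
        # a hard-coded threshold.
        return "heat on" if temp_c < self.setpoint else "heat off"

    def feedback(self, user_adjustment_c):
        # Nudge the learned setpoint toward the user's preference.
        self.setpoint += 0.5 * user_adjustment_c

agent = AutonomousThermostat()
agent.feedback(+2.0)        # the user keeps turning the heat up
print(agent.setpoint)       # prints 21.0 -- the system has adapted
```

The automated function will make the same call for the same input forever; the autonomous one drifts toward what it learns the user wants. Real agentic AI systems are vastly more complex, but the shape of the difference is the same.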
Also: AI agents are the 'next frontier' and will change our working lives forever
If you want a deep dive into this difference, read my "From automated to autonomous, will the real robots please stand up?"
While AI agents make essentially no such decisions today, Gartner predicts that a good 15% of "day-to-day work decisions" will be made by AI agents by 2028.
This one is big -- and well worth the attention of every C-level executive. This is all about trust, accountability, and the legal and ethical underpinnings of AI systems. I have talked to several top executives at Lenovo, Adobe, and Deloitte about this topic.
AI governance is an umbrella term used to describe frameworks for managing these challenges. Gartner uses the acronym TRiSM (for Trust, Risk, and Security Management).
Also: How Lenovo works on dismantling AI bias while building laptops
Now, here's the big takeaway from Gartner's future-looking predictions. The company predicts that within three years, "Organizations that implement comprehensive AI governance platforms will experience 40% fewer AI-related ethical incidents compared to those without such systems."
Ethical incidents. Read that as lawsuits, employee complaints, and very bad PR. A 40% reduction can mean the difference between continuing a successful career and standing in the unemployment line.
While this name sounds more like you're protecting your right to propagate disinformation, what Gartner is discussing is just the opposite: adding the fight against disinformation into your main security posture.
I did another interview, this one with Trustpilot's chief trust officer Anoop Joshi, to explore this problem in depth. Trustpilot makes its name on providing trusted reviews, so disinformation is the bane of the company's existence.
Gartner describes disinformation security as "an emerging category of technology that systematically discerns trust and aims to provide methodological systems for ensuring integrity, assessing authenticity, preventing impersonation, and tracking the spread of harmful information."
Also: AI-powered 'narrative attacks' a growing threat: 3 defense strategies for business leaders
Today, Gartner isn't seeing much formal work in this area, but predicts that by 2028, a full half of enterprises will have systems that fight against these attacks. With AI in the hands of bad actors, it's not hard to predict that there will be an even more serious rise in very credible-seeming disinformation.
Here's my take on disinformation in the upcoming elections: Elections 2024: How AI will fool voters if we don't do something now.
I'm not going to get into the nuances of what quantum computing is. (We have an excellent explainer for that.) For the purpose of this article, think of quantum computers as insanely faster than our current machines.
Now, think of cryptography. Lots of encryption methods resist brute-force attacks; deciphering them can take thousands of years. But what happens when you have a computer a million times faster than you did last year? Suddenly a problem that takes a thousand years to solve can be cracked in less than nine hours.
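The back-of-the-envelope arithmetic is easy to check. These are illustrative numbers only (a real cryptanalysis estimate depends on the algorithm and the attack, not just raw speed), but they show how a million-fold speedup collapses the timescale:

```python
# Rough arithmetic behind the brute-force claim: a job that takes
# 1,000 years on today's hardware, rerun on a machine one million
# times faster. Illustrative numbers, not a cryptanalysis model.
HOURS_PER_YEAR = 365.25 * 24              # 8,766 hours

classical_hours = 1_000 * HOURS_PER_YEAR  # 8,766,000 hours
speedup = 1_000_000

quantum_hours = classical_hours / speedup
print(round(quantum_hours, 1))            # prints 8.8
```

A thousand-year problem becomes an overnight job. That is the heart of the post-quantum worry.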
Also: IBM promises a 4,000 qubit quantum computer by 2025: Here's what it means
We do have a few years before your run-of-the-mill crook gains access to quantum computing tech. But nation-states? You can bet enemy and rogue nations are looking into this stuff right now.
So what happens to all your encryption when the enemy has a way of compressing time? Gartner estimates that by 2029, most current forms of cryptography will be unsafe to use. The firm strongly recommends deeper research into building cryptography techniques that can survive in a world where quantum computing is available.
Here's another trend that can give you a bit of a queasy feeling, but also can prove to be enormously helpful. The principle behind ambient invisible intelligence is that your home, work environment, retail environment -- any place, really -- is filled with smart tags and sensors, and then managed by AI.
The idea is to infuse systems with awareness, whether that's awareness of buying behavior, traffic flow, or simply turning on the light as you walk down a dark hallway at night.
Through 2027, Gartner sees this as mostly focused on practical retail and warehouse applications, although smart home geeks like me will undoubtedly deploy all sorts of neat autonomous gadgets that annoy our families and freak out the dog.
Alphabet (Google's parent) chairman John Hennessy told Reuters that a query into a large language model AI like ChatGPT or Google's Gemini costs 10 times as much as a typical Google search.
According to a study published in the academic journal Joule, AI-related workloads are expected to consume between 85.4 and 134.0 TWh of electricity annually by 2027. For comparison, Finland uses only 81.0 TWh a year, and Norway less than 132.0 TWh.
It's no wonder Gartner contends that sustainability will be a big focus in the coming year. The analyst firm says that new technologies such as the aforementioned optical, neuromorphic, and novel accelerators may use substantially less energy.
Also: Making GenAI more efficient with a new kind of chip
There is one statement in Gartner's announcement that I just don't find fully credible. They say, "In 2024 the leading consideration for most IT organizations is their carbon footprint." Nope, I don't think so. Not the leading consideration. With the boom in AI, the ongoing extreme nature of cyberthreats, and just the need to get solutions deployed, it's unlikely that IT organizations can be characterized as making their carbon footprint their top priority. I just don't buy it.
Maybe it should be. But it isn't.
Ten years ago, when we talked about hybrid computing, we were referring to some mix of on-premises computing and cloud computing.
Today, what Gartner is referring to is again those new technologies I introduced at the beginning of this article, along with mixes in processor types, different storage and network approaches, and other specialized considerations.
Going forward, Gartner is saying, data centers won't simply look like racks of basic servers, but will be a mix of a wide range of technologies, deployed based on need and performance requirements.
There is no doubt that spatial computing (VR, AR, mixed reality, and the like) is becoming a thing.
Meta is blasting out its low-cost Quest headsets to consumers. Apple's Vision Pro, while not a success at its over-the-top price point, is still a powerful concept prototype for the future of spatial computing.
Gartner sees spatial computing exploding in the next ten years, jumping from a $110 billion market to over $1.7 trillion by 2033.
Also: XR, digital twins, and spatial computing: An enterprise guide on reshaping user experience
Expect to see adoption in vertical solutions, where the headsets solve specific professional problems. Then there's the whole virtual monitor and entertainment center application, which could replace people's need for large TVs (especially for those who travel or live in tight quarters) and for big monitors for computing use.
Stay tuned. It's still not comfortable to wear the big heavy goggles. But if Meta's Orion project reaches fruition sometime soon, AR could suddenly become really compelling.
Today, most robots do one task, and do it well. I have an army of 3D printers in the Fab Lab, and they create plastic objects. I have another set of robots that move cameras on arcs (one robot) or linearly (another set of robots).
Many of us have little robots that vacuum our floors. But I still don't have a robot that will bring me a cup of coffee.
Also: From automated to autonomous, will the real robots please stand up?
While Gartner seems loath to describe humanoid robots, its description of polyfunctional robots is simple: machines that have the capability to do more than one task.
It doesn't really define the form those robots will take, or what kinds of tasks they will perform, but it estimates that 80% of people in 2030 will "engage with smart robots on a daily basis."
No. Not a chance. I don't buy this one at all. Gartner claims that one of the trends to watch is the use of technologies that "read and decode brain activity" to improve human cognitive abilities. This will be done with BBMIs (bidirectional brain-machine interfaces).
Gartner frames these as neurological enhancements and claims that, by 2030, 30% of knowledge workers will be "enhanced by, and dependent on, technologies such as BBMIs (both employer- and self-funded) to stay relevant with the rise of AI."
Also: AI desperately needs a hardware revolution and the solution might be inside your head
Yeah, no. The closest we might get is hanging VR bricks off our faces, and even that has very low uptake compared to most other productivity technologies. It's far less likely that users will accept wearable -- let alone implanted -- electrodes to detect brain signals.
Not going to happen.
Did we cover all the future trends you expect for 2025? I was surprised to find no mention of smart cars or smart cities, little about programming automation, no real mention of biotechnology or healthcare, and little detailed focus on anything related to green energy.
What trend are you most excited by? What worries you the most? What did Gartner leave out? Let us know your thoughts in the comments below.