AI in 2025: Key Considerations for Technology Leaders
Is your organization ready to make the most of AI in 2025?
- By Udo Sglavo
- January 21, 2025
As the year kicks off, organizations across the globe are continuing to pour resources into AI. According to reporting from IDC, AI is set to have a $20 trillion impact on the global economy, leading to more resilient supply chains through better demand forecasting, smarter urban planning that optimizes energy use, and improved disaster response via smart alerting. Of course, the possibilities for the technology are wide-ranging, but certain benefits, like expedited drug trials, might not be realized for many years.
Think back to the digital transformation wave of the early 2000s. Companies that embraced the internet, digitized their processes, and invested in e-commerce became the tech behemoths of today, like Google. Organizations that hesitated or followed the wrong adoption path either adapted too late or disappeared entirely. Similarly, organizations that fail to act now will find it increasingly difficult to compete in the generative AI-powered economy. Generative AI is not just another trend. It’s the next leap in business evolution, and the organizations that understand this and move decisively will be the ones shaping the future. Leaders should understand a few core technology considerations this year, as each will affect their success in deploying AI in 2025 and beyond.
Riding the ripples after the AI tsunami
While research and investment in AI continue, it’s important to note that the generative AI hype cycle is coming back down to earth. Generative AI will remain exciting, but leaders must pivot from pure enthusiasm about AI’s possibilities to the business of delivering real value for their clients. That happens by simplifying approaches, rules, and models, and complementing them with the targeted use of LLMs.
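As one hypothetical illustration of what that combination can look like in practice, the sketch below routes the bulk of a workload through simple rules and reserves an LLM call (here a placeholder function) for the cases the rules cannot handle. The ticket-routing task, the keyword rules, and the `call_llm` helper are assumptions made for the example, not anything prescribed in this article.

```python
# Minimal sketch of "simplify first, use LLMs in a targeted way": cheap deterministic
# rules handle the common cases, and only the ambiguous remainder reaches a model.
# The routing task, keyword rules, and call_llm helper are illustrative placeholders.

ROUTING_RULES = {
    "refund": "billing",
    "invoice": "billing",
    "password": "account-security",
    "login": "account-security",
}

def call_llm(ticket: str) -> str:
    # Stand-in for a real LLM request; in practice this would call a hosted or local model.
    return "needs-llm-triage"

def route_ticket(ticket: str) -> str:
    lowered = ticket.lower()
    for keyword, queue in ROUTING_RULES.items():
        if keyword in lowered:
            return queue          # simple rule covers the common case at near-zero cost
    return call_llm(ticket)       # targeted LLM use, reserved for the long tail

print(route_ticket("I was charged twice on my last invoice"))   # -> billing
print(route_ticket("The mobile app crashes when I rotate it"))  # -> needs-llm-triage
```

The design point is cost discipline: every request that a rule or small model can resolve is one that never consumes LLM capacity.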
In 2025, technology leaders will see AI and LLMs become commoditized, and the real value will shift to specialized services and domain-specific applications and agents built on top of these models. Simultaneously, the rise of open-source LLMs will challenge the dominance of a few key providers, driving a more decentralized AI landscape in which customization and integration are the key differentiators. IT heads will see leaders emerge across verticals, industries, and specialties, allowing each enterprise to choose an AI partner that best complements and serves its specific AI strategy.
In addition to a more diverse landscape of companies leading the AI charge, technology professionals will also have smaller, simpler, and easier-to-deploy options to choose from in terms of the models themselves. Small language models will prove a popular choice for businesses needing a more targeted approach than large language models offer, such as for copyediting or product quality assessment. That said, IT teams will likely still interface with a few larger players in the AI market (like Nvidia), but there is ample opportunity for other companies to build segmented, smaller, and perhaps better-suited models.
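As a rough illustration of what such a targeted deployment can look like, the sketch below uses the Hugging Face transformers library and a small distilled sentiment classifier to flag negative product feedback. The library, model name, and sample reviews are illustrative assumptions, not recommendations from this article.

```python
# Minimal sketch: a small, task-specific model instead of a general-purpose LLM
# for a narrow job such as flagging low-quality product feedback.
# Assumes the Hugging Face `transformers` library; the model choice is illustrative.
from transformers import pipeline

# A distilled classifier is orders of magnitude smaller than a frontier LLM,
# which keeps inference cheap enough to run on commodity hardware.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

reviews = [
    "The replacement part arrived cracked and would not fit the housing.",
    "Setup took five minutes and the device has run flawlessly since.",
]

for review, result in zip(reviews, classifier(reviews)):
    # Each result contains a label (POSITIVE/NEGATIVE) and a confidence score.
    flag = "investigate" if result["label"] == "NEGATIVE" else "ok"
    print(f"{flag:>12} ({result['score']:.2f}): {review}")
```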
Sustaining breakneck AI development
In addition to more nuanced modeling and use cases for AI technology, business leaders should also be aware of the environmental impacts of their LLM research and usage. In the rush to build and adopt AI, companies are creating sometimes inefficient models that consume vast amounts of cloud resources, driving up operational costs and enlarging the carbon footprint of their organizations and customers. Unfortunately, the scale and speed at which AI is being adopted will contribute significantly to greenhouse gas emissions, driven by the massive amounts of energy needed to run models and store their data.
In addition to ongoing efforts to explore alternative energy sources, such as the nuclear power deals Google, Microsoft, and Amazon have pursued to supply their AI workloads, companies must ensure that they are building models with efficiency in mind. Optimizing AI models reduces cloud costs and minimizes environmental impact, and it’s not just the responsibility of hardware providers and hyperscalers to put in the work. For users, investors, and partners, choosing a model or an AI organization as a partner must also involve evaluating the sustainability of that model or business. Just as the home appliance and auto industries made huge advancements in energy efficiency, the AI industry must make its models more efficient.
Technology leaders should treat speed and algorithmic efficiency as critical levers for reducing cloud consumption. Greater efficiency in AI model development, made possible by cloud-optimized data and AI platforms, will help reduce unnecessary duplication and waste and minimize energy consumption.
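One illustrative example of such an efficiency lever, assuming a PyTorch environment and a toy network standing in for a deployed model, is post-training dynamic quantization, which shrinks a model’s storage footprint and typically lowers CPU inference cost:

```python
# Minimal sketch of one efficiency lever: post-training dynamic quantization in PyTorch.
# The toy network and size comparison are illustrative stand-ins for a real deployed model.
import io

import torch
import torch.nn as nn

def serialized_size_mb(model: nn.Module) -> float:
    """Approximate a model's storage footprint by serializing its weights in memory."""
    buffer = io.BytesIO()
    torch.save(model.state_dict(), buffer)
    return buffer.getbuffer().nbytes / 1e6

# A small stand-in network; real workloads would load a trained model instead.
model = nn.Sequential(
    nn.Linear(1024, 2048),
    nn.ReLU(),
    nn.Linear(2048, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
)

# Dynamic quantization converts Linear weights from 32-bit floats to 8-bit integers,
# shrinking the model and typically speeding up CPU inference with little accuracy loss.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

print(f"fp32 size: {serialized_size_mb(model):.1f} MB")
print(f"int8 size: {serialized_size_mb(quantized):.1f} MB")
```

Techniques like this are one way teams can cut duplication and compute waste without changing what the model does for the business.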
Limiting factors on AI development
There are outside forces that will also affect the prevalence and success of LLMs in 2025. One of those is regulation. Regulation is important, and its guardrails keep AI in check; however, it can also make it challenging for businesses to use purely open-source models in their innovations. Experimentation and advances take a hit, and silos may crop up. AI leaders need to stay current on the impact in their regions, ensuring they comply with regulatory rules while maximizing their innovation. Companies that ignore the impending wave of AI regulations do so at their own peril. By not incorporating compliance into their AI strategies now, they risk future legal challenges, fines, and loss of customer trust. This short-sightedness could lead to an industry-wide setback.
Another factor for IT leaders to consider in the next twelve months is the skills needed to support their AI programs. A lack of AI skills in a company’s workforce will affect many things, from the ethical use of AI technology to employees’ ability to advance in their careers. If leaders view AI expertise as the next currency separating the haves from the have-nots, investing in data literacy in 2025 is a critical step toward an AI-fluent future workforce. Training programs and partnerships with reputable data and AI providers will help ensure that the AI surge isn’t stalled by an AI skills gap.
Fully AI-enabled organizations are the ones that will win the IT battles of 2025, so leaders should be preparing for all potential challenges, especially as they begin to adopt AI across their businesses. As this technology evolves from a “shiny new toy” into just another feature of business, organizations will fully operationalize AI to automate routine tasks, freeing employees for higher-value work. Those automations mean they’ll make decisions faster, recognize opportunities more quickly, and drive more innovation than their competitors. In short: they’ll win.
About the Author
Udo Sglavo leads Applied Artificial Intelligence and Modeling Research and Development at SAS. With a 25-year track record of fostering technology innovation and excellence, Udo heads a team of expert developers and data scientists dedicated to pioneering cutting-edge software and leveraging advanced models to transform the way the world works.