Deep impact: preparing for revolutionary changes in AI 

Geoffrey Webb

Vice President of Product and Portfolio Marketing

04/10/2025
4 min read

In the movie Deep Impact, the world discovers that an unexpected comet is on course to end life as we know it on the planet’s surface. Thankfully, it is also revealed that the US government (and presumably others too) had a plan for just such an event. I won’t spoil the movie if you haven’t seen it (and who can resist Morgan Freeman as the President?), but let’s just say that while not everything turns out for the best, complete disaster is averted.

The emergence of DeepSeek

If you’re in the world of developing and building AI engines, especially at heavily funded players like OpenAI, Microsoft, and Google, these past months may have felt a bit like the plot of that movie. Suddenly, out of the metaphorical blue, came a world-changing arrival that could redraw the entire landscape of AI costs and delivery: DeepSeek.

Despite (or maybe because of) the tight restrictions on the high-end GPUs required to scale AI model training, a team of only a couple hundred developers in China has released (as open source, no less!) an LLM and cognitive engine that appears to deliver very nearly the capabilities of the best existing models at around 1/45th of the cost.

If the initial claims are accurate, and they appear to be, it will suddenly be possible to deliver inference engines that are very nearly as powerful as the best available today at a tiny fraction of the cost¹.

The implications of cost reduction

How they did this is hugely important to AI developers, but the fact that they could do it at all – so quickly, and with such a small team – is what counts here.

ChatGPT is only a handful of years old. The landscape and operating rules of AI capabilities delivered at scale (and the costs associated with delivering them) have barely solidified. This is not just an emerging technology space; it is a *barely formed* space. In other words, the fact that this occurred shouldn’t be a huge surprise – rather, it should be expected. And most importantly, it should be expected *again*.

Planning for dramatic shifts

But if businesses need to adopt AI to keep differentiating their offerings and stay ahead of competitors, how do CIOs and CPOs plan for such potentially dramatic shifts in the costs and capabilities of AI engines?

Clearly, if a business can adopt an AI platform that delivers effectively similar results at 1/45th of the cost, it will want to do so. Hardware, software, and algorithmic advances can rewrite the economics almost overnight, throwing all those carefully considered strategic decisions out the window.

Lessons from early PC networking

This reminds me very much of the early years of PC networking. Building network-connected applications was painful and expensive as there were so many proprietary implementations of TCP/IP stacks, each of which needed to be handled differently.

Eventually, companies began to deliver virtualizing layers that hid the differences, and ultimately Microsoft introduced the Winsock standard, which removed the problem altogether.

While it’s clear that there won’t be a Winsock for AI in the foreseeable future, it should be equally clear that committing to any specific LLM (or SLM) carries an element of risk. A sudden shift in capabilities, driven by the kind of necessity-is-the-mother-of-invention innovation that produced DeepSeek, could leave your AI-enabled product lagging in capabilities or performance, forcing rapid retooling.

The path forward

In short, getting on the AI train is essential, but when the train is moving this fast, it’s going to require some careful planning.

At Conga, AI is so central to delivering revenue management capabilities that investment is only going to continue to grow. We’re building AI tools that can read contracts for you, identify optimum product configurations for sellers, and even self-configure our own products to meet changing customer needs.  

As a result, we want to be able to take advantage of every new AI innovation as it arrives and deliver it to customers without significant re-tooling. Our approach is to insulate the core capabilities of the platform, as far as possible, from the underlying engines (and indeed from the infrastructure on which it all runs). Essentially, we are creating our own virtualizing layer within the platform – the best way we have found to move fast without being tied to a technology that has abruptly become an evolutionary dead end.
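To make the idea concrete, here is a minimal sketch of what such a virtualizing layer can look like. This is an illustration only, not Conga’s actual implementation; the engine names and interface here are hypothetical stand-ins:

```python
from abc import ABC, abstractmethod


class InferenceEngine(ABC):
    """The virtualizing layer: application code depends only on this
    interface, never on any one vendor's SDK or model."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class IncumbentEngine(InferenceEngine):
    """Stand-in adapter for an established hosted LLM. A real adapter
    would call the vendor's SDK here; this stub just echoes."""

    def complete(self, prompt: str) -> str:
        return f"[incumbent] response to: {prompt}"


class ChallengerEngine(InferenceEngine):
    """Stand-in adapter for a cheaper new arrival (a DeepSeek-style
    surprise). Adopting it means adding this class, nothing more."""

    def complete(self, prompt: str) -> str:
        return f"[challenger] response to: {prompt}"


# Engines are resolved from configuration, so a cost or capability
# shift becomes a one-line config change rather than a re-tooling effort.
ENGINES = {"incumbent": IncumbentEngine, "challenger": ChallengerEngine}


def get_engine(name: str) -> InferenceEngine:
    return ENGINES[name]()


if __name__ == "__main__":
    engine = get_engine("challenger")  # in practice, read from config
    print(engine.complete("Summarize this contract clause."))
```

The value is in the seam: because the rest of the platform sees only the abstract interface, the next abrupt shift in the market means writing one new adapter and flipping a configuration value, not rebuilding the product.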

Preparing for continued change

Again, we should expect more of these abrupt and surprising changes in the art of the possible – the widespread use of LLMs is incredibly new, even in an industry that moves as fast as hi-tech – and building resilience into the platforms that rely on these rapidly shifting underlying engines is essential.

Much as Semantic Layers abstract underlying data to present consistent, consumable analytic capabilities, I believe an equivalent virtualizing Inference Layer would be welcome – even if it is unlikely to arrive in the near future.

Whatever happens, businesses that build these models into their processes, or into their product and service offerings, have to be ready to make rapid and perhaps significant technology shifts. It won’t be easy. If the likes of Microsoft and Google were caught flat-footed by this, it will be that much harder for other organizations to be ready – but ready is what they will need to be.

To misquote that famous saying: “Surprise me once, shame on you. Surprise me twice, shame on me.”

Expect to be surprised.

The views and opinions expressed in this blog post are those of the author and do not necessarily reflect the official policy of any affiliated organization. The advancements in AI technology and their implications discussed herein are subject to change. Readers are encouraged to conduct their own research and consider their unique circumstances before making any decisions based on the content of this post.
