Q&A: Donald Thompson, Distinguished Engineer, LinkedIn
(Editor's Note: We conduct regular Q&As with technology leaders as part of our Cloud Tracker Pro service. We are offering this Q&A for a limited time, before it goes behind the CTP paywall.)
Donald Thompson is currently a Distinguished Engineer at LinkedIn, primarily overseeing the company's generative AI strategy, architecture, and technology. He has more than 35 years of hands-on experience as a technical architect and CTO, with an extensive background in designing and delivering innovative software and services on a large scale. In 2013, Donald co-founded Maana, which pioneered computational knowledge graphs and visual no-code/low-code authoring environments to address complex AI-based digital transformation challenges in Fortune 50 companies.
During his 15 years at Microsoft, Donald started the Knowledge and Reasoning group within Microsoft's Bing division, where he created "Satori," an Internet-scale knowledge graph constructed automatically from the web crawl. He co-founded a semantic computing incubation funded directly by Bill Gates, portions of which shipped as the SQL Server Semantic Engine. Additionally, he created Microsoft's first Internet display ad delivery system and led numerous AI/ML initiatives in Microsoft Research across embedded systems, robotics, wearable computing, and privacy-preserving personal data services.
Futuriom conducted this interview via email; the responses were delivered in early January 2025.
Donald Thompson.
Futuriom: You've written about how enterprises are shifting from the digitization of the cloud era to an era of cognitive transformation based on GenAI. How far along are we in the timeline to cognitive transformation?
DT: The integration of generative AI into business operations is just beginning. We're all in the early phase of a decade-long shift in how companies operate and make decisions.
Currently, most businesses are using AI for relatively straightforward tasks like writing assistance, summarization, and customer support. While these applications provide value, they don't yet represent the fundamental transformation of business operations that cognitive technologies promise.
The real transformation will happen when AI becomes central to core business processes. This level of integration isn't happening widely yet - most companies are still in an experimentation phase, trying to understand where AI can provide the most value. It's important to note that successful AI implementation builds on existing data analytics and cloud computing capabilities. Companies that haven't mastered these foundational technologies will face significant challenges in their AI journey.
Looking ahead, the next few years will likely see accelerated AI adoption as more organizations move from experimentation to implementation. Companies that successfully integrate AI into their core operations will gain significant advantages - not just in efficiency, but in their ability to reimagine business models and decision-making processes.
Futuriom: What are the biggest barriers to GenAI in enterprises today?
DT: The two most significant barriers enterprises face with generative AI adoption are interrelated: a bootstrapping problem and misalignment between technical teams and business leadership.
The bootstrapping problem is common - without clear success examples, most organizations remain cautious. This creates a self-reinforcing cycle where businesses want evidence of tangible benefits and proven use cases before committing resources, yet these proof points remain scarce precisely because few organizations are willing to be early adopters. While this hesitation is understandable, it can be detrimental as the gap between early adopters and laggards continues to widen.
The second major barrier lies in the misalignment between technical teams and business leadership in executing AI strategies. Even organizations with strong AI talent often lack the internal frameworks to effectively utilize AI capabilities. Technical teams may pursue interesting AI projects that don't align with business priorities, while business leaders may set unrealistic expectations without understanding technical constraints.
Breaking through these barriers requires both bold action and strategic alignment. Organizations that move decisively while building strong bridges between technical and business teams will be best positioned to realize the transformative potential of generative AI.
Futuriom: Once projects are undertaken, what are some of the biggest mistakes enterprises make when it comes to adopting GenAI?
DT: Based on early observations of GenAI adoption, three critical mistakes consistently emerge in enterprise implementations.
First, organizations often underestimate the depth of organizational change required. Successful GenAI adoption isn't just about implementing new technology - it requires rethinking business processes, decision-making structures, and sometimes entire business models. Without this broader transformation mindset, organizations risk creating isolated AI initiatives that fail to deliver meaningful business value.
Second, many organizations rush into implementation without establishing the necessary data foundation. Clean, well-governed data infrastructure is essential for effective AI systems. Companies often discover too late that their data quality, accessibility, or governance frameworks aren't sufficient to support their AI ambitions.
Third, there's frequently a disconnect in how organizations approach human-AI collaboration. Some companies overestimate current AI capabilities, leading to unrealistic expectations and poorly designed initiatives. Others underinvest in training and change management, leaving their workforce ill-prepared to effectively work alongside AI systems.
The key to avoiding these mistakes lies in taking a measured, strategic approach that balances ambition with practical considerations. This means investing in foundational capabilities, focusing on clear business outcomes, and ensuring your organization is ready to adapt to new ways of working.
Futuriom: Do you think GenAI is taking a bit of the thunder, and budgets, from other necessary projects?
DT: While GenAI's transformative potential warrants significant attention and investment, it shouldn't come at the expense of other critical business initiatives. The key is viewing GenAI as part of a holistic technology strategy rather than a standalone effort.
GenAI often complements and enhances existing initiatives, potentially accelerating digital transformation, improving data analytics, and enhancing cybersecurity. Rather than competing with essential projects, it can serve as a catalyst for broader technological advancement. For example, investments in data infrastructure for GenAI can benefit multiple business functions beyond AI applications.
Forward-thinking organizations need a strategic approach to technology adoption and resource allocation. This requires carefully evaluating how GenAI fits into the broader technology landscape and supports overall business objectives. It's crucial to maintain investments in foundational technologies and infrastructure since they form the backbone for successful AI initiatives.
Futuriom: We hear that some GenAI use cases will depend heavily on edge deployment. Do you see this happening? If so, why?
DT: The reality of GenAI deployment is likely to be more nuanced than a simple shift to edge computing. What is emerging is a hybrid approach that balances the benefits of edge deployment with the need for centralized processing and data management.
Edge deployment of GenAI makes sense for specific use cases, particularly those requiring low latency or high privacy. Local processing can be valuable for immediate decision-making and situations where data privacy is paramount. However, many enterprise use cases require interaction with centralized systems of record and complex orchestration that may be challenging to fully replicate at the edge.
As models become more sophisticated, computational demands increase. This creates practical limitations for edge-only deployment with current technology. The solution for most enterprises will likely involve distributing AI workloads based on specific requirements: edge devices handling certain AI tasks locally, while cloud or centralized systems manage complex orchestration and access to comprehensive data stores.
The key to success lies in creating seamless integration between edge and centralized components, allowing organizations to leverage the benefits of both approaches while working within current technological constraints.
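(Editor's note: To make the hybrid edge/cloud pattern Thompson describes more concrete, here is a minimal, hypothetical sketch in Python. The request fields, the latency threshold, and the run_local_model and call_cloud_service placeholders are illustrative assumptions, not a description of any specific product or of LinkedIn's systems.)

```python
# Hypothetical sketch: routing GenAI work between an edge device and a
# centralized service, along the lines discussed above. All names and
# thresholds are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class InferenceRequest:
    prompt: str
    contains_sensitive_data: bool  # e.g., data that must stay on-device
    max_latency_ms: int            # latency budget for the response


def run_local_model(request: InferenceRequest) -> str:
    """Placeholder for a small model running on the edge device."""
    return f"[edge model] {request.prompt[:40]}"


def call_cloud_service(request: InferenceRequest) -> str:
    """Placeholder for a larger, centrally hosted model with access to
    systems of record and richer orchestration."""
    return f"[cloud service] {request.prompt[:40]}"


def route(request: InferenceRequest) -> str:
    # Keep privacy-sensitive or latency-critical work on the edge;
    # send everything else to the centralized system.
    if request.contains_sensitive_data or request.max_latency_ms < 100:
        return run_local_model(request)
    return call_cloud_service(request)


if __name__ == "__main__":
    print(route(InferenceRequest("Summarize this sensor log", True, 50)))
    print(route(InferenceRequest("Draft a quarterly report outline", False, 2000)))
```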
Futuriom: We hear a lot about private AI—the restriction of AI applications to workloads protected by enterprise-specific infrastructure and security. Is this the future direction of enterprise GenAI?
DT: The shift towards private AI for enterprise generative AI applications is gaining momentum, driven by fundamental business needs around data protection, privacy, and governance. This trend reflects the growing recognition that as AI becomes more central to core business operations, enterprises need greater control over their AI models and data.
Private AI addresses several critical enterprise requirements. It enables organizations to meet regulatory compliance standards, protect intellectual property, and maintain data sovereignty. This control becomes particularly crucial as AI systems integrate deeper into core business processes and handle sensitive information.
However, the future likely involves a hybrid approach that balances privacy and security needs with the benefits of broader AI capabilities. While sensitive workloads may require private AI infrastructure, other applications might benefit from public AI services or industry-specific AI collaborations. The key is developing frameworks that allow enterprises to make appropriate choices based on their specific use cases, security requirements, and business objectives.
Futuriom: If you had to define three emerging market trends in GenAI, what would those be?
DT: First, there is a growing shift toward specialized AI models tailored for specific domains and industries. While large general-purpose models have captured headlines, there's growing recognition that specialized models often prove more efficient and effective for specific business tasks. These purpose-built solutions enable deeper integration of AI into core business processes and often deliver better results than general-purpose alternatives.
Second, AI governance and responsible use have moved to the forefront of enterprise concerns. As organizations deploy AI in more critical business functions, they're investing heavily in frameworks and tools to ensure ethical, transparent, and accountable AI use. This isn't just about compliance - it's about building sustainable, trustworthy AI systems that align with business values and stakeholder expectations.
Third, there is an evolution in how organizations approach human-AI collaboration. The focus is shifting from simply deploying AI tools to thoughtfully designing how humans and AI systems can work together effectively. This involves rethinking workflows, developing new skills, and creating frameworks that leverage the unique strengths of both human expertise and AI capabilities.
Futuriom: What areas of study would you advise young people to focus on in planning for careers in AI?
DT: The AI job market continues to expand, creating diverse opportunities across industries. From a technical perspective, core competencies remain fundamental - including machine learning, data science, and deep learning. However, the field increasingly values professionals who can bridge technical implementation with business application.
Several key areas of focus can help position young professionals for success:
- Technical foundations:
  - Machine learning and data science fundamentals
  - Programming proficiency, particularly in Python and related AI frameworks
  - Understanding of neural networks and deep learning architectures
- Business acumen:
  - Problem-solving and analytical thinking
  - Project management and stakeholder communication
  - Domain expertise in specific industries or business functions
- Emerging skills:
  - AI ethics and governance
  - Human-AI interaction design
  - AI system architecture and integration
The key to long-term success is maintaining adaptability through continuous learning. AI technologies evolve rapidly, and successful professionals will need to stay current with new developments while building expertise in areas where human judgment and creativity add unique value.
Futuriom: Will we see multi-agent AI growing in popularity for enterprise use? If so, when and where?
DT: Multi-agent AI systems represent an emerging frontier in enterprise AI adoption, but their implementation will likely follow a measured, practical path. The immediate opportunity lies in specific use cases where multiple AI systems can work together to handle complex business processes more effectively than single-agent approaches.
Early adoption is happening in areas where workflows naturally involve multiple steps and different types of expertise. For example, in customer service, it’s possible to envision systems where one AI agent handles initial customer interaction, another manages knowledge retrieval, and a third coordinates with backend systems - all working together to resolve customer issues more efficiently.
However, implementing multi-agent systems presents unique challenges. Organizations need robust orchestration capabilities to manage agent interactions, clear governance frameworks to ensure responsible operation, and sophisticated monitoring systems to maintain performance and reliability. These requirements mean that adoption will likely start with well-defined, contained use cases before expanding to more complex applications.
The key to successful implementation lies in taking an incremental approach - starting with specific business problems where multi-agent systems can deliver clear value, then expanding based on learned experience and proven results.
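(Editor's note: As an illustration of the customer-service pattern Thompson outlines, one agent for initial interaction, one for knowledge retrieval, and one for backend coordination, here is a minimal, hypothetical Python sketch. The agent classes, the orchestrator, and the placeholder logic are illustrative assumptions, not LinkedIn's implementation.)

```python
# Hypothetical sketch of a minimal multi-agent customer-service flow.
# Each "agent" is a plain class standing in for an LLM-backed component;
# the orchestrator wires them together. All names are illustrative.


class TriageAgent:
    def classify(self, message: str) -> str:
        """Decide what the customer needs (placeholder logic)."""
        return "billing" if "invoice" in message.lower() else "general"


class KnowledgeAgent:
    def retrieve(self, topic: str) -> str:
        """Fetch relevant knowledge-base content (placeholder lookup)."""
        kb = {
            "billing": "Invoices are issued on the 1st of each month.",
            "general": "See the help center for common questions.",
        }
        return kb.get(topic, "No article found.")


class BackendAgent:
    def act(self, topic: str) -> str:
        """Coordinate with backend systems of record (placeholder action)."""
        return f"Opened a ticket in the {topic} queue."


class Orchestrator:
    def __init__(self) -> None:
        self.triage = TriageAgent()
        self.knowledge = KnowledgeAgent()
        self.backend = BackendAgent()

    def handle(self, message: str) -> str:
        topic = self.triage.classify(message)    # agent 1: initial interaction
        answer = self.knowledge.retrieve(topic)  # agent 2: knowledge retrieval
        action = self.backend.act(topic)         # agent 3: backend coordination
        return f"{answer} {action}"


if __name__ == "__main__":
    print(Orchestrator().handle("I have a question about my invoice."))
```

In practice, the orchestration, governance, and monitoring layers Thompson mentions would sit around this kind of loop; the sketch only shows the basic division of labor among agents.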
Futuriom: Thanks so much for your insights, Donald!