In a new PwC survey, 73% of US business and technology executives report that their organizations are already using or planning to adopt generative AI. Risk management is lagging behind, however: only 58% have begun assessing the risks associated with AI deployment. This gap between adoption and risk preparedness raises concerns about the long-term sustainability of AI technologies in the corporate world.
Surge in AI Adoption
The PwC survey, which gathered responses from 1,001 executives across the United States, reflects the rapid pace at which businesses are embracing generative AI. The technology is being applied across sectors, from customer service automation to product development and content creation.
“We’re now seeing large-scale adoption of generative AI,” said Jenn Kosar, US AI Assurance Leader at PwC. “Six months ago, companies could get away with deploying AI projects without fully considering responsible AI strategies. But now, that’s no longer acceptable.”
Kosar emphasized that the early AI pilot projects within organizations, typically limited to small internal teams, have laid the groundwork for responsible AI strategies. As AI systems are rolled out on a larger scale, enterprises must assess what works best and establish responsible AI frameworks to ensure safe and ethical deployment.
Responsible AI: A Growing Priority
The PwC survey highlights that while companies are quick to adopt AI technologies, fewer are dedicating equal attention to responsible AI practices, which include safeguarding against risks such as data privacy violations, algorithmic bias, and cyberattacks.
PwC defines responsible AI as encompassing three core principles: value, safety, and trust. These elements should be integrated into an organization’s risk management process. However, many companies are struggling to fully implement these safeguards, with only 11% of respondents claiming to have made significant progress across all 11 responsible AI capabilities identified by PwC.
These capabilities include:
- Upskilling employees
- Embedding AI risk specialists
- Periodic training
- Data privacy protocols
- Data governance frameworks
- Cybersecurity enhancements
- Model testing
- Model management
- Third-party risk management
- Specialized software for AI risk management
- Monitoring and auditing of AI systems (see the sketch below)
Kosar expressed skepticism about the self-reported progress of some companies. “We suspect many are overestimating how far they’ve come,” she said. Data governance and cybersecurity, in particular, pose significant challenges. Legacy cybersecurity measures may not be robust enough to protect AI models from attacks such as model poisoning, in which an attacker tampers with training data or model parameters to corrupt the system’s behavior.
Risks Come into Focus Following Controversial xAI Launch
The issue of responsible AI has become more prominent in recent weeks, following the launch of xAI’s image generation service via its Grok-2 model on X (formerly known as Twitter). Early users of the service have reported that the model appears to lack adequate restrictions, enabling the creation of controversial and inflammatory content, including deepfakes of public figures.
The example of xAI underscores the need for comprehensive risk assessments and responsible AI practices across the board. Without these safeguards, organizations risk not only reputational damage but also legal and regulatory repercussions as AI technologies become more pervasive.
Accountability Is Key
One of the primary recommendations from PwC to organizations deploying AI systems is to establish clear ownership and accountability for AI projects. Kosar noted that one of the main challenges identified in the survey is a lack of clear leadership when it comes to responsible AI deployment.
“Companies need to have a designated leader, such as a chief AI officer or a responsible AI leader, who is responsible for overseeing AI safety and ensuring that AI deployments are aligned with business processes and ethical standards,” she said.
By integrating AI safety into broader operational and technological risk management frameworks, companies can better safeguard themselves against the potential dangers of unchecked AI systems.
The Commercial Value of Responsible AI
Interestingly, PwC’s findings suggest that organizations are starting to see responsible AI as more than just a safeguard—many respondents believe it adds commercial value. Companies are increasingly recognizing that responsible AI practices can serve as a competitive advantage, grounding their services in trust and transparency.
“Responsible AI isn’t just about mitigating risk; it can also be a value creator,” Kosar said. “Businesses are beginning to understand that building AI systems rooted in trust and transparency enhances their reputation and strengthens relationships with customers.”
As companies continue to explore and implement generative AI technologies, the focus on responsible AI will likely become a critical factor in shaping the future of AI adoption. Ensuring that AI systems are safe, ethical, and transparent will not only protect organizations but also allow them to harness AI’s full potential while maintaining trust with stakeholders.
Conclusion: A Call to Action for Responsible AI
As the adoption of generative AI accelerates, the need for responsible AI strategies becomes more pressing. PwC’s survey highlights the importance of balancing AI innovation with robust risk management practices. Companies that fail to address AI risks may find themselves vulnerable to reputational damage, regulatory scrutiny, and operational disruptions.
For businesses, the time to act is now. AI adoption is no longer limited to pilot projects; it’s becoming an integral part of business operations. By prioritizing responsible AI, companies can ensure they harness the benefits of AI while protecting themselves from its potential risks.