Thought Leaders
Move Fast, But Don’t Break Things: How To Balance Responsible AI Adoption And Innovation

According to a recent global survey from McKinsey, even though 78% of organizations now use AI in at least one business function, only 13% have hired AI compliance specialists, and a scant 6% have AI ethics specialists on staff.
This is, frankly, reckless behavior.
Though in my not-too-distant past I was a big believer in the “move fast and break things” ethos of Silicon Valley, we cannot afford to be so carefree with AI – a technology that is more powerful than anything we’ve seen before and growing at light speed.
Adopting AI with no meaningful guardrails is exactly the kind of fast-moving corner-cutting that eventually backfires and risks breaking everything. It only takes one incident of AI bias or misuse to undo years of reputational brand building.
And though many CIOs and CTOs are aware of these risks, they seem to be operating under the assumption that regulators will eventually step in and save them from establishing their own frameworks, resulting in a whole lot of talk about risk with very little actual oversight.
While I have no doubt that regulations will eventually come, I’m less confident they will be established any time soon. ChatGPT was introduced roughly three years ago now, and we’re only just starting to see things like the Senate Judiciary Committee hearing on chatbots and their safety risks take place. The reality is, it could be years before we see any meaningful regulation.
Rather than treating this as an excuse to procrastinate on internal governance, businesses should see it as all the more reason to take a proactive approach, especially since companies without their own frameworks will be scrambling to retrofit compliance when regulations do finally arrive. That is precisely what happened when GDPR and CCPA were enacted.
Just like the scrappy startups of the early aughts are now held to higher standards as the corporate tech giants they’ve grown into, we collectively have to mature in our approach to adopting AI responsibly.
There is no “buy now, pay later” with responsible AI deployments – start now
The first step in a more responsible approach to AI is to stop waiting for regulators and set your own rules. Whatever head start you may think you’re gaining by avoiding safeguards today will only come back to bite you in the future, when you’re faced with the extremely expensive and disruptive process of retrofitting.
Of course, for many, the problem is not knowing where to begin. My company recently surveyed 500 CIOs and CTOs at large enterprises and nearly half (48%) cited “determining what constitutes responsible use or deployment of AI” as a challenge to ensuring ethical AI use.
One easy place to start is to expand your focus beyond just the features made possible by AI and also consider the possible risks. For example, though AI use may save employees time, it also opens up the possibility of huge amounts of Personally Identifiable Information (PII) or trade secrets being shared with unlicensed and unapproved LLMs.
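To make that risk concrete, below is a minimal sketch of an outbound-prompt check that redacts likely PII before anything is sent to an external model. The patterns, function name, and placeholders are illustrative assumptions, not a production-grade detector.

```python
import re

# Illustrative patterns only; a real deployment would use a vetted PII
# detection library and cover far more categories (names, addresses, etc.).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_pii(prompt: str) -> tuple[str, list[str]]:
    """Replace likely PII with placeholders before a prompt leaves the company.

    Returns the redacted prompt and the PII categories found, which can be
    logged to feed the governance metrics discussed below.
    """
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt, findings

safe_prompt, hits = redact_pii(
    "Summarize the account notes for jane.doe@example.com, SSN 123-45-6789."
)
print(safe_prompt)  # PII replaced with placeholders
print(hits)         # ['email', 'us_ssn']
```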
Any digital company today is familiar with the Software Development Life Cycle (SDLC), which provides a framework for building quality products. AI governance best practices should be embedded in that day-to-day workflow to ensure that responsible decision making becomes part of the routine, not an afterthought.
A governing body, such as an ethics committee or governance board, should be established to define what acceptable applications of AI look like within the organization, along with the metrics used to monitor and maintain that standard. Functionally, this looks like AI tooling and model governance, solution approvals, risk management, regulatory and standards alignment, and transparent communication. Though technically it may be a “new” process, it isn’t very different from existing data governance and cybersecurity practices, and much of it can be automated to ensure early detection of any issues.
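As a sketch of what automating part of that workflow could look like, the check below could run in the existing CI/CD pipeline and flag unapproved models or missing reviews before release. The registry contents, sign-off names, and fields are assumptions invented for the example.

```python
from dataclasses import dataclass

# Hypothetical registry and sign-off list maintained by the governance board.
APPROVED_MODELS = {"internal-summarizer-v2", "vendor-chat-model-2024"}
REQUIRED_SIGNOFFS = {"privacy", "security", "bias"}

@dataclass
class AISolution:
    name: str
    model: str
    signoffs: set[str]

def governance_gate(solution: AISolution) -> list[str]:
    """Return blocking issues; an empty list means the release can proceed."""
    issues = []
    if solution.model not in APPROVED_MODELS:
        issues.append(f"model '{solution.model}' is not on the approved registry")
    missing = REQUIRED_SIGNOFFS - solution.signoffs
    if missing:
        issues.append(f"missing sign-offs: {sorted(missing)}")
    return issues

# Run in CI: anything returned here fails the build and surfaces the issue early.
print(governance_gate(AISolution("support-copilot", "shadow-llm", {"privacy"})))
```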
Of course, not all risks require the same level of attention, so it is also important to develop a tiered risk management process so your team can focus the majority of their efforts on what has been defined as high priority.
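One way to operationalize that tiering, sketched below, is to score each proposed use case on a handful of factors and map the score to a review tier. The factors, weights, and cut-offs are invented for illustration; every organization would define its own.

```python
# Illustrative scoring factors and weights; these are assumptions, not a standard.
RISK_FACTORS = {
    "handles_pii": 3,
    "customer_facing": 2,
    "automated_decisions": 3,  # model output acts without a human in the loop
    "regulated_domain": 3,     # e.g. credit, hiring, healthcare
    "external_model": 1,       # relies on a third-party LLM
}

def risk_tier(use_case: dict) -> str:
    """Map a proposed AI use case (factor name -> bool) to a review tier."""
    score = sum(weight for factor, weight in RISK_FACTORS.items() if use_case.get(factor))
    if score >= 6:
        return "high: full ethics-committee review before build"
    if score >= 3:
        return "medium: standard governance checklist plus peer review"
    return "low: self-service glide path with automated checks only"

print(risk_tier({"handles_pii": True, "automated_decisions": True}))  # high tier
```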
Finally, and most crucially, clear and transparent communication about governance practices, both internally and externally, is paramount. This includes maintaining living documents for governance standards and providing ongoing training to keep teams updated.
Stop treating governance as a threat to innovation
It’s very possible that the real threat to responsible AI is the belief that governance and innovation are at odds with one another. In our survey, a whopping 87% of CIOs and CTOs felt too much regulation would limit innovation.
But governance should be treated as a strategic partner, not some kind of innovation brake pad.
One reason governance is seen as a force of friction that slows momentum is that it’s often left until the end of product development, when guardrails should be part of the process all along. As mentioned above, governance can be built into sprint cycles so that a product team can move quickly while automated checks for fairness, bias, and compliance run in parallel. Long term, this pays off, as customers, employees, and regulators feel more confident when they see responsibility built in from the start.
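As one example of such a parallel check, the snippet below computes a simple demographic parity gap on a model’s held-out decisions and fails the build if the gap exceeds an agreed threshold. The metric, threshold, and sample data are illustrative only; the right fairness measure depends on the use case.

```python
def positive_rate(outcomes: list[int]) -> float:
    """Fraction of favourable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group: dict[str, list[int]]) -> float:
    """Largest difference in favourable-outcome rates between any two groups."""
    rates = [positive_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

def fairness_gate(outcomes_by_group: dict[str, list[int]], max_gap: float = 0.10) -> None:
    """Fail the build if the parity gap exceeds the agreed threshold."""
    gap = demographic_parity_gap(outcomes_by_group)
    assert gap <= max_gap, f"fairness check failed: parity gap {gap:.2f} exceeds {max_gap:.2f}"

# Example: model decisions on a held-out test set, grouped by a protected attribute.
fairness_gate({
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],
    "group_b": [1, 0, 1, 1, 0, 1, 0, 1],
})
```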
And this has been proven to reap financial rewards. Research has shown that organizations with well-implemented data and AI governance frameworks experience a 21-49% improvement in financial performance. A failure to establish these frameworks, however, also comes with its own consequences. According to that same study, by 2027, a majority of organizations (60%) will “fail to realize the anticipated value of their AI use cases due to incohesive ethical governance frameworks.”
One caveat to the argument that governance does not have to come at the expense of innovation: involving legal teams in these conversations does tend to slow things down. In my experience, however, establishing a Governance, Risk, and Compliance (GRC) team goes a long way toward keeping things running smoothly and quickly by serving as a bridge between the legal and product teams.
When managed well, the GRC team builds positive relationships with the legal team, serving as their eyes on the ground and getting them the reports they need, while also collaborating with the development team to mitigate future risks of lawsuits and fines. Ultimately, this further reinforces that investing in governance early on is the best way to ensure it does not interfere with innovation.
Create oversight and governance systems that can scale
Despite so many of the surveyed CIOs and CTOs feeling that regulations could limit innovation, a similarly large percentage (84%) expected their company would increase AI oversight in the next 12 months. Considering the likelihood that AI integrations continue to expand and scale over time, it’s equally important that governance systems can scale along with them.
Something I see often in the earlier stages of enterprise AI implementations is different business units working in silos, running separate deployments concurrently and with differing visions of what “responsible AI” entails. To avoid these inconsistencies, companies would be wise to establish a dedicated AI Center of Excellence that blends technical, compliance, and business expertise.
The AI Center of Excellence would establish both company-wide standards and tiered approval processes with a glide path for low-risk use cases. This, in turn, maintains speed while ensuring that high-risk deployments go through more formal safety checks. Similarly, the Center of Excellence should set AI safety KPIs for top executives so that accountability doesn’t get lost in day-to-day business functions.
But to make this a reality, executives need better visibility into governance indicators. Dashboards that serve real-time data on these indicators would be far more effective than the current norm of static compliance reports, which go stale immediately and so often go unread. Ideally, companies would also build AI risk registers, much as they already track cybersecurity risks, and keep audit trails that record who built an ML/AI implementation, how it was tested, and how it’s performing over time.
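To make that concrete, here is a rough sketch of what one risk-register entry and its audit trail might look like. The field names and values are hypothetical, not a standard schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelAuditRecord:
    """One entry in a hypothetical AI risk register."""
    system_name: str
    owner: str                   # who built or is accountable for the system
    risk_tier: str
    tests_performed: list[str]   # e.g. ["bias evaluation", "red-team review"]
    approved_by: str
    deployed_at: datetime
    monitoring_metrics: dict = field(default_factory=dict)  # refreshed over time

    def log_metric(self, name: str, value: float) -> None:
        """Append a monitoring reading so performance over time stays auditable."""
        self.monitoring_metrics.setdefault(name, []).append(
            (datetime.now(timezone.utc).isoformat(), value)
        )

record = ModelAuditRecord(
    system_name="claims-triage-assistant",
    owner="ml-platform-team",
    risk_tier="medium",
    tests_performed=["bias evaluation", "privacy review"],
    approved_by="governance-board",
    deployed_at=datetime(2025, 1, 15, tzinfo=timezone.utc),
)
record.log_metric("parity_gap", 0.04)
```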
The most important takeaway here is that responsible AI requires governance to be an ongoing process. It’s not just about approvals at launch, but continuous monitoring throughout the lifecycle of the model. As such, training is key. Developers, technologists and business leaders should be trained in responsible AI practices so they can spot problems early on and maintain high standards of governance as systems evolve. In doing so, AI deployments are sure to be more trustworthy, effective, and profitable – without having to break anything in the process.