Over the past 5-6 months, I’ve been diving deep into the world of AI alignment and governance. Given my background as a go-to-market practitioner and my interest in advocacy, I’m particularly fascinated by how this field is becoming more accessible, especially within the private sector. This article is my wrap-up of the research I’ve done so far, along with some thoughts on where the gaps lie in the current AI governance landscape.

If you’re short on time, here’s a quick summary:

Executive Summary (TLDR)

  • The AI governance landscape is evolving, with firms offering services to help organizations comply with regulations like the EU AI Act.
  • These providers include management consulting firms, tech consulting firms, law firms, and specialized AI safety firms, all offering risk management services—although many descriptions are vague.
  • While compliance is crucial, most firms focus on short-term risk evaluation, neglecting broader AI safety and alignment challenges, such as preventing misuse of AI and managing economic impacts.
  • Emerging countries importing AI technologies face unique challenges in auditing these opaque, large-scale systems, making it essential to invest in local AI safety expertise.
  • AI governance should address long-term safety and alignment issues, not just compliance, to ensure that AI benefits everyone, including smaller nations.

The Different Types of AI Governance Providers

There’s a wide variety of AI governance providers, generally falling into four categories:

  • Management consulting firms – Trusted by large organizations, these firms offer broad, high-level strategic advice, including AI governance.
  • Technology consulting firms – Similar to management consultants but with a technical focus, they help companies integrate AI while staying compliant with regulations.
  • Law firms – With growing AI regulations, law firms guide companies through legal compliance, focusing on frameworks like the EU AI Act and data privacy laws (e.g., GDPR).
  • Specialized AI safety firms – Newer, niche players that focus specifically on AI risks and safety, offering services like AI red teaming and model audits.

For curious minds, I’ve listed a few examples below. Feel free to visit their websites to see how they talk about their services in Responsible AI, AI Ethics, AI Governance, or AI Safety (terms that are often used interchangeably):

  • Management consulting: Bain, PwC, McKinsey, Deloitte, Forrester
  • Tech consulting: Accenture, IBM
  • Law firms: Gibson Dunn & Crutcher, DLA Piper, Clifford Chance, Manatt Phelps & Phillips
  • Emerging specialized AI firms: Luminos.Law, Babl.ai

The Rise of Platforms to Democratize AI Governance

Initially, I was concerned that there might be no solution to make AI governance more accessible. Smaller companies or startups, lacking the resources to engage with these firms, seemed to be left out of the conversation. This worried me because AI safety isn’t just a concern for large organizations—it affects everyone.

Fortunately, in addition to these service providers, a growing market for AI governance platforms is emerging. These are software tools that help companies automate and track their AI compliance efforts, making governance more accessible. For example, Fairly.ai specifically mentions use cases for small and medium-sized businesses (SMBs) and startups.

Here are some platforms to check out:

  • Buildaligned.ai
  • Credo.ai
  • enz.ai
  • Fairly.ai
  • getalignai.com
  • Holisticai.com
  • Luminos.ai
  • Monitaur.ai

A Common Problem: Vague Descriptions

While I originally plan to analyze their services, one thing that surprised me in my research is how vague these firms are about what they actually offer. Many websites and materials don’t clearly explain their services in details, making it hard to figure out what sets them apart or how their services actually work.

Take risk management, for example. Almost every firm lists it as a service, but what does that really mean? Most providers evaluate risks around privacy, bias, or safety, using compliance frameworks like the EU AI Act as their guide. In many cases, it feels like companies are simply checking regulatory boxes.

The Differences: Specialized AI Safety Firms and Platforms

Specialized AI safety firms tend to stand out more, offering clearer and more focused services. For instance, they might use terms like “AI red teaming” (where they stress-test AI models to expose vulnerabilities) or model audits (to evaluate fairness and safety).

However, even with this extra clarity, I still think there’s room for improvement. If you’re not already well-versed in AI safety, it can be hard to differentiate these firms from one another. I’d love to see more transparency—more concrete examples of how they’ve helped businesses and a clearer picture of their day-to-day services.

The Bigger Picture: AI Safety and Alignment

While compliance and risk management are important, they’re only part of what AI governance should be about. In an AI Alignment course by BlueDot I enrolled, we discussed how critical it is to ensure that AI systems align with human values and goals—not just meet legal requirements. And that’s the gap I see in this market.

Most AI governance services focus on short-term compliance issues, like making sure your AI doesn’t break the law. But the bigger, more complex problems—like preventing misuse of powerful AI by bad actors or ensuring industry-wide coordination to avoid an AI arms race—aren’t being addressed.

The Importance of AI Safety Audits for Emerging Countries

This issue is especially critical for emerging countries that import AI technologies. These nations often lack the local expertise or resources to fully understand the risks associated with AI systems from large global tech companies. The opaque nature of these AI products makes it even more important for these countries to invest in AI safety audits.

Such audits help local experts understand how these systems operate and whether they’re safe. Without this knowledge, countries risk adopting technologies they can’t fully control, which could lead to unintended consequences—especially when these technologies touch on national security or large-scale business operations.

Governance providers should help emerging countries build their own capacity to conduct these audits. Otherwise, they’re left vulnerable to decisions made by bigger players, putting critical infrastructure at risk.

What I’ve Learned (and What We Can Do)

Through my course on AI Alignment, I’ve learned that “alignment” goes beyond just safety. It’s about making sure AI systems reflect human values. And achieving this is much harder than it sounds. The current focus on compliance is important, but we need to think beyond that. How do we make sure AI systems are actually doing what we want? How do we ensure that AI benefits everyone, not just a select few?

I’m still learning, but what I do know is that the current state of AI governance is only scratching the surface. There’s a lot more we could be doing, especially to solve the biggest challenges facing AI safety.

Final Thoughts

AI governance providers, policymakers, and the industry as a whole need to take a step back and think bigger. Compliance is crucial, but it’s not the end goal. We need to address the long-term challenges of AI safety and alignment, ensuring that AI is trustworthy and beneficial for everyone.

This is particularly important for smaller or emerging countries. They need to build their own capacity for AI audits to understand the risks they’re facing. Without this, they’re left dependent on technologies they can’t fully control.

I’d love to see more companies and governance providers step up to fill these gaps. It’s not easy, but it’s work worth doing. AI is transforming our world, and we need to make sure it’s headed in the right direction.


Disclaimer: I’m not an expert (yet) in this field, and it’s evolving fast. If there’s anything I’ve missed, feel free to let me know!

Thanks: Aaron Scher and Chris Esposo for their feedback on my earlier drafts.