Artificial Intelligence

The Need for Responsible AI: A Q&A with Xiaochen Zhang

Written by Dean Essner | May 30, 2025

As more and more businesses reap the benefits of AI’s great power, not enough are heeding the great responsibility that comes with it.

Xiaochen Zhang, Executive Director and Chief AI Officer, AI 2030

In a 2024 PwC study on how today’s corporations are managing AI risk, only 11% of executives reported having fully implemented responsible AI capabilities. PwC, moreover, suspects that even this small share may have been an overestimation, noting in its summary of the report that many companies are “overlooking the fact that responsible AI isn’t a one-time exercise.”

As Executive Director and Chief AI Officer of AI 2030, a global think tank focused on responsible AI, Xiaochen Zhang is working to harness AI's transformative power while also mitigating its potential risks. “Our belief is that if we can help leaders become responsible AI champions, they can impact their organization and team significantly in a short period of time,” said Zhang, a 20-year technology veteran who has held executive posts at World Bank Group and Amazon Web Services, among other firms.

To learn more about the need for responsible AI, we spoke to Zhang about the repercussions of the tech innovation race — and how entry-level professionals can find a viable career niche in AI ethics.

Why is it essential for businesses to not just know how to leverage AI — but to leverage it responsibly?

Because AI is so powerful. Take bias, for instance. Historically, if you wanted to scale your bias and have it impact more people, you simply didn't have the tools to do so; your personal bias remained limited to your immediate reach.

AI is very different. If you introduce AI with bias, it can be implemented across your entire customer base almost immediately. For global companies with millions or even billions of customers, this means that your bias can reach and impact a vast number of people with just a click. The scale and speed at which AI can propagate bias is unprecedented.

You’ve worked in everything from fintech to blockchain to AI. How have your varied experiences shaped you as a professional?

They’ve made me an impact person. My focus in any role — whether at Amazon Web Services or World Bank Group — is to understand how to merge new technologies with traditional business models to help create lasting, effective and less expensive solutions. Emerging technology can address problems like poverty reduction, climate change, financial inclusion and more, and I’m always curious about new ways we can leverage it to do good in the world.

I think technology overall has also evolved in this way. Early on, I remember talking to a blockchain company about their carbon emissions-related work regarding crypto mining; they didn't understand why they should care about energy consumption. However, now, most major blockchain companies have a clear climate-related mandate.

How will the ongoing integration of AI in business impact the maintenance of ethical standards?

I think it’ll be both more and less challenging to uphold ethical standards. With so many companies entering the AI innovation race, the threat of losing to the competition may push them toward irresponsible behavior. The fear of progressing slower than others may drive companies to implement certain AI-driven practices before they have the patience to do the right thing, the ethical thing.

On the other hand, there is a growing community of young and entry-level professionals with responsible AI as their core competence. They’re helping fill the talent gap, bringing with them a willingness to shape corporate culture.

Where can young and entry-level professionals begin in making responsible AI a primary focus in their careers?

It's important to be intentional in your learning approach, given the many offerings available. These can range from formal education at universities to self-study programs at organizations like GARP, as well as resources from open-source platforms.

Then, application is key. At the end of the day, if you don't apply what you've learned to solve problems, you will forget it, and it won't create any value. Sometimes your current job doesn't allow that or it’s not within your scope, and that’s when you need to find opportunities to apply your knowledge — whether it's through country or industry RFPs, AI consultations or volunteer roles and projects.

Lastly, there’s nothing more important than finding like-minded professionals. Learning alone can be lonely, and you may lose direction, interest or incentive. Join up with a community of people all working toward the goal of becoming ethical AI leaders. Be inspired by their solutions and strategies and learn from questions you had not considered. It’ll help you stay focused.