Disruptive Technologies

U.S. Seeks Expert Insights on Artificial Intelligence

National advisory committee member and SAS executive Reggie Townsend sees “at the highest levels . . . a realization that AI comes with rewards and risks”

Friday, July 15, 2022

By David Weldon

Although artificial intelligence has been evolving and gaining commercial acceptance for decades, it is advancing at such a pace as to confound efforts to fully grasp its business, economic and societal implications. The U.S. government – with priorities ranging from national security to promoting technology-industry competitiveness to ensuring algorithmic fairness – is now seeking guidance from a recently formed interdisciplinary advisory panel.

Established under the National AI Initiative Act of 2020, part of the National Defense Authorization Act, the National Artificial Intelligence Advisory Committee (NAIAC) held its first meeting in May, with Secretary of Commerce Gina Raimondo in attendance; Miriam Vogel, president and CEO of the anti-bias nonprofit EqualAI, presiding as committee chair; and Google senior vice president James Manyika as vice chair. Twenty-seven members in all, appointed to initial three-year terms with a mandate to advise the White House and Congress, will be looking at AI-related topics such as science and technology research, development, ethics, standards, education, governance and security.

Sitting on the NAIAC alongside representatives of various academic institutions, the AFL-CIO, companies like Amazon Web Services and IBM, and others is Reggie Townsend of SAS, a leading analytics systems and software innovator active in many industry and government sectors.

“It was certainly evident that at the highest levels of our government, there’s a realization that AI comes with rewards and risks, and that we have to do our best to be intentional about maximizing the rewards and minimizing the risks,” says Townsend, who has more than 20 years of strategic planning, management and consulting experience and is currently director of the SAS Data Ethics practice.

He also recently joined the board of EqualAI, supporting objectives of “AI accountability, inclusivity and equity,” and discussed those and other issues in this interview for GARP Risk Intelligence.

What do you believe you bring to the table at the NAIAC?

I have been told that I bring a well-rounded and contextualized viewpoint, and that’s really important. The committee has some people who are technologists, others with a policy or a legal perspective, and still others who are more community-engagement focused.

I bring a diverse perspective to this: beyond just being an African-American man, I have an appreciation of what marginalization looks like in our nation, and I’m able to tie that to technology enablement. And quite frankly, the ability to communicate those thoughts was recognized as necessary for this group.

What benefits do you hope this will lead to?

AI has the opportunity to be a great equalizer in our society. But if used poorly, it can make things much worse for people who already have a tough time.

Let’s start by thinking about vulnerable populations. You can define vulnerable in a number of ways; it could be based on gender, sexual orientation, geography, economic rights, etc. We want to make sure that we take care of the most vulnerable populations. Our assumption here is that, if we take care of the most vulnerable, then everyone else upstream will be okay.

At a government or federal level, if we took a similar approach, we could avoid potential harms. It’s not that we’re going to eliminate the risk [of discrimination]; that will never happen. But we can quantify those risks mathematically, and potentially mitigate some of them.
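
To make the quantification point concrete, here is a minimal sketch, in Python with entirely hypothetical data, of one common screening measure: the disparate impact ratio, which compares favorable-outcome rates across groups. It illustrates the general idea only; it is not the committee’s or SAS’s methodology.

```python
# A minimal sketch of quantifying discrimination risk via the disparate
# impact ratio. All data here is hypothetical.

def disparate_impact_ratio(outcomes, groups, protected, reference):
    """Ratio of favorable-outcome rates: protected group vs. reference group.

    Values well below 1.0 (e.g., under the common 0.8 "four-fifths" rule
    of thumb) flag a potential adverse impact worth investigating.
    """
    def rate(g):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(selected) / len(selected)

    return rate(protected) / rate(reference)

# Hypothetical lending decisions: 1 = approved, 0 = denied.
outcomes = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

print(disparate_impact_ratio(outcomes, groups, protected="B", reference="A"))
# 0.5 -- group B is approved at half the rate of group A: a red flag
```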

Tell us about the SAS Data Ethics practice.

We are principally focused on institutionalizing the idea of data ethics, which includes such things as trustworthy AI or sustainable innovation. What we’re really talking about is the idea of being a responsible innovator.

In terms of trustworthy AI, we put in place appropriate levels of oversight for the technology that we are creating, as well as oversight for how our AI technology is being disseminated into the field. It is not just the job of the ethics practice to act in an ethical and trustworthy way. We are enabling everyone within the company to be on the lookout for potential compliance risks.

How do you define ethics in data management, and how does that relate to the NAIAC?

The best way for me to answer is to think in terms of technology capabilities that we believe are important. Desired capabilities include detection, explanation, mitigation, privacy and security, data management, model operations, etc. We could go into each of those with a tremendous amount of detail. But at the highest level, each has a number of features that fall under our thinking about such things as bias detection and fairness assessment.

We’re thinking about being able to explain natural-language work and to do explainable machine learning. We’re thinking about how to mitigate risk for providers, and how to do things like synthetic data generation. The list goes on. The big takeaway here is that we believe there is a map, if you will, of the core capabilities necessary to meet the definition of trustworthy AI.

That’s what my practice and I are building toward. We work with R&D teams and with our consulting team to deliver these capabilities, and we put them into the hands of people who use our platform so that they can go out and build applications that reflect these values. I see technology as a means to enable our values.
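
As one concrete illustration of the explainable-machine-learning capability mentioned above, the sketch below applies permutation importance, a generic, model-agnostic technique that measures how much a model’s accuracy drops when each input feature is shuffled. The data and model are hypothetical, and the technique is a common example from the open-source scikit-learn library, not SAS’s implementation.

```python
# A minimal, hypothetical sketch of explainable machine learning using
# permutation importance (a generic technique, not SAS's implementation).
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # three hypothetical features
y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)  # outcome driven mostly by feature 0

model = LogisticRegression().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# The bigger the accuracy drop when a feature is shuffled, the more the
# model relies on it -- here, feature 0 should dominate.
for name, score in zip(["feature_0", "feature_1", "feature_2"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
```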

What do you consider to be AI’s top benefits?

You can think about it from this perspective: anywhere decisions are made on a routine basis, you have the ability to automate those processes. That frees up humans to do higher-purpose, more useful work.

There are a multitude of applications, across a range of industries, where this can improve productivity or efficiency: healthcare, consumer technology, life sciences, financial services, agriculture. So anywhere you have the ability to automate routine decisions, and in some cases to investigate and forecast potential causes and effects, is an excellent application.

Conversely, what are the top risks or shortcomings?

Having decisions automated based on data from the past, or based on faulty logic, can create a highly scalable, hyper-focused risk. That becomes really problematic.

As an example, it is potentially a big problem if we’re making lending decisions on the basis of creditworthiness tied to a particular ZIP code, which may have a history of redlining. When we look at the ability to get a second mortgage, or even a first mortgage in some cases, creditworthy people are affected by that. It isn’t because of their inability to pay back the loan; it is because they were born into circumstances over which they had absolutely no control, or were unable to receive a certain type of education.

These factors become variables that go into the credit decision, and they have negatively impacted many people. Automated decisioning takes loan officers out of the middle of this process. As a consequence, it marginalizes a whole community of folks who otherwise would be able to meet their financial obligations.
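
The proxy mechanism Townsend describes can be shown with a small, entirely hypothetical simulation (the variable names below are illustrative): even though the model never sees the group label, approval rates diverge by group because a ZIP-derived feature carries that information.

```python
# A hypothetical simulation of proxy discrimination: the model is never
# given the protected attribute, yet outcomes still diverge by group
# because a ZIP-code-derived feature correlates with it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, n)                    # protected attribute (never a model input)
zip_score = 0.8 * group + rng.normal(0, 0.5, n)  # ZIP-derived feature proxies for group
income = rng.normal(50, 10, n)                   # hypothetical income, equal across groups

# Historical approvals penalized high zip_score (e.g., redlined areas).
approved = ((income - 50) / 10 - zip_score + rng.normal(0, 0.5, n) > 0).astype(int)

X = np.column_stack([income, zip_score])         # group itself is excluded
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[group == g].mean():.2f}")
# Group 1's approval rate comes out markedly lower, despite equal incomes.
```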

How do you view the level of fear versus acceptance of AI?

Let’s look at a couple of statistics. Gartner predicts the AI software market will reach something like $62 billion in value this year, about a 21% increase year over year. The U.S. Patent and Trademark Office reports that about 25% of the patents it grants involve AI. At the same time, about two-thirds of decision-makers within enterprises say that issues related to identifying bias in AI have the most significant impact on consumer and customer trust.

That goes to the fear part of the question. Rather than a fear issue, I would describe it as a trust issue. If we’re going to tap into the promise of AI, we have to get the trust part right. If people don’t trust it, they won’t adopt it.



