Let's use AI to clean up government

Americans Must Understand How AI Will Be Used by the Government

Americans have been concerned about tech-enabled government surveillance for as long as they have known about it.

Now in the age of artificial intelligence, and with the announcement by the Department of Homeland Security this week that it is embracing the technology, that concern isn’t going away anytime soon.

But federal agencies could mitigate some of that fear. How? By engaging the public.

Since at least the 1928 Supreme Court decision to allow law enforcement use of wiretapping, government use of technology has provoked public debate.

Two years ago, public outcry forced the IRS to shelve newly announced plans for using facial recognition to identify taxpayers.

More recently, the Department of Homeland Security’s CBP One app, which uses facial recognition to identify asylum applicants, was found, like many other such systems, to be less accurate at recognizing asylum seekers with darker skin. This, too, has understandably led to public frustration.

Homeland Security has a huge mission set—including protecting borders, election infrastructure, and cyberspace.

But unlike other federal agencies, it has many public-facing missions—such as Transportation Security Administration agents at airports.

This also gives the department a unique opportunity to work with the public to ensure that tech is used responsibly.

In our survey, people were much more suspicious of the most sweeping uses of facial recognition, such as surveilling protests or monitoring polling stations.

Another important factor was the safeguards surrounding a given technology’s use. In our survey, these safeguards included providing alternatives to engaging with the technology, administering regular audits to ensure that the technology was accurate and did not have a disparate impact across demographic groups, and providing notification and transparency about how it is used.

Rather than a one-size-fits-all approach, we found Americans want safeguards sensitive to the context in which the technology is applied, such as whether the technology will be used on the open border or in a dense urban area.

To its credit, the department has implemented some safeguards along these lines, but they are not always uniformly administered.

For example, although facial recognition technology is optional for travelers going through airport security, some individuals, including a U.S. senator, report not being made aware that it is not a requirement. Such inconsistency breeds confusion and likely mistrust.

Nevertheless, there is an opportunity for constructive engagement.

Many of the respondents to our survey said that they were either neutral or ambivalent about government use of technology, meaning that they hadn’t yet decided whether the benefits of using a given technology outweighed the risks. Far from having fully formed, polarized views on the subject, many Americans are open to being persuaded one way or the other.

This might allow government agencies to work within this large group of “swing” Americans to build more trust in how the government uses new tech on all of us. And, counterintuitively, the government’s reputation for moving slowly and deliberately is, in this case, perhaps an asset.


Slowness is a trait often ascribed to the government. For instance, to field our survey we had to undergo a 15-month approval process.

And that slowness had consequences: By the time we got our approval, large language models had burst onto the scene, but because they weren’t factored into our survey, we couldn’t ask people about them.

But new technologies should be deployed carefully, with a clear understanding of their benefits and risks—especially from the perspective of communities most deeply affected.

This means that a deliberately paced process can be a feature, not a bug; slowness can be an asset, not a hindrance.

If agencies like the Department of Homeland Security take the time to understand what makes the public more comfortable with how technology is used, the public might gain confidence.

Even better: agencies that use technology to surveil Americans could pull back the curtain and explain how and why they do it, as part of that same careful and considered deployment.

As our research showed, people might not be very interested in understanding how the tech works, but they want to know how it will be used—on them and society.

AI Robocalls: FCC Tailors Swift Action

 In the emergent era of artificial intelligence (AI), it’s becoming increasingly difficult to discern whether an image, voice, or video is real or generated.


The Federal Communications Commission (FCC) has noticed how bad actors can exploit this and announced rule changes that make it illegal to use voice-imitation technology to scam consumers with robocalls.

By keeping its rulemaking specific and using existing laws instead of creating broad new powers, the FCC has demonstrated that novel problems can be tackled without unnecessarily expanding the bureaucratic state.

The issue of voice imitation is only becoming more serious as the technology becomes cheaper, easier to access, and better at mimicking human behavior.

In early February, New Hampshire’s Attorney General revealed that AI-generated robocalls had been used to mimic President Biden’s voice.

This robot Joe Biden was discouraging voters in the state from voting, presenting a serious problem.

The FCC’s new rule is well tailored and focused at a time when many other agency rules are not.

It builds on an existing law, the Telephone Consumer Protection Act (TCPA), and classifies AI-generated voices as “artificial” under that existing framework.

Because the TCPA already made it illegal to “initiate any telephone call to any residential telephone line using an artificial or pre-recorded voice to deliver a message without the prior express consent of the called party,” the framework for dealing with AI robocalls already existed.

The FCC wisely brought AI voices under this existing act, forgoing any unnecessary and overreaching effort to expand the bureaucratic state.

Such restraint is far from the norm: many recent agency rule changes have been overreaching and counterproductive, even at the FCC.

In the agency’s proposed rulemaking on “Net Neutrality,” broadband service is reclassified from Title I to Title II, which effectively expands regulatory authority without providing a clear justification.

Unlike the soft-touch approach of the AI robocall rulemaking, this change addresses a problem that doesn’t exist, with a solution that is over the top and highly bureaucratic.

The Consumer Financial Protection Bureau (CFPB), likewise, has proposed multiple unnecessary and overreaching rule changes, including a rule that expands the agency’s authority over digital wallets without first demonstrating that it is a necessity.

While digital wallets present a few novel issues, like the potential for criminals to access a user’s finances from their phone, people already have the tools necessary to mitigate these risks through two-factor authentication.

The agency’s decision to impose a blanket rule in this new market, instead of addressing the actual issue at hand, stands in stark contrast to the FCC’s relatively modest definitional clarification of voice imitation technology.

All of this demonstrates that good rulemaking that protects consumers without bloating administration is possible.

Additionally, the FCC is not granting itself new authority: the TCPA pertains to telephone communications and to illegal activity within that context.

The ruling stays within the existing confines of the TCPA’s effort to combat unsolicited mass robocalls, and its definitional clarification is limited to that context.

Hopefully, other agencies, and future efforts by the FCC itself, will take note of this kind of rulemaking.

A rule like this harkens back to the true purpose of agencies like the FCC, which is to protect consumers from malicious and exploitative practices.

AI to clean up government:

AI is not going to kill us. Nor is AI going to save us. Instead, AI has the potential to help us change.

Very few are considering the opportunities this new technology offers to clean up government.

AI Could Shore Up Democracy

It could be key in keeping people informed about the government, reforming red tape, and cleaning up waste, fraud and abuse.

ChatGPT needs to be turned on the government itself. Call it ChatGVT.

It could provide straight answers about the newest tax plan, if a bill is stuck in committee, or the likelihood that a piece of legislation will pass.

Or a ChatGVT could be turned on the regulatory code to understand its true cost to households and businesses.

Understanding how laws, litigation, hearings, regulatory codes and administrative actions intermingle can elude even the most experienced experts.

The newest generation of large language models (LLMs) appears to be quite effective at working through text with a little bit of tuning.

Using AI to turn law into code will mean that the true impact of government will be understandable and accessible.

Most know that the burden imposed by regulation is colossal but the exact costs are hard to quantify. A ChatGVT could help sort out that problem.

Some of the building blocks are being developed right now by my colleague at the Center for Growth and Opportunity at Utah State University, Richard Evans.

He has been building an open-source model of the U.S. federal and state household tax and benefit system called FiscalSim that will eventually cover the entire system.

In the not-so-distant future, policymakers and people alike will plug FiscalSim into a chatbot along with other regulatory modules like the ones developed by Dr. Patrick McLaughlin at the Mercatus Center to better understand government policy on the ground.

Most are failing to see the powerful role this new technology could play in improving government operations.

Those in positions of leadership and advocates for effective governance should proactively engage with these new technologies.

There are many possible versions of ChatGVT.

A ChatGVT could explain the intricacies of government or it could help clean up arcane regulatory codes. Or a ChatGVT could tackle the technical debt of the government, reducing waste, fraud and abuse.

AI tools offer promise in making government more transparent and efficient.



By Lipsa Mohanty, journalist and content creator.