Fair Housing Advocates Ask Congress for Protection From AI Abuses
WASHINGTON — Computer engineers have so far controlled artificial intelligence in the financial sector, but lawyers and regulators now need to oversee the emerging field to avoid injustices, housing industry experts said at a congressional hearing Wednesday.
Otherwise, algorithms that can be blind to abuses are likely to trample the privacy and economic opportunities of persons seeking housing, they said.
“We need to examine the guiding principles that are used to create these models,” said Vanessa Perry, a George Washington University business professor.
Most of the regulations enforced by the Consumer Financial Protection Bureau and other government agencies predate the development of artificial intelligence. Now regulators are trying to play catch-up.
“We definitely need stronger protections and processes in place to make this happen,” Perry told the Senate Banking, Housing and Urban Affairs Subcommittee on Housing, Transportation and Community Development.
Marginalized communities are most at risk, she and other witnesses said, citing algorithms likely to block residents of low-income neighborhoods from qualifying for mortgages. One result could be allegations of racism.
Despite the pitfalls, none of the witnesses or lawmakers wanted to eliminate the use of artificial intelligence in the housing and real estate industries.
“You can also program it to optimize opportunity and fairness,” Perry said.
Used properly, artificial intelligence can raise credit scores for some people, identify others who are creditworthy for mortgages and speed up the process of obtaining housing.
It also could detect which renters are close to eviction and help them get public assistance to avoid homelessness.
The number of homeless Americans reached a record of more than 653,000 last year, up 12% from a year earlier, according to Harvard’s Joint Center for Housing Studies.
Sen. Cynthia Lummis, R-Wyo., said the question for Congress is, “What parts of AI fit into today’s regulatory framework and what needs to evolve?”
An added issue is where to find workers with the technical skills to develop or monitor artificial intelligence.
Artificial intelligence refers to computer systems that can learn to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making and language translation. Employers normally require at least a bachelor’s degree in data science or computer science, along with certifications in artificial intelligence, before workers are considered experts.
“We’re never going to have enough experts on this,” said Sen. Mike Rounds, R-S.D.
In one example discussed at the hearing, the trade association National Fair Housing Alliance did a nationwide search to find an artificial intelligence expert to monitor how the real estate industry used the technology. After failing to find a qualified applicant in the United States, the organization hired an expert from Canada.
Nick Schmidt, chief technology officer of Philadelphia-based technology company SolasAI, said the humans who design artificial intelligence pose the greatest danger of misuse.
Often the designers “get very excited by the math” but fail to consider negative consequences for some of the people affected by the technology, he said.
In addition, “The quality of [artificial intelligence] models being built across all industries is low,” Schmidt said.
He recommended that government regulators set standards for quality and privacy protection.
Instead of relying only on computer experts, “We need lawyers and compliance” professionals, he said.