Policy

Do Florida and other states need an artificial intelligence (AI) 'bill of rights'?

Connecticut legislation, for example, requires an inventory of the technology’s use in government and establishes an artificial intelligence working group to make recommendations.


As the use of artificial intelligence (AI) grows, states are increasingly wrestling with how – or even whether – to begin regulating the technology, though that trend has yet to hit Florida.

In Connecticut, however, Democratic Gov. Ned Lamont recently signed a bill to govern the state’s use of artificial intelligence, and tasked that state's legislature with building an AI “bill of rights.”

In fact, a National Conference of State Legislatures tracker shows at least 27 states this year considered or enacted legislation "related to AI issues generally," with many bills designed to study the impact of AI or algorithms and the role policymakers could play.

Florida Gov. Ron DeSantis this month signed a wide-ranging bill (SB 262) designed to boost online privacy, including giving people more control over data collected by technology companies. But that legislation does not address AI-specific issues, and the state's GOP-dominated leadership is generally wary of regulation.

The Connecticut law, by contrast, which passed both chambers of the state's General Assembly by the end of May, requires the legislature to form a working group to make recommendations on how AI should be regulated. The group will also weigh a potential bill of rights based on the blueprint the White House Office of Science and Technology Policy released last year.

The legislation also requires the Department of Administrative Services to undertake an inventory of and provide impact assessments for the state’s use of AI systems by the end of this year. The department then would provide ongoing assessments of the technology’s use. The state’s Judicial Department must conduct a similar inventory and develop policies on AI’s use to prevent discrimination and disparate impacts.

Separately, the state’s Office of Policy and Management must produce by Feb. 1, 2024, “policies and procedures concerning the development, procurement, implementation, utilization and ongoing assessment of systems that employ artificial intelligence” and are used by state agencies, per the bill text.

On alert against biases embedded in machine learning

State Sen. James Maroney, the Senate chair of the General Law Committee where the bill originated, cited testimony from earlier this year that algorithms can be trained on biased data, pointing to police departments that rely on predictive policing tools to decide where to deploy officers and resources. When those algorithms rely on historic crime rates—and many communities of color have been over-policed in the past—AI can perpetuate racial profiling and other biases.

“We owe it to our residents to ensure that as a government we do not discriminate in providing or have disparate impacts through the provision of services that our constituents need and deserve,” Maroney said in a statement in May after the bill passed the Connecticut State Senate.

As the legislation worked its way through the General Assembly, it received strong support from the American Civil Liberties Union. In a statement, the organization’s Connecticut branch said that while AI can have “incredible benefits,” it also poses “threats to our civil rights and civil liberties if misused.” In urging its passage, the group also noted that algorithms and AI “can perpetuate racial bias and inequity and deeply change how people interact with the government.”

The legislation was prompted in part by a study conducted last year by the Connecticut Council on Freedom of Information and the Media Freedom and Information Access Clinic at Yale Law School.

Researchers found that while state agencies had begun using AI and other automated systems to make decisions that affect residents’ lives, algorithmic decision-making was not transparent. The public was not being told whether AI tools had been properly and equitably developed or how they were being used, the report said.

Agencies are using AI and algorithms “in ways neither the public nor the agencies themselves fully understand,” said Kelsey Eberly, clinical lecturer and Abrams fellow at the MFIA Clinic.

“When that happens, we don’t know why certain children find seats at magnet schools, certain job seekers’ applications filter to the top, or certain families are flagged for child welfare visits—decisions far too weighty to be made by black box technology,” Eberly continued. “This legislation brings much-needed ‘sunshine.’”

Chris Teale is a staff reporter for Route Fifty, where a version of this story was first published. 
