The New York Times: freedom that hangs on algorithms

9 February 2020, by Cade Metz and Adam Satariano

Criminal justice authorities use software to determine sentences and probation.

Darnell Gates sat at a long table in a downtown Philadelphia office building. He wore a black T-shirt with “California” in bright yellow letters on the chest. He had never been to the state, but he hoped to visit family there after finishing his probation.

When Mr. Gates was released from jail in 2018 — he had served time for running a car into a house in 2013 and later for violently threatening his former domestic partner — he was required to visit a probation office once a week after he had been deemed high risk.

He called the visits his “tail” and his “leash”. Eventually, his leash was stretched to every two weeks. Later, it became a month. Mr. Gates wasn’t told why. He complained that conversations with his probation officers were cold and impersonal. They rarely took the time to understand his rehabilitation.

He didn’t realize that an algorithm had tagged him high risk until he was told about it during an interview with The New York Times. “What do you mean?” Mr. Gates, 30, asked. “You mean to tell me I’m dealing with all this because of a computer?” In Philadelphia, an algorithm created by a professor at the University of Pennsylvania has helped dictate the experience of probationers for at least five years.

The algorithm is one of many making decisions about people’s lives in the United States and Europe. Local authorities use so-called predictive algorithms to set police patrols, prison sentences and probation rules. In the Netherlands, an algorithm flags welfare fraud risks. A British city rates which teenagers are most likely to become criminals.

Nearly every state in America has turned to this new sort of governance algorithm, according to the Electronic Privacy Information Center, a nonprofit dedicated to digital rights. Algorithm Watch, a watchdog in Berlin, has identified similar programs in at least 16 European countries. As the practice spreads into new places and new parts of government, United Nations investigators, civil rights lawyers, labor unions and community organizers have been pushing back.

They are angered by a growing dependence on automated systems that are taking humans and transparency out of the process. It is often not clear how the systems are making their decisions. Is gender a factor? Age? Postal code? It’s hard to say, since many states and countries have few rules requiring that algorithm makers disclose their formulas.

They also worry that the biases — involving race, class and geography — of the people who create the algorithms are being baked into these systems, as ProPublica has reported. In San Jose, Calif., where an algorithm is used during arraignment hearings, an organization called Silicon Valley DeBug interviews the family of each defendant, takes this personal information to each hearing and shares it with defenders as a kind of counterbalance to algorithms.

Two community organizers, the Media Mobilizing Project in Philadelphia and MediaJustice in Oakland, Calif., recently compiled a nationwide database of prediction algorithms. And Community Justice Exchange, a national organization that supports community organizers, is distributing a 50-page guide that advises organizers on how to confront the use of algorithms.

The algorithms are supposed to reduce the burden on understaffed agencies, cut government costs and — ideally — remove human bias. Opponents say governments haven’t shown much interest in learning what it means to take humans out of the decision making. A recent United Nations report warned that governments risked “stumbling zombie-like into a digital-welfare dystopia”.

Last year, Idaho passed a law specifying that the methods and data used in bail algorithms must be publicly available so the general public can understand how they work. In the Netherlands, a district court ruled in the past week that the country’s welfare-fraud software violated European human rights law, one of the first rulings against a government’s use of predictive algorithms.

“Where is my human interaction?” Mr. Gates asked, sitting next to his lawyer in the boardroom of the Philadelphia public defender’s office. “How do you win against a computer that is built to stop you? How do you stop something that predetermines your fate?”

Looking for welfare fraud

Last year in Rotterdam, the Netherlands, a rumor circulating in two predominantly low-income and immigrant neighborhoods claimed that the city government had begun using an experimental algorithm to catch citizens who were committing welfare and tax fraud. Mohamed Saidi learned about it from a WhatsApp message that he initially thought was a joke. Mohamed Bouchkhachakhe first heard from his mother, who had been told by a friend. Driss Tabghi got word from a local union official.

The rumor turned out to be true. The Dutch program, System Risk Indication, scans data from different government authorities to flag people who may be claiming unemployment aid when they are working, or a housing subsidy for living alone when they are living with several others. The agency that runs the program, the Ministry of Social Affairs and Employment, said the data could include income, debt, education, property, rent, car ownership, home address and the welfare benefits received for children, housing and health care.

The algorithm produces “risk reports” on individuals who should be questioned by investigators. In Rotterdam, where the system was most recently used, 1,263 risk reports were produced across two neighborhoods. “You’re putting me in a system that I didn’t even know existed”, said Mr. Bouchkhachakhe, who works for a logistics company. The program has been cloaked in secrecy.
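As a rough illustration of how this kind of cross-referencing could work, the sketch below checks a hypothetical benefit claim against records held by other agencies and returns the reasons it would be flagged for investigators. The record types, field names and rules here are assumptions made for illustration only; the actual Dutch model has not been made public.

```python
from dataclasses import dataclass

# Hypothetical record types; the real data model is not public.
@dataclass
class BenefitClaim:
    citizen_id: str
    claims_unemployment_aid: bool
    claims_single_occupancy_subsidy: bool

@dataclass
class RegistryRecord:
    citizen_id: str
    reported_income: float       # e.g. from the tax authority
    registered_residents: int    # e.g. from the municipal address registry

def flag_for_review(claim: BenefitClaim, registry: RegistryRecord) -> list[str]:
    """Return reasons a claim conflicts with other government records."""
    reasons = []
    if claim.claims_unemployment_aid and registry.reported_income > 0:
        reasons.append("unemployment aid claimed while income is reported")
    if claim.claims_single_occupancy_subsidy and registry.registered_residents > 1:
        reasons.append("single-occupancy subsidy claimed at a multi-resident address")
    # A non-empty list would correspond to a "risk report" handed to investigators.
    return reasons

# Example: one claim checked against the registry.
claim = BenefitClaim("NL-001", claims_unemployment_aid=True,
                     claims_single_occupancy_subsidy=True)
record = RegistryRecord("NL-001", reported_income=1200.0, registered_residents=3)
print(flag_for_review(claim, record))
```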

Even those who land on the list aren’t informed. They aren’t told how the algorithm is making its decisions, or given ways to appeal. In 2019, a City Council hearing with the social ministry abruptly ended when members of the city government wouldn’t sign nondisclosure agreements before receiving a briefing about how the system works. Such disclosure would “interfere with the ability to effectively investigate”, the ministry said in response to questions.

In a report in October, the United Nations special rapporteur on extreme poverty and human rights criticized the Dutch system for creating a “digital welfare state” that turns crucial decisions about people’s lives over to machines. “Whole neighborhoods are deemed suspect and are made subject to special scrutiny, which is the digital equivalent of fraud inspectors knocking on every door in a certain area”, the report said. “No such scrutiny is applied to those living in better-off areas”.

Spotting at-risk youth

In areas dealing with years of budget cuts, algorithms present a way to help make up for reduced social services. The technology, officials say, helps them do more with less and identify people who may otherwise slip through the cracks.

Once a week in Bristol, England, a team gathers in a conference room to review caseloads and the latest results from an algorithm meant to identify the most at-risk youths in the city. Representatives from the police and children’s services, along with a member of the team that designed the software, typically attend to scan the list of names.

With youth violence and crime on the rise, and with many of the youth programs and community centers where young people once gathered now closed, the local government turned to software to help identify the children most in need. Officials there say the work provides evidence that the technology can work when coupled with a human touch.

Last year, Bristol introduced a program that creates a risk score based on data pulled from police reports, social benefits and other government records. The system tallies crime data, housing information, any known links to others with high risk scores, and whether the youth’s parents were involved in a domestic incident. Schools feed in attendance records.
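As a minimal sketch of what a tallying approach like this could look like, the example below combines the data sources named above into a single weighted score. The field names and weights are invented for illustration and are not Bristol’s actual formula, which has not been published in full.

```python
# Hypothetical weights and field names, chosen only to mirror the data
# sources described in the article; the real scoring model is not public.
RISK_WEIGHTS = {
    "police_reports": 3.0,              # count of police reports naming the youth
    "benefit_flags": 1.5,               # indicators drawn from social-benefit records
    "high_risk_contacts": 2.0,          # known links to others with high scores
    "parental_domestic_incident": 4.0,  # 1 if parents were involved in a domestic incident
    "school_absences": 0.5,             # attendance records fed in by schools
}

def risk_score(features: dict[str, float]) -> float:
    """Weighted tally of the data sources; missing fields count as zero."""
    return sum(RISK_WEIGHTS[name] * features.get(name, 0.0) for name in RISK_WEIGHTS)

# Example: a youth with two police reports, one high-risk contact and ten absences.
example = {
    "police_reports": 2,
    "high_risk_contacts": 1,
    "parental_domestic_incident": 0,
    "school_absences": 10,
}
print(risk_score(example))  # 13.0 with the illustrative weights above
```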

“You can get quite a complete picture”, said Tom Fowler, 29, the data scientist who helped create the youth scoring system for the Bristol government. In Bristol, the government has been open with the public about the program, posting some details online and holding community events. Opponents say it still isn’t fully transparent. The young people and their parents are not told whether they are on the list, nor are they given a way to contest their inclusion.

Charlene Richardson and Desmond Brown, two city workers, are responsible for organizing interventions and aid for young people flagged by the software. “We put the picture together a bit more”, said Ms. Richardson, who was recruited for the program after running youth centers in the area for two decades. “We know the computer doesn’t always get it right”.

Ms. Richardson and Mr. Brown came to the job with concerns that the system would unfairly target black boys. Now they are confident that the machine helps identify children who need help. “This is not ‘Minority Report’”, Mr. Brown said, referring to the 2002 Steven Spielberg movie. “We are not just taking the word of the computer”. The pair said they usually focused on the children with the highest risk scores, arranging home visits, speaking with their schools and finding mentors from the community. “It’s about seeing them as victims as well”, Ms. Richardson said.
