Prosecutors in San Francisco are to use artificial intelligence (AI) to try to reduce racial bias when considering whether to charge suspects with a crime.
In a world first, district attorney George Gascon said he hoped the technology would "take race out of the equation" in the courts.
Mr Gascon's office worked with data scientists and engineers at the Stanford Computational Policy Lab to develop a system that takes electronic police reports and automatically removes a suspect's name, race and hair and eye colours.
He said the process would "redact the work without redacting the essence and the quality of the narrative, which was so important to us, so that we could take a look first and make an initial charging decision based on the facts and the facts alone without any attention being paid to a person's race or age".
The names of witnesses and police officers will also be removed, along with specific neighbourhoods or districts that could indicate the race of those involved.
A decision on whether or not to charge is made on the basis of these redacted police reports. The reports are then fully restored and re-evaluated to see if there is any reason to reconsider the original decision.
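The article does not describe the system's internals, but the two-pass workflow it outlines could be sketched roughly as follows. All field names, the simple field-level redaction, and the `decide` callback are illustrative assumptions, not the Stanford Computational Policy Lab's actual implementation:

```python
# Hypothetical sketch of the "blind charging" workflow described above.
# Field names and the redaction approach are assumptions for illustration only.

# Fields the article says are removed: suspect identifiers, witness and
# officer names, and location details that could indicate race.
IDENTIFYING_FIELDS = {
    "suspect_name", "race", "hair_colour", "eye_colour",
    "neighbourhood", "officer_name", "witness_names",
}


def redact(report: dict) -> dict:
    """First pass: mask identifying fields while keeping the narrative intact."""
    return {
        key: "[REDACTED]" if key in IDENTIFYING_FIELDS else value
        for key, value in report.items()
    }


def charging_workflow(report: dict, decide) -> tuple:
    """Blind charging decision first, then re-evaluation on the full report."""
    blind_decision = decide(redact(report))
    reviewed_decision = decide(report)  # full report restored for the second look
    return blind_decision, reviewed_decision


if __name__ == "__main__":
    report = {
        "suspect_name": "J. Doe",
        "race": "unknown",
        "neighbourhood": "Mission",
        "narrative": "Suspect seen leaving store with unpaid goods.",
    }
    print(redact(report)["suspect_name"])  # [REDACTED]
    print(redact(report)["narrative"])     # unchanged narrative text
```

The key design point mirrored here is that the narrative is never touched: only discrete identifying details are masked, so the initial decision rests on the facts of the report alone.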
Mr Gascon said he wanted to find a way to help eliminate an implicit bias that could be triggered by a suspect's race, a name which sounds like it could belong to someone from an ethnic minority background, or a crime-ridden area where they were arrested – in essence, to make justice colourblind.
The programme will begin in July and progress will be reviewed weekly. Stanford has agreed to make the technology freely available, so if it is successful in San Francisco it could be rolled out across the country.