Category Archives: ROBOTICS

This Algorithm Predicts Court Decisions Almost As Well As A Judge

A computer program could help relieve the massive backlogs facing the world’s highest courts


A computer algorithm took on the work of real human judges and did a pretty good job, predicting the decisions of one of Europe’s highest courts with 79 percent accuracy. The finding suggests artificial intelligence could help the world’s busiest courts work through their massive backlog of cases, even if an algorithm isn’t about to take up a digital gown and gavel and start actually deciding cases.

The AI analyzed cases tried before the European Court of Human Rights, which hears cases from people and groups who claim their civil or political rights have been violated in their home countries. An international team of computer scientists worked with a legal scholar to determine just how well AI could predict the court’s ultimate judgement based on how the written decision described the factual background of the case and the arguments of the parties involved. They found it agreed with the judges’ decision four of five times — and that the underlying facts of the case were by far the best predictor of the outcome of a case, rather than any of the more abstract legal arguments.

“The fact that we can get this accuracy, it means that there are some consistent patterns of violations that lead to overturning the [previous court’s] decision,” University of Pennsylvania computer scientist Daniel Preoţiuc-Pietro told Vocativ.

That suggests the court is typically less concerned with parsing philosophical questions of whether a specific instance constitutes a human rights violation than with determining how that situation fits into its already defined categories of violations. Preoţiuc-Pietro pointed to the example of people who allege mistreatment in prison as a situation that typically led to decisions in those people’s favor. “That’s definitely more likely for the court to actually accept that the state made a mistake and the people involved were actually justified,” he said.


The AI used what’s known as natural language processing to analyze the cases. This particular method treated the text of a decision as a big bag of words, ignoring word order and grammar. Instead, the AI looked at which individual words and combinations of two, three, or four words appeared most frequently in the text, regardless of order. The AI then took all these combinations, known as N-grams, and clustered them into different overall topics.
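The bag-of-words view described above can be sketched in a few lines of Python. This is an illustration of the general technique, not the researchers’ code, and the sample sentence is a made-up stand-in for a court decision:

```python
from collections import Counter

def ngrams(text, n):
    """Return all contiguous word sequences of length n, ignoring grammar."""
    words = text.lower().split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

# Hypothetical snippet standing in for a decision's text.
decision = "the applicant alleged a violation of the convention"

# Count unigrams through 4-grams, regardless of where they appear.
counts = Counter()
for n in range(1, 5):
    counts.update(ngrams(decision, n))
```

The resulting frequency table is what gets clustered into topics; the order in which the phrases appeared in the decision plays no role.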

The court’s decisions include lengthy sections recapping not only the factual background of the cases but also the original arguments made by the parties in the case. This gave the AI a broad sense of what each text was talking about and gave it the context necessary to predict the outcome of the case, which it did correctly in nearly four out of every five cases.

But that doesn’t mean the researchers are hoping to see AI judges anytime soon.

“We’re not advocating for automating any decisions,” said Preoţiuc-Pietro. “Decisions should still be made by the judges.” Where the AI can make a difference is in helping to determine which cases make it to the judges in the first place.


The researchers found that in 2015, nearly 85,000 petitions were submitted to the court, of which just 891 were actually decided. All the rest were thrown out as inadmissible, meaning the court couldn’t take them on and the previous decision by a lower court would stand. The European Court of Human Rights relies both on individual judges and committees to work through all these cases and figure out which are worth bringing to the actual court’s attention. Last year, that meant the entire court apparatus had to process more than 230 cases every single day, making it a huge challenge just to give each petition the human attention it deserves.

Artificial intelligence, by contrast, could zip through 85,000 petitions and decide which were most likely to be worth the court’s time, based on how similar each petition is to the court’s previous cases. Preoţiuc-Pietro suggested the algorithm could separate the cases into three groups based on the court’s prior history: those the court would likely rule on, those it likely would rule inadmissible, and those in a gray area. Committees could then devote more time to examining the cases already identified as being of uncertain status, rather than having them take valuable time doing all their own categorization.
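The three-way triage Preoţiuc-Pietro describes could look something like the sketch below, where the case names, confidence scores, and thresholds are purely hypothetical stand-ins for the output of a classifier trained on the court’s prior decisions:

```python
def triage(cases, low=0.35, high=0.65):
    """Split petitions into three queues by a model's predicted probability
    that the court would take up and rule on the case.

    Thresholds are illustrative; a real system would calibrate them
    against the court's historical admissibility rates.
    """
    likely, gray, inadmissible = [], [], []
    for name, p in cases:
        if p >= high:
            likely.append(name)        # similar to cases the court ruled on
        elif p <= low:
            inadmissible.append(name)  # similar to cases it threw out
        else:
            gray.append(name)          # uncertain: route to a committee
    return likely, gray, inadmissible

# Hypothetical classifier scores for four petitions.
scores = [("case-A", 0.91), ("case-B", 0.12), ("case-C", 0.50), ("case-D", 0.70)]
likely, gray, inadmissible = triage(scores)
```

Only the middle queue would need the committees’ scarce time; the other two could go to individual judges for confirmation.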

“These committees are time-limited and beyond that very costly, so they can actually look at just the flagged cases which are more likely to be disputed and analyze them more thoroughly,” said Preoţiuc-Pietro, “while the others they can be sent for just individuals and they don’t need to be scrutinized by more people.”

The goal then wouldn’t be to take the human element out of the law, but instead the complete opposite: The European Court of Human Rights and other bodies like it would have more time to focus on their most difficult cases, while the AI would separate out the cases that would likely be thrown out anyway.



Henry Sapiecha


Microsoft’s great achievement: AI that listens as well as humans… on phone calls

Microsoft’s latest speech-recognition record means professional human transcribers may be among the first to lose their jobs to artificial intelligence.


Microsoft’s speech-recognition AI could eventually be used to enhance Cortana’s accessibility features, say, for deaf people. Image: Microsoft

Microsoft researchers have developed a system that recognizes speech as accurately as a professional human transcriptionist.

Researchers and engineers from Microsoft’s Artificial Intelligence and Research group have set a new record in speech recognition, achieving a word error rate of 5.9 percent, down from the 6.3 percent reported a month ago.

The word error rate is the percentage of words in a conversation that a system, in this case a combination of neural networks, gets wrong. Microsoft’s system performed as well as humans who were asked to listen to the same conversations.

Microsoft sized its machines up against professional transcribers who were tasked with listening to the same evaluation data over the phone, which included two-way conversation data and a separate set in which friends and family hold open-ended conversations.

Humans and Microsoft’s automated system each scored error rates of roughly 5.9 percent and 11.3 percent on the two respective test sets.

The scores are an umbrella figure for the results of three tests, comparing how many times Microsoft’s system and the human transcribers wrongly substituted words, dropped a word from a sentence, or inserted an extra word.
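Those three error types roll up into a single word error rate via a standard edit-distance calculation: WER = (substitutions + deletions + insertions) ÷ number of reference words. The sketch below shows the standard textbook computation, not Microsoft’s implementation, and the example transcript is invented:

```python
def word_error_rate(reference, hypothesis):
    """Minimum word-level edits (substitutions + deletions + insertions)
    to turn the hypothesis into the reference, divided by the number of
    reference words. Computed with a standard dynamic-programming table."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: minimum edits to turn hyp[:j] into ref[:i]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j - 1] + sub,  # substitution or match
                           dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1)        # insertion
    return dp[len(ref)][len(hyp)] / len(ref)

# Invented transcript that drops one word and inserts another:
# 2 edits against a 5-word reference.
wer = word_error_rate("uh huh that is right", "uh that is right yes")
```

A perfect transcript scores 0; a 5.9 percent rate means roughly one word in seventeen is substituted, dropped, or inserted.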

As Microsoft notes in the paper, humans and the automated system mostly fumbled over the same sounds in the tests, with the exception of “uh-huh” and “uh”.

Microsoft’s system was confused by the sounds “uh-huh”, which can be a verbal nod for someone to go ahead speaking, and “uh”, used as a hesitation in speech. The utterances sound the same but have opposite meanings, which humans had far fewer problems identifying than Microsoft.


The transcriptionists, for some reason, frequently dropped the word ‘I’ from two-way conversations, and did so far more often than Microsoft’s AI.

Overall, Microsoft notes, humans had a lower substitution rate and a higher deletion rate, while both humans and the machine produced a low number of insertions.

“The relatively higher deletion rate might reflect a human bias to avoid outputting uncertain information, or the productivity demands on a professional transcriber,” Microsoft speculates.

Still, to achieve parity with a human in this test was an “historic achievement”, said Xuedong Huang, Microsoft’s chief speech scientist.

Improved automated speech-recognition systems could be used in speech-to-text transcription services and enhance Cortana’s accessibility features, say, for deaf people. However, that prospect still appears to be some way off.

Microsoft used 2,000 hours of training data to equip its neural networks for the task. It claims that by parallelizing the training data with its Computational Network Toolkit (CNTK) on a Linux-based multi-GPU server farm, it was able to cut training times from months to under three weeks.

Despite the milestone, Microsoft admits it’s still a long way from achieving speech recognition that works well in real-life settings with lots of background noise.

For example, as a live transcription service it’s not yet possible to identify and assign names to multiple speakers who may have different accents, ages, and backgrounds. However, the company says it’s working on the technology, which could open up a whole set of possibilities.



Henry Sapiecha