Major doubts have been cast over the work of AC Global Risk, a firm that produces software that analyzes the risk posed by an individual based on their voice. The Intercept spoke to a number of experts who raised technological and ethical concerns about the firm. The company has previously claimed that Apple is one of its key clients; Facebook and Google are also thought to have a relationship with AC Global Risk.
Judging if an Individual Is a Risk, Based on Their Voice
AC Global Risk claims to produce a product that establishes the level of risk associated with a person by analyzing how they speak.
The Intercept's report explained how the company's Remote Risk Assessment (RRA) product works:
Clients of AC Global Risk help develop automated, yes-or-no interview questions. The group of people selected for a given screening then answer these simple questions in their native language during a 10-minute interview that can be conducted over the phone. The RRA then measures the characteristics of their voice to produce an evaluation report that scores each individual on a spectrum from low to high risk.
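As described, the process amounts to a simple pipeline: collect yes-or-no answers over the phone, extract acoustic features from each response, and map those features onto a low-to-high risk band. The minimal Python sketch below illustrates that shape; every feature name, weight, and cutoff in it is an invented assumption, since AC Global Risk has not published how the RRA actually computes its scores.

```python
# Hypothetical sketch of a voice-based risk-scoring pipeline of the kind
# The Intercept describes. All features, weights, and thresholds here are
# invented placeholders for illustration -- they are NOT AC Global Risk's
# actual model, which has never been made public.
from dataclasses import dataclass
from statistics import mean


@dataclass
class Answer:
    """One yes-or-no interview response plus assumed voice measurements."""
    question: str
    pitch_variability: float  # 0.0 (flat) .. 1.0 (highly variable)
    speech_rate: float        # 0.0 (slow) .. 1.0 (fast)
    pause_ratio: float        # fraction of the response spent silent


def risk_score(answers: list[Answer]) -> float:
    """Combine per-answer features into a single 0-100 score.

    The weights below are arbitrary placeholders, not the vendor's model.
    """
    per_answer = [
        0.5 * a.pitch_variability + 0.3 * a.pause_ratio + 0.2 * a.speech_rate
        for a in answers
    ]
    return 100 * mean(per_answer)


def risk_band(score: float) -> str:
    """Map the score onto the low-to-high spectrum the report mentions.

    The cutoffs are invented for the example.
    """
    if score < 33:
        return "low"
    if score < 66:
        return "medium"
    return "high"


if __name__ == "__main__":
    interview = [
        Answer("Have you ever sold company data?", 0.7, 0.4, 0.5),
        Answer("Do you intend to harm this organization?", 0.2, 0.6, 0.1),
    ]
    score = risk_score(interview)
    print(f"score={score:.1f}, band={risk_band(score)}")
```

The sketch also makes the experts' objection concrete: even if features like pitch variability can be measured reliably from a phone call, the weights that translate them into a "risk" label are precisely the unvalidated step the academics quoted below question.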
AC Global Risk has said Apple is one of its major tech clients. In July, the firm's CEO, Alex Martin, gave an interview to Bloomberg in which the presenter cited Facebook, Palantir, and Apple as clients of his company, a claim he did not push back on. Instead, Mr. Martin said: “We help companies screen people that are in the security space for them.”
That said, Apple has not publicly said how, or even whether, it uses AC Global Risk's software.
“Bogus” technology
An investigation by The Intercept has cast doubt on AC Global Risk's claims. It spoke to a number of academics who highlighted issues and risks with the technology. Some experts who analyzed materials associated with the project used words like “bogus” to describe its work.
Alex Todorov, a Princeton University psychologist who studies the science of social perception and first impressions, told The Intercept:
There is some information in dynamic changes in the voice, and they're detecting it. This is perfectly plausible. But the question is, how unambiguous is this information at detecting the category of people they've defined as risky? There is always ambiguity in these kinds of signals.
Björn Schuller, a professor at the University of Augsburg, highlighted ethical concerns. He said that “from an ethical point of view, it's very dubious and shady to give the impression that recognizing deception from only the voice can be done with any accuracy.”
Step one: define “risk” and “risky.”
Until that is done, the rest of this is BS. This looks, sounds, and smells like a scam. My guess is they pulled the Apple, Google, and Facebook names out of thin air.