AI has a well-documented but poorly understood transparency problem. Fifty-one percent of business executives report that AI transparency and ethics are important for their business, and, not surprisingly, 41% ...
These kinds of adversarial examples are considered less threatening, because they don’t closely resemble the real world, where an attacker wouldn’t have access to a proprietary algorithm. For ...
All around us, algorithms are invisibly at work. They’re recommending music and surfacing news, finding cancerous tumors, and making self-driving cars a reality. But do people trust them? Not ...
For example, algorithms used in facial recognition technology have in the past shown higher identification rates for men than for women, and for white individuals than for individuals of non-white origin.
A new algorithm opens the door for using artificial intelligence and machine learning to study the interactions that happen ...
I love both of these examples, because I love the idea that we can take our own democratic action to make the world a bit less complicated. Alas, it is not that simple.
These “sniffing algorithms”—used, for example, by a sell-side market maker—have the built-in intelligence to identify the existence of any algorithms on the buy side of a large order.
A study published Thursday in Science has found that a health care risk-prediction algorithm, a widely used tool applied to more than 200 million people in the U.S., demonstrated racial bias ...
A health care algorithm makes black patients substantially less likely than their white counterparts to receive important medical treatment. The major flaw affects millions of patients, and was ...