Understanding what makes something offensive or hurtful is difficult enough that many people can’t figure it out, let alone AI systems. And people of color are frequently left out of AI training sets. So it’s little surprise that Alphabet/Google-spawned Jigsaw manages to trip over both of these issues at once, flagging slang used by black Americans as toxic.
With lives on the line, tech companies must work to thwart violent white supremacist activity. They should act with clarity, consistency and transparency, all while affording appeal rights.
One researcher’s discovery suggests troubling oversights in Boeing’s cybersecurity.
Many partisans accused President George W. Bush of lying and pressuring the intelligence community to produce intelligence to justify a war that Bush had already chosen. But the situation was complicated, and to understand the problems of speaking truth to power, we must clear away the myths.
It is all about risk management, and not all risks are equal for everyone. So we need context-appropriate standards.
The R Street Institute has begun an initiative intended to build a consensus around how to tackle the problem of measuring cybersecurity.
Just saw this from January. So these cartels already bypassed intermediaries, had reps and “deep links” to Canada and B.C., and were working with the HAs. Ooh, new “sophisticated” Iranian OC too.
Then we have these Vancouver-Seattle clowns coming up with a separate narrative of the cartels and the Banditos moving on the HAs and trying to entangle other networks as recently as last year. And just who is, and is not, supposed to believe this B.S., react or not react to it?
Nada Bakos’ book offers a window into the CIA and the hunt for Abu Musab al-Zarqawi.
A close aide to Italy’s deputy prime minister Matteo Salvini held covert talks to pump Russian oil money to his far-right party. BuzzFeed News has the tape.
“AI has an ‘explainability’ problem. Your algorithm did XYZ, and everyone wants to know why, but because of the way that machine learning works, even its programmers often can’t know why an algorithm reached the outcome that it did. It’s a black box. Now, when you enter the realm of autonomous weapons, and ask, ‘Why did you kill that person,’ the complete lack of an answer simply will not do — morally, legally, or practically.”
‘These vulnerabilities could be exploited and obscured by bad actors’