Three representatives asked the Director of National Intelligence to produce a report on how hostile nations could use deepfakes against the U.S.
DARPA, the US Defense Department’s research arm, will spend $2 billion over the next five years on military AI projects.
New York City served as IBM’s “primary testing area” for developing software that enables police to search surveillance video footage for skin color.
Recently, one of us spent a week in China discussing the future of war with a group of American and Chinese academics. Everyone speculated about the role artificial intelligence will play in future conflicts.
How do we identify, understand, and protect our most valuable AI assets?
Facial recognition is everywhere — airports, police stations, and built into the largest cloud platforms in the world — with few federal rules to govern how it’s used. That’s been true for years, but a string of embarrassing stories in recent months has driven home exactly how dangerous the technology can be in the wrong hands, and it’s led to new calls for regulation. Even Microsoft, one of the largest providers, has called on Congress to place some kind of restriction on how and where the technology can be used.
Artificial intelligence could erase many practical advantages of democracy and erode the ideals of liberty and equality. Unless we take steps to stop it, it will further concentrate power in the hands of a small elite.
Source: Why Technology Favors Tyranny
Amandeep Gill has a difficult job, though he won’t admit it himself. As chair of the United Nations’ Convention on Conventional Weapons (CCW) meetings on lethal autonomous weapons, he has the task of shepherding 125 member states through discussions on the thorny technical and ethical issue of “killer robots” — military robots that could theoretically engage targets independently.
You can watch our journey into the terrifying future of fake news on BuzzFeed News.