Major Open Source Project Revokes Access to Companies That Work with ICE

jwz
"Apologies to any contributors who aren't employees of Palantir, but to those who are, please find jobs elsewhere and stop helping Palantir do horrible things"

On Tuesday, the developers behind a widely used open source code-management tool called Lerna modified the terms and conditions of its use to prohibit any organization that collaborates with ICE from using the software. Among the companies and organizations that were specifically banned were Palantir, Microsoft, Amazon, Northeastern University, Motorola, Dell, UPS, and Johns Hopkins University. [...]

"Recently, it has come to my attention that many of these companies which are being paid millions of dollars by ICE are also using some of the open source software that I helped build," Jamie Kyle, an open source developer and one of the lead programmers on the Lerna project, wrote in a statement. "It's not news to me that people can use open source for evil, that's part of the whole deal. But it's really hard for me to sit back and ignore what these companies are doing with my code." [...]

Before he changed the license, Kyle left a comment on Palantir's Github asking the company to stop using the software. "Apologies to any contributors who aren't employees of Palantir, but to those who are, please find jobs elsewhere and stop helping Palantir do horrible things," Kyle wrote last week, linking to an article in The Intercept about the company's collaboration with ICE. "Also, stop using my tools. I don't support you and I don't want my work to benefit your awful company." [...]

After Kyle discussed his concerns with some of the other lead developers on the Lerna project, they assented to a change to the Lerna license that would effectively bar any organization that collaborates with ICE from continuing to use the software. This led to some developers calling the change illegitimate and lamenting that it technically meant the project was no longer open source. [...]

"I've been around the block enough to know how every company affected is going to respond," Kyle told me. "They're not going to try and find a loophole. I kinda hope they do try to keep using my tools though -- I'm really excited about the idea of actually getting to take Microsoft, Palantir or Amazon to court."

As for the hate he has received online about how open source projects shouldn't be politicized, Kyle said this misses the point.

"I believe that all technology is political, especially open source," he told me. "I believe that the technology industry should have a code of ethics like science or medicine. Working with ICE in any capacity is accepting money in exchange for morality. I am under no obligation to have a rigid code of ethics allowing everyone to use my open source software when the people using it follow no such code of ethics."

ttencate (5 days ago): See https://github.com/lerna/lerna/pull/1633 though. This was never enforceable.
jimwise (21 days ago): (!)
reconbot (17 days ago): I'm hoping we get a third channel of open source licensing. If the GPL can enforce its values, then a third type of license could hold us to some agreed-upon societal low bar, like the human rights accords.

Comic: dSports


Today in Landfill Capitalism: Realistic Marketing, Inc.

jwz

jlvanderzwan (50 days ago): In Berlin they're bikes. Luckily not quite as omnipresent yet though.
jlvanderzwan (50 days ago): I think I should start following this channel just to know what kind of bullshit product I'm not missing out on now
jlvanderzwan (50 days ago): https://www.youtube.com/watch?v=TDMfDwDUxKE&index=3&list=PLoK-poB57Y-ZlGNco5rzGFkRsRFRfhJSM

Q&A With Grey: Just Because Edition

From: CGPGrey
Duration: 08:10

- Q&A with Grey, brought to you in part by Skillshare; get 2 months of Skillshare for FREE using this link: http://skl.sh/cgpgrey
- Ask Grey a question for the next one: https://www.reddit.com/r/CGPGrey/comments/8n7wwj/qampa_with_grey_just_because_edition/

Special thanks to _DavidSmith for Swift code snippets: https://david-smith.org/

http://patreon.com/cgpgrey

Made with the support of:

Andrea Di Biagio, Andrew Proue, Bear, Ben Schwab, Bob Kunz, Cas Eliëns, Chris Chapin, Christian Cooper, Christopher Anthony, chrysilis, Colin Millions, Dag Viggo Lokøen, David F Watson, David Palomares, Derek Bonner, Derek Jackson, Dominick Brockman, Donal Botkin, Edison Franklin, Edward Adams, Elizabeth Keathley, Emil, Erik Parasiuk, Ernesto Jimenez, Esteban Santana Santana, Faust Fairbrook, Freddi Hørlyck, Guillermo, Ian, Jacob Ostling, James Bissonette, James Gill, Jason Lewandowski, John Buchan, John Lee, John Rogers, JoJo Chehebar, Jordan Melville, ken mcfarlane, Kevin Anderson, Kozo Ota, Leon, Maarten van der Blij, Mark Govea, Martin, Maxime Zielony, Michael Cao, Michael Little, Michael Mrozek, Mikko, Nevin Spoljaric, Oliver Steele, Orbit_Junkie, Osric Lord-Williams, Paul Tomblin, Peter Lomax, Phil Gardner, Rescla, Rhys Parry, Richard Comish, Richard Jenkins, rictic, Roman Pinchuk, Ron Bowes, Saki Comandao, ShiroiYami, Stephen W. Carson, Steven Grimm, Tianyu Ge, Tijmen van Dien, Tod Kurt, Tómas Árni Jónasson, Tony DiLascio, Tristan Watts-Willis, Veronica Peshterianu, سليمان العقل

Music by: Broke for Free

Levitz (111 days ago): Shared for the Dutchness, or lack thereof, rather.

Sorry, But I Don't See How Nyarlathotep's Death Cult Is Negatively Affecting American Discourse

jwz
Look, All I'm Saying Is Let's At Least Give Nyarlathotep a Chance (2016)

Like it or not, Nyarlathotep -- God of a Thousand Forms, Stalker Among the Stars -- is our Commander-in-Chief now. And you know what, Jerry? Color me curious. I know a lot of really heated rhetoric and seemingly reckless policy proposals have been bandied about over the past few months -- that bit about "delighting in this dust speck you call Earth's senseless suffering" still bugs me -- but hey, the least we can do is see how He adjusts to His new responsibilities.

I honestly wouldn't be surprised if the election humbled the Black Pharaoh just a tad. [...] I'm telling you, once Nyarlathotep sits behind that desk in the Oval Office, I think the weight and solemnity of the position will start sinking in pretty quickly.

Think about it, Jerry. Does anyone really even expect Him to make good on His promise to cull a maddened horde from the populace that will traverse the globe like ravenous locusts, spreading His malevolence and contempt to all corners of the land? Who's gonna pay for that? It was probably just a soundbite, nothing more. Nyarlathotep knows how to play the game, Jerry. He knows exactly how to manipulate the headlines. And fever dreams, too.

Sorry, But I Don't See How Nyarlathotep's Death Cult Is Negatively Affecting American Discourse (2018)

So, no, I don't see any problem with the death cult's High Priest getting a recurring op-ed in the New York Times. He worked hard to get where he is, and last I checked, this is still the country where, if you put in enough hard work, time, energy -- and self-castration to please the abhorrent Anti-God, apparently -- you can make it. The cult is a small but troubling percentage of our population, but we can't just silence them because they call in eerie unison for a "Great Offering." Yeah, if I was on the editorial board I might see about diversifying with another woman, or perhaps a person of color, or hell, even someone slightly left-of-center, but I imagine it's pretty hard to quickly turn a ship as large as the USS Gray Lady. These institutions don't change overnight. Unless Nyarlathotep wills it, I suppose. [...]

Honestly, I think we as a society have forgotten the art of civil discourse. There was a time when conservatives and liberals could disagree in a debate, and then buy each other a round afterwards. Now everyone's shouting at one another about how wrong they are, how destructive and inhumane their policies will be, how we should be investing our tax dollars into the education of our few remaining children instead of a massive ziggurat aligned with some extra moon that suddenly appeared in the sky last week. We gotta figure out how to agree to disagree again.

And, look, call me crazy for suggesting this -- but what if guys like the High Priest and his death cult are right some of the time? Hey hey, calm down. I'm just playing Elder God's advocate here. I know it might "trigger" some overly sensitive folks, but on a purely rhetorical level, it helps to try seeing things from their side.


Alexa and Siri Can Hear This Hidden Command. You Can’t.


BERKELEY, Calif. — Many people have grown accustomed to talking to their smart devices, asking them to read a text, play a song or set an alarm. But someone else might be secretly talking to them, too.

Over the last two years, researchers in China and the United States have begun demonstrating that they can send hidden commands that are undetectable to the human ear to Apple’s Siri, Amazon’s Alexa and Google’s Assistant. Inside university labs, the researchers have been able to secretly activate the artificial intelligence systems on smartphones and smart speakers, making them dial phone numbers or open websites. In the wrong hands, the technology could be used to unlock doors, wire money or buy stuff online — simply with music playing over the radio.

A group of students from the University of California, Berkeley, and Georgetown University showed in 2016 that they could hide commands in white noise played over loudspeakers and through YouTube videos to get smart devices to turn on airplane mode or open a website.

This month, some of those Berkeley researchers published a research paper that went further, saying they could embed commands directly into recordings of music or spoken text. So while a human listener hears someone talking or an orchestra playing, Amazon’s Echo speaker might hear an instruction to add something to your shopping list.

“We wanted to see if we could make it even more stealthy,” said Nicholas Carlini, a fifth-year Ph.D. student in computer security at U.C. Berkeley and one of the paper’s authors.

Mr. Carlini added that while there was no evidence that these techniques have left the lab, it may only be a matter of time before someone starts exploiting them. “My assumption is that the malicious people already employ people to do what I do,” he said.

These deceptions illustrate how artificial intelligence — even as it is making great strides — can still be tricked and manipulated. Computers can be fooled into identifying an airplane as a cat just by changing a few pixels of a digital image, while researchers can make a self-driving car swerve or speed up simply by pasting small stickers on road signs and confusing the vehicle’s computer vision system.

With audio attacks, the researchers are exploiting the gap between human and machine speech recognition. Speech recognition systems typically translate each sound to a letter, eventually compiling those into words and phrases. By making slight changes to audio files, researchers were able to cancel out the sound that the speech recognition system was supposed to hear and replace it with a sound that would be transcribed differently by machines while being nearly undetectable to the human ear.
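
Framed a little more concretely, an attack like this can be set up as an optimization problem: search for a perturbation that stays quiet enough to go unnoticed by a listener, yet drags the recognizer's transcription toward an attacker-chosen phrase. The sketch below is a minimal illustration of that framing, using PyTorch for the gradient steps; stt_model and transcription_loss are hypothetical stand-ins for a differentiable speech-to-text model and its loss function, not the actual systems or code the researchers used.

    # Minimal sketch of a perturbation-based audio attack, assuming a differentiable
    # speech-to-text model. `stt_model` and `transcription_loss` are hypothetical
    # placeholders, not a real vendor or DeepSpeech API.
    import torch

    def embed_hidden_command(audio, target_text, stt_model, transcription_loss,
                             max_amplitude=0.002, steps=1000, lr=1e-3):
        # `delta` is the small perturbation added to the original waveform.
        delta = torch.zeros_like(audio, requires_grad=True)
        optimizer = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            optimizer.zero_grad()
            # Loss is small when the model transcribes (audio + delta) as target_text.
            loss = transcription_loss(stt_model(audio + delta), target_text)
            loss.backward()
            optimizer.step()
            # Keep the perturbation quiet enough that listeners are unlikely to notice.
            with torch.no_grad():
                delta.clamp_(-max_amplitude, max_amplitude)
        return (audio + delta).detach()

The published attacks are considerably more involved than this, but the basic shape of the search (make the machine hear one thing while people hear another) is similar.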

The proliferation of voice-activated gadgets amplifies the implications of such tricks. Smartphones and smart speakers that use digital assistants like Amazon’s Alexa or Apple’s Siri are set to outnumber people by 2021, according to the research firm Ovum. And more than half of all American households will have at least one smart speaker by then, according to Juniper Research.

Amazon said that it doesn’t disclose specific security measures, but it has taken steps to ensure its Echo smart speaker is secure. Google said security is an ongoing focus and that its Assistant has features to mitigate undetectable audio commands. Both companies’ assistants employ voice recognition technology to prevent devices from acting on certain commands unless they recognize the user’s voice.

Apple said its smart speaker, HomePod, is designed to prevent commands from doing things like unlocking doors, and it noted that iPhones and iPads must be unlocked before Siri will act on commands that access sensitive data or open apps and websites, among other measures.

Yet many people leave their smartphones unlocked, and, at least for now, voice recognition systems are notoriously easy to fool.

There is already a history of smart devices being exploited for commercial gains through spoken commands.

Last year, Burger King caused a stir with an online ad that purposely asked, “O.K., Google, what is the Whopper burger?” Android devices with voice-enabled search would respond by reading from the Whopper’s Wikipedia page. The ad was canceled after viewers started editing the Wikipedia page to comic effect.

A few months later, the animated series South Park followed up with an entire episode built around voice commands that caused viewers’ voice-recognition assistants to parrot adolescent obscenities.

There is no American law against broadcasting subliminal messages to humans, let alone machines. The Federal Communications Commission discourages the practice as “counter to the public interest,” and the Television Code of the National Association of Broadcasters bans “transmitting messages below the threshold of normal awareness.” Neither says anything about subliminal stimuli for smart devices.

Courts have ruled that subliminal messages may constitute an invasion of privacy, but the law has not extended the concept of privacy to machines.

Now the technology is racing even further ahead of the law. Last year, researchers at Princeton University and China’s Zhejiang University demonstrated that voice-recognition systems could be activated by using frequencies inaudible to the human ear. The attack first muted the phone so the owner wouldn’t hear the system’s responses, either.

The technique, which the Chinese researchers called DolphinAttack, can instruct smart devices to visit malicious websites, initiate phone calls, take a picture or send text messages. While DolphinAttack has its limitations — the transmitter must be close to the receiving device — experts warned that more powerful ultrasonic systems were possible.

That warning was borne out in April, when researchers at the University of Illinois at Urbana-Champaign demonstrated ultrasound attacks from 25 feet away. While the commands couldn’t penetrate walls, they could control smart devices through open windows from outside a building.
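
The modulation step behind an attack like DolphinAttack is ordinary amplitude modulation: the audible voice command rides on an ultrasonic carrier, and imperfections in a microphone's hardware can demodulate it back into the audible band before it ever reaches the recognizer. The sketch below shows only that modulation step, as a rough illustration; the file names, the 25 kHz carrier and the 96 kHz sample rate are assumptions for the example, not values taken from the paper.

    # Rough sketch of the ultrasonic modulation step only (not a full attack),
    # assuming a mono recording of a voice command. File names and parameters
    # are hypothetical.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import resample

    in_rate, command = wavfile.read("ok_google_command.wav")   # hypothetical recording
    command = command.astype(np.float64)
    command /= np.max(np.abs(command))                         # normalize to [-1, 1]

    out_rate = 96_000        # high enough to represent an ultrasonic carrier
    command = resample(command, int(len(command) * out_rate / in_rate))

    carrier_hz = 25_000      # above the range of human hearing
    t = np.arange(len(command)) / out_rate
    carrier = np.sin(2 * np.pi * carrier_hz * t)

    # Classic amplitude modulation: the voice command rides on the ultrasonic carrier.
    ultrasonic = (1 + command) * carrier
    ultrasonic /= np.max(np.abs(ultrasonic))

    wavfile.write("ultrasonic_command.wav", out_rate, (ultrasonic * 32767).astype(np.int16))

Whether a given phone's microphone turns that signal back into something its assistant will act on depends entirely on the hardware, which is part of why the attack's range is so limited.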

This year, another group of Chinese and American researchers from China’s Academy of Sciences and other institutions demonstrated that they could control voice-activated devices with commands embedded in songs that can be broadcast over the radio or played on services like YouTube.

More recently, Mr. Carlini and his colleagues at Berkeley have incorporated commands into audio recognized by Mozilla’s DeepSpeech voice-to-text translation software, an open-source platform. They were able to hide the command, “O.K. Google, browse to evil.com” in a recording of the spoken phrase, “Without the data set, the article is useless.” Humans cannot discern the command.

The Berkeley group also embedded the command in music files, including a four-second clip from Verdi’s “Requiem.”

How device makers respond will differ, especially as they balance security with ease of use.

“Companies have to ensure user-friendliness of their devices, because that’s their major selling point,” said Tavish Vaidya, a researcher at Georgetown. He wrote one of the first papers on audio attacks, which he titled “Cocaine Noodles” because devices interpreted the phrase “cocaine noodles” as “O.K., Google.”

Mr. Carlini said he was confident that in time he and his colleagues could mount successful adversarial attacks against any smart device system on the market.

“We want to demonstrate that it’s possible,” he said, “and then hope that other people will say, ‘O.K. this is possible, now let’s try and fix it.’ ”

Follow Craig S. Smith on Twitter: @craigss

A version of this article appears in print on Page B1 of the New York edition with the headline: “How Your Smart Speaker Will Be Hijacked By Dog Whistle.”