• Buro Jansen & Janssen, simply content!
    Jansen & Janssen is a research bureau that critically monitors the police, the judiciary, the intelligence services and government in the Netherlands and the EU. A civil-rights collective that has been publishing for 40 years, since 1984, on the expansion of repressive legislation, public-private partnerships, security in the broadest sense, police powers, government conduct and other affairs of state.
    Buro Jansen & Janssen, Postbus 10591, 1001 EN Amsterdam, 020-6123202, 06-34339533, Signal +31684065516, info@burojansen.nl (pgp)
    Support Buro Jansen & Janssen. Become a donor: NL43 ASNB 0856 9868 52 or NL56 INGB 0000 6039 04, in the name of Stichting Res Publica, Postbus 11556, 1001 GN Amsterdam.
  • How Facebook could get you arrested

    Smart technology and the sort of big data available to social networking sites are helping police target crime before it happens. But is this ethical?

    Companies such as Facebook have begun using algorithms and historical data to predict which of their users might commit crimes. Illustration: Noma Bar

    The police have a very bright future ahead of them – and not just because they can now look up potential suspects on Google. As they embrace the latest technologies, their work is bound to become easier and more effective, raising thorny questions about privacy, civil liberties, and due process.

    For one, policing is in a good position to profit from “big data”. As the costs of recording devices keep falling, it’s now possible to spot and react to crimes in real time. Consider a city like Oakland in California. Like many other American cities, today it is covered with hundreds of hidden microphones and sensors, part of a system known as ShotSpotter, which not only alerts the police to the sound of gunshots but also triangulates their location. On verifying that the noises are actual gunshots, a human operator then informs the police.
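
    The triangulation step the article mentions can be sketched in a few lines. This is an illustrative toy, not ShotSpotter's actual algorithm: it assumes sensors at known positions, compares observed arrival-time differences against the differences each candidate point would predict, and grid-searches for the best match.

```python
# Toy acoustic triangulation sketch (hypothetical, not the vendor's method):
# sensors at known positions record when the same bang arrives; the source
# is the grid point whose predicted time differences best fit the data.
import itertools

SPEED_OF_SOUND = 343.0  # metres per second at roughly 20 degrees C


def locate(sensors, arrival_times, step=5.0, extent=1000.0):
    """Brute-force grid search for the (x, y) minimising the mismatch
    between observed and predicted arrival-time differences."""
    def predicted_delays(x, y):
        dists = [((x - sx) ** 2 + (y - sy) ** 2) ** 0.5 for sx, sy in sensors]
        times = [d / SPEED_OF_SOUND for d in dists]
        return [t - times[0] for t in times]  # delays relative to sensor 0

    observed = [t - arrival_times[0] for t in arrival_times]
    best, best_err = None, float("inf")
    steps = int(extent / step)
    for i, j in itertools.product(range(steps), repeat=2):
        x, y = i * step, j * step
        err = sum((p - o) ** 2 for p, o in zip(predicted_delays(x, y), observed))
        if err < best_err:
            best, best_err = (x, y), err
    return best


# Three microphones around a city block; simulate a shot at (400, 300).
sensors = [(0.0, 0.0), (800.0, 0.0), (0.0, 800.0)]
source = (400.0, 300.0)
times = [((source[0] - sx) ** 2 + (source[1] - sy) ** 2) ** 0.5 / SPEED_OF_SOUND
         for sx, sy in sensors]
x, y = locate(sensors, times)
```

With three sensors the time-difference curves intersect at a single point here, so the grid search recovers the simulated location; real deployments use more sensors and must cope with echoes and noise.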

    It’s not hard to imagine ways to improve a system like ShotSpotter. Gunshot-detection systems are, in principle, reactive; they might help to thwart or quickly respond to crime, but they won’t root it out. The decreasing costs of computing, considerable advances in sensor technology, and the ability to tap into vast online databases allow us to move from identifying crime as it happens – which is what the ShotSpotter does now – to predicting it before it happens.

    Instead of detecting gunshots, new and smarter systems can focus on detecting the sounds that have preceded gunshots in the past. This is where the techniques and ideologies of big data make another appearance, promising that a greater, deeper analysis of data about past crimes, combined with sophisticated algorithms, can predict – and prevent – future ones. This is a practice known as “predictive policing”, and even though it’s just a few years old, many tout it as a revolution in how police work is done. It’s the epitome of solutionism; there is hardly a better example of how technology and big data can be put to work to solve the problem of crime by simply eliminating crime altogether. It all seems too easy and logical; who wouldn’t want to prevent crime before it happens?

    Police in America are particularly excited about what predictive policing – one of Time magazine’s best inventions of 2011 – has to offer; Europeans are slowly catching up as well, with Britain in the lead. Take the Los Angeles Police Department (LAPD), which is using software called PredPol. The software analyses years of previously published statistics about property crimes such as burglary and automobile theft, breaks the patrol map into 500 sq ft zones, calculates the historical distribution and frequency of actual crimes across them, and then tells officers which zones to police more vigorously.
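
    Stripped of the statistics, the grid-scoring idea the article attributes to such software reduces to counting historical incidents per map cell. The sketch below is a deliberately naive stand-in for illustration only; PredPol's actual model is proprietary and more sophisticated.

```python
# Toy hotspot scoring (an illustrative sketch, not PredPol's model):
# divide the map into square zones, count past incidents per zone,
# and send patrols to the highest-scoring zones.
from collections import Counter


def hotspot_zones(incidents, cell_size, top_n=3):
    """incidents: list of (x, y) coordinates of past crimes.
    Returns the top_n grid cells ranked by historical incident count."""
    counts = Counter(
        (int(x // cell_size), int(y // cell_size)) for x, y in incidents
    )
    return [cell for cell, _ in counts.most_common(top_n)]


# Invented data: repeated burglaries near one block, a smaller second
# cluster, and a one-off incident elsewhere.
past_crimes = [
    (12, 8), (14, 9), (13, 5),   # three incidents in cell (1, 0)
    (31, 32), (35, 38),          # two incidents in cell (3, 3)
    (55, 7),                     # a single outlier
]
priority = hotspot_zones(past_crimes, cell_size=10, top_n=2)
```

Note how completely the output depends on the input: zones with unreported crime never score, which is exactly the under-reporting worry raised later in the piece.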

    It’s much better – and potentially cheaper – to prevent a crime before it happens than to come late and investigate it. So while patrolling officers might not catch a criminal in action, their presence in the right place at the right time still helps to deter criminal activity. Occasionally, though, the police might indeed disrupt an ongoing crime. In June 2012 the Associated Press reported on an LAPD captain who wasn’t so sure that sending officers into a grid zone on the edge of his coverage area – following PredPol’s recommendation – was such a good idea. His officers, as the captain expected, found nothing; however, when they returned several nights later, they caught someone breaking a window. Score one for PredPol?

    Trials of PredPol and similar software began too recently to speak of any conclusive results. Still, the intermediate results look quite impressive. In Los Angeles, five LAPD divisions that use it in patrolling territory populated by roughly 1.3m people have seen crime decline by 13%. The city of Santa Cruz, which now also uses PredPol, has seen its burglaries decline by nearly 30%. Similar uplifting statistics can be found in many other police departments across America.

    Other powerful systems that are currently being built can also be easily reconfigured to suit more predictive demands. Consider the New York Police Department’s latest innovation – the so-called Domain Awareness System – which syncs the city’s 3,000 closed-circuit camera feeds with arrest records, 911 calls, licence plate recognition technology, and radiation detectors. It can monitor a situation in real time and draw on a lot of data to understand what’s happening. The leap from here to predicting what might happen is not so great.

    If PredPol’s “prediction” sounds familiar, that’s because its methods were inspired by those of prominent internet companies. Writing in The Police Chief magazine in 2009, a senior LAPD officer lauded Amazon’s ability to “understand the unique groups in their customer base and to characterise their purchasing patterns”, which allows the company “not only to anticipate but also to promote or otherwise shape future behaviour”. Thus, just as Amazon’s algorithms make it possible to predict what books you are likely to buy next, similar algorithms might tell the police how often – and where – certain crimes might happen again. Ever stolen a bicycle? Then you might also be interested in robbing a grocery store.

    Here we run into the perennial problem of algorithms: their presumed objectivity and quite real lack of transparency. We can’t examine Amazon’s algorithms; they are completely opaque and have not been subject to outside scrutiny. Amazon claims, perhaps correctly, that secrecy allows it to stay competitive. But can the same logic be applied to policing? If no one can examine the algorithms – which is likely to be the case as predictive-policing software will be built by private companies – we won’t know what biases and discriminatory practices are built into them. And algorithms increasingly dominate many other parts of our legal system; for example, they are also used to predict how likely a certain criminal, once on parole or probation, is to kill or be killed. Developed by a University of Pennsylvania professor, this algorithm has been tested in Baltimore, Philadelphia and Washington DC. Such probabilistic information can then influence sentencing recommendations and bail amounts, so it’s hardly trivial.
    Los Angeles police arrest a man. The force is using predictive software to direct its patrols. Photograph: Robert Nickelsberg/Getty Images

    But how do we know that the algorithms used for prediction do not reflect the biases of their authors? For example, crime tends to happen in poor and racially diverse areas. Might algorithms – with their presumed objectivity – sanction even greater racial profiling? In most democratic regimes today, police need probable cause – some evidence and not just guesswork – to stop people in the street and search them. But armed with such software, can the police simply say that the algorithms told them to do it? And if so, how will the algorithms testify in court? Techno-utopians will probably overlook such questions and focus on the abstract benefits that algorithmic policing has to offer; techno-sceptics, who start with some basic knowledge of the problems, constraints and biases that already pervade modern policing, will likely be more critical.

    Legal scholar Andrew Guthrie Ferguson has studied predictive policing in detail. Ferguson cautions against putting too much faith in the algorithms and succumbing to information reductionism. “Predictive algorithms are not magic boxes that divine future crime, but instead probability models of future events based on current environmental vulnerabilities,” he notes.

    But why do they work? Ferguson points out that there will be future crime not because there was past crime but because “the environmental vulnerability that encouraged the first crime is still unaddressed”. When the police, having read their gloomy forecast about yet another planned car theft, see an individual carrying a screwdriver in one of the predicted zones, this might provide reasonable suspicion for a stop. But, as Ferguson notes, if the police arrested the gang responsible for prior crimes the day before, but the model does not yet reflect this information, then prediction should be irrelevant, and the police will need some other reasonable ground for stopping the individual. If they do make the stop, then they shouldn’t be able to say in court, “The model told us to.” This, however, may not be obvious to the person they have stopped, who has no familiarity with the software and its algorithms.

    Then there’s the problem of under-reported crimes. While most homicides are reported, many rapes and home break-ins are not. Even in the absence of such reports, local police still develop ways of knowing when something odd is happening in their neighbourhoods. Predictive policing, on the other hand, might replace such intuitive knowledge with a naive belief in the comprehensive power of statistics. If only data about reported crimes are used to predict future crimes and guide police work, some types of crime might be left unstudied – and thus unpursued.

    What to do about the algorithms then? It is a rare thing to say these days but there is much to learn from the financial sector in this regard. For example, after a couple of disasters caused by algorithmic trading in August 2012, financial authorities in Hong Kong and Australia drafted proposals to establish regular independent audits of the design, development and modification of the computer systems used for algorithmic trading. Thus, just as financial auditors could attest to a company’s balance sheet, algorithmic auditors could verify if its algorithms are in order.

    As algorithms are further incorporated into our daily lives – from Google’s Autocomplete to PredPol – it seems prudent to subject them to regular investigations by qualified and ideally public-spirited third parties. One advantage of the auditing solution is that it won’t require the audited companies publicly to disclose their trade secrets, which has been the principal objection – voiced, of course, by software companies – to increasing the transparency of their algorithms.

    The police are also finding powerful allies in Silicon Valley. Companies such as Facebook have begun using algorithms and historical data to predict which of their users might commit crimes using their services. Here is how it works: Facebook’s own predictive systems can flag certain users as suspicious by studying certain behavioural cues: the user only writes messages to others under 18; most of the user’s contacts are female; the user is typing keywords like “sex” or “date.” Staffers can then examine each case and report users to the police as necessary. Facebook’s concern with its own brand here is straightforward: no one should think that the platform is harbouring criminals.
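
    The behavioural cues Reuters described amount to a rule-based flagger. A minimal sketch, with all field names and thresholds invented for illustration (Facebook's actual system is not public):

```python
# Hypothetical rule-based account flagger built from the cues the
# article lists; every name and threshold here is an assumption.
SUSPICIOUS_KEYWORDS = {"sex", "date"}


def flag_account(user):
    """Return the list of behavioural cues a human reviewer would see."""
    cues = []
    ages = user.get("contact_ages", [])
    if ages and max(ages) < 18:
        cues.append("messages only under-18 contacts")
    genders = user.get("contact_genders", [])
    if genders and genders.count("female") / len(genders) > 0.9:
        cues.append("contacts overwhelmingly female")
    words = set(" ".join(user.get("messages", [])).lower().split())
    hits = sorted(SUSPICIOUS_KEYWORDS & words)
    if hits:
        cues.append("keywords: " + ", ".join(hits))
    return cues


suspicious = {
    "contact_ages": [13, 14, 15],
    "contact_genders": ["female"] * 10,
    "messages": ["want to meet for a date"],
}
ordinary = {
    "contact_ages": [25, 30],
    "contact_genders": ["male", "female"],
    "messages": ["see you at lunch"],
}
```

Even this toy shows why a low false-positive rate matters: each rule alone would flag vast numbers of innocent accounts, so only the conjunction of cues is escalated to a human.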

    In 2011 Facebook began using PhotoDNA, a Microsoft service that allows it to scan every uploaded picture and compare it with child-porn images from the FBI’s National Crime Information Centre. Since then it has expanded its analysis beyond pictures. In mid-2012 Reuters reported on how Facebook, armed with its predictive algorithms, flagged a middle-aged man chatting about sex with a 13-year-old girl and arranging to meet her the day after. The police contacted the teen, took over her computer, and caught the man.
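
    PhotoDNA itself computes a proprietary perceptual hash that survives resizing and recompression. As a stand-in, the sketch below shows only the matching step, using an ordinary cryptographic hash, which catches byte-identical files only; that substitution is an assumption of this example, not how PhotoDNA works.

```python
# Illustrative image-matching pipeline: fingerprint each upload and
# check it against a database of fingerprints of known illegal images.
# SHA-256 stands in for PhotoDNA's proprietary perceptual hash.
import hashlib


def fingerprint(image_bytes):
    """Hex digest used as the image's fingerprint."""
    return hashlib.sha256(image_bytes).hexdigest()


def scan_upload(image_bytes, known_bad_hashes):
    """True if the upload matches the database and should be escalated."""
    return fingerprint(image_bytes) in known_bad_hashes


# Placeholder bytes standing in for previously identified material.
blocklist = {fingerprint(b"previously-identified-image")}
```

The design keeps reviewers away from the images themselves: only fingerprints are compared, and humans are involved solely when a match fires.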

    Facebook is at the cutting edge of algorithmic surveillance here: just like police departments that draw on earlier crime statistics, Facebook draws on archives of real chats that preceded real sex assaults. Curiously, Facebook justifies its use of algorithms by claiming that they tend to be less intrusive than humans. “We’ve never wanted to set up an environment where we have employees looking at private communications, so it’s really important that we use technology that has a very low false-positive rate,” Facebook’s chief of security told Reuters.

    It’s difficult to question the application of such methods to catching sexual predators who prey on children (not to mention that Facebook may have little choice here, as current US child-protection laws require online platforms used by teens to be vigilant about predators). But should Facebook be allowed to predict any other crimes? After all, it can easily engage in many other kinds of similar police work: detecting potential drug dealers, identifying potential copyright violators (Facebook already prevents its users from sharing links to many file-sharing sites), and, especially in the wake of the 2011 riots in the UK, predicting the next generation of troublemakers. And as such data becomes available, the temptation to use it becomes almost irresistible.

    That temptation was on full display following the rampage in a Colorado movie theatre in June 2012, when a lone gunman went on a killing spree, murdering 12 people. A headline that appeared in the Wall Street Journal soon after the shooting says it all: “Can Data Mining Stop the Killing?” It won’t take long for this question to be answered in the affirmative.

    In many respects, internet companies are in a much better position to predict crime than police. Where the latter need a warrant to access someone’s private data, the likes of Facebook can look up their users’ data whenever they want. From the perspective of police, it might actually be advantageous to have Facebook do all this dirty work, because Facebook’s own investigations don’t have to go through the court system.

    While Facebook probably feels too financially secure to turn this into a business – it would rather play up its role as a good citizen – smaller companies might not resist the temptation to make a quick buck. In 2011 TomTom, a Dutch satellite-navigation company that has now licensed some of its almighty technology to Apple, found itself in the middle of a privacy scandal when it emerged that it had been selling GPS driving data collected from customers to the police. Privacy advocate Chris Soghoian has likewise documented the easy-to-use “pay-and-wiretap” interfaces that various internet and mobile companies have established for law enforcement agencies.

    Publicly available information is up for grabs too. Thus, police are already studying social-networking sites for signs of unrest, often with the help of private companies. The title of a recent brochure from Accenture urges law enforcement agencies to “tap the power of social media to drive better policing outcomes”. Plenty of companies are eager to help. ECM Universe, a start-up from Virginia, US, touts its system, called Rapid Content Analysis for Law Enforcement, which is described as “a social media surveillance solution providing real-time monitoring of Twitter, Facebook, Google groups, and many other communities where users express themselves freely”.

    “The solution,” notes the ECM brochure, “employs text analytics to correlate threatening language to surveillance subjects, and alert investigators of warning signs.” What kind of warning signs? A recent article in the Washington Post notes that ECM Universe helped authorities in Fort Lupton, Colorado, identify a man who was tweeting such menacing things as “kill people” and “burn [expletive] school”. This seems straightforward enough but what if it was just “harm people” or “police suck”?

    As companies like ECM Universe accumulate extensive archives of tweets and Facebook updates sent by actual criminals, they will also be able to predict the kinds of non-threatening verbal cues that tend to precede criminal acts. Thus, even tweeting that you don’t like your yoghurt might bring police to your door, especially if someone who tweeted the same thing three years before ended up shooting someone in the face later in the day.

    However, unlike Facebook, neither police nor outside companies see the whole picture of what users do on social media platforms: private communications and “silent” actions – clicking links and opening pages – are invisible to them. But Facebook, Twitter, Google and similar companies surely know all of this – so their predictive power is much greater than the police’s. They can even rank users based on how likely they are to commit certain acts.

    An apt illustration of how such a system can be abused comes from The Silicon Jungle, ostensibly a work of fiction written by a Google data-mining engineer and published by Princeton University Press – not usually a fiction publisher – in 2010. The novel is set in the data-mining operation of Ubatoo – a search engine that bears a striking resemblance to Google – where a summer intern develops Terrorist-o-Meter, a sort of universal score of terrorism aptitude that the company could assign to all its users. Those unhappy with their scores would, of course, get a chance to correct them – by submitting even more details about themselves. This might seem like a crazy idea but – in perhaps another allusion to Google – Ubatoo’s corporate culture is so obsessed with innovation that its interns are allowed to roam free, so the project goes ahead.

    To build Terrorist-o-Meter, the intern takes a list of “interesting” books that indicate a potential interest in subversive activities and looks up the names of the customers who have bought them from one of Ubatoo’s online shops. Then he finds the websites that those customers frequent and uses the URLs to find even more people – and so on until he hits the magic number of 5,000. The intern soon finds himself pursued by both an al-Qaida-like terrorist group that wants those 5,000 names to boost its recruitment campaign, as well as various defence and intelligence agencies that can’t wait to preemptively ship those 5,000 people to Guantánamo.
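
    The fictional intern's procedure reads like a textbook snowball expansion over a people-and-websites graph. A minimal sketch, with all data invented and a tiny target in place of the novel's 5,000:

```python
# Snowball expansion sketch of the Terrorist-o-Meter list-building step:
# start from seed buyers of "interesting" books, then repeatedly add
# anyone who shares a website with someone already on the list.
def expand(seed_people, visits, target):
    """visits: person -> set of URLs they frequent. Grow the set until
    it reaches `target` names or stops growing, then return it."""
    selected = set(seed_people)
    growing = True
    while growing and len(selected) < target:
        growing = False
        urls = set().union(*(visits[p] for p in selected))
        for person, sites in visits.items():
            if person not in selected and sites & urls:
                selected.add(person)
                growing = True
                if len(selected) >= target:
                    break
    return selected


# Toy data: "ann" bought one of the flagged books; the others merely
# overlap with her browsing. "dan" shares nothing and is never added.
visits = {
    "ann": {"site1"},
    "bob": {"site1", "site2"},
    "cat": {"site2"},
    "dan": {"site3"},
}
suspects = expand({"ann"}, visits, target=5000)
```

The sketch makes the novel's point concrete: guilt by association compounds with every hop, and the stopping rule is an arbitrary quota, not evidence.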

    Evgeny Morozov
    The Observer, Saturday 9 March 2013 19.20 GMT

    Find this story at 9 March 2013

    © 2013 Guardian News and Media Limited or its affiliated companies. All rights reserved.

    Police software mines social media

    Police scan your Facebook comments. Photo / File

    Police have developed a specialist software tool which mines social media for information.

    The Signal tool was developed for high-profile public events and emergencies and works by scanning public-facing material on social media networks such as Facebook and Twitter.

    Police director of intelligence Mark Evans said it was “not typically” used as an evidence gathering or investigative tool although it could be.

    Social media use by law enforcement around the world has grown, with the International Association of Chiefs of Police finding that 77 per cent of agencies used it, most commonly to investigate crime. The survey of 600 agencies across the United States found it had helped solve crimes.

    Mr Evans said the tool was developed as part of preparations for the Rugby World Cup because police “wanted the ability to scan social media comments in and around the stadiums in real time”.

    Since then, it had been used for royal visits, Waitangi Day and during the Auckland cyclone. Mr Evans said Signal was not used to crawl random postings. Instead, police would set a geographical area and put in key words.

    As an example, he said keywords for a large sporting event could include “protest”, “traffic”, “accident” or “delays”.
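
    The two filters Evans describes, a geographical area plus a keyword list, amount to a simple predicate over public posts. A sketch with invented field names and coordinates, not the actual Signal tool:

```python
# Hypothetical geofence-plus-keyword filter over public social media
# posts, modelled on the description of the Signal tool.
KEYWORDS = {"protest", "traffic", "accident", "delays"}


def matches(post, bbox, keywords=KEYWORDS):
    """post: dict with 'lat', 'lon', 'text'.
    bbox: (min_lat, min_lon, max_lat, max_lon) drawn around the venue."""
    min_lat, min_lon, max_lat, max_lon = bbox
    in_area = (min_lat <= post["lat"] <= max_lat
               and min_lon <= post["lon"] <= max_lon)
    words = set(post["text"].lower().split())
    return in_area and bool(keywords & words)


# Rough illustrative bounding box around a stadium.
stadium = (-36.88, 174.77, -36.86, 174.79)
inside = {"lat": -36.87, "lon": 174.78,
          "text": "stuck in traffic outside the ground"}
outside = {"lat": -41.29, "lon": 174.78,
           "text": "traffic is terrible here too"}
```

Both conditions must hold, which is what keeps the tool from crawling random postings: a matching keyword from the wrong city, like the second post, is ignored.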

    He said the strength of Signal was its ability to help police “identify and analyse social media feeds relevant to crime and public safety” at a specific time and place.

    In doing so, Mr Evans said police were able to judge the impact of an event which had happened or stop a problem escalating. It also helped target people and resources where they were needed, he said.

    During the Rugby World Cup, it allowed police to detect a boy racer convoy heading from Auckland to Hamilton.

    The drivers “felt they would be able to get away with dangerous behaviour on the roads because they believed police resources would be busy elsewhere”, he said.

    Signal was developed as part of a $60,000 emergency management tool.

    Global police use of social media

    53 per cent – Created a fake profile or undercover identity
    48 per cent – Posted surveillance video or images
    86 per cent – Viewed profiles of suspects
    49 per cent – Viewed profiles of victims

    Source: IACP Social Media Survey 2012

    By David Fisher @DFisherJourno
    5:30 AM Saturday Feb 23, 2013

    Find this story at 23 February 2013

    © Copyright 2013, APN Holdings NZ Limited

    Spam from the State

    He is regarded as the most evil German on the internet: Martin Münch supplies surveillance software to police and intelligence services. Dictators also use the programs to harass their citizens.

    In the Disney film “Mulan”, everything is simple. The heroine fights alongside a host of men in the Chinese military against the Huns. The film draws Mulan’s enemies as shadowy, faceless creatures. The enemy cavalry darkens the horizon. Good versus evil: a classic.

    Martin Münch lives in a Disney film. He knows who the bad guys are. He knows that he is one of the good guys. There is just one problem: nobody else knows it. To them, Münch stands on the wrong side of the Arab Spring, on the side of the oppressors. Human rights campaigners accuse him of supplying surveillance software to dictatorships, whether willingly or recklessly.

    Münch, 31, develops spying software for computers and mobile phones. It infects the digital memory; it snoops through the virtual private sphere. Thanks to it, police and intelligence services can see which symptoms of illness the surveilled person googles on the web. They hear what he discusses with his mother over the internet telephony program Skype. They read the shopping list on his smartphone. The trojan that can do all this is called Finfisher. This kind of software is called a trojan because the spying functions are smuggled in inside a harmless shell.

    Martin Münch’s company Gamma develops the Finfisher trojan. (Photo: Robert Haas)

    Recently the Bundeskriminalamt, Germany’s federal criminal police office, has also been testing whether Finfisher is suitable as the federal trojan. Münch is proud of his product. For the first time, he has now shown German journalists, from NDR and the Süddeutsche Zeitung, how Finfisher works. Until now the media were not allowed into the development offices in Obersendling in Munich.

    The company name is on the glass doors: Gamma Group. A dozen employees sit in front of screens, the programmers in front of several at once. Behind the office chair of Münch, the boss, hangs an aluminium plate with the company logo. He shares his desk with the colleague who staffs the IT emergency line, so the phone opposite him rings whenever law enforcement gets stuck somewhere in the world. He is thus very close to the investigators, linguistically too. “When we arrest paedophiles, we have a problem: they lock their computers automatically,” says Münch, as if he rode along on the operations, and briskly presents the solution: a Gamma USB stick into the PC, and the data are secured in a way that will stand up in court.

    Münch is good at explaining such technical toys. Perhaps because he taught himself all of it. He has no formal training; he did not study computer science, just three semesters of jazz piano and guitar. He toured Germany with a band and appeared as the bassist of a casting girl band on “Popstars”. When he stands on a stage today, it is at security conferences, showing how to infect computers. For the investigators, Münch is a bit like Mushu, the little dragon from “Mulan”, the 1998 Disney film: the cool helper who stands by Mulan during army training and in battle. Münch has a company through which he holds 15 per cent of the shares of Gamma International GmbH. He named it Mushun, after the dragon from the film, just with an extra “n” at the end, he says. Then he laughs, embarrassed. Yet he is not only a co-owner but also managing director at Gamma.

    Münch does not yet have much experience with the media. The Süddeutsche Zeitung and the British Guardian have documents showing that the Gamma group owns a company in the tax haven of the British Virgin Islands. Confronted with this a few weeks ago, Münch at first vehemently denied that the company even existed. When the Guardian then sent proof, he apologised: he had genuinely thought the subsidiary did not exist, he wrote to London. Now, too, managing director Münch repeatedly answers questions about the business evasively. Figures, corporate partners: he says he does not know them. “I am a small technician,” says Münch. Yet the strategic decisions in the company are nonetheless made by him.

    This is how the Gamma brochure advertises the trojan for mobile phones, called Finspy Mobile.

    Gamma’s bestseller from the Finfisher family is called Finspy. Münch bends over his Apple laptop and shows what the program can do. He plugs the internet cable into the computer and types “mjm” into the username field, for Martin Johannes Münch. First, the user selects the operating system he wants to attack: an Apple iPhone, a phone with Google’s Android operating system, or a PC running Windows or the free system Linux? The investigator can specify how many servers in different countries the trojan zigzags through, until even technically savvy victims can no longer trace who is actually monitoring them. The trojan can be given an expiry date on which it deletes itself. If a judge later approves a longer period of surveillance, the date can be pushed back.

    Then the investigator may choose how nasty the trojan is allowed to get, what it may do: use the microphone as a bug. View and secure stored files whenever they are deleted or changed. Log which keys the user presses on the keyboard. Record the screen. Capture Skype calls. Switch on the computer’s camera and see where the device is located. Turn mobile phones into tracking beacons via the GPS locating function. Finspy presents the monitored devices as a list. Flags show which country the target is in. A double-click, and the investigator is on the computer.

    The trojan is as powerful as if someone were looking over the computer user’s shoulder. That is also how investigators get onto the trail of suspects who secure their hard drives with a password and communicate only in encrypted form: the trojan simply reads the password as it is typed. Yet most of Finspy’s functions are illegal in Germany.

    And Finspy costs. The price starts at around 150,000 euros and can run into seven figures, says Münch. That is because Gamma builds a separate version of the trojan for each customer, one that is supposed to conform to the law of that country. For every computer monitored, investigators must buy a licence from Gamma. Most agencies buy five licences, says Münch, sometimes perhaps twenty. “The targets are individual offenders.” He does not use an “alleged”; in conversation he uses the words “criminals” and “offenders” as though they were synonyms for “suspects” and “target person”.

    Alaa Shehabi is such a target person. Her offence: she criticised the government of her country. The young woman was born in Bahrain, an island state in the Persian Gulf roughly the size of the city of Hamburg. A kingdom, and a police state. The Sunni ruler Hamad bin Isa al-Khalifa reigns over a Shia majority population. When the Arab Spring spilled over into his country two years ago and Shehabi, together with thousands of others, demanded reforms, the king called in the army of Saudi Arabia for help. Photos and videos on the internet show battered bodies, eyes seared by tear gas and torsos riddled with shotgun pellets. They are the images of a protest crushed in blood.

    Police attack with tear gas: demonstrators died during the protests. (Photo: Getty Images)

    The Formula 1 organisers saw no problem in this and last April invited the world to the Grand Prix in Manama, a glittering mega-event in the middle of a battered country. King Khalifa wanted to show how cosmopolitan Bahrain is. The opposition, for its part, tried to tell the truth to at least some of the visiting journalists. Shehabi, who hides her dark hair under a veil, also met reporters. She told them about the police violence, the injured, the dead. She broke a taboo.

    Shehabi was careful. She made sure nobody was watching her and switched off her mobile phone during the interview. Nonetheless, police officers visited her shortly afterwards. They asked what she had told the journalists and warned her never to do such a thing again. The officers let her go, but then the first email arrived. The subject line read “torture report on Nabeel Rajab”; the attachment supposedly contained photos of the tortured Rajab. He is a friend of Shehabi’s, an oppositionist like her. Shehabi tried to open the file. It would not open. Lucky for her: a Gamma trojan was hidden in the attachment. Shehabi’s emails were to be read and her phone calls tapped. The police state of Bahrain had her in its sights, and Martin Münch’s software was helping. Other oppositionists also report ominous emails. Sometimes the messages lured their victims with the claim that the king was ready for dialogue, sometimes with supposed torture photos.

    Even abroad, exiled Bahrainis have received this government spam. Husain Abdulla, for instance, who runs a petrol station in the US state of Alabama and lobbies in Washington for Bahrain’s opposition. The royal house stripped him of his citizenship for it, yet still wanted to monitor him and sent him a trojan. The Bahraini government thus tried to spy on a US citizen on US soil. Gamma makes it possible: “When Finspy Mobile is installed on a mobile phone, it can be remotely monitored no matter where in the world the target is located,” a brochure says.

    The University of Toronto in Canada examined the emails sent to Shehabi and Abdulla. At its research institute, Citizen Lab, Morgan Marquis-Boire, a software engineer at Google, dissected the spying program. He builds a virtual sandbox, places a computer in the middle and lets the trojan loose on the cordoned-off playing field. Marquis-Boire then logs how the program hijacks the PC, copies passwords, records Skype conversations and photographs the screen. The trojan beams the collected data to a server in Bahrain. In the program code Marquis-Boire discovered the string “finspyv2”, the second version of Finspy. “Martin Muench” is in there too. Münch has spelled his name with “ue” for years.
    Citizen Lab found Münch’s name in the trojan’s code. (Photo: Citizen Lab)

    Snooping software for a police state? Gamma’s reaction to the accusations is odd. Münch sent out a press release stating that a demo version for customers had been stolen. There is no clear statement on Bahrain. Münch will not say who Gamma’s customers are. Nor will he say who is not a customer. Everything is top secret. So the company has to live with the fact that Reporters Without Borders and other human rights activists filed an official complaint with the Federal Ministry of Economics this week. They demand stricter controls on where Gamma exports, invoking the – albeit voluntary – guidelines of the Organisation for Economic Co-operation and Development (OECD). If the ministry accepts the complaint, the next step could be for Gamma and the activists to seek an agreement behind closed doors at the ministry.

    Münch repeats at every opportunity that his company complies with Germany’s export laws. That is meant to sound exemplary, but in reality no Finfisher products are shipped from Munich at all. That happens from England. In Andover, not far from Stonehenge, sits the parent company of Gamma International, the Gamma Group. Its founder and, alongside Münch, majority owner is Louthean Nelson; the group employs 85 people.

    The same EU regulation on the export of surveillance technology applies in Britain and Germany, however. Under this law, surveillance technologies are not weapons but goods that can be put to both civilian and military use – “dual use”, in the jargon. Accordingly, the requirements are considerably milder than for tank sales. In the end it comes down to Gamma receiving a certificate from the customer, stamped by the state itself, confirming that Finfisher was really installed at the right address. Gamma files the paper away. How often and how closely the Federal Office for Export Control inspects Gamma is something neither Münch nor the responsible Federal Ministry of Economics will say.

    How many dictatorships are Gamma customers is not known. The Citizen Lab institute in Toronto has found servers with traces of Finfisher in many countries. Brunei, Ethiopia, Turkmenistan, the United Arab Emirates – it reads like the relegation battle in a democracy ranking. But the researchers also found Gamma servers in states such as the Czech Republic and the Netherlands. Not all of these countries need be customers, though. Any intelligence service could route the data from its Finfisher trojan through these states to cover its tracks, Münch explains. Outsiders cannot verify such claims technically.

    Gamma has stood in the unwelcome spotlight since the Arab Spring. Egyptian protesters found in a government office an offer the company had made to their toppled government – a cost estimate for software, hardware and training: 287,137 euros. There was never a delivery, Münch claims.

    For Andy Müller-Maguhn, Gamma is nonetheless a “software arms dealer”. He has set up a website on the subject called buggedplanet.info, where he logs company data, press reports and the people involved. Müller-Maguhn was formerly a spokesman for the Chaos Computer Club. A YouTube video shows him presenting his project in 2011 at the annual conference of the German hacker association. Müller-Maguhn pulls up his page on Münch; it appears on a projection screen, with Münch’s date of birth, home address and photo. In the picture Münch is climbing out of a Cessna, wearing sunglasses and a flight jacket, looking a bit flashy. Müller-Maguhn’s audience laughs.

    His website is also a pillory. “That their private details were discussed in public I consider very fair, if you look at what they have done with other people’s lives,” Müller-Maguhn says on stage. “I think it is one way to make people think about privacy.” Applause and cheers briefly drown out his voice. He shrugs. “They didn’t want to take part in the public discourse. That, perhaps, would have been the alternative.”

    Since his address became known, Münch has been receiving postcards that say only: “I have a right to privacy.” No sender.

    When Münch talks about his critics, he sounds genuinely indignant: “We always have this bad-boy image. It’s not a nice feeling.” All the more so because it is undeserved, he says: “Some people say: ‘I don’t like that, it intrudes into private life.’ But the fact that they don’t like it doesn’t mean we are doing anything illegal.” He himself, for example, finds the TV show Deutschland sucht den Superstar “crap”, but that doesn’t make it illegal.

    Source: SZ, 9 February 2013/bbr

    9 February 2013, 10:46 – Finfisher developer Gamma
    By Bastian Brinkmann, Jasmin Klofta and Frederik Obermaier

    Find this story at 9 February 2013

    Copyright: Süddeutsche Zeitung Digitale Medien GmbH / Süddeutsche Zeitung GmbH

    Software that tracks people on social media created by defence firm

    Exclusive: Raytheon’s Riot program mines social network data like a ‘Google for spies’, drawing ire from civil rights groups

    A multinational security firm has secretly developed software capable of tracking people’s movements and predicting future behaviour by mining data from social networking websites.

    A video obtained by the Guardian reveals how an “extreme-scale analytics” system created by Raytheon, the world’s fifth largest defence contractor, can gather vast amounts of information about people from websites including Facebook, Twitter and Foursquare.

    Raytheon says it has not sold the software – named Riot, or Rapid Information Overlay Technology – to any clients.

    But the Massachusetts-based company has acknowledged the technology was shared with US government and industry as part of a joint research and development effort, in 2010, to help build a national security system capable of analysing “trillions of entities” from cyberspace.

    The power of Riot to harness popular websites for surveillance offers a rare insight into controversial techniques that have attracted interest from intelligence and national security agencies, at the same time prompting civil liberties and online privacy concerns.

    The sophisticated technology demonstrates how the same social networks that helped propel the Arab Spring revolutions can be transformed into a “Google for spies” and tapped as a means of monitoring and control.

    Using Riot it is possible to gain an entire snapshot of a person’s life – their friends, the places they visit charted on a map – in little more than a few clicks of a button.

    In the video obtained by the Guardian, Raytheon’s “principal investigator” Brian Urch explains that photographs users post on social networks sometimes contain latitude and longitude details, automatically embedded by smartphones within “exif header data”.

    Riot pulls out this information, showing not only the photographs posted onto social networks by individuals, but also the location at which the photographs were taken.
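    Neither Raytheon nor the Guardian has published Riot’s code, but the EXIF step described here is simple to reproduce. A minimal sketch (function name and sample coordinates invented for illustration) of converting the degree/minute/second values stored in a photo’s GPS tags into the decimal coordinates needed to plot it on a map:

```python
# Converting the GPS degree/minute/second values found in EXIF data into
# decimal degrees -- the step needed before plotting a photo on a map.
# The function name and the sample coordinates below are illustrative.

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Turn degrees/minutes/seconds plus a hemisphere ref into a signed decimal."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    # Southern and western hemispheres are negative in decimal notation.
    return -value if ref in ("S", "W") else value

# Invented example: a photo geotagged somewhere in Washington, DC
lat = dms_to_decimal(38, 52, 22.8, "N")
lon = dms_to_decimal(77, 0, 28.8, "W")
print(round(lat, 3), round(lon, 3))  # 38.873 -77.008
```

    Image libraries such as Pillow expose these raw values under the EXIF GPSInfo tag; the arithmetic to turn them into map coordinates is the same.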

    “We’re going to track one of our own employees,” Urch says in the video, before bringing up pictures of “Nick,” a Raytheon staff member used as an example target. With information gathered from social networks, Riot quickly reveals Nick frequently visits Washington Nationals Park, where on one occasion he snapped a photograph of himself posing with a blonde-haired woman.

    “We know where Nick’s going, we know what Nick looks like,” Urch explains, “now we want to try to predict where he may be in the future.”

    Riot can display on a spider diagram the associations and relationships between individuals online by looking at who they have communicated with over Twitter. It can also mine data from Facebook and sift GPS location information from Foursquare, a mobile phone app used by more than 25 million people to alert friends of their whereabouts. The Foursquare data can be used to display, in graph form, the top 10 places visited by tracked individuals and the times at which they visited them.
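    The “top 10 places” ranking described above amounts to a frequency count over check-in records. A hypothetical sketch, with invented check-in data standing in for the Foursquare feed:

```python
from collections import Counter

# Sketch of the Foursquare-style aggregation the article describes:
# given (place, hour) check-ins, rank the most visited places and find
# the most common visit time for one of them. Data below is invented.

checkins = [
    ("gym", 6), ("gym", 6), ("office", 9), ("gym", 6),
    ("ballpark", 19), ("office", 9), ("gym", 7),
]

place_counts = Counter(place for place, _ in checkins)
top_places = place_counts.most_common(10)  # the "top 10 places visited"
usual_hour = Counter(h for p, h in checkins if p == "gym").most_common(1)

print(top_places[0])  # ('gym', 4)
print(usual_hour[0])  # (6, 3) -- three of four gym visits were at 6am
```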

    The video shows that Nick, who posts his location regularly on Foursquare, visits a gym frequently at 6am early each week. Urch quips: “So if you ever did want to try to get hold of Nick, or maybe get hold of his laptop, you might want to visit the gym at 6am on a Monday.”

    Mining from public websites for law enforcement is considered legal in most countries. In February last year, for instance, the FBI requested help to develop a social-media mining application for monitoring “bad actors or groups”.

    However, Ginger McCall, an attorney at the Washington-based Electronic Privacy Information Centre, said the Raytheon technology raised concerns about how troves of user data could be covertly collected without oversight or regulation.

    “Social networking sites are often not transparent about what information is shared and how it is shared,” McCall said. “Users may be posting information that they believe will be viewed only by their friends, but instead, it is being viewed by government officials or pulled in by data collection services like the Riot search.”

    Raytheon, which made sales worth an estimated $25bn (£16bn) in 2012, did not want its Riot demonstration video to be revealed on the grounds that it says it shows a “proof of concept” product that has not been sold to any clients.

    Jared Adams, a spokesman for Raytheon’s intelligence and information systems department, said in an email: “Riot is a big data analytics system design we are working on with industry, national labs and commercial partners to help turn massive amounts of data into useable information to help meet our nation’s rapidly changing security needs.”

    Ryan Gallagher
    The Guardian, Sunday 10 February 2013 15.20 GMT

    Find this story at 10 February 2013

    © 2013 Guardian News and Media Limited or its affiliated companies. All rights reserved.

    Raytheon’s “Riot” Social-Network Data Mining Software

    A video touting software created by Raytheon to mine data from social networks has been attracting an increasing amount of attention in the past few days, since it was uncovered by Ryan Gallagher at the Guardian.

    As best as I can tell from the video and Gallagher’s reporting, Raytheon’s “Riot” software gathers up only publicly available information from companies like Facebook, Twitter, and Foursquare. In that respect, it appears to be a conceptually unremarkable, fairly unimaginative piece of work. At the same time, by aspiring to carry out “large-scale analytics” on Americans’ social networking data—and to do so, apparently, on behalf of national security and law enforcement agencies—the project raises a number of red flags.

    In the video, we see a demonstration of how social networking data—such as Foursquare checkins—is used to predict the schedule of a sample subject, “Nick.” The host of the video concludes,

    Six a.m. appears to be the most frequently visited time at the gym. So if you ever did want to try to get ahold of Nick—or maybe get ahold of his laptop—you might want to visit the gym at 6:00 a.m. on Monday.

    (The reference to the laptop is certainly jarring. Remember, this is an application apparently targeted at law enforcement and national security agencies, not at ordinary individuals. Given this, it sounds to me like the video is suggesting that Riot could be used as a way to schedule a black-bag job to plant spyware on someone’s laptop.)

    At the end of the video, there’s also a brief visual showing how Riot can use such data to carry out a link analysis of a subject. In link analysis, people’s communications and other connections to each other are mapped out and analyzed. It first came to the attention of many people in and out of government via an influential 2002 slide presentation by data mining expert Jeff Jonas showing how the 9/11 hijackers might have easily been linked together had the government focused on the two who were already wanted by the authorities. As Jonas later emphasized in the face of attempts to make too much of this:

    Both Nawaf Alhamzi and Khalid Al-Midhar were already known to the US government to be very bad men. They should have never been let into the US, yet they were living in the US and were hiding in plain sight—using their real names…. The whole point of my 9/11 analysis was that the government did not need mounds of data, did not need new technology, and in fact did not need any new laws to unravel this event!

    Nevertheless, link analysis appears to have been wholeheartedly embraced by the national security establishment, especially the NSA, and to be justifying unconstitutionally large amounts of data collection on innocent people.

    We don’t know that Raytheon’s software will ever play any such role—it just appears to aspire to do so. As with any tool, everything depends on how it’s used. But the fact is, we’re living in an age where disparate pieces of information about us are being aggressively mined and aggregated to discover new things about us. When we post something online, it’s all too natural to feel as though our audience is just our friends—even when we know intellectually that it’s really the whole world. Various institutions are gleefully exploiting that gap between our felt and actual audiences (a gap that is all too often worsened by online companies that don’t make it clear enough to their users who the full audience for their information is). Individuals need to be aware of this and take steps to compensate, such as double-checking their privacy settings and being aware of the full ramifications of data that they post.

    At the same time, the government has no business rooting around people’s social network postings—even those that are voluntarily publicly posted—unless it has specific, individualized suspicion that a person is involved in wrongdoing. Among the many problems with government “large-scale analytics” of social network information is the prospect that government agencies will blunderingly use these techniques to tag, target and watchlist people coughed up by programs such as Riot, or to target them for further invasions of privacy based on incorrect inferences. The chilling effects of such activities, while perhaps gradual, would be tremendous.

    Finally, let me just make the same point we’ve made with regards to privacy-invading technologies such as drones and cellphone and GPS tracking: these kinds of tools should be developed transparently. We don’t really know what Riot can do. And while we at the ACLU don’t think the government should be rummaging around individuals’ social network data without good reason, even a person who might disagree with us on that question could agree that it’s a question that should not be decided in secret. The balance between the intrusive potential of new technologies and government power is one that should be decided openly and democratically.

    By Jay Stanley, Senior Policy Analyst, ACLU Speech, Privacy and Technology Project at 2:08pm

    Find this story at 12 February 2013

    © ACLU

    ‘Google for spies’ software mines social networks to track users’ movements and could even predict what you’ll do next

    Raytheon’s Riot software sifts through data from suspects’ online accounts
    Critics say it will be used for monitoring citizens’ online lives
    Similar to Geotime software bought by London’s Met police two years ago

    New software which mines data from social networks to track people’s movements and even predict future behaviour poses a ‘very real threat to personal freedom’, civil rights groups warned today.

    Multinational defence contractor Raytheon has developed the ‘extreme-scale analytics’ software which can sift through vast quantities of data from services like Facebook, Twitter and Google.

    Critics have already dubbed it a ‘Google for spies’ and say it is likely to be used by governments as a means of monitoring and tracking people online to detect signs of dissent.

    ‘Google for spies’: A screengrab of a video demonstrating Raytheon’s Riot software, which mines the personal data from social networking websites to track people’s movements and even predict their future behaviour

    Raytheon claims it has not yet sold the software – known as Rapid Information Overlay Technology, or Riot – to any clients but admitted it had shared the technology with the U.S. government in 2010.

    However, it is similar to another social tracking software known as Geotime, which the U.S. military already uses and which was purchased in recent years for trials by London’s Metropolitan Police.

    Such tools are likely to form the backbone of future surveillance systems which will exploit the information we share online to automatically monitor citizens’ behaviour.

    Val Swain, from the Network for Police Monitoring, told MailOnline that police had already publicly indicated they want to use ‘advanced analytical software’ to keep tabs on social media.

    ‘The HMIC report “rules of engagement” on the policing of the riots included a recommendation for the development of a “data-mining engine” to scan across publicly available social media,’ she said.

    ‘Technologically advanced methods now exist that make this possible.

    ‘This [kind of] software is extremely powerful, able to identify and monitor people who are “of interest to the police”, even if they have committed no criminal activity.

    ‘The software identifies “people, organisations and concepts” and even sentiments, as the software is able to automatically pick up on “emotional states”.

    ‘It was also recommended that this software be used as part of a vast “intelligence hub” to be developed by the new National Crime Agency.’

    There’s nowhere to hide: The software aggregates data from suspects’ social media profiles to build a detailed picture of their movements, their current whereabouts and where they are likely to go next

    A restricted video put together by Raytheon as a ‘proof of concept’ demonstration to potential buyers was obtained by British daily the Guardian and published on its website today.

    It shows an executive for the security firm, Brian Urch, explaining how photos posted on social media from smartphones frequently contain metadata revealing the precise location where they were taken.

    As an example, Mr Urch demonstrates how this information can be used to track a Raytheon worker called ‘Nick’, whose social media profiles reveal he frequently visits Washington Nationals Park.

    Nick is pictured on one occasion posing with a blonde woman, revealing to any agency using Riot what he looks like.

    ‘Now we want to predict where he may be in the future,’ Mr Urch said. He demonstrates how Riot can display a diagram of the relationships between individuals online by looking at their Twitter communications.
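    At its core, the relationship diagram described here is a graph built from communication records. A toy sketch, with invented names and mentions standing in for the Twitter data:

```python
from collections import defaultdict

# A toy version of the "spider diagram" link analysis described above:
# build an undirected graph from who-mentions-whom records and list each
# person's direct associates. Names and edges below are invented.

mentions = [("nick", "alice"), ("nick", "bob"), ("alice", "carol")]

graph = defaultdict(set)
for a, b in mentions:
    graph[a].add(b)
    graph[b].add(a)  # treat any communication as a mutual association

print(sorted(graph["nick"]))   # ['alice', 'bob']
print(sorted(graph["alice"]))  # ['carol', 'nick']
```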

    We know your friends: As an example, the video shows how a Raytheon worker called Nick can be tracked. This is an image he posted onto a social network, which can be analysed to reveal the location it was taken

    The software is also able to mine information from Facebook and track GPS location data from Foursquare, which over 25million people use on their smartphones to share their whereabouts with friends.

    This Foursquare data can be analysed to show the top 10 locations visited by individuals using the service, and also at what times they went there.

    Nick, for example, frequently checks into Foursquare at a particular gym at 6am.

    ‘So if you ever did want to try to get hold of Nick, or maybe get hold of his laptop, you might want to visit the gym at 6am on a Monday,’ says Mr Urch.

    Riot’s features are similar to that of Geotime, which MailOnline revealed two years ago had been bought by the Met Police.

    Geotime aggregates information gathered from social networking sites, GPS devices like the iPhone, mobile phones, financial transactions and IP network logs to build a detailed picture of an individual’s movements.
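    Geotime’s internals are not public, but the aggregation step described here amounts to merging timestamped events from several feeds into a single chronological timeline. A hypothetical sketch with invented records:

```python
# Sketch of the kind of multi-source aggregation attributed to Geotime:
# merging location events from separate feeds into one chronological
# movement timeline. All records below are invented for illustration.

phone = [("2013-02-10T08:00", "cell tower A"), ("2013-02-10T12:30", "cell tower B")]
social = [("2013-02-10T09:15", "gym check-in")]
payments = [("2013-02-10T12:45", "card payment, cafe")]

# ISO 8601 timestamps sort correctly as plain strings, so one sort
# interleaves all three feeds by time.
timeline = sorted(phone + social + payments)
for when, where in timeline:
    print(when, where)
```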

    The Met, Britain’s largest police force, confirmed at the time that it had purchased the software and refused to rule out its use in investigating public order disturbances.

    Open book: This pie chart reveals the top 10 places that Nick has visited, as harvested from his Foursquare account

    How to find Nick: This graphic breaks down the details of the times and dates that Nick has visited the gym

    The effectiveness of both Riot and Geotime would be multiplied by plans by the UK government to install ‘black box’ spy devices on Britain’s internet and mobile infrastructure to track all communications traffic.

    Those plans, part of the Data Communications Bill, have been stalled by opposition from some Liberal Democrats, but an influential committee of MPs last week revealed that British spy agencies were keen for them to go ahead.

    The spy network would rely on a technology known as Deep Packet Inspection to log data from communications ranging from online services like Facebook and Twitter, Skype calls with family members and visits to pornographic websites.

    The government argues that swift access to communications data is critical to the fight against terrorism, paedophilia and other high-level crime, but it has been delayed after the Liberal Democrats dropped support for the bill.

    Already in use: Two years ago London’s Metropolitan Police confirmed it had purchased Geotime, another program with similar online tracking functions to that of Raytheon’s Riot software

    If it were to go ahead, such a spy network would offer a wealth of easily accessible data for software such as Riot and Geotime to work with.
    HOW RIOT COULD BE PART OF THE GOVERNMENT’S SPYING PLANS

    Social media tracking software like Riot and Geotime could have their effectiveness multiplied by plans to install ‘black box’ surveillance devices across the UK’s internet and mobile communications infrastructure.

    At the moment spy agencies rely on communications providers willingly revealing personal information from users’ accounts to investigate suspects’ communications.

    But a report by an influential committee of MPs has revealed such agencies are keen to implement a nationwide surveillance regime that would give them automatic access to the data.

    The network will rely on a technology known as Deep Packet Inspection to log data from communications ranging from online services like Facebook and Twitter, to Skype calls with family members and visits to pornographic websites.

    Authorities say swift access to communications is critical to the fight against terrorism and other high-level crime, but civil liberties campaigners have reacted with outrage, saying that the technology will give the government a greater surveillance capability than has ever been seen before.

    MI5 chief Jonathan Evans told the committee: ‘Access to communications data of one sort or another is very important indeed. It’s part of the backbone of the way in which we would approach investigations.

    ‘I think I would be accurate in saying there are no significant investigations that we undertake across the service that don’t use communications data because of its ability to tell you the who and the when and the where of your target’s activities.’

    A key part of security agencies’ plans is a ‘filter’ which would make the data collected easily searchable – a function that could be carried out by software like Riot.

    Jim Killock, executive director of the Open Rights Group, explained that this would work as a kind of search engine for everyone’s private data, linking it together from the various online and telecoms accounts people use to communicate.

    ‘This would put data from your mobile phone, email, web history and phones together, so the police can tell who your friends are, what your opinions are, where you’ve been and with who,’ he said.

    ‘It could make instant surveillance of everything you do possible at the click of a button.’

    Either program could form the backbone of the government’s planned ‘filter’, a kind of search engine for personal data described by the report from Parliament’s Intelligence and Security Committee published last week.

    Ms Swain revealed that Raytheon is just one company which is developing this kind of software for sale to governments and domestic spy agencies.

    ‘IBM are also marketing analytic software which has this functionality, and there are a number of others,’ she said.

    ‘It is being used by companies who want to identify, understand and influence existing and potential customers, and it is extremely expensive.

    ‘The police will use this, not just to investigate crime, but to identify and stop crime and disorder, even before it happens.

    ‘Some may consider that a good thing – but the level of social control involved poses a very real threat to individual freedom.

    ‘The software will inevitably be used to monitor political dissent and activity, as well as crime and disorder. Surveillance already exercises a ‘chilling effect’ over basic freedoms – this can only make things a great deal worse.’

    Her sentiments were echoed by Nick Pickles, director of privacy and civil liberties campaign group Big Brother Watch.

    He said: ‘Privacy as we know it is being slowly eroded and it’s not just our friends that are looking at what we share.

    ‘A wide range of companies are trying to develop tools that capture data online and analyse it in difference ways, exploiting the growing amount of information we share online and the wider opportunities to track us.

    ‘If the only barrier is the amount of computing power at your disposal, clearly Governments have the potential to use these tools to profile and analyse their populations in ways never before possible.

    ‘This kind of tool joins the dots of our online lives, exploiting data for whatever purpose the user wants.

    ‘The best way to protect yourself is to control the data you share, but Governments around the world need to be clear with their citizens how they are using these kinds of tools and if they are trying to search for criminals before they have committed a crime.’

    By Damien Gayle

    PUBLISHED: 10:19 GMT, 11 February 2013 | UPDATED: 12:13 GMT, 11 February 2013

    Find this story at 11 February 2013

    © Associated Newspapers Ltd

    AIVD: we do not read every email

    The General Intelligence and Security Service (AIVD) does not read every email that is sent, even though this is a persistent myth, the service recently stated. Nevertheless, the law may be amended this year to give the AIVD broader powers to intercept internet traffic.

    At the NCSC Conference in The Hague, Sebastian Reyn of the AIVD spoke about the role intelligence services play on the internet and about the risks threatening the Netherlands – an attempt to give more insight into how the services work and what they do, insofar as the general public is allowed to know.

    “I cannot tell you anything about our sources, our methods or our current intelligence position. That requires no further explanation,” Reyn told the audience. He began by debunking popular myths, such as the myth that the AIVD eavesdrops on all email traffic. “That is not the case.”

    Threats
    “It is important that citizens understand what we do and why what we do matters for their security.” According to Reyn, cybercrime and cyber espionage are, in that light, two of the biggest threats to national security. “There is no doubt that cyber espionage, together with cybercrime, is the biggest threat we face in the cyber domain.”

    Given the scale of the threat, it is important for parties to cooperate. “This threat is too big to tackle alone.” Internet users play a role in this too. According to Reyn, many people still behave insecurely online and are unaware of the risks: software goes unpatched, passwords are rarely changed, and personal information is left lying around all over the web.

    Examples of this appear in the media daily. The problem with cyber espionage is that it is invisible. “It is a fact that foreign intelligence services covertly try to gain access to important information systems.” Many of these attacks are barely detected by existing security systems.

    Email
    “We are not interested in every email or text message sent, or in every cyber incident. Our focus is on threats to national security.” The AIVD regards cyber espionage and cybercrime as serious threats; cyber terrorism, however, is not. Cyber terrorists do not yet pose a major threat to national security, Reyn said. “The capabilities cyber terrorists have are limited at the moment.”

    Terrorists, he said, have not yet been involved in or responsible for major cyber attacks. Nor is Reyn afraid of hacktivists; he compared them to the digital equivalent of demonstrators.

    The biggest threat therefore comes from other states. Reyn noted that in many countries it is the government’s legal task to spy on other countries in order to strengthen its own position in the world. “Every day, thousands of people working for a legion of intelligence services try to gain access to other countries’ information. And you can assume that some of them are interested in the Netherlands.”

    And the Netherlands is an attractive target for these countries economically, technologically and scientifically. According to Reyn, there is still too much confidence in the security of IT systems.

    Espionage
    Cyber espionage is attractive to many countries, Reyn continued. “It is a cheap way to collect a large amount of data in a short time and can serve a wide range of purposes. Moreover, the risk of detection is small and attribution is difficult.” It is almost impossible for attacked countries to prove who the perpetrator is. “Countries deliberately look for flaws in software and systems,” Reyn said.

    According to Reyn, agents are sometimes used to plug USB sticks into systems that are not connected to the internet. More familiar tactics are applied as well. “We often see fake emails with hidden malware.” To make sure the victim actually opens these emails, states use classic espionage tradecraft. “They look for human vulnerabilities.”

    Interception
    At the end of the lecture, Simone Halink of the digital civil rights organisation Bits of Freedom asked about the transparency and openness the AIVD wants to project, while the powers under which the service operates may be expanded further – which would allow the service to intercept every email, from Dutch and foreign internet users alike.

    Dutch security services currently have the power to intercept communications in an “untargeted” manner. Under certain conditions they may switch on the recorder. This concerns the “untargeted reception and recording of non-cable-bound telecommunications”.

    Op dit moment is er een wijzigingsvoorstel voor de Wet op de Inlichtingen- en Veiligheidsdiensten (Wiv) in de maak, waardoor die bevoegdheid wordt uitgebreid naar het aftappen van communicatie via kabels.

    “De wet is in 2002 gemaakt en sindsdien heeft de technologie zich verder ontwikkeld. De meeste communicatie verloopt tegenwoordig via de kabel. Het is belangrijk dat de wet de spelregels beschrijft waar de inlichtingendiensten aan moeten voldoen, maar dat de wet zelf niet afhankelijk van technologie is”, aldus Reyn.

    Friday, 11:38, by Redactie

    Find this story at 01 February 2013

    © 2001-2013 Security.nl – The Security Council

    Random wiretapping in India

    In the spring of 2010 India was gripped for a few weeks by a wiretapping scandal, which then faded into obscurity. That is remarkable given the track record of India's intelligence world. Scandals affecting ordinary Indians, along with corruption, mismanagement, the wrong technology and equipment and, above all, incompetence, seem to dominate at the NTRO, the agency held responsible for the scandal. The NTRO, the National Technical Research Organisation, used IMSI catchers to eavesdrop on politicians, civil servants, business people, celebrities and ordinary Indians, on a large scale and over long periods.

    Find this story at 20 April 2011


    Israeli security ‘read’ tourists’ private emails

    How would you feel if when you arrived at your holiday destination, security staff demanded to read your personal emails and look at your Facebook account?

    Israel’s attorney general has been asked to look into claims that security officials have been doing just that – threatening to refuse entry to the country unless such private information is divulged by some tourists. Keith Wallace reports.

    Find this story at 31 July 2012


    Watch Fast Track on the BBC World News channel on Saturdays at 04:30, 13:30 and 19:30 GMT or Sundays at 06:30 GMT.

    Police demand the right to snoop on everyone’s emails: Scotland Yard chief is accused of playing politics

    Met Police Commissioner slammed for his public support of the Government’s Communications Data Bill
    Tory MP Dominic Raab accuses him of being ‘deeply unprofessional’ and jeopardising principle of ‘innocent until proven guilty’
    Bill would force communications companies to store data on every website visit, email, text message and social network use for 12 months

    Britain’s most senior police officer was accused of playing politics yesterday after he gave his full backing to Government plans to monitor the public’s every internet click.

    Metropolitan Police Commissioner Bernard Hogan-Howe endorsed a draft law that critics claim amounts to a snoopers’ charter, saying that in some cases it could be a matter of ‘life or death’.

    His actions were branded ‘deeply unprofessional’ and prompted calls for official censure.

    Mr Hogan-Howe’s intervention in the Communications Data Bill was compared to that of former Met commissioner Sir Ian Blair, who was accused of lobbying for a Labour plan to allow terrorism suspects to be detained for up to 90 days and also backed controversial ID cards.

    Tory MP Dominic Raab said: ‘Just as it was wrong for Sir Ian Blair to lobby for the flawed ID card scheme, it is deeply unprofessional for Commissioner Hogan-Howe to lobby for Big Brother surveillance.

    ‘It politicises our police and undermines public trust. It’s also shocking that he wants more surveillance powers to “eliminate innocent people from an investigation”.

    ‘In this country, we’re innocent until proven guilty – not the other way round.’

    Former shadow home secretary David Davis, one of the most outspoken critics of the proposed law, said: ‘He will have done his reputation no end of harm by getting involved in this process.’

    He said that after Sir Ian spoke out on 90 days detention he was seen as a Government spokesman and, if not careful, the same would be said of Mr Hogan-Howe.

    He added: ‘The truth of the matter is this is a highly political issue and the police should stay out of it.’

    Mr Hogan-Howe made the comments yesterday in an article for The Times, in which he brought the 2002 Soham murder investigation into his argument, saying police were able to disprove Ian Huntley’s alibi that he did not kill schoolgirls Jessica Chapman and Holly Wells by looking at his phone and text records.

    He also appeared alongside Home Secretary Theresa May at a press conference to promote the draft Communications Data Bill.

    If made law, it will give ministers powers to demand that internet companies store data on every website visit, email, text message and visit to social networking sites for a minimum of 12 months.

    Police and security services would not have access to the content of messages, but would know who was contacted, when and by what method.
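    The distinction the Bill draws can be pictured in miniature. As a hypothetical sketch (the field names below are invented for illustration, not taken from the Bill or any provider's schema), a retained communications-data record would hold the "envelope" of a message but not its body:

    ```python
    from dataclasses import dataclass, asdict

    # Hypothetical communications-data record: who was contacted,
    # when, and by what method. Note what is deliberately absent:
    # no subject line and no message text.
    @dataclass
    class CommsRecord:
        sender: str      # who initiated the contact
        recipient: str   # who was contacted
        timestamp: str   # when (ISO 8601)
        method: str      # email, text message, social network, web visit

    record = CommsRecord(
        sender="alice@example.org",
        recipient="bob@example.org",
        timestamp="2012-06-14T23:14:00Z",
        method="email",
    )

    print(asdict(record))
    ```

    Even without content, such records reveal a person's pattern of contacts, which is why critics regard retaining them for 12 months as intrusive.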

    The Bill is expected to face fierce criticism from Lib Dem and Tory backbenchers when it is scrutinised by Parliament.

    Currently, police can access information that is stored automatically by internet companies, but they say that 25 per cent of the data is not logged, leaving a loophole for determined criminals.

    Mrs May said that without the new powers, offenders would go free. ‘We will see people walking the streets who should be behind bars,’ she said.

    Sitting alongside her, Mr Hogan-Howe claimed the proposals were no more intrusive than current laws.

    But in facing repeated questions over whether he was right to intervene so publicly, the Commissioner accepted there was a risk of the police becoming politicised over the issue.

    ‘You could say there is a risk [of politicisation], but the thing I’m passionate about is making sure criminals can’t get away with crime,’ he said.

    ‘If that’s regarded as political, it’s a sorry state of affairs.’

    Find this story at 14 June 2012

    By Jack Doyle

    PUBLISHED: 23:14 GMT, 14 June 2012 | UPDATED: 23:14 GMT, 14 June 2012

    Published by Associated Newspapers Ltd

    Part of the Daily Mail, The Mail on Sunday & Metro Media Group
    © Associated Newspapers Ltd

    Revealed: Hundreds of words to avoid using online if you don’t want the government spying on you (and they include ‘pork’, ‘cloud’ and ‘Mexico’)

    Department of Homeland Security forced to release list following freedom of information request
    Agency insists it only looks for evidence of genuine threats to the U.S. and not for signs of general dissent

    Revealing: A list of keywords used by government analysts to scour the internet for evidence of threats to the U.S. has been released under the Freedom of Information Act

    The Department of Homeland Security has been forced to release a list of keywords and phrases it uses to monitor social networking sites and online media for signs of terrorist or other threats against the U.S.

    The intriguing list includes obvious choices such as ‘attack’, ‘Al Qaeda’, ‘terrorism’ and ‘dirty bomb’ alongside dozens of seemingly innocent words like ‘pork’, ‘cloud’, ‘team’ and ‘Mexico’.

    Released under a freedom of information request, the information sheds new light on how government analysts are instructed to patrol the internet searching for domestic and external threats.

    The words are included in the department’s 2011 ‘Analyst’s Desktop Binder’ used by workers at its National Operations Center, which instructs them to identify ‘media reports that reflect adversely on DHS and response activities’.

    Department chiefs were forced to release the manual following a House hearing over documents obtained through a Freedom of Information Act lawsuit which revealed how analysts monitor social networks and media organisations for comments that ‘reflect adversely’ on the government.

    However, they insisted the practice was aimed not at policing the internet for disparaging remarks about the government or signs of general dissent, but at maintaining awareness of potential threats.

    As well as terrorism, analysts are instructed to search for evidence of unfolding natural disasters, public health threats and serious crimes such as mall or school shootings, major drug busts and round-ups of illegal immigrants.
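    The civil-liberties objection is easy to see in miniature. A naive keyword filter of the kind described (the word subset and matching logic below are illustrative assumptions, not the DHS implementation) flags entirely innocent text:

    ```python
    import re

    # Illustrative subset of the released watch list. Matching on bare
    # words like these is what critics call "broad, vague and ambiguous".
    WATCHWORDS = {"attack", "dirty bomb", "pork", "cloud", "team", "mexico"}

    def flagged_terms(text: str) -> set:
        """Return every watchword appearing in the text (case-insensitive,
        whole-word match). No context or intent is considered."""
        lowered = text.lower()
        return {w for w in WATCHWORDS
                if re.search(r"\b" + re.escape(w) + r"\b", lowered)}

    # A harmless sentence trips three watchwords at once.
    msg = "Our team is grilling pork in Mexico this weekend."
    print(sorted(flagged_terms(msg)))  # → ['mexico', 'pork', 'team']
    ```

    Because the filter sees words rather than meaning, protected everyday speech is swept up alongside genuine threats, which is exactly the overbreadth the Electronic Privacy Information Center complained of.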

    The list has been posted online by the Electronic Privacy Information Center, a privacy watchdog group which filed a request under the Freedom of Information Act before suing to obtain the release of the documents.

    In a letter to the House Homeland Security Subcommittee on Counter-terrorism and Intelligence, the centre described the choice of words as ‘broad, vague and ambiguous’.


    They point out that it includes ‘vast amounts of First Amendment protected speech that is entirely unrelated to the Department of Homeland Security mission to protect the public against terrorism and disasters.’

    Find this story at

    By Daniel Miller

    PUBLISHED: 09:32 GMT, 26 May 2012 | UPDATED: 17:46 GMT, 26 May 2012

    Part of the Daily Mail, The Mail on Sunday & Metro Media Group
    © Associated Newspapers Ltd
