From Digital Age to Nano Age. WorldWide.


Robotic Automations

Google's call-scanning AI could dial up censorship by default, privacy experts warn | TechCrunch


A feature Google demoed at its I/O confab yesterday, which uses its generative AI technology to scan voice calls in real time for conversational patterns associated with financial scams, has sent a collective shiver down the spines of privacy and security experts, who warn the feature represents the thin end of the wedge. They warn that, […]

© 2024 TechCrunch. All rights reserved. For personal use only.


Software Development in Sri Lanka


EU plan to force messaging apps to scan for CSAM risks millions of false positives, experts warn | TechCrunch


A controversial push by European Union lawmakers to legally require messaging platforms to scan citizens’ private communications for child sexual abuse material (CSAM) could lead to millions of false positives per day, hundreds of security and privacy experts warned in an open letter Thursday.

Concern over the EU proposal has been building since the Commission proposed the CSAM-scanning plan two years ago — with independent experts, lawmakers across the European Parliament and even the bloc’s own Data Protection Supervisor among those sounding the alarm.

The EU proposal would not only require messaging platforms that receive a CSAM detection order to scan for known CSAM; they would also have to use unspecified detection scanning technologies to try to pick up unknown CSAM and identify grooming activity as it’s taking place — leading to accusations of lawmakers indulging in magical-thinking levels of technosolutionism.

Critics argue the proposal asks the technologically impossible and will not achieve the stated aim of protecting children from abuse. Instead, they say, it will wreak havoc on Internet security and web users’ privacy by forcing platforms to deploy blanket surveillance of all their users via risky, unproven technologies, such as client-side scanning.

Experts say there is no technology capable of achieving what the law demands without causing far more harm than good. Yet the EU is ploughing on regardless.

The latest open letter addresses amendments to the draft CSAM-scanning regulation recently proposed by the European Council which the signatories argue fail to address fundamental flaws with the plan.

Signatories to the letter — numbering 270 at the time of writing — include hundreds of academics, including well-known security experts such as professor Bruce Schneier of Harvard Kennedy School and Dr. Matthew D. Green of Johns Hopkins University, along with a handful of researchers working for tech companies such as IBM, Intel and Microsoft.

An earlier open letter (last July), signed by 465 academics, warned the detection technologies the legislation proposal hinges on forcing platforms to adopt are “deeply flawed and vulnerable to attacks”, and would lead to a significant weakening of the vital protections provided by end-to-end encrypted (E2EE) communications.

Little traction for counter-proposals

Last fall, MEPs in the European Parliament united to push back with a substantially revised approach — which would limit scanning to individuals and groups who are already suspected of child sexual abuse; limit it to known and unknown CSAM, removing the requirement to scan for grooming; and remove any risks to E2EE by limiting it to platforms that are not end-to-end-encrypted. But the European Council, the other co-legislative body involved in EU lawmaking, has yet to take a position on the matter, and where it lands will influence the final shape of the law.

The latest amendment on the table was put out in March by the Belgian Council presidency, which is leading discussions on behalf of representatives of EU Member States’ governments. But in the open letter the experts warn this proposal still fails to tackle fundamental flaws baked into the Commission approach, arguing that the revisions still create “unprecedented capabilities for surveillance and control of Internet users” and would “undermine… a secure digital future for our society and can have enormous consequences for democratic processes in Europe and beyond.”

Tweaks up for discussion in the amended Council proposal include a suggestion that detection orders can be more targeted by applying risk categorization and risk mitigation measures; and cybersecurity and encryption can be protected by ensuring platforms are not obliged to create access to decrypted data and by having detection technologies vetted. But the 270 experts suggest this amounts to fiddling around the edges of a security and privacy disaster.

From a “technical standpoint, to be effective, this new proposal will also completely undermine communications and systems security”, they warn. In their analysis, relying on “flawed detection technology” to determine cases of interest, so that more targeted detection orders can be sent, won’t reduce the risk of the law ushering in a dystopian era of “massive surveillance” of web users’ messages.

The letter also tackles a proposal by the Council to limit the risk of false positives by defining a “person of interest” as a user who has already shared CSAM or attempted to groom a child — which it’s envisaged would be done via an automated assessment, such as waiting for 1 hit for known CSAM or 2 for unknown CSAM/grooming before the user is officially detected as a suspect and reported to the EU Centre, which would handle CSAM reports.
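The automated assessment described above is, in effect, a simple threshold check. A minimal sketch of that logic — the function name and structure are illustrative assumptions, not taken from the proposal text:

```python
# Hypothetical sketch of the Council's envisaged automated assessment:
# a user becomes a reportable "person of interest" after 1 hit for
# known CSAM, or 2 hits for unknown CSAM/grooming. All names here are
# illustrative; the proposal does not specify an implementation.

KNOWN_THRESHOLD = 1
UNKNOWN_THRESHOLD = 2

def is_person_of_interest(known_hits: int, unknown_hits: int) -> bool:
    """True once detector hit counts cross the envisaged reporting thresholds."""
    return known_hits >= KNOWN_THRESHOLD or unknown_hits >= UNKNOWN_THRESHOLD

# A single known-CSAM hit triggers a report; a single unknown-CSAM hit does not.
assert is_person_of_interest(known_hits=1, unknown_hits=0)
assert not is_person_of_interest(known_hits=0, unknown_hits=1)
assert is_person_of_interest(known_hits=0, unknown_hits=2)
```

The experts' objection, detailed below, is that any such counting scheme inherits the error rate of the underlying detectors.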

Billions of users, millions of false positives

The experts warn this approach is still likely to lead to vast numbers of false alarms.

“The number of false positives due to detection errors is highly unlikely to be significantly reduced unless the number of repetitions is so large that the detection stops being effective. Given the large amount of messages sent in these platforms (in the order of billions), one can expect a very large amount of false alarms (in the order of millions),” they write, pointing out that the platforms likely to end up slapped with a detection order can have millions or even billions of users, such as Meta-owned WhatsApp.

“Given that there has not been any public information on the performance of the detectors that could be used in practice, let us imagine we would have a detector for CSAM and grooming, as stated in the proposal, with just a 0.1% False Positive rate (i.e., one in a thousand times, it incorrectly classifies non-CSAM as CSAM), which is much lower than any currently known detector.

“Given that WhatsApp users send 140 billion messages per day, even if only 1 in hundred would be a message tested by such detectors, there would be 1.4 million false positives every single day. To get the false positives down to the hundreds, statistically one would have to identify at least 5 repetitions using different, statistically independent images or detectors. And this is only for WhatsApp — if we consider other messaging platforms, including email, the number of necessary repetitions would grow significantly to the point of not effectively reducing the CSAM sharing capabilities.”
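The letter's back-of-the-envelope estimate can be checked directly. A minimal sketch using the figures from the quoted passage; the independence model in the final loop is a simplifying assumption for illustration (real detectors' errors correlate, which is why the letter says at least 5 repetitions would be needed in practice):

```python
# Reproduce the open letter's false-positive arithmetic (integer math).
messages_per_day = 140_000_000_000   # WhatsApp messages per day, per the letter
scanned = messages_per_day // 100    # only 1 in 100 messages tested
fpr_num, fpr_den = 1, 1000           # 0.1% false positive rate

false_positives = scanned * fpr_num // fpr_den
print(false_positives)               # 1400000 false alarms every single day

# Naive model: requiring k hits from statistically independent detectors
# before reporting scales false positives by fpr**k. This is an assumed
# best case; correlated detector errors would reduce the benefit.
for k in range(1, 4):
    fp_k = scanned * fpr_num**k // fpr_den**k
    print(f"{k} repetition(s): {fp_k} false positives/day")
```

Even under the optimistic independence assumption, a single-hit rule yields false positives in the millions per day, matching the letter's headline figure.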

Another Council proposal to limit detection orders to messaging apps deemed “high-risk” is a useless revision, in the signatories’ view, as they argue it’ll likely still “indiscriminately affect a massive number of people”. Here they point out that only standard features, such as image sharing and text chat, are required for the exchange of CSAM — features that are widely supported by many service providers, meaning a high risk categorization will “undoubtedly impact many services.”

They also point out that adoption of E2EE is increasing, which they suggest will increase the likelihood of services that roll it out being categorized as high risk. “This number may further increase with the interoperability requirements introduced by the Digital Markets Act that will result in messages flowing between low-risk and high-risk services. As a result, almost all services could be classified as high risk,” they argue. (NB: Message interoperability is a core plank of the EU’s DMA.)

A backdoor for the backdoor

As for safeguarding encryption, the letter reiterates the message that security and privacy experts have been repeatedly yelling at lawmakers for years now: “Detection in end-to-end encrypted services by definition undermines encryption protection.”

“The new proposal has as one of its goals to ‘protect cyber security and encrypted data, while keeping services using end-to-end encryption within the scope of detection orders’. As we have explained before, this is an oxymoron,” they emphasize. “The protection given by end-to-end encryption implies that no one other than the intended recipient of a communication should be able to learn any information about the content of such communication. Enabling detection capabilities, whether for encrypted data or for data before it is encrypted, violates the very definition of confidentiality provided by end-to-end encryption.”

In recent weeks police chiefs across Europe have penned their own joint statement — raising concerns about the expansion of E2EE and calling for platforms to design their security systems in such a way that they can still identify illegal activity and send reports on message content to law enforcement.

The intervention is widely seen as an attempt to put pressure on lawmakers to pass laws like the CSAM-scanning regulation.

Police chiefs deny they’re calling for encryption to be backdoored, but they haven’t explained exactly which technical solutions they want platforms to adopt to enable the sought-for “lawful access”. Squaring that circle puts a very wonky-shaped ball back in lawmakers’ court.

If the EU continues down the current road — so assuming the Council fails to change course, as MEPs have urged it to — the consequences will be “catastrophic”, the letter’s signatories go on to warn. “It sets a precedent for filtering the Internet, and prevents people from using some of the few tools available to protect their right to a private life in the digital space; it will have a chilling effect, in particular to teenagers who heavily rely on online services for their interactions. It will change how digital services are used around the world and is likely to negatively affect democracies across the globe.”

An EU source close to the Council was unable to provide insight on current discussions between Member States, but confirmed the proposal for a regulation to combat child sexual abuse will be discussed at a working party meeting on May 8.




Watch: New Atlas robot stuns experts in first reveal from Boston Dynamics


This week Boston Dynamics retired its well-known Atlas robot that was powered by hydraulics. Then today it unveiled its new Atlas robot, which is powered by electricity.

The change might not seem like much, but TechCrunch’s Brian Heater told the TechCrunch Minute that the now-deprecated hydraulics system was out of date. It’s not hard to spot why Boston Dynamics, owned by Hyundai, wanted to go electric. Its new Atlas robot is leaner and appears to have improved range of motion. Size and the ability to contort and maneuver are not cosmetic elements of a humanoid robot — they can unlock new use cases and possible work environments.

The new Atlas is not incredibly well-defined today, which is not a massive surprise given that it’s still a work in progress. Still, we do know that it will first head to Hyundai factories before hitting the market more generally down the road.

Happily for those of us who want a domestic robot to handle household chores and hold our hands whilst we cry, there are other startups working on the humanoid robot project. Figure, Agility, Tesla: there are too many companies vying for the same prize to name in this short post. That has me incredibly excited — more people working on the problem means quicker progress, and hopefully faster completion of a general-purpose humanoid robot that can learn.

On that last bit, it’s worth keeping in mind that AI is set to play a large role in how robots go from being great at set, repetitive tasks to being able to learn, and do a great deal more without direct programming. While it will take time for LLMs’ ability to ingest language, write code, and the like to connect to robots under development today, you can spot the future if you squint far enough into the distance. To which all I can say is, faster, please!



