
EU plan to force messaging apps to scan for CSAM risks millions of false positives, experts warn | TechCrunch


A controversial push by European Union lawmakers to legally require messaging platforms to scan citizens’ private communications for child sexual abuse material (CSAM) could lead to millions of false positives per day, hundreds of security and privacy experts warned in an open letter Thursday.

Concern over the EU proposal has been building since the Commission proposed the CSAM-scanning plan two years ago — with independent experts, lawmakers across the European Parliament and even the bloc’s own Data Protection Supervisor among those sounding the alarm.

The EU proposal would not only require messaging platforms that receive a CSAM detection order to scan for known CSAM; they would also have to use unspecified detection technologies to try to pick up unknown CSAM and identify grooming activity as it’s taking place — leading to accusations that lawmakers are indulging in magical-thinking levels of technosolutionism.

Critics argue the proposal asks the technologically impossible and will not achieve the stated aim of protecting children from abuse. Instead, they say, it will wreak havoc on Internet security and web users’ privacy by forcing platforms to deploy blanket surveillance of all their users via risky, unproven technologies, such as client-side scanning.

Experts say there is no technology capable of achieving what the law demands without causing far more harm than good. Yet the EU is ploughing on regardless.

The latest open letter addresses amendments to the draft CSAM-scanning regulation recently proposed by the European Council, which the signatories argue fail to address fundamental flaws in the plan.

Signatories to the letter — numbering 270 at the time of writing — include hundreds of academics, including well-known security experts such as professor Bruce Schneier of Harvard Kennedy School and Dr. Matthew D. Green of Johns Hopkins University, along with a handful of researchers working for tech companies such as IBM, Intel and Microsoft.

An earlier open letter (last July), signed by 465 academics, warned that the detection technologies the proposal hinges on forcing platforms to adopt are “deeply flawed and vulnerable to attacks”, and would lead to a significant weakening of the vital protections provided by end-to-end encrypted (E2EE) communications.

Little traction for counter-proposals

Last fall, MEPs in the European Parliament united to push back with a substantially revised approach — which would limit scanning to individuals and groups who are already suspected of child sexual abuse; limit it to known and unknown CSAM, removing the requirement to scan for grooming; and remove any risks to E2EE by limiting it to platforms that are not end-to-end-encrypted. But the European Council, the other co-legislative body involved in EU lawmaking, has yet to take a position on the matter, and where it lands will influence the final shape of the law.

The latest amendment on the table was put out in March by the Belgian Council presidency, which is leading discussions on behalf of representatives of EU Member States’ governments. But in the open letter the experts warn this proposal still fails to tackle fundamental flaws baked into the Commission’s approach, arguing that the revisions still create “unprecedented capabilities for surveillance and control of Internet users” and would “undermine… a secure digital future for our society and can have enormous consequences for democratic processes in Europe and beyond.”

Tweaks up for discussion in the amended Council proposal include a suggestion that detection orders can be more targeted by applying risk categorization and risk mitigation measures; and cybersecurity and encryption can be protected by ensuring platforms are not obliged to create access to decrypted data and by having detection technologies vetted. But the 270 experts suggest this amounts to fiddling around the edges of a security and privacy disaster.

From a “technical standpoint, to be effective, this new proposal will also completely undermine communications and systems security”, they warn. And relying on “flawed detection technology” to determine cases of interest, so that more targeted detection orders can be sent, won’t reduce the risk of the law ushering in a dystopian era of “massive surveillance” of web users’ messages, in their analysis.

The letter also tackles a proposal by the Council to limit the risk of false positives by defining a “person of interest” as a user who has already shared CSAM or attempted to groom a child — which it’s envisaged would be done via an automated assessment, such as waiting for 1 hit for known CSAM, or 2 for unknown CSAM/grooming, before the user is officially detected as a suspect and reported to the EU Centre, which would handle CSAM reports.
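
As a rough sketch, the automated assessment the letter describes reduces to a pair of hit counters compared against thresholds; the names and structure below are illustrative, not drawn from the proposal text.

```python
# A minimal sketch of the Council's envisaged "person of interest" test as
# the open letter describes it: 1 hit for known CSAM, or 2 hits for unknown
# CSAM/grooming, triggers a report to the EU Centre. Names are illustrative.

KNOWN_CSAM_THRESHOLD = 1    # hits against known-CSAM hash lists
UNKNOWN_CSAM_THRESHOLD = 2  # hits from unknown-CSAM/grooming classifiers

def is_person_of_interest(known_hits: int, unknown_hits: int) -> bool:
    """True once a user crosses either automated detection threshold."""
    return (known_hits >= KNOWN_CSAM_THRESHOLD
            or unknown_hits >= UNKNOWN_CSAM_THRESHOLD)
```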

Billions of users, millions of false positives

The experts warn this approach is still likely to lead to vast numbers of false alarms.

“The number of false positives due to detection errors is highly unlikely to be significantly reduced unless the number of repetitions is so large that the detection stops being effective. Given the large amount of messages sent in these platforms (in the order of billions), one can expect a very large amount of false alarms (in the order of millions),” they write, pointing out that the platforms likely to end up slapped with a detection order can have millions or even billions of users, such as Meta-owned WhatsApp.

“Given that there has not been any public information on the performance of the detectors that could be used in practice, let us imagine we would have a detector for CSAM and grooming, as stated in the proposal, with just a 0.1% False Positive rate (i.e., one in a thousand times, it incorrectly classifies non-CSAM as CSAM), which is much lower than any currently known detector.

“Given that WhatsApp users send 140 billion messages per day, even if only 1 in hundred would be a message tested by such detectors, there would be 1.4 million false positives every single day. To get the false positives down to the hundreds, statistically one would have to identify at least 5 repetitions using different, statistically independent images or detectors. And this is only for WhatsApp — if we consider other messaging platforms, including email, the number of necessary repetitions would grow significantly to the point of not effectively reducing the CSAM sharing capabilities.”
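
The letter’s 1.4 million figure is straightforward to reproduce. Below is a minimal sketch using the numbers the experts themselves assume; the 0.1% false positive rate is their deliberately generous hypothetical, not a measured detector benchmark.

```python
# Reproducing the open letter's back-of-the-envelope estimate. All figures
# come from the letter itself; none are measured detector benchmarks.

DAILY_MESSAGES = 140_000_000_000  # WhatsApp messages sent per day
SCAN_FRACTION = 1 / 100           # suppose only 1 in 100 messages is tested
FALSE_POSITIVE_RATE = 0.001       # 0.1%: one clean message in a thousand flagged

scanned = DAILY_MESSAGES * SCAN_FRACTION
false_positives = scanned * FALSE_POSITIVE_RATE
print(f"{false_positives:,.0f} false positives per day")  # 1,400,000
```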

Another Council proposal to limit detection orders to messaging apps deemed “high-risk” is a useless revision, in the signatories’ view, as they argue it’ll likely still “indiscriminately affect a massive number of people”. Here they point out that only standard features, such as image sharing and text chat, are required for the exchange of CSAM — features that are widely supported by many service providers, meaning a high risk categorization will “undoubtedly impact many services.”

They also point out that adoption of E2EE is increasing, which they suggest will increase the likelihood of services that roll it out being categorized as high risk. “This number may further increase with the interoperability requirements introduced by the Digital Markets Act that will result in messages flowing between low-risk and high-risk services. As a result, almost all services could be classified as high risk,” they argue. (NB: Message interoperability is a core plank of the EU’s DMA.)

A backdoor for the backdoor

As for safeguarding encryption, the letter reiterates the message that security and privacy experts have been repeatedly yelling at lawmakers for years now: “Detection in end-to-end encrypted services by definition undermines encryption protection.”

“The new proposal has as one of its goals to ‘protect cyber security and encrypted data, while keeping services using end-to-end encryption within the scope of detection orders’. As we have explained before, this is an oxymoron,” they emphasize. “The protection given by end-to-end encryption implies that no one other than the intended recipient of a communication should be able to learn any information about the content of such communication. Enabling detection capabilities, whether for encrypted data or for data before it is encrypted, violates the very definition of confidentiality provided by end-to-end encryption.”

In recent weeks police chiefs across Europe have penned their own joint statement — raising concerns about the expansion of E2EE and calling for platforms to design their security systems in such a way that they can still identify illegal activity and send reports on message content to law enforcement.

The intervention is widely seen as an attempt to put pressure on lawmakers to pass laws like the CSAM-scanning regulation.

Police chiefs deny they’re calling for encryption to be backdoored, but they haven’t explained exactly which technical solutions they do want platforms to adopt to enable the sought-for “lawful access”. Squaring that circle puts a very wonky-shaped ball back in lawmakers’ court.

If the EU continues down the current road — so assuming the Council fails to change course, as MEPs have urged it to — the consequences will be “catastrophic”, the letter’s signatories go on to warn. “It sets a precedent for filtering the Internet, and prevents people from using some of the few tools available to protect their right to a private life in the digital space; it will have a chilling effect, in particular to teenagers who heavily rely on online services for their interactions. It will change how digital services are used around the world and is likely to negatively affect democracies across the globe.”

An EU source close to the Council was unable to provide insight on current discussions between Member States, but noted there’s a working party meeting on May 8 where, they confirmed, the proposal for a regulation to combat child sexual abuse will be discussed.



Consumer Financial Protection Bureau fines BloomTech for false claims | TechCrunch


In an order today, the U.S. Consumer Financial Protection Bureau (CFPB) said that BloomTech, the for-profit coding bootcamp previously known as the Lambda School, deceived students about the cost of loans, made false claims about graduates’ hiring rates and engaged in illegal lending masked as “income sharing” agreements with high fees.

The order marks the end of the CFPB’s investigation into BloomTech’s practices — and the start of the agency’s penalties on the organization.

The CFPB is permanently banning BloomTech from consumer lending activities and its CEO, Austen Allred, from student lending for a period of ten years. In addition, the agency is ordering BloomTech and Allred to cease collecting payments on loans for graduates who didn’t have a qualifying job and allow students to withdraw their funds without penalty — as well as eliminate finance charges for “certain agreements.”

“BloomTech and its CEO sought to drive students toward income share loans that were marketed as risk-free, but in fact carried significant finance charges and many of the same risks as other credit products,” CFPB director Rohit Chopra said in a statement. “Today’s action underscores our increased focus on investigating individual executives and, when appropriate, charging them with breaking the law.”

BloomTech and Allred must also pay the CFPB over $164,000 in civil penalties to be deposited in the agency’s victims relief fund, with BloomTech contributing ~$64,000 and Allred forking over the remainder ($100,000).

Allred founded BloomTech in 2017; the company rebranded from the Lambda School in 2022 after cutting half its staff. Based in San Francisco, the vocational organization — owned primarily by Allred — is backed by various VC funds and investors including Gigafund, Tandem Fund, Y Combinator, GV, GGV and Stripe, and at one time was valued at over $150 million.

Critics almost immediately attacked the firm’s then-pioneering business model — the income share agreement, or ISA — as predatory.

For BloomTech’s short-term, typically six-to-nine-month certification — not degree — programs in fields spanning web development, data science and backend engineering, the school originated income-share loans to fund students’ tuition. (According to the CFPB, BloomTech has originated “at least” 11,000 loans to date.) These loans require that recipients who earn more than $50,000 in a related industry pay BloomTech 17% of their pre-tax income each month until reaching the 24-payment or $30,000 total repayment threshold.
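
As an illustration of how those terms compound, here is a minimal sketch under the simplifying assumption of a constant salary; the function is hypothetical, not BloomTech’s actual payment logic.

```python
# A simplified model of the income-share terms described above, assuming a
# constant annual salary. Real agreements pause payments when income falls
# below the threshold; this sketch ignores that wrinkle.

INCOME_THRESHOLD = 50_000  # annual pre-tax income that triggers payments
INCOME_SHARE = 0.17        # share of monthly pre-tax income owed
MAX_PAYMENTS = 24          # cap on the number of monthly payments
MAX_TOTAL = 30_000         # dollar cap on total repayment

def total_repayment(annual_income: float) -> float:
    """Total owed by a graduate earning a constant salary in a related field."""
    if annual_income <= INCOME_THRESHOLD:
        return 0.0
    monthly_payment = (annual_income / 12) * INCOME_SHARE
    return min(MAX_PAYMENTS * monthly_payment, MAX_TOTAL)

print(total_repayment(60_000))   # 20400.0: stops at the 24-payment cap
print(total_repayment(100_000))  # 30000.0: stops at the $30,000 dollar cap
```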

BloomTech didn’t market the loans as such, saying that they didn’t create debt and were “risk free,” and advertised a 71%-86% job placement rate. But the CFPB found these marketing claims and others to be flatly untrue.

BloomTech’s loans in fact carried an annual percentage rate and an average finance charge of around $4,000, neither of which students were made aware of, and a single missed payment triggered a default. The school’s job placement rates were closer to 50% and sank as low as 30%. And, unbeknownst to many students, BloomTech was selling a portion of its loans to investors while depriving recipients of rights they should’ve had under a federal protection known as the Holder Rule.

Prior to the CFPB order, BloomTech, which briefly landed in hot water with California’s oversight board several years ago for operating without approval, had faced other lawsuits claiming the school misrepresented how likely graduates were to get a job and how much they were likely to earn. Last year, leaked documents obtained by Business Insider raised questions about the company inflating its efficacy and hyping up a curriculum that didn’t upskill students at the level they expected.

To comply with the CFPB order, BloomTech must stop collecting payments on loans to graduates who didn’t receive a qualifying job in the past year, and eliminate the finance charge for those who graduated the program more than 18 months ago and obtained a qualifying job making $70,000 or less. The company must also allow current students to withdraw from the program and cancel their loans, or continue in the program with a third-party loan.





A crypto wallet maker's warning about an iMessage bug sounds like a false alarm | TechCrunch


A crypto wallet maker claimed this week that hackers may be targeting people with an iMessage “zero-day” exploit — but all signs point to an exaggerated threat, if not a downright scam.

Trust Wallet’s official X (previously Twitter) account wrote that “we have credible intel regarding a high-risk zero-day exploit targeting iMessage on the Dark Web. This can infiltrate your iPhone without clicking any link. High-value targets are likely. Each use raises detection risk.”

The wallet maker recommended that iPhone users turn off iMessage completely “until Apple patches this,” even though no evidence shows that “this” exists at all.

The tweet went viral, racking up more than 3.6 million views as of publication. Because of the attention the post received, Trust Wallet wrote a follow-up post hours later, doubling down on its decision to go public and saying that it “actively communicates any potential threats and risks to the community.”

Trust Wallet, which is owned by crypto exchange Binance, did not respond to TechCrunch’s request for comment. Apple spokesperson Scott Radcliffe declined to comment when reached Tuesday.

As it turns out, according to Trust Wallet’s CEO Eowyn Chen, the “intel” is an advertisement on a dark web site called CodeBreach Lab, where someone is offering the alleged exploit for $2 million in bitcoin. The advert, titled “iMessage Exploit,” claims the vulnerability is a remote code execution (RCE) exploit that requires no interaction from the target — commonly known as a “zero-click” exploit — and works on the latest version of iOS. Some bugs are called zero-days because the vendor has had no time, or zero days, to fix the vulnerability. In this case, there is no evidence the exploit exists to begin with.

A screenshot of the dark web ad claiming to sell an alleged iMessage exploit. Image Credits: TechCrunch

RCEs are some of the most powerful exploits because they allow hackers to remotely take control of their target devices over the internet. An exploit like an RCE coupled with a zero-click capability is incredibly valuable because those attacks can be conducted invisibly, without the device owner knowing. In fact, a company that acquires and resells zero-days is currently offering between $3 million and $5 million for that kind of zero-click zero-day, which is also a sign of how hard it is to find and develop these types of exploits.

Contact Us

Do you have any information about actual zero-days? Or about spyware providers? From a non-work device, you can contact Lorenzo Franceschi-Bicchierai securely on Signal at +1 917 257 1382, or via Telegram, Keybase and Wire @lorenzofb, or email. You also can contact TechCrunch via SecureDrop.

Given the circumstances of how and where this zero-day is being sold, it’s very likely that it is all just a scam, and that Trust Wallet fell for it, spreading what people in the cybersecurity industry would call FUD, or “fear, uncertainty and doubt.”

Zero-days do exist, and have been used by government hacking units for years. But in reality, you probably don’t need to turn off iMessage unless you are a high-risk user, such as a journalist or a dissident under an oppressive government.

Better advice would be to turn on Lockdown Mode, a special mode that disables certain Apple device features and functionalities with the goal of reducing the avenues hackers can use to attack iPhones and Macs.

According to Apple, there is no evidence anyone has successfully hacked someone’s Apple device while using Lockdown Mode. Several cybersecurity experts like Runa Sandvik and the researchers who work at Citizen Lab, who have investigated dozens of cases of iPhone hacks, recommend using Lockdown Mode.

For its part, CodeBreach Lab appears to be a new website with no track record. When we checked, a search on Google returned only seven results, one of which is a post on a well-known hacking forum asking if anyone had previously heard of CodeBreach Lab.

On its homepage — with typos — CodeBreach Lab claims to offer several types of exploits other than for iMessage, but provides no further evidence.

The owners describe CodeBreach Lab as “the nexus of cyber disruption.” But it would probably be more fitting to call it the nexus of braggadocio and naivety.

TechCrunch could not reach CodeBreach Lab for comment because there is no way to contact the alleged company. When we attempted to buy the alleged exploit — because why not — the website asked for the buyer’s name, email address, and then to send $2 million in bitcoin to a specific wallet address on the public blockchain. When we checked, nobody had sent anything to that address so far.
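
That check is one anyone can repeat, since bitcoin’s ledger is public. Here is a minimal sketch using Blockstream’s public block explorer API (one explorer among several that would work; the address shown is a placeholder, not the one from the ad).

```python
# Query how much bitcoin an address has ever received, via Blockstream's
# public Esplora API. The address below is a placeholder, not the wallet
# advertised by CodeBreach Lab.
import requests

def total_received_btc(address: str) -> float:
    """Total BTC ever sent to `address`, per the public blockchain."""
    resp = requests.get(
        f"https://blockstream.info/api/address/{address}", timeout=10
    )
    resp.raise_for_status()
    funded_satoshis = resp.json()["chain_stats"]["funded_txo_sum"]
    return funded_satoshis / 100_000_000  # satoshis to BTC

# Example (placeholder address):
# print(total_received_btc("bc1qexampleplaceholder"))
```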

In other words, if someone wants this alleged zero-day, they have to send $2 million to a wallet whose owner cannot be identified and who — again — cannot be contacted.

And there is a very good chance that it will remain that way.



