
EU plan to force messaging apps to scan for CSAM risks millions of false positives, experts warn | TechCrunch


A controversial push by European Union lawmakers to legally require messaging platforms to scan citizens’ private communications for child sexual abuse material (CSAM) could lead to millions of false positives per day, hundreds of security and privacy experts warned in an open letter Thursday.

Concern over the EU proposal has been building since the Commission proposed the CSAM-scanning plan two years ago — with independent experts, lawmakers across the European Parliament and even the bloc’s own Data Protection Supervisor among those sounding the alarm.

The EU proposal would not only require messaging platforms that receive a CSAM detection order to scan for known CSAM; they would also have to use unspecified detection technologies to try to pick up unknown CSAM and identify grooming activity as it’s taking place — leading to accusations that lawmakers are indulging in magical-thinking levels of technosolutionism.

Critics argue the proposal asks the technologically impossible and will not achieve the stated aim of protecting children from abuse. Instead, they say, it will wreak havoc on Internet security and web users’ privacy by forcing platforms to deploy blanket surveillance of all their users, using risky, unproven technologies such as client-side scanning.

Experts say there is no technology capable of achieving what the law demands without causing far more harm than good. Yet the EU is ploughing on regardless.

The latest open letter responds to amendments to the draft CSAM-scanning regulation recently proposed by the European Council, arguing that they fail to address fundamental flaws in the plan.

Signatories to the letter — numbering 270 at the time of writing — include hundreds of academics, among them well-known security experts such as professor Bruce Schneier of Harvard Kennedy School and Dr. Matthew D. Green of Johns Hopkins University, along with a handful of researchers working for tech companies such as IBM, Intel and Microsoft.

An earlier open letter (last July), signed by 465 academics, warned that the detection technologies the proposal would force platforms to adopt are “deeply flawed and vulnerable to attacks”, and would lead to a significant weakening of the vital protections provided by end-to-end encrypted (E2EE) communications.

Little traction for counter-proposals

Last fall, MEPs in the European Parliament united to push back with a substantially revised approach — which would limit scanning to individuals and groups who are already suspected of child sexual abuse; limit it to known and unknown CSAM, removing the requirement to scan for grooming; and remove any risks to E2EE by limiting it to platforms that are not end-to-end-encrypted. But the European Council, the other co-legislative body involved in EU lawmaking, has yet to take a position on the matter, and where it lands will influence the final shape of the law.

The latest amendment on the table was put out in March by the Belgian Council presidency, which is leading discussions on behalf of representatives of EU Member States’ governments. But in the open letter the experts warn this proposal still fails to tackle fundamental flaws baked into the Commission approach, arguing that the revisions still create “unprecedented capabilities for surveillance and control of Internet users” and would “undermine… a secure digital future for our society and can have enormous consequences for democratic processes in Europe and beyond.”

Tweaks up for discussion in the amended Council proposal include a suggestion that detection orders could be made more targeted by applying risk categorization and risk mitigation measures, and that cybersecurity and encryption could be protected by ensuring platforms are not obliged to create access to decrypted data and by having detection technologies vetted. But the 270 experts suggest this amounts to fiddling around the edges of a security and privacy disaster.

From a “technical standpoint, to be effective, this new proposal will also completely undermine communications and systems security”, they warn. In their analysis, relying on “flawed detection technology” to determine cases of interest, so that more targeted detection orders can be sent, won’t reduce the risk of the law ushering in a dystopian era of “massive surveillance” of web users’ messages.

The letter also tackles a proposal by the Council to limit the risk of false positives by defining a “person of interest” as a user who has already shared CSAM or attempted to groom a child — which, it’s envisaged, would be done via an automated assessment, such as waiting for one hit for known CSAM or two for unknown CSAM/grooming before the user is officially flagged as a suspect and reported to the EU Centre, which would handle CSAM reports.

Billions of users, millions of false positives

The experts warn this approach is still likely to lead to vast numbers of false alarms.

“The number of false positives due to detection errors is highly unlikely to be significantly reduced unless the number of repetitions is so large that the detection stops being effective. Given the large amount of messages sent in these platforms (in the order of billions), one can expect a very large amount of false alarms (in the order of millions),” they write, pointing out that the platforms likely to end up slapped with a detection order can have millions or even billions of users, such as Meta-owned WhatsApp.

“Given that there has not been any public information on the performance of the detectors that could be used in practice, let us imagine we would have a detector for CSAM and grooming, as stated in the proposal, with just a 0.1% False Positive rate (i.e., one in a thousand times, it incorrectly classifies non-CSAM as CSAM), which is much lower than any currently known detector.

“Given that WhatsApp users send 140 billion messages per day, even if only 1 in hundred would be a message tested by such detectors, there would be 1.4 million false positives every single day. To get the false positives down to the hundreds, statistically one would have to identify at least 5 repetitions using different, statistically independent images or detectors. And this is only for WhatsApp — if we consider other messaging platforms, including email, the number of necessary repetitions would grow significantly to the point of not effectively reducing the CSAM sharing capabilities.”
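To make the scale of those figures concrete, the letter’s back-of-the-envelope arithmetic can be reproduced in a few lines. This is a minimal sketch using the signatories’ own illustrative assumptions (140 billion WhatsApp messages per day, only 1 in 100 messages actually tested, a 0.1% false positive rate); none of these are measured properties of any real detector.

```python
# Reproduces the open letter's illustrative false-positive arithmetic.
daily_messages = 140_000_000_000   # WhatsApp messages sent per day (letter's figure)
scanned_fraction = 0.01            # assume only 1 in 100 messages is tested
false_positive_rate = 0.001        # assumed 0.1% false-positive rate per tested message

tested_per_day = daily_messages * scanned_fraction
false_positives_per_day = tested_per_day * false_positive_rate

print(f"Messages tested per day: {tested_per_day:,.0f}")            # 1,400,000,000
print(f"False positives per day: {false_positives_per_day:,.0f}")   # 1,400,000
```

Even under these generous assumptions, a single platform produces on the order of a million erroneous reports a day, which is why the signatories argue that only multiple independent detections could push the number down to the hundreds — at the cost, they say, of making detection ineffective.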

Another Council proposal to limit detection orders to messaging apps deemed “high-risk” is a useless revision, in the signatories’ view, as they argue it’ll likely still “indiscriminately affect a massive number of people”. Here they point out that only standard features, such as image sharing and text chat, are required for the exchange of CSAM — features that are widely supported by many service providers, meaning a high risk categorization will “undoubtedly impact many services.”

They also point out that adoption of E2EE is increasing, which they suggest will increase the likelihood of services that roll it out being categorized as high risk. “This number may further increase with the interoperability requirements introduced by the Digital Markets Act that will result in messages flowing between low-risk and high-risk services. As a result, almost all services could be classified as high risk,” they argue. (NB: Message interoperability is a core plank of the EU’s DMA.)

A backdoor for the backdoor

As for safeguarding encryption, the letter reiterates the message that security and privacy experts have been repeatedly yelling at lawmakers for years now: “Detection in end-to-end encrypted services by definition undermines encryption protection.”

“The new proposal has as one of its goals to ‘protect cyber security and encrypted data, while keeping services using end-to-end encryption within the scope of detection orders’. As we have explained before, this is an oxymoron,” they emphasize. “The protection given by end-to-end encryption implies that no one other than the intended recipient of a communication should be able to learn any information about the content of such communication. Enabling detection capabilities, whether for encrypted data or for data before it is encrypted, violates the very definition of confidentiality provided by end-to-end encryption.”

In recent weeks police chiefs across Europe have penned their own joint statement — raising concerns about the expansion of E2EE and calling for platforms to design their security systems in such a way that they can still identify illegal activity and send reports on message content to law enforcement.

The intervention is widely seen as an attempt to put pressure on lawmakers to pass laws like the CSAM-scanning regulation.

Police chiefs deny they’re calling for encryption to be backdoored, but they haven’t explained exactly which technical solutions they do want platforms to adopt to enable the sought-for “lawful access”. Squaring that circle puts a very wonky-shaped ball back in lawmakers’ court.

If the EU continues down the current road — so assuming the Council fails to change course, as MEPs have urged it to — the consequences will be “catastrophic”, the letter’s signatories go on to warn. “It sets a precedent for filtering the Internet, and prevents people from using some of the few tools available to protect their right to a private life in the digital space; it will have a chilling effect, in particular to teenagers who heavily rely on online services for their interactions. It will change how digital services are used around the world and is likely to negatively affect democracies across the globe.”

An EU source close to the Council was unable to provide insight on current discussions between Member States but noted that the proposal for a regulation to combat child sexual abuse will be discussed at a working party meeting on May 8.



Anthropic launches new iPhone app, premium plan for businesses | TechCrunch


Anthropic, one of the world’s best-funded generative AI startups with $7.6 billion in the bank, is launching a new paid plan aimed at enterprises, including those in highly regulated industries like healthcare, finance and legal, as well as a new iOS app.

Team, the enterprise plan, gives customers higher-priority access to Anthropic’s Claude 3 family of generative AI models plus additional admin and user management controls.

“Anthropic introduced the Team plan now in response to growing demand from enterprise customers who want to deploy Claude’s advanced AI capabilities across their organizations,” Scott White, product lead at Anthropic, told TechCrunch. “The Team plan is designed for businesses of all sizes and industries that want to give their employees access to Claude’s language understanding and generation capabilities in a controlled and trusted environment.”

The Team plan — which joins Anthropic’s individual premium plan, Pro — delivers “greater usage per user” compared to Pro, enabling users to “significantly increase” the number of chats that they can have with Claude. (We’ve asked Anthropic for figures.) Team customers get a 200,000-token (~150,000-word) context window as well as all the advantages of Pro, like early access to new features.

Image Credits: Anthropic

Context window, or context, refers to input data (e.g. text) that a model considers before generating output (e.g. more text). Models with small context windows tend to forget the content of even very recent conversations, while models with larger contexts avoid this pitfall — and, as an added benefit, better grasp the flow of data they take in.
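As a rough illustration of how a large context window gets used in practice, here is a minimal, hypothetical sketch with Anthropic’s Python SDK: a long document is passed in a single request rather than summarized in chunks. The file name and prompt are invented for the example, and the snippet does not reflect anything specific to the Team plan’s limits.

```python
# Hypothetical sketch: sending a long document to a Claude 3 model in one request.
# Assumes the `anthropic` Python SDK and an ANTHROPIC_API_KEY in the environment.
import anthropic

client = anthropic.Anthropic()

with open("quarterly_report.txt") as f:    # illustrative long document
    report = f.read()

message = client.messages.create(
    model="claude-3-opus-20240229",        # one of the Claude 3 family models
    max_tokens=1024,
    messages=[
        {"role": "user",
         "content": f"Summarize the key findings of this report:\n\n{report}"},
    ],
)
print(message.content[0].text)
```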

Team also brings with it new toggles to control billing and user management. And in the coming weeks, it’ll gain collaboration features including citations to verify AI-generated claims (models including Anthropic’s tend to hallucinate), integrations with data repos like codebases and customer relationship management platforms (e.g. Salesforce) and — perhaps most intriguing to this writer — a canvas to work with team members on AI-generated docs and projects, Anthropic says.

In the nearer term, Team customers will be able to leverage tool use capabilities for Claude 3, which recently entered open beta. This allows users to equip Claude 3 with custom tools to perform a wider range of tasks, like getting a firm’s current stock price or the local weather report, similar to OpenAI’s GPTs.
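For a sense of what tool use looks like in code, here is a minimal sketch based on Anthropic’s public tool-use beta; the weather tool, its schema and the prompt are hypothetical, and details may differ by SDK version.

```python
# Sketch: declaring a custom tool that Claude can ask the application to call.
# Assumes the `anthropic` Python SDK; older releases expose tool use via a beta namespace.
import anthropic

client = anthropic.Anthropic()

tools = [{
    "name": "get_weather",                 # hypothetical tool
    "description": "Get the current weather for a city.",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

response = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=512,
    tools=tools,
    messages=[{"role": "user", "content": "What's the weather in Colombo right now?"}],
)

# When the model decides to call the tool, the response ends with stop_reason
# "tool_use" and includes a tool_use block carrying the arguments; the application
# runs the tool and returns the result in a follow-up message.
print(response.stop_reason)
```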

“By enabling businesses to deeply integrate Claude into their collaborative workflows, the Team plan positions Anthropic to capture significant enterprise market share as more companies move from AI experimentation to full-scale deployment in pursuit of transformative business outcomes,” White said. “In 2023, customers rapidly experimented with AI, and now in 2024, the focus has shifted to identifying and scaling applications that deliver concrete business value.”

Anthropic talks a big game, but it still might take a substantial effort on its part to get businesses on board.

According to a recent Gartner survey, 49% of companies said that it’s difficult to estimate and demonstrate the value of AI projects, making them a tough sell internally. A separate poll from McKinsey found that 66% of executives believe that generative AI is years away from generating substantive business results.

Image Credits: Anthropic

Yet corporate spending on generative AI is forecasted to be enormous. IDC expects that it’ll reach $15.1 billion in 2027, growing nearly eightfold from its total in 2023.

That’s probably why generative AI vendors, most notably OpenAI, are ramping up their enterprise-focused efforts.

OpenAI recently said that it had more than 600,000 users signed up for the enterprise tier of its generative AI platform ChatGPT, ChatGPT Enterprise. And it’s introduced a slew of tools aimed at satisfying corporate compliance and governance requirements, like a new user interface to compare model performance and quality.

Anthropic is competitively pricing its Team plan: $30 per user per month billed monthly, with a minimum of five seats. OpenAI doesn’t publish the price of ChatGPT Enterprise, but users on Reddit report being quoted anywhere from $30 per user per month for 120 users to $60 per user per month for 250 users. 

“Anthropic’s Team plan is competitive and affordable considering the value it offers organizations,” White said. “The per-user model is straightforward, allowing businesses to start small and expand gradually. This structure supports Anthropic’s growth and stability while enabling enterprises to strategically leverage AI.”

It undoubtedly helps that Anthropic’s launching Team from a position of strength.

Amazon in March completed its $4 billion investment in Anthropic (following a $2 billion Google investment), and the company is reportedly on track to generate more than $850 million in annualized revenue by the end of 2024 — a 70% increase from an earlier projection. Anthropic may see Team as its logical next path to expansion. But at least right now it seems Anthropic can afford to let Team grow organically as it attempts to convince holdout businesses its generative AI is better than the rest.

An Anthropic iOS app

Anthropic’s other piece of news Wednesday is that it’s launching an iOS app. Given that the company’s conspicuously been hiring iOS engineers over the past few months, this comes as no great surprise.

The iOS app provides access to Claude 3, including free access as well as upgraded Pro and Team access. It syncs with Anthropic’s client on the web, and it taps Claude 3’s vision capabilities to offer real-time analysis for uploaded and saved images. For example, users can upload a screenshot of charts from a presentation and ask Claude to summarize them.

Image Credits: Anthropic

“By offering the same functionality as the web version, including chat history syncing and photo upload capabilities, the iOS app aims to make Claude a convenient and integrated part of users’ daily lives, both for personal and professional use,” White said. “It complements the web interface and API offerings, providing another avenue for users to engage with the AI assistant. As we continue to develop and refine our technologies, we’ll continue to explore new ways to deliver value to users across various platforms and use cases, including mobile app development and functionality.”



WhatsApp now lets users plan and schedule events in Communities | TechCrunch


WhatsApp is introducing a new way for people to organize events in Communities, the company announced on Wednesday. The feature makes it easier to plan get-togethers and events directly in WhatsApp, whether it’s setting up a PTA meeting or a birthday dinner.

Anyone can create an event that others can respond to, so everyone else in the group can see who is planning to attend. Users can find the event in the group’s information page, and those who are going to the event will get a notification when the event date nears.

The new feature is first becoming available to groups that belong to a Community and will roll out to all groups over the coming months.

Image Credits: WhatsApp

WhatsApp is also adding the ability for users to reply to messages in Announcement Groups, which is where admins in a Community send updates to all community members. Replies are grouped together so you can see what other people have said, and users won’t get notifications for them. The purpose of this feature is to allow admins to hear from their members, while still keeping Announcement Groups organized and free of clutter.

WhatsApp launched Communities back in November 2022 as a way for neighborhoods, school associations, hobbyists and more to keep groups connected by letting admins combine up to 50 groups together. The company says it will continue to roll out new features to Communities and groups over the next few months.



Tesla's new growth plan is centered around mysterious cheaper models | TechCrunch


Tesla’s been undergoing some major changes, and now we have a sense of why: the company says it is upending its product roadmap because of “pressure” on EV sales.

The new and accelerated plan now includes “more affordable models” that the company claims will be launched next year. Or if Tesla CEO Elon Musk is to be believed — and that’s a big bet considering his track record with timelines — possibly as early as the end of 2024.

The shock announcement sent the company’s stock soaring more than 11% in after-hours trading Tuesday. And the price didn’t fall even as Musk and other Tesla executives refused to share further details on a call with investors.

This all comes following a bombshell report in early April from Reuters that claimed Tesla had abandoned its work on a low-cost, next-generation car. That next-gen car was meant to be built on the same EV platform Tesla is developing for its supposed robotaxi vehicle. Tesla had said this next-gen car could come as early as late 2025.

While Musk flimsily claimed Reuters was “lying,” both Electrek and Bloomberg News have since reported that the development of that particular EV has been delayed or deemphasized inside the company. Musk has since posted on social media site X that Tesla will reveal the robotaxi August 8.

Tesla provided the update in its less-than-stellar first-quarter earnings report, which showed profits falling 55% year-over-year. The company said in the report it had “updated [its] future vehicle line-up to accelerate the launch of new models ahead of our previously communicated start of production in the second half of 2025.” The slate of new vehicles includes “more affordable models,” the company said.

These new offerings are not being spun out of whole cloth, though. Tesla says it will build these vehicles on existing production lines and that they will “utilize aspects of” the next-generation platform it has been developing, “as well as aspects of our current platforms.”

Bloomberg News reported earlier this week Tesla was working on new versions of the Model Y and Model 3 that borrowed technology and processes from the next-gen EV, with an emphasis on the Model Y.

Tesla investors will have to wait to learn any more.

On a call with investors, Musk punted on the question of what Tesla’s new product roadmap actually involves.  “We’ll talk about this on August 8th,” he said, referring to the event Tesla has planned to reveal its robotaxi, which he called “Cybercab.”

When asked a similar question later in the call, Musk said “I think we’ve said all we will on that front.”

Tesla VP Lars Moravy said there was “some risk” associated with the new platform, and that Tesla could leverage “all the subsystems” being developed for it, like powertrains and drive units, as well as improvements in manufacturing and automation, thermal systems, seating and more. “All that’s transferrable, and that’s what we’re doing — trying to get it in new products as fast as possible,” he said. “That engineering work — we’re not trying to just throw it away and put it in a coffin.”

Cost versus growth

Tesla has worked to reduce the cost of manufacturing the next-gen EV by 50% compared to the platform that underpins the Model 3 and Model Y.

The company admitted Tuesday that by shifting to a strategy of mixing the next-gen technology and processes with existing platforms and manufacturing lines, it will lose some of that cost savings.

The upside, according to Tesla, is growth. The company claims it can double 2023’s production (which was around 1.8 million vehicles) by 2025. And while it won’t save as much on the cost of the cars, it also won’t have to build new production lines to make these mysterious new vehicles. The company has already slowed work on a new factory in Mexico, where it originally planned to start building the next-generation EV and robotaxi.

Of course, Tesla had said for years that it expected to reach 50% annual growth, averaged over a few years, and has consistently missed that target. As the company warned, it will grow at a “notably lower” rate this year.

There are other challenges as well. Tesla is claiming it can launch this new product lineup after axing a huge number of employees from its global workforce — though Musk said Tuesday the company is “not giving up anything significant that I’m aware of.”

“We’ve just had a long period of prosperity from 2019 to now,” Musk said on the call. “We’ve made some corrections along the way, but it is time to reorganize the company for the next phase of growth.”



Tesla reportedly drops plan to build $25K EV | TechCrunch


Tesla is reportedly abandoning its plan to build a lower-cost EV, thought to be priced around $25,000, according to Reuters, despite that vehicle’s status as a pivotal product for the company’s overall growth.

The company will instead focus its efforts on a planned robotaxi that is being built on the same small EV platform that was supposed to power the lower-cost vehicle.

Tesla CEO Elon Musk claimed, without proof, that Reuters is “lying” in a post on his social media platform, X, and did not dispute any specific details. He also responded with an eyes emoji to another post that effectively summed up the Reuters report in different words.

Tesla has reportedly been working on these two vehicles for a few years. But Musk has wavered on whether to prioritize a typical car or one with no steering wheel or pedals, despite not having yet produced a fully autonomous car.

Musk first teased the idea of a truly low-cost Tesla in 2020. But by early 2022, he said Tesla had stopped work on the car because it had too much else to do.

That didn’t last long. The project spun back up, but the company and its CEO were split on whether it should be a typical car or a futuristic robotaxi.

In Walter Isaacson’s recent biography of Musk, he described the CEO pushing back in mid-2022 against his engineers’ insistence on referencing a car with a steering wheel and pedals. “This vehicle must be designed as a clean robotaxi. We’re going to take that risk, it’s my fault if it fucks up,” Isaacson quoted Musk as saying. A few weeks after that, Isaacson said, he quoted Musk saying the robotaxi will “transform everything” and make Tesla a “ten-trillion [dollar] company.”

But even after all that, Isaacson wrote that lead designer Franz von Holzhausen and engineering VP Lars Moravy kept the more traditional car version alive as a “shadow project.” In September 2022, Isaacson wrote, Moravy and von Holzhausen made the pitch to Musk that they needed an inexpensive, small car in order to grow at Musk’s stated goal of 50% per year. They also laid out the plan to use the same platform to power both distinct models.

Musk still said, according to Isaacson, that the $25,000 car was “really not that exciting of a project” — despite it being the ultimate goal of his famed original “master plan” for Tesla. But by early 2023, Musk had agreed to move forward with the plan laid out by his lieutenants.

That plan is now in question as Reuters cites internal documents showing that work has stopped on the traditional car project in favor of the robotaxi approach.

Things have changed since Musk agreed to that plan in 2023. Isaacson’s book explains that Musk’s reason for trying to spin up a factory in Mexico had to do with wanting to make both vehicles there. But Musk quickly pivoted to building the two vehicles in Texas instead. Musk has since told investors that Tesla has backed away from going “full tilt” in developing the Mexico plant in part because of high interest rates. And Tesla has spent the last year slashing prices on its best-selling models in an effort to stay competitive in China and maintain its huge advantage over the competition outside of that country.



