
Tag: warns


UK data protection watchdog ends privacy probe of Snap's GenAI chatbot, but warns industry | TechCrunch


The U.K.’s data protection watchdog has closed an almost year-long investigation of Snap’s AI chatbot, My AI — saying it’s satisfied the social media firm has addressed concerns about risks to children’s privacy. At the same time, the Information Commissioner’s Office (ICO) issued a general warning to industry to be proactive about assessing risks to […]


EU warns Microsoft it could be fined billions over missing GenAI risk info | TechCrunch


The European Union has warned Microsoft that it could be fined up to 1% of its global annual turnover under the bloc’s online governance regime, the Digital Services Act (DSA), after the company failed to respond to a legally binding request for information (RFI) that focused on its generative AI tools. Back in March, the […]


Sony Music warns tech companies over 'unauthorized' use of its content to train AI | TechCrunch


Sony Music Group has sent letters to more than 700 tech companies and music streaming services to warn them not to use its music to train AI without explicit permission. The letter, which was obtained by TechCrunch, says Sony Music has “reason to believe” that the recipients of the letter “may already have made […]


As AI accelerates, Europe's flagship privacy principles are under attack, warns EDPS | TechCrunch


The European Data Protection Supervisor (EDPS) has warned that key planks of the bloc’s data protection and privacy regime are under attack from industry lobbyists and could face a critical reception from lawmakers in the next parliamentary mandate.

“We have quite strong attacks on the principles themselves,” Wojciech Wiewiórowski, who heads the regulatory body that oversees European Union institutions’ own compliance with the bloc’s data protection rules, warned on Tuesday. He was responding to questions from members of the European Parliament’s civil liberties committee, who were concerned that the European Union’s General Data Protection Regulation (GDPR) risks being watered down.

“Especially I mean the [GDPR] principles of minimization and purpose limitation. Purpose limitation will be definitely questioned in the next years.”

The GDPR’s purpose limitation principle implies that a data operation should be attached to a specific use. Further processing may be possible — but, for example, it may require obtaining permission from the person whose information it is, or having another valid legal basis. So the purpose limitation approach injects intentional friction into data operations.

Elections to the parliament are coming up in June, while the Commission’s mandate expires at the end of 2024, so changes to the EU’s executive are also looming. Any shift of approach by incoming lawmakers could have implications for the bloc’s high standard of protection for people’s data.

The GDPR has only been up and running since May 2018, but Wiewiórowski, who fleshed out his views on incoming regulatory challenges during a lunchtime press conference following publication of the EDPS’ annual report, said the next parliament will contain few lawmakers who were involved in drafting and passing the flagship privacy framework.

“We can say that these people who will work in the European Parliament will see GDPR as a historic event,” he suggested, predicting there will be an appetite among the incoming cohort of parliamentarians to debate whether the landmark legislation is still fit for purpose. Though he also said some revisiting of past laws is a recurring process every time the make-up of the elected parliament turns over. 

But he particularly highlighted industry lobbying, especially complaints from businesses targeting the GDPR principle of purpose limitation. Some in the scientific community also see this element of the law as a limit to their research, per Wiewiórowski. 

“There is a kind of expectation from some of the [data] controllers that they will be able to reuse the data which are collected for reason ‘A’ in order to find things which we don’t know even that we will look for,” he said. “There is an old saying of one of the representatives of business who said that the purpose limitation is one of the biggest crimes against humanity, because we will need this data and we don’t know for which purpose.

“I don’t agree with it. But I cannot close my eyes to the fact that this question is asked.”

Any shift away from the GDPR’s purpose limitation and data minimization principles could have significant implications for privacy in the region, which was first to pass a comprehensive data protection framework. The EU is still considered to have some of the strongest privacy rules anywhere in the world, although the GDPR has inspired similar frameworks elsewhere.

Included in the GDPR is an obligation on those wanting to use personal data to process only the minimum info necessary for their purpose (aka data minimization). Additionally, personal data that’s collected for one purpose cannot simply be re-used, willy-nilly, for any other purpose that comes along.
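To make those two principles concrete, here is a minimal, hypothetical Python sketch of how a data controller might enforce them in its own systems. Everything in it (the PurposeBoundRecord class, the purpose names, the PurposeViolation error) is an illustrative assumption for this article, not a structure prescribed by the GDPR or described by the EDPS.

from dataclasses import dataclass, field

class PurposeViolation(Exception):
    """Raised when personal data is used for a purpose it was not collected for."""

@dataclass
class PurposeBoundRecord:
    # Hypothetical sketch: field names and purposes are assumptions, not GDPR-defined terms.
    subject_id: str
    data: dict                 # only the minimum fields needed for the declared purpose (data minimization)
    collected_for: set         # purposes declared to the data subject at collection time
    extra_legal_bases: set = field(default_factory=set)  # e.g. fresh consent obtained later for a new purpose

    def use_for(self, purpose: str) -> dict:
        # Purpose limitation: re-use needs the original purpose or another valid legal basis.
        if purpose in self.collected_for or purpose in self.extra_legal_bases:
            return self.data
        raise PurposeViolation(f"no legal basis to process this record for {purpose!r}")

# Data collected to fulfil an order cannot silently be re-used to train an AI model.
record = PurposeBoundRecord("user-42", {"email": "a@example.com"}, {"order_fulfilment"})
record.use_for("order_fulfilment")   # allowed
# record.use_for("model_training")   # would raise PurposeViolation until a new legal basis is recorded

That explicit check is the intentional friction described above: wanting the data for a new purpose forces a deliberate decision, and usually a new legal basis, rather than quiet re-use.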

But with the current industry-wide push to develop more and more powerful generative AI tools there’s a huge scramble for data to train AI models — an impetus that runs directly counter to the EU’s approach.

OpenAI, the maker of ChatGPT, has already run into trouble here. It’s facing a raft of GDPR compliance issues and investigations — including related to the legal basis claimed for processing people’s data for model training.

Wiewiórowski did not explicitly blame generative AI for driving the “strong attacks” on the GDPR’s purpose limitation principle. But he did name AI as one of the key challenges facing the region’s data protection regulators as a result of fast-paced tech developments.

“The problems connected with artificial intelligence and neuroscience will be the most important part of the next five years,” he predicted, pointing to the nascent tech challenges facing regulators.

“The technological part of our challenges is quite obvious at the time of the revolution of AI despite the fact that this is not the technological revolution that much. We have rather the democratization of the tools. But we have to remember as well, that in times of great instability, like the ones that we have right now — with Russia’s war in Ukraine — is the time when technology is developing every week,” he added.

Wars are playing an active role in driving use of data and AI technologies. In Ukraine, for example, AI has played a major role in areas like satellite imagery analysis and geospatial intelligence, and Wiewiórowski said battlefield applications are driving AI uptake elsewhere in the world. The effects will be pushed out across the economy in the coming years, he further predicted.

On neuroscience, he pointed to regulatory challenges arising from the transhumanism movement, which aims to enhance human capabilities by physically connecting people with information systems. “This is not science fiction,” he said. “[It’s] something which is going on right now. And we have to be ready for that from the legal and human rights point of view.”

Examples of startups targeting transhumanism ideas include Elon Musk’s Neuralink, which is developing chips that can read brain waves. Facebook-owner Meta has also been reported to be working on an AI that can interpret people’s thoughts.

Privacy risks in an age of increasing convergence of technology systems and human biology could be grave indeed. So any AI-driven weakening of EU data protection laws in the near term is likely to have long-term consequences for citizens’ human rights.



X warns that you might lose followers as it does another bot sweep | TechCrunch


X is warning users they may see a reduction in their follower counts as the company attempts to clear the network of spammers and bots in a large sweep. In an announcement published by X’s Safety account, the company said that on Thursday it will begin a “significant, proactive initiative” to eliminate accounts that violate X’s rules about platform manipulation and spam.

The move comes shortly after X announced the appointment of two new leaders to its safety team: Kylie McRoberts, an existing X employee who’s now head of Safety, and Yale Cohen, previously of Publicis Media, who is joining as the head of Brand Safety and Advertiser solutions.

Spam has been an area that Elon Musk has longed to tackle at X, telling employees in November 2022 that he aimed to make fighting spam a priority going forward.

However, spam has proved more difficult to combat than he likely hoped, especially after extensive job cuts left Twitter’s Trust & Safety team short-staffed, while the role of head of Safety sat vacant for 10 months after the earlier departures of Ella Irwin and Yoel Roth under Musk’s tenure.

Advancements in AI have also made it more difficult to rein in the spam.

Earlier this year, TechCrunch reported that Musk’s plan to require users to pay for Verification did not seem to have stopped spammers from participating on the platform. A number of bots with Verified blue checks were found to be replying to posts on X with a variation of the phrase, “I’m sorry, I cannot provide a response as it goes against OpenAI’s use case policy” — an indication that they were not people, but bots.

In addition, a recent report by New York Intelligencer detailed the rise of spam accounts pushing adult content by posting explicit replies that point users to links in the accounts’ bios.

The scale of spam on the network was one of the sticking points for Musk when he originally tried to back out of the $44 billion Twitter deal, saying that the company had not been honest about the number of bots. But these days, Musk is touting how X is seeing record traffic, without clarifying if his own numbers include bots and spam.

According to the X Safety team’s announcement, the company will be “casting a wide net” in its attempt to remove spam and bots from the platform, which may result in follower count reductions. This is par for the course for bot sweeps on its platform. 

X also shared a link to a form where users inadvertently affected by the bot sweep could appeal.



