
Tag: privacy


Security bugs in popular phone-tracking app iSharing exposed users' precise locations | TechCrunch


Last week, when a security researcher said he could easily obtain the precise location of any one of the millions of users of a widely used phone-tracking app, we had to see it for ourselves.

Eric Daigle, a computer science and economics student at the University of British Columbia in Vancouver, found the vulnerabilities in the tracking app iSharing as part of an investigation into the security of location-tracking apps. iSharing is one of the more popular location-tracking apps, claiming more than 35 million users to date.

Daigle said the bugs allowed anyone using the app to access anyone else’s coordinates, even if the user wasn’t actively sharing their location data with anybody else. The bugs also exposed the user’s name, profile photo and the email address and phone number used to log in to the app.

The bugs meant that iSharing’s servers were not properly checking that app users were allowed to access only their own location data, or location data that someone else had explicitly shared with them.
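What Daigle describes is, in effect, a missing server-side authorization check: the servers returned data for whatever account the client asked about, rather than verifying the relationship between the requester and the target user. Below is a minimal sketch of the kind of ownership check that appears to have been absent; the endpoint, data model and tokens are hypothetical, not iSharing’s actual code.

```python
# Hypothetical sketch of a server-side ownership check for a location endpoint.
# The route, data model and tokens are illustrative only, not iSharing's API.
from dataclasses import dataclass, field
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

@dataclass
class User:
    id: int
    lat: float
    lng: float
    shared_with: set = field(default_factory=set)  # IDs of users allowed to see this location

USERS = {1: User(1, 40.7308, -73.9911, shared_with={2}), 2: User(2, 40.7580, -73.9855)}
TOKENS = {"alice-token": 1, "bob-token": 2, "mallory-token": 3}

def authenticate() -> int:
    """Resolve the calling user's ID from a bearer token."""
    token = request.headers.get("Authorization", "").removeprefix("Bearer ").strip()
    if token not in TOKENS:
        abort(401)
    return TOKENS[token]

@app.get("/v1/users/<int:target_id>/location")
def get_location(target_id: int):
    caller_id = authenticate()
    target = USERS.get(target_id)
    if target is None:
        abort(404)
    # The crucial check: serve coordinates only to the account owner, or to
    # users the owner has explicitly shared their location with.
    if caller_id != target.id and caller_id not in target.shared_with:
        abort(403)
    return jsonify({"lat": target.lat, "lng": target.lng})
```

The flaw Daigle actually exploited worked through iSharing’s group feature, but the underlying problem the company described is the same: the server did not verify that the requesting user was entitled to the data it returned.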

Location-tracking apps — including stealthy “stalkerware” apps — have a history of security mishaps that risk leaking or exposing users’ precise location.

In this case, it took Daigle only a few seconds to locate this reporter down to a few feet. Using an Android phone with the iSharing app installed and a new user account, we asked the researcher if he could pull our precise location using the bugs.

“770 Broadway in Manhattan?” Daigle responded, along with the precise coordinates of TechCrunch’s office in New York from where the phone was pinging out its location.

The security researcher pulled our precise location data from iSharing’s servers, even though the app was not sharing our location with anybody else. Image Credits: TechCrunch (screenshot)

Daigle shared details of the vulnerability with iSharing some two weeks earlier but had not heard anything back. That’s when Daigle asked TechCrunch for help in contacting the app makers. iSharing fixed the bugs soon after, over the weekend of April 20-21.

“We are grateful to the researcher for discovering this issue so we could get ahead of it,” iSharing co-founder Yongjae Chuh told TechCrunch in an email. “Our team is currently planning on working with security professionals to add any necessary security measures to make sure every user’s data is protected.”

iSharing blamed the vulnerability on a feature it calls groups, which allows users to share their location with other users. Chuh told TechCrunch that the company’s logs showed there was no evidence that the bugs were found prior to Daigle’s discovery. Chuh conceded that there “may have been oversight on our end,” because its servers were failing to check if users were allowed to join a group of other users.

TechCrunch held the publication of this story until Daigle confirmed the fix.

“Finding the initial flaw in total was probably an hour or so from opening the app, figuring out the form of the requests, and seeing that creating a group on another user and joining it worked,” Daigle told TechCrunch.

From there, he spent a few more hours building a proof-of-concept script to demonstrate the security bug.

Daigle, who described the vulnerabilities in more detail on his blog, said he plans to continue research in the stalkerware and location-tracking area.

To contact this reporter, get in touch on Signal and WhatsApp at +1 646-755-8849, or by email. You can also send files and documents via SecureDrop.



Mozilla finds that most dating apps are not great guardians of user data | TechCrunch


Dating apps are not following great privacy practices and are collecting more data than ever in order to woo Gen Z users, a new study by Mozilla pointed out. Mozilla’s researchers first reviewed the privacy of dating apps in 2021; in the latest report, they noted that dating apps have become even more data-hungry and intrusive.

The organization studied 25 apps and labeled 22 of them “Privacy Not Included” — the lowest grade in Mozilla’s parlance. Mozilla gave a positive review only to the queer-owned and operated Lex, with Harmony and Happn getting a passable rating.

Mozilla said 80% of the apps may share or sell your personal data for advertising purposes. The report noted that apps like Bumble have murky privacy clauses that might sell your data to advertisers.

“We use services that help improve marketing campaigns . . . Under certain privacy laws, this may be considered selling or sharing your personal information with our marketing partners,” an in-app popup says, as noted by Mozilla.

The report noted that the majority of apps, including Hinge, Tinder, OKCupid, Match, Plenty of Fish, BLK, and BlackPeopleMeet, collect precise geolocation data from users. Apps like Hinge collect location data in the background even when the app is not in use.

“The collection of your geolocation may occur in the background even when you aren’t using the services if the permission you gave us expressly permits such collection. If you decline permission for us to collect your precise geolocation, we will not collect it, and our services that rely on precise geolocation may not be available to you,” Hinge’s policy states.

The insidious role of data brokers

Dating apps claim that they collect a significant amount of data to find better matches for users. However, if that data ends up with data brokers, there are grave consequences. Last year, the Washington Post reported that a U.S.-based Catholic group bought data from Grindr to monitor some members.

Notably, Grindr — which got one of the lowest ratings under Mozilla’s review — has had a record of lapses in privacy and security practices.

“If dating apps think people are going to keep handing over their most intimate data – basically, everything but their mother’s maiden name – without finding love, they’re underestimating their users. Their predatory privacy practices are a dealbreaker,” Zoë MacDonald, researcher and one of the authors of the report, said in a statement.

As per data from analytics firm data.ai, dating app downloads are slowing down. Separately, data from Pew Research published last year suggests that only three in 10 adults have ever used a dating site or an app — a figure that has stayed the same since 2019. Last month, The New York Times published a report noting that dating app giants Match Group and Bumble have lost more than $40 billion in market value since 2021.

Companies are now looking towards new ways to engage potential daters, including experimenting with AI-powered features. Match Group already said during its Q3 2024 earnings this year that it plans to leverage AI. In March, Platformer reported that Grindr plans to introduce an AI chatbot that could engage in sexually explicit language.

Mozilla said that apps already use AI in their matching algorithms. With the onset of generative AI, researchers are not confident that dating apps will have enough protections for user privacy.

Mozilla privacy researcher Misha Rykov said that, as dating apps collect more data, they have a duty to protect that data from being exploited.

“To forge stronger matches users have to write compelling profiles, fill out numerous interest and personality surveys, assess and charm matches, share pictures and videos — the whole experience is heavily dependent on how much information people share. By this virtue, dating apps must protect this data from exploitation,” he noted.

Earlier this year, Mozilla also evaluated a number of AI bots that can act as romantic partners and found serious concerns about the security and data-sharing practices of these bots.



Cape dials up $61M from a16z and more for mobile service that doesn't use personal data | TechCrunch


AT&T’s recent mega customer data breach — 74 million accounts affected — laid bare how much data carriers have on their users, and also that the data is there for the hacking. On Thursday, a startup called Cape — based out of Washington, D.C., and founded by a former executive from Palantir — is announcing $61 million in funding to build what it claims will be a much more secure approach: It won’t be able to leak your name, address, Social Security number or location because it never asks for these in the first place.

“You can’t leak or sell what you don’t have,” according to the company’s website. “We ask for the minimal amount of personal information and store sensitive credentials locally on your device, not on our network. That’s privacy by design.”

The funding is notable in part because Cape’s appeal to users is not yet proven. The company only came out of stealth four months ago, and it has yet to launch a commercial service for consumers. That’s due to come in June, CEO and founder John Doyle said in an interview. It has one pilot project in operation, deploying some of its tech with the U.S. government, securing communications on Guam.

The $61 million it announced Thursday is an aggregation across three rounds: a seed and Series A of $21 million (raised when it was still in stealth mode as a company called Private Tech) and a Series B of $40 million. The latest round is being co-led by A-Star and a16z, with XYZ Ventures, ex/ante, Costanoa Ventures, Point72 Ventures, Forward Deployed VC and Karman Ventures also participating. Cape is not disclosing its valuation.

Doyle attracted that investor attention in part because his past roles have included nearly nine years of working for Palantir as the head of its national security business. Prior to that, he was a special forces sergeant in the U.S. Army.

Those jobs exposed him to users (like government departments) who treated the security of personal information and privacy around data usage as essential. But, more entrepreneurially, they also got him thinking about consumers.

With the big focus that data privacy and security have today in the public consciousness — typically because of the many bad-news stories we hear about data breaches, the encroaching activities of social networks, and many questions about national security and digital networks — there is a clear opportunity to build tools like these for ordinary people, too, even if it feels like that might be impossible these days.

“It’s actually one of the reasons I started the company,” he told TechCrunch. “It feels like the problem is too big, right? It feels like our data is already out there in all these different ways and there’s really nothing to be done about it. We’ve all adopted a learned helplessness around the ability to be connected but have some sort of privacy, some sort of control over our own data — but that’s not necessarily true.”

Cape’s first efforts will be focused on providing eSIMs to users, which Doyle said would be sold essentially on a prepaid basis to avoid the data collection that a contract might entail. Cape on Thursday also announced a partnership with UScellular, which itself provides an MVNO covering 12 cellular networks; Doyle said that Cape is talking with other telcos, too. Initially, it’s unlikely to bundle that eSIM with any mobile devices, although that also is not off the table for the future, Doyle said. Nor will the company provide encryption services around apps, voice calls and mobile data, at least not initially.

“We’re not focused on securing the content of communications. There’s a whole host of app-based solutions out there, apps out there like Proton Mail and Signal, and WhatsApp and other encrypted messaging platforms that do a good job, to varying degrees, depending on who you trust for securing the contents of your communications,” he said. “We are focused on your location and your identity data, in particular, as it relates to connecting to commercial cellular infrastructure, which is a related but separate set of problems.”

Cape’s not the only company in the market that is trying (or has tried, past-tense) to address privacy in the mobile sphere, but none of them has really made a mark so far. In Europe, recent efforts include the MVNO Murena, the OS maker Jolla, and the hardware company Punkt. Those that have come and gone include the Privacy Phone (FreedomPop) and Blackphone (from Geeksphone and Silent Circle).

There’s already the option to buy a prepaid SIM in the U.S. anonymously, but Cape points out that this has other trade-offs and isn’t as secure as what Cape is building. Although payments for this might be anonymous, a user’s data is still routed through the network infrastructure of the underlying carrier, making a user’s movements and usage observable. You can also still be open to SIM swap attacks and spam.

For a16z, the investment is becoming a part of the firm’s “American Dynamism” effort, which this week got a $600 million boost from the latest $7.2 billion in funds that the VC raised.

“Cape’s technology is an answer to long-standing, critical vulnerabilities in today’s telecom infrastructure that impacts everything from homeland security to consumer privacy,” said Katherine Boyle, general partner at a16z, in a statement. “The team is the first to apply this caliber of R&D muscle to rethinking legacy telecom networks, and are well placed to reshape the way mobile carriers think about their subscribers — as customers instead of products.”



Adtech giants like Meta must give EU users real privacy choice, says EDPB | TechCrunch


The European Data Protection Board (EDPB) has published new guidance which has major implications for adtech giants like Meta and other large platforms.

The guidance, confirmed as incoming on Wednesday as we reported earlier, will steer how privacy regulators interpret the bloc’s General Data Protection Regulation (GDPR) in a critical area. The EDPB’s full opinion on so-called “consent or pay” runs to 42 pages.

Other large ad-funded platforms should also take note of the granular guidance. But Meta looks first in line to feel any resultant regulatory chill falling on its surveillance-based business model.

This is because — since November 2023 — the owner of Facebook and Instagram has forced users in the European Union to agree to being tracked and profiled for its ad targeting business or else they must pay it a monthly subscription to access ad-free versions of the services. However a market leader imposing that kind of binary choice looks unviable, per the EDPB, an expert body made up of representatives of data protection authorities from around the EU.

“The EDPB notes that negative consequences are likely to occur when large online platforms use a ‘consent or pay’ model to obtain consent for the processing,” the Board opines, underscoring the risk of “an imbalance of power” between the individual and the data controller, such as in cases where “an individual relies on the service and the main audience of the service”.

In a press release accompanying publication of the opinion, the Board’s chair, Anu Talus, also emphasized the need for platforms to provide users with a “real choice” over their privacy.

“Online platforms should give users a real choice when employing ‘consent or pay’ models,” Talus wrote. “The models we have today usually require individuals to either give away all their data or to pay. As a result most users consent to the processing in order to use a service, and they do not understand the full implications of their choices.”

“Controllers should take care at all times to avoid transforming the fundamental right to data protection into a feature that individuals have to pay to enjoy. Individuals should be made fully aware of the value and the consequences of their choices,” she added.

In a summary of its opinion, the EDPB writes in the press release that “in most cases” it will “not be possible” for “large online platforms” that implement consent or pay models to comply with the GDPR’s requirement for “valid consent” — if they “confront users only with a choice between consenting to processing of personal data for behavioural advertising purposes and paying a fee” (i.e. as Meta currently is).

The opinion defines large platforms, non-exhaustively, as entities designated as very large online platforms under the EU’s Digital Services Act or gatekeepers under the Digital Markets Act (DMA) — again, as Meta is (Facebook and Instagram are regulated under both laws).

“The EDPB considers that offering only a paid alternative to services which involve the processing of personal data for behavioural advertising purposes should not be the default way forward for controllers,” the Board goes on. “When developing alternatives, large online platforms should consider providing individuals with an ‘equivalent alternative’ that does not entail the payment of a fee.

If controllers do opt to charge a fee for access to the ‘equivalent alternative’, they should give significant consideration to offering an additional alternative. This free alternative should be without behavioural advertising, e.g. with a form of advertising involving the processing of less or no personal data. This is a particularly important factor in the assessment of valid consent under the GDPR.”

The EDPB takes care to stress that other core principles of the GDPR — such as purpose limitation, data minimisation and fairness — continue to apply around consent mechanisms, adding: “In addition, large online platforms should also consider compliance with the principles of necessity and proportionality, and they are responsible for demonstrating that their processing is generally in line with the GDPR.”

Given the detail of the EDPB’s opinion on this contentious and knotty topic — and the suggestion that lots of case-by-case analysis will be needed to make compliance assessments — Meta may feel confident nothing will change in the short term. Clearly it will take time for EU regulators to analyze, ingest and act on the Board’s advice.

Contacted for comment, Meta spokesman Matthew Pollard emailed a brief statement playing down the guidance: “Last year, the Court of Justice of the European Union [CJEU] ruled that the subscriptions model is a legally valid way for companies to seek people’s consent for personalised advertising. Today’s EDPB Opinion does not alter that judgment and Subscription for no ads complies with EU laws.”

Ireland’s Data Protection Commission, which oversees Meta’s GDPR compliance and has been reviewing its consent model since last year, declined to comment on whether it will be taking any action in light of the EDPB guidance as it said the case is ongoing.

Ever since Meta launched the “subscription for no-ads” offer last year it has continued to claim it complies with all relevant EU regulations — seizing on a line in the July 2023 ruling by the EU’s top court in which judges did not explicitly rule out the possibility of charging for a non-tracking alternative but instead stipulated that any such payment must be “necessary” and “appropriate”.

Commenting on this aspect of the CJEU’s decision in its opinion, the Board notes — in stark contrast to Meta’s repeated assertions that the CJEU essentially sanctioned its subscription model in advance — that this was “not central to the Court’s determination”.

“The EDPB considers that certain circumstances should be present for a fee to be imposed, taking into account both possible alternatives to behavioural advertising that entail the processing of less personal data and the data subjects’ position,” it goes on with emphasis. “This is suggested by the words ‘necessary’ and ‘appropriate’, which should, however, not be read as requiring the imposition of a fee to be ‘necessary’ in the meaning of Article 52(1) of the Charter and EU data protection law.”

At the same time, the Board’s opinion does not entirely deny large platforms the possibility of charging for a non-tracking alternative — so Meta and its tracking-ad-funded ilk may feel confident they’ll be able to find some succour in 42 pages of granular discussion of the intersecting demands of data protection law. (Or, at least, that this intervention will keep regulators busy trying to wrap their heads around case-by-case complexities.)

However the guidance does — notably — encourage platforms towards offering free alternatives to tracking ads, including privacy-safe(r) ad-supported offerings.

The EDPB gives examples of contextual, “general advertising” or “advertising based on topics the data subject selected from a list of topics of interests”. (And it’s also worth noting the European Commission has suggested Meta could be using contextual ads instead of forcing users to consent to tracking ads, as part of its oversight of the tech giant’s compliance with the DMA.)

“While there is no obligation for large online platforms to always offer services free of charge, making this further alternative available to the data subjects enhances their freedom of choice,” the Board goes on, adding: “This makes it easier for controllers to demonstrate that consent is freely given.”

While there’s rather more discursive nuance to what the Board has published today than instant clarity served up on a pivotal topic, the intervention is important and does look set to make it harder for mainstream adtech giants like Meta to frame and operate under false binary privacy-hostile choices over the long run.

Armed with this guidance, EU data protection regulators should be asking why such platforms aren’t offering far less privacy-hostile alternatives — and asking that question, if not literally today, then very, very soon.

For a tech giant as dominant and well resourced as Meta it’s hard to see how it can dodge answering that ask for long. Although it will surely stick to its usual GDPR playbook of spinning things out for as long as it possibly can and appealing every final decision it can.

Privacy rights nonprofit noyb, which has been at the forefront of fighting the creep of consent or pay tactics in the region in recent years, argues the EDPB opinion makes it clear Meta cannot rely on the “pay or okay” trick any more. However its founder and chairman Max Schrems told TechCrunch he’s concerned the Board hasn’t gone far enough in skewering this divisive forced consent mechanism.

“The EDPB recalls all the relevant elements, but does not unequivocally state the obvious consequence, which is that ‘pay or okay’ is not legal,” he told us. “It names all the elements why it’s illegal for Meta, but there is thousands of other pages where there is no answer yet.”

As if 42 pages of guidance on this knotty topic weren’t enough already, the Board has more in the works, too: Talus says it intends to develop guidelines on consent or pay models “with a broader scope”, adding that it will “engage with stakeholders on these upcoming guidelines”.

European news publishers were the earliest adopters of the controversial consent tactic so the forthcoming “broader” EDPB opinion is likely to be keenly watched by players in the media industry.



As AI accelerates, Europe's flagship privacy principles are under attack, warns EDPS | TechCrunch


The European Data Protection Supervisor (EDPS) has warned key planks of the bloc’s data protection and privacy regime are under attack from industry lobbyists and could face a critical reception from lawmakers in the next parliamentary mandate.

“We have quite strong attacks on the principles themselves,” warned Wojciech Wiewiórowski, who heads the regulatory body that oversees European Union institutions’ own compliance with the bloc’s data protection rules, on Tuesday. He was responding to questions from members of the European Parliament’s civil liberties committee who are concerned that the European Union’s General Data Protection Regulation (GDPR) risks being watered down.

“Especially I mean the [GDPR] principles of minimization and purpose limitation. Purpose limitation will be definitely questioned in the next years.”

The GDPR’s purpose limitation principle implies that a data operation should be attached to a specific use. Further processing may be possible — but, for example, it may require obtaining permission from the person whose information it is, or having another valid legal basis. So the purpose limitation approach injects intentional friction into data operations.

Elections to the parliament are coming up in June, while the Commission’s mandate expires at the end of 2024, so changes to the EU’s executive are also looming. Any shift of approach by incoming lawmakers could have implications for the bloc’s high standard of protection for people’s data.

The GDPR has only been up and running since May 2018 but Wiewiórowski, who fleshed out his views on incoming regulatory challenges during a lunchtime press conference following publication of the EDPS’ annual report, said the make-up of the next parliament will contain few lawmakers who were involved with drafting and passing the flagship privacy framework.

“We can say that these people who will work in the European Parliament will see GDPR as a historic event,” he suggested, predicting there will be an appetite among the incoming cohort of parliamentarians to debate whether the landmark legislation is still fit for purpose. Though he also said some revisiting of past laws is a recurring process every time the make-up of the elected parliament turns over. 

But he particularly highlighted industry lobbying, especially complaints from businesses targeting the GDPR principle of purpose limitation. Some in the scientific community also see this element of the law as a limit to their research, per Wiewiórowski. 

“There is a kind of expectation from some of the [data] controllers that they will be able to reuse the data which are collected for reason ‘A’ in order to find things which we don’t know even that we will look for,” he said. “There is an old saying of one of the representatives of business who said that the purpose limitation is one of the biggest crimes against humanity, because we will need this data and we don’t know for which purpose.

“I don’t agree with it. But I cannot close my eyes to the fact that this question is asked.”

Any shift away from the GDPR’s purpose limitation and data minimization principles could have significant implications for privacy in the region, which was first to pass a comprehensive data protection framework. The EU is still considered to have some of the strongest privacy rules anywhere in the world, although the GDPR has inspired similar frameworks elsewhere.

Included in the GDPR is an obligation on those wanting to use personal data to process only the minimum info necessary for their purpose (aka data minimization). Additionally, personal data that’s collected for one purpose cannot simply be re-used, willy-nilly, for any other purpose that comes along.

But with the current industry-wide push to develop more and more powerful generative AI tools there’s a huge scramble for data to train AI models — an impetus that runs directly counter to the EU’s approach.

OpenAI, the maker of ChatGPT, has already run into trouble here. It’s facing a raft of GDPR compliance issues and investigations — including related to the legal basis claimed for processing people’s data for model training.

Wiewiórowski did not explicitly blame generative AI for driving the “strong attacks” on the GDPR’s purpose limitation principle. But he did name AI as one of the key challenges facing the region’s data protection regulators as a result of fast-paced tech developments.

“The problems connected with artificial intelligence and neuroscience will be the most important part of the next five years,” he predicted on nascent tech challenges.

“The technological part of our challenges is quite obvious at the time of the revolution of AI despite the fact that this is not the technological revolution that much. We have rather the democratization of the tools. But we have to remember as well, that in times of great instability, like the ones that we have right now — with Russia’s war in Ukraine — is the time when technology is developing every week,” he also said on this.

Wars are playing an active role in driving use of data and AI technologies — such as in Ukraine where AI has been playing a major role in areas like satellite imagery analysis and geospatial intelligence — with Wiewiórowski saying battlefield applications are driving AI uptake elsewhere in the world. The effects will be pushed out across the economy in the coming years, he further predicted.

On neuroscience, he pointed to regulatory challenges arising from the transhumanism movement, which aims to enhance human capabilities by physically connecting people with information systems. “This is not science fiction,” he said. “[It’s] something which is going on right now. And we have to be ready for that from the legal and human rights point of view.”

Examples of startups targeting transhumanism ideas include Elon Musk’s Neuralink, which is developing chips that can read brain waves. Facebook-owner Meta has also been reported to be working on an AI that can interpret people’s thoughts.

Privacy risks in an age of increasing convergence of technology systems and human biology could be grave indeed. So any AI-driven weakening of EU data protection laws in the near term is likely to have long-term consequences for citizens’ human rights.



Could Congress actually pass a data privacy law? | TechCrunch


Hello, and welcome back to Equity, a podcast about the business of startups, where we unpack the numbers and nuance behind the headlines. This is our Monday show, where we dig into the weekend and take a peek at the week that is to come.

Now that we are finally past Y Combinator’s demo day — though our Friday show is worth listening to if you haven’t had a chance yet — we can dive into the latest news. So, this morning on Equity Monday we got into the chance that the United States might pass a real data privacy law. There’s movement to report, but we’re still very, very far from anything becoming law.

Elsewhere, the U.S. and TSMC have a new deal, there’s gaming news to consider (and a venture tie-in) and Spotify’s latest AI plans, which I am sure will delight some and annoy others. Hit play, and let’s talk about the news!

Oh, and on the crypto front, I forgot to mention that trading volume of digital tokens seems to have partially arrested its free fall, which should help some exchanges breathe a bit more easily.

Equity is TechCrunch’s flagship podcast and posts every Monday, Wednesday and Friday, and you can subscribe to us on Apple Podcasts, Overcast, Spotify and all the casts.

You also can follow Equity on X and Threads, at @EquityPod.

For the full interview transcript, for those who prefer reading over listening, read on, or check out our full archive of episodes over at Simplecast.





Meta's 'consent or pay' tactic must not prevail over privacy, EU rights groups warn | TechCrunch


Nearly two dozen civil society groups and nonprofits have written an open letter to the European Data Protection Board (EDPB), urging it not to endorse a strategy used by Meta that they say is intended to bypass the EU’s privacy protections for commercial gain.

The letter comes ahead of a meeting of the EDPB this week that is expected to produce guidance on a controversial tactic used by Meta that forces Facebook and Instagram users to consent to its tracking.

Many of the signatories, which include the likes of EDRi, Access Now, noyb and Wikimedia Europe, signed a similar open letter to the EDPB in February. But the Board is expected to adopt an opinion on “consent or pay” (i.e., “pay or okay”) as soon as this Wednesday, so this is likely the last chance for rights groups to sway hearts and minds on an issue they warn is “pivotal” for the future of data protection and privacy in Europe.

“As you prepare to shape guidelines on the ‘Consent or Pay’ model, we urge you to refrain from endorsing a strategy that is merely an effort to bypass the EU’s data protection regulations for the sake of commercial advantage, and advocate for robust protections that prioritize data subjects’ agency and control over their information,” the open letter reads. “Emphasising the need for genuine choice and meaningful consent aligns with the foundational principles of data protection legislation, the larger context of all relevant CJEU rulings and serves to uphold the fundamental rights of individuals across the EEA [European Economic Area],” it continued.

Meta spokesman Matthew Pollard said in an emailed statement that the company’s offer, which it calls “Subscription for no ads,” is compliant with EU laws: “‘Subscription for no ads’ addresses the latest regulatory developments, guidance and judgments shared by leading European regulators and the courts over recent years. Specifically, it conforms to direction given by the highest court in Europe: in July, the Court of Justice of the European Union (CJEU) endorsed the subscriptions model as a way for people to consent to data processing for personalised advertising.”

A raft of complaints have been filed against Meta’s implementation of the pay-or-consent tactic since it launched the “no ads” subscription offer last fall. Additionally, in a notable step last month, the European Union opened a formal investigation into Meta’s tactic, seeking to determine whether it breaches obligations that apply to Facebook and Instagram under the competition-focused Digital Markets Act (DMA). That probe remains ongoing.

The EU also recently questioned Meta about “consent or pay” using its oversight powers that let it monitor larger platforms’ compliance with the Digital Services Act (DSA), a sister regulation to the DMA, which also applies to Meta’s social networks, Facebook and Instagram.

The Board’s opinion on “consent or pay” is expected to provide guidance on how the EU’s General Data Protection Regulation (GDPR) should be applied in this area. However, it looks relevant to the DMA, too, as the newer market contestability law builds on the bloc’s data protection framework — referring to concepts set out in the GDPR, such as consent.

This means guidance from the EDPB, a GDPR-focused steering body, on how — or, indeed, whether — “consent or pay” models can comply with EU data protection rules is likely to have wider significance for whether the mechanism is ultimately deemed compliant by the Commission in its assessment of Meta’s approach to the DMA.

It’s worth noting the Board’s opinion will look at “consent or pay” generally, rather than specifically investigating Meta’s deployment. Nor is Meta the only service provider pushing “consent or pay” on users. The tactic was actually pioneered by a handful of European news publishers.

Nonetheless, it is likely to have major implications for the social networking giant. It could either make it harder for Meta to claim its subscription tactic is compliant with the GDPR, or, if the EDPB ends up endorsing a controversial model where users have to pay to obtain their rights, champagne corks will surely be popping at 1 Hacker Way as Meta prevails over Europeans’ privacy.

The rights groups behind the open letter say penning two letters on this topic a few weeks apart reflects “widespread apprehension about the consequences” of “consent or pay” being rubberstamped by privacy regulators.

Privacy rights group, noyb, and others have warned that a green light for the tactic will open the floodgates to apps of every stripe to leverage economic coercion to force users to be tracked — gutting key planks of the EU’s flagship data protection regime.

The letter points to concerns expressed by the Commission following its opening of a DMA investigation into Meta’s deployment of “consent or pay,” in which the EU cited misgivings that “the binary choice imposed by Meta’s ‘pay or consent’ model may not provide a real alternative in case users do not consent,” and could, therefore, lead to a continuing accumulation of personal data and loss of privacy for users.

The letter argues that the payment relied upon in the “consent or pay” model “could be deemed a degradation of service conditions,” which it suggests breaches Article 13(6) of the DMA. That section “corresponds to the fairness principle under Article 5(1)(a) GDPR.”

“Given that both acts refer to Article 4(11) GDPR, this underscores pressing need to protect freely given consent consistently in the context of the DMA as well as under the GDPR,” the letter reads.

The letter further notes the Commission has previously expressed doubts that consent or pay is “a credible alternative to tracking,” in relation to efforts to encourage businesses to simplify cookie consent flows (aka the “Cookie Pledge”) because of the “extremely limited” number of consumers who agree to pay in light of how many different apps and websites they may use each day.

It also points out the EDPB’s response to the Commission’s Cookie Pledge proposal contained what they couch as a clarification “that this ‘less intrusive’ option should be offered free of charge.”

“This insistence on genuine user choice underscores the fundamental principle that consent must be freely given,” it goes on. “However, the current ‘Consent or Pay’ model sets in stone a coercive dynamic, leaving users without an actual choice. The continued acceptance of this model undermines the fundamental principles of consent and perpetuates a system that prioritises commercial interests over individual rights.”

A spokeswoman for the Board confirmed it had received “several letters” from civil society organizations on this topic. She also told us the opinion on “consent or pay” “concerns a matter of general application, and does not look into specific companies,” further emphasizing: “The EDPB will only look into this matter from a data protection perspective.”

“The opinion will address the use of consent or pay models by large online platforms for purposes of behavioural advertising. More general guidance on consent or pay models will be adopted at a later stage,” she added, saying if an opinion is adopted Wednesday, there would still be some administrative work to do before it could be made public — declining to confirm a publication date ahead of time.

This report was updated after the EDPB responded to our request for comment and clarifications.



PVML combines an AI-centric data access and analysis platform with differential privacy | TechCrunch


Enterprises are hoarding more data than ever to fuel their AI ambitions, but at the same time, they are also worried about who can access this data, which is often of a very private nature. PVML is offering an interesting solution by combining a ChatGPT-like tool for analyzing data with the safety guarantees of differential privacy. Using retrieval-augmented generation (RAG), PVML can access a corporation’s data without moving it, taking away another security consideration.

The Tel Aviv-based company recently announced that it has raised an $8 million seed round led by NFX, with participation from FJ Labs and Gefen Capital.


The company was founded by husband-and-wife team Shachar Schnapp (CEO) and Rina Galperin (CTO). Schnapp got his doctorate in computer science, specializing in differential privacy, and then worked on computer vision at General Motors, while Galperin got her master’s in computer science with a focus on AI and natural language processing and worked on machine learning projects at Microsoft.

“A lot of our experience in this domain came from our work in big corporates and large companies where we saw that things are not as efficient as we were hoping for as naive students, perhaps,” Galperin said. “The main value that we want to bring organizations as PVML is democratizing data. This can only happen if you, on one hand, protect this very sensitive data, but, on the other hand, allow easy access to it, which today is synonymous with AI. Everybody wants to analyze data using free text. It’s much easier, faster and more efficient — and our secret sauce, differential privacy, enables this integration very easily.”

Differential privacy is far from a new concept. The core idea is to ensure the privacy of individual users in large data sets and provide mathematical guarantees for that. One of the most common ways to achieve this is to introduce a degree of randomness into the data set, but in a way that doesn’t alter the data analysis.
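One standard construction is the Laplace mechanism: noise drawn from a Laplace distribution, scaled to the query’s sensitivity and a privacy budget ε, is added to an aggregate answer before it is released. The snippet below is a generic textbook sketch of that idea, not PVML’s implementation.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a noisy answer satisfying epsilon-differential privacy.

    sensitivity: the most one individual's data can change the true answer
    epsilon: the privacy budget (smaller epsilon = more noise = stronger privacy)
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# Example: publish how many employees earn over 100k without exposing any individual.
salaries = [92_000, 105_000, 87_000, 130_000, 99_000, 120_000]
true_count = sum(s > 100_000 for s in salaries)  # one person changes this count by at most 1
print(laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5))
```

Results released this way stay statistically useful in aggregate, while the added randomness masks any single person’s contribution — the mathematical guarantee referred to above.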

The team argues that today’s data access solutions are ineffective and create a lot of overhead. Often, for example, a lot of data has to be removed in the process of enabling employees to gain secure access to data — but that can be counterproductive because you may not be able to effectively use the redacted data for some tasks (plus the additional lead time to access the data means real-time use cases are often impossible).


The promise of using differential privacy means that PVML’s users don’t have to make changes to the original data. This avoids almost all of the overhead and unlocks this information safely for AI use cases.

Virtually all the large tech companies now use differential privacy in one form or another, and make their tools and libraries available to developers. The PVML team argues that it hasn’t really been put into practice yet by most of the data community.

“The current knowledge about differential privacy is more theoretical than practical,” Schnapp said. “We decided to take it from theory to practice. And that’s exactly what we’ve done: We develop practical algorithms that work best on data in real-life scenarios.”

None of the differential privacy work would matter if PVML’s actual data analysis tools and platform weren’t useful. The most obvious use case here is the ability to chat with your data, all with the guarantee that no sensitive data can leak into the chat. Using RAG, PVML can bring hallucinations down to almost zero and the overhead is minimal since the data stays in place.
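“Chatting with your data” in this way is, at its core, retrieval-augmented generation: fetch the few records relevant to a question from wherever they already live and hand them to the model as context, so answers are grounded in real data rather than the model’s guesses. The sketch below is a deliberately simplified, generic illustration of that flow (a toy keyword retriever and no actual LLM call); it is not PVML’s architecture.

```python
# Minimal, generic sketch of retrieval-augmented generation (RAG):
# retrieve relevant snippets, then build a prompt that grounds the model in them.

def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
    """Score documents by word overlap with the question and return the top k."""
    q_words = set(question.lower().split())
    ranked = sorted(documents, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(question: str, documents: list[str]) -> str:
    """Assemble a prompt that instructs the model to answer only from the retrieved context."""
    context = "\n".join(f"- {snippet}" for snippet in retrieve(question, documents))
    return (
        "Answer using only the context below. If the answer is not there, say so.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

docs = [
    "Q1 revenue for the EMEA region was 4.2M, up 8% year over year.",
    "Headcount in the Berlin office grew from 40 to 55 during Q1.",
    "The churn rate for enterprise accounts fell to 2.1% in Q1.",
]
print(build_prompt("What was EMEA revenue in Q1?", docs))
```

A real deployment would typically swap the keyword scorer for embedding-based search and pass the assembled prompt to a language model; the data itself stays in its original store, which is the point the company is making about overhead.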

But there are other use cases, too. Schnapp and Galperin noted how differential privacy also allows companies to now share data between business units. In addition, it may also allow some companies to monetize access to their data to third parties, for example.

“In the stock market today, 70% of transactions are made by AI,” said Gigi Levy-Weiss, NFX general partner and co-founder. “That’s a taste of things to come, and organizations who adopt AI today will be a step ahead tomorrow. But companies are afraid to connect their data to AI, because they fear the exposure — and for good reasons. PVML’s unique technology creates an invisible layer of protection and democratizes access to data, enabling monetization use cases today and paving the way for tomorrow.”



Indian government's cloud spilled citizens' personal data online for years | TechCrunch


The Indian government has finally resolved a years-long cybersecurity issue that exposed reams of sensitive data about its citizens. A security researcher exclusively told TechCrunch he found at least hundreds of documents containing citizens’ personal information — including Aadhaar numbers, COVID-19 vaccination data, and passport details — spilling online for anyone to access.

At fault was the Indian government’s cloud service, dubbed S3WaaS, which is billed as a “secure and scalable” system for building and hosting Indian government websites.

Security researcher Sourajeet Majumder told TechCrunch that he found a misconfiguration in 2022 that was exposing citizens’ personal information stored on S3WaaS to the open internet. Because the private documents were inadvertently made public, search engines also indexed the documents, allowing anyone to actively search the internet for the sensitive private citizen data.

With support from digital rights organization the Internet Freedom Foundation, Majumder reported the incident at the time to India’s computer emergency response team, known as CERT-In, and the Indian government’s National Informatics Centre.

CERT-In quickly acknowledged the issue, and links to the sensitive files were pulled down from public search engines.

But Majumder said that despite repeated warnings about the data spill, the Indian government cloud service was still exposing some individuals’ personal information as recently as last week.

With evidence of ongoing exposures of private data, Majumder asked TechCrunch for help getting the remaining data secured. Majumder said that some citizens’ sensitive data began spilling online long after he first disclosed the misconfiguration in 2022.

TechCrunch reported some of the exposed data to CERT-In. Majumder confirmed that those files are no longer publicly accessible.

When reached prior to publication, CERT-In did not object to TechCrunch publishing details of the security lapse. Representatives for the National Informatics Centre and S3WaaS did not respond to a request for comment.

Majumder said it was not possible to accurately estimate the true extent of this data leak, but warned that bad actors were purportedly selling the data on a known cybercrime forum before it was shuttered by U.S. authorities. CERT-In would not say if bad actors accessed the exposed data.

The exposed data, Majumder said, potentially puts citizens at risk of identity theft and scams.

“More than that, when sensitive health information like COVID test results and vaccine records get out, it’s not just our medical privacy that’s compromised — it stirs fears of discrimination and social rejection,” he said.

Majumder noted that this incident should be a “wake-up call for security reforms.”



'Reverse' searches: The sneaky ways that police tap tech companies for your private data | TechCrunch


U.S. police departments are increasingly relying on a controversial surveillance practice to demand large amounts of users’ data from tech companies, with the aim of identifying criminal suspects.

So-called “reverse” searches allow law enforcement and federal agencies to force big tech companies, like Google, to turn over information from their vast stores of user data. These orders are not unique to Google — any company with access to user data can be compelled to turn it over — but the search giant has become one of the biggest recipients of police demands for access to its databases of users’ information.

For example, authorities can demand that a tech company turn over information about every person who was in a particular place at a certain time based on their phone’s location, or who searched for a specific keyword or query. Thanks to a recently disclosed court order, authorities have shown they are able to scoop up identifiable information on everyone who watched certain YouTube videos.

Reverse searches effectively cast a digital dragnet over a tech company’s store of user data to catch the information that police are looking for.

Civil liberties advocates have argued that these kinds of court-approved orders are overbroad and unconstitutional, as they can also compel companies to turn over information on entirely innocent people with no connection to the alleged crime. Critics fear that these court orders can allow police to prosecute people based on where they go or whatever they search the internet for.

So far, not even the courts can agree on whether these orders are constitutional, setting up a likely legal challenge before the U.S. Supreme Court.

In the meantime, federal investigators are already pushing this controversial legal practice further. In one recent case, prosecutors demanded that Google turn over information on everyone who accessed certain YouTube videos in an effort to track down a suspected money launderer.

A recently unsealed search application filed in a Kentucky federal court last year revealed that prosecutors wanted Google to “provide records and information associated with Google accounts or IP addresses accessing YouTube videos for a one week period, between January 1, 2023, and January 8, 2023.”

The search application said that as part of an undercover transaction, the suspected money launderer shared a YouTube link with investigators, and investigators sent back two more YouTube links. The three videos — which TechCrunch has seen and have nothing to do with money laundering — collectively racked up about 27,000 views at the time of the search application. Still, prosecutors sought an order compelling Google to share information about every person who watched those three YouTube videos during that week, likely in a bid to narrow down the list of individuals to their top suspect, who prosecutors presumed had visited some or all of the three videos.

This particular court order was easier for law enforcement to obtain than a traditional search warrant because it sought access to connection logs about who accessed the videos, rather than the higher-standard search warrant that courts can use to demand that tech companies turn over the contents of someone’s private messages.

The Kentucky federal court approved the search order under seal, blocking its public release for a year. Google was barred from disclosing the demand until last month when the court’s order expired. Forbes first reported on the existence of the court order.

It’s not known if Google complied with the order, and a Google spokesperson declined to say either way when asked by TechCrunch.

Riana Pfefferkorn, a research scholar at the Stanford Internet Observatory, said this was a “perfect example” why civil liberties advocates have long criticized this type of court order for its ability to grant police access to people’s intrusive information.

“The government is essentially dragooning YouTube into serving as a honeypot for the feds to ensnare a criminal suspect by triangulating on who’d viewed the videos in question during a specific time period,” said Pfefferkorn, speaking about the recent order targeting YouTube users. “But by asking for information on everyone who’d viewed any of the three videos, the investigation also sweeps in potentially dozens or hundreds of other people who are under no suspicion of wrongdoing, just like with reverse search warrants for geolocation.”

Demanding the digital haystack

Reverse search court orders and warrants are a problem largely of Google’s own making, in part thanks to the gargantuan amounts of user data that the tech giant has long collected on its users, like browsing histories, web searches and even granular location data. Realizing that tech giants hold huge amounts of users’ location data and search queries, law enforcement began succeeding in convincing courts to grant broader access to tech companies’ databases than just targeting individual users.

A court-authorized search order allows police to demand information from a tech or phone company about a person who investigators believe is involved in a crime that took place or is about to happen. But instead of trying to find their suspect by looking for a needle in a digital haystack, police are increasingly demanding large chunks of the haystack — even if that includes personal information on innocent people — to sift for clues.

Using the same technique as demanding identifying information on anyone who viewed certain YouTube videos, law enforcement can also demand that Google turn over data that identifies every person who was at a certain place and time, or every user who searched the internet for a specific query.

Geofence warrants, as they are more commonly known, allow police to draw a shape on a map around a crime scene or place of interest and demand huge swaths of location data from Google’s databases on anyone whose phone was in that area at a point in time.
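In data terms, a geofence demand asks the data holder to run a filter over its store of location pings: keep only the records that fall inside the drawn area during the specified window, then hand over the associated accounts. The sketch below illustrates that filtering step over a hypothetical in-memory layout; it is not how Google or any other company actually stores or queries location data.

```python
# Simplified illustration of what a geofence demand asks a data holder to do:
# filter a store of location pings down to those inside an area during a window.
# The data layout and helpers are illustrative only.
from dataclasses import dataclass
from datetime import datetime
from math import asin, cos, radians, sin, sqrt

@dataclass
class Ping:
    user_id: str
    lat: float
    lng: float
    timestamp: datetime

def haversine_m(lat1, lng1, lat2, lng2) -> float:
    """Great-circle distance between two coordinates, in meters."""
    dlat, dlng = radians(lat2 - lat1), radians(lng2 - lng1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlng / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def geofence(pings, center_lat, center_lng, radius_m, start, end):
    """Return the user IDs whose pings fall inside the circle during the time window."""
    return {
        p.user_id
        for p in pings
        if start <= p.timestamp <= end
        and haversine_m(p.lat, p.lng, center_lat, center_lng) <= radius_m
    }
```

The breadth is exactly what critics object to: every account whose phone pinged inside the shape during the window is returned, whether or not its owner had anything to do with the crime.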


Police can also use so-called “keyword search” warrants that can identify every user who searched a keyword or search term within a time frame, typically to find clues about criminal suspects researching their would-be crimes ahead of time.

Both of these warrants can be effective because Google stores the granular location data and search queries of billions of people around the world.

Law enforcement might defend the surveillance-gathering technique for its uncanny ability to catch even the most elusive suspected criminals. But plenty of innocent people have been caught up in these investigative dragnets by mistake — in some cases as criminal suspects — simply by having phone data that appears to place them near the scene of an alleged crime.

Though Google’s practice of collecting as much data as it can on its users makes the company a prime target and a top recipient of reverse search warrants, it’s not the only company subject to these controversial court orders. Any tech company large or small that stores banks of readable user data can be compelled to turn it over to law enforcement. Microsoft, Snap, Uber and Yahoo (which owns TechCrunch) have all received reverse orders for user data.

Some companies choose not to store user data and others scramble the data so it can’t be accessed by anyone other than the user. That prevents companies from turning over access to data that they don’t have or cannot access — especially when laws change from one day to the next, such as when the U.S. Supreme Court overturned the constitutional right to access abortion.

Google, for its part, is putting a slow end to its ability to respond to geofence warrants, specifically by moving where it stores users’ location data. Instead of centralizing enormous amounts of users’ precise location histories on its servers, Google will soon start storing location data directly on users’ devices, so that police must seek the data from the device owner directly. Still, Google has so far left the door open to receiving search orders that seek information on users’ search queries and browsing history.

But as Google and others are finding out the hard way, the only way for companies to avoid turning over customer data is by not having it to begin with.

