From Digital Age to Nano Age. WorldWide.


Robotic Automations

Meta opens Quest OS to third-party headset makers, taps Lenovo and Xbox as partners | TechCrunch

The mixed reality operating system that powers Meta Quest headsets can officially be used by third-party device makers, the company announced on Monday. Now called “Meta Horizon OS,” the open system allows developers to access technologies like eye, face, hand, and body tracking, high-resolution passthrough, and more.

Three major tech players—Asus, Lenovo and Microsoft’s Xbox—are the first companies to confirm they’ll be developing new devices that run the software. Most notably, Microsoft is teaming up with Meta to build a “limited-edition Meta Quest, inspired by Xbox,” according to the announcement. Asus and Lenovo, on the other hand, are building headsets designed for specific use cases. Asus is developing a headset dedicated to gaming whereas Lenovo wants its device to be for “productivity, learning, and entertainment.”

The company says all future headsets can connect via the same Meta Quest app on iOS and Android. Plus, the Meta Quest Store, which the company renamed the Meta Horizon Store, is open to third-party developers, allowing them to use Meta’s frameworks and tools to create new mixed-reality experiences.

Meta Horizon OS is a strategic move for the company and comes at a time when the VR/AR headset wars between Meta, Apple and Sony continue to heat up.


Software Development in Sri Lanka


Watch: TikTok and Meta's latest moves signal a more commodified internet | TechCrunch

The internet’s mega-platforms are slowly merging into a great blob of sameness, and even the hottest companies in the world are not immune from the trend. TikTok’s winning strategy of focusing on short-form, vertical video has found fans among other internet platforms, and now TikTok is reportedly taking a page from its rivals’ playbooks, borrowing from what made them popular.

TikTok is working toward launching a new app called TikTok Notes that will allow users to post images in an apparent bid to rival Instagram, a service best known for its static-photo-sharing feature. Instagram, of course, has expanded into video and stories itself, taking pieces of other services and incorporating them into its own product.

Instagram’s parent company Meta’s other services are frequent borrowers as well. As is nearly every social service you can imagine. Recall that great Stories Boom that led to everyone from Line to Spotify to Instagram to LinkedIn trying out the popular sharing format. If it works for one social media service, expect the rest to follow in some manner at some point — probably sooner rather than later.

There’s good logic behind the effort; it is the same reason X wants to become a super app: the more a service offers its userbase, the more time users may spend inside the app’s walls. Expanding a feature set can bolster engaged time, and therefore how much revenue a social media service can earn. At the same time, bloat is a real issue that can dilute a user experience and render an app, well, Facebook in time.

This theme — the slow commodification of digital services via sameification — is similar to why we’re seeing LinkedIn try to ape The New York Times’ gaming might, and to some degree why major platform companies in tech wind up trying to be good at everything: the never-ending need to grow revenue. Perhaps this is why your favorite app always feels more and more like an alien world as time passes. It will evolve away from what made it special, and unique, because sticking to those guns is not the way to create a service that the maximum number of people will use. For that, you need to become Facebook.


Meta will auto-blur nudity in Instagram DMs in latest teen safety step | TechCrunch

Meta said on Thursday that it is testing new features on Instagram intended to help safeguard young people from unwanted nudity or sextortion scams. This includes a feature called “Nudity Protection in DMs,” which automatically blurs images detected as containing nudity.

The tech giant said it will also nudge teens to protect themselves by serving a warning encouraging them to think twice about sharing intimate images. Meta hopes this will boost protection against scammers who may send nude images to trick people into sending their own images in return.

The company said it is also implementing changes that will make it more difficult for potential scammers and criminals to find and interact with teens. Meta said it is developing new technology to identify accounts that are “potentially” involved in sextortion scams, and will apply limits on how these suspect accounts can interact with other users.

In another step announced on Thursday, Meta said it has increased the data it is sharing with the cross-platform online child safety program, Lantern, to include more “sextortion-specific signals.”

The social networking giant has had long-standing policies that ban people from sending unwanted nudes or seeking to coerce others into sharing intimate images. However, that doesn’t stop these problems from occurring and causing misery for scores of teens and young people — sometimes with extremely tragic results.

We’ve rounded up the latest crop of changes in more detail below.

Nudity screens

Nudity Protection in DMs aims to protect teen users of Instagram from cyberflashing by putting nude images behind a safety screen. Users will be able to choose whether or not to view such images.

“We’ll also show them a message encouraging them not to feel pressure to respond, with an option to block the sender and report the chat,” said Meta.

The nudity safety screen will be turned on by default for users under 18 globally. Older users will see a notification encouraging them to turn the feature on.

“When nudity protection is turned on, people sending images containing nudity will see a message reminding them to be cautious when sending sensitive photos, and that they can unsend these photos if they’ve changed their mind,” the company added.

Anyone trying to forward a nude image will see the same warning encouraging them to reconsider.

The feature is powered by on-device machine learning, so Meta said it will work within end-to-end encrypted chats because the image analysis is carried out on the user’s own device.
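The client-side flow described above can be sketched as follows. This is a hypothetical illustration, not Meta’s published implementation: `classifier` stands in for the on-device model, and the key property is that the decision to blur is made locally, so encrypted message contents never leave the device.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class IncomingImage:
    image_bytes: bytes
    sender_id: str

def render_incoming(
    img: IncomingImage,
    classifier: Callable[[bytes], float],  # stand-in for an on-device ML model
    protection_on: bool,
    threshold: float = 0.8,                # hypothetical score cutoff
) -> str:
    """Decide locally whether to blur; the image never leaves the device."""
    score = classifier(img.image_bytes)    # runs on-device, so E2EE is preserved
    if protection_on and score >= threshold:
        # In the real feature: a blurred safety screen with options to
        # reveal, block the sender, or report the chat.
        return "blurred"
    return "visible"
```

Because the classifier and the blur decision both live on the client, the server only ever handles ciphertext, which is why the feature can work inside end-to-end encrypted chats.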

The nudity filter has been in development for nearly two years.

Safety tips

In another safeguarding measure, Instagram users who send or receive nudes will be directed to safety tips (with information about the potential risks involved), which, according to Meta, have been developed with guidance from experts.

“These tips include reminders that people may screenshot or forward images without your knowledge, that your relationship to the person may change in the future, and that you should review profiles carefully in case they’re not who they say they are,” the company wrote in a statement. “They also link to a range of resources, including Meta’s Safety Center, support helplines, for those over 18, and Take It Down for those under 18.”

The company is also testing showing pop-up messages to people who may have interacted with an account that has been removed for sextortion. These pop-ups will also direct users to relevant resources.

“We’re also adding new child safety helplines from around the world into our in-app reporting flows. This means when teens report relevant issues — such as nudity, threats to share private images or sexual exploitation or solicitation — we’ll direct them to local child safety helplines where available,” the company said.

Tech to spot sextortionists

While Meta says it removes sextortionists’ accounts when it becomes aware of them, it first needs to spot bad actors to shut them down. So, the company is trying to go further by “developing technology to help identify where accounts may potentially be engaging in sextortion scams, based on a range of signals that could indicate sextortion behavior.”

“While these signals aren’t necessarily evidence that an account has broken our rules, we’re taking precautionary steps to help prevent these accounts from finding and interacting with teen accounts,” the company said. “This builds on the work we already do to prevent other potentially suspicious accounts from finding and interacting with teens.”

It’s not clear what technology Meta is using to do this analysis, nor which signals might denote a potential sextortionist (we’ve asked for more details). Presumably, the company may analyze patterns of communication to try to detect bad actors.
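Since Meta has not disclosed its signals or scoring, the following is purely illustrative: a sketch of how a handful of hypothetical behavioral signals could be combined into a precautionary score that gates interaction with teen accounts, without any single signal being treated as proof of wrongdoing.

```python
# All signal names and weights below are invented for illustration;
# Meta has not published what it actually uses.
HYPOTHETICAL_SIGNAL_WEIGHTS = {
    "new_account": 0.2,              # recently created account
    "mass_teen_follows": 0.4,        # follows unusually many teen accounts
    "repeated_image_requests": 0.3,  # repeatedly asks strangers for photos
    "prior_reports": 0.5,            # previously reported by other users
}

def precaution_score(signals: dict) -> float:
    """Sum the weights of whichever signals fired for this account."""
    return sum(w for name, w in HYPOTHETICAL_SIGNAL_WEIGHTS.items()
               if signals.get(name))

def restrict_interactions(signals: dict, threshold: float = 0.6) -> bool:
    """True means apply precautionary limits: route messages to the hidden
    requests folder, hide teen accounts from search, and so on."""
    return precaution_score(signals) >= threshold
```

The design matches what the quote implies: a flagged account is restricted, not banned, because the signals indicate risk rather than a confirmed rule violation.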

Accounts that get flagged by Meta as potential sextortionists will face restrictions on messaging or interacting with other users.

“[A]ny message requests potential sextortion accounts try to send will go straight to the recipient’s hidden requests folder, meaning they won’t be notified of the message and never have to see it,” the company wrote.

Users who are already chatting with potential scam or sextortion accounts will not have their chats shut down, but will be shown Safety Notices “encouraging them to report any threats to share their private images, and reminding them that they can say ‘no’ to anything that makes them feel uncomfortable,” according to the company.

Teen users are already protected from receiving DMs from adults they are not connected with on Instagram (and also from other teens, in some cases). But Meta is taking this a step further: The company said it is testing a feature that hides the “Message” button on teenagers’ profiles for potential sextortion accounts — even if they’re connected.

“We’re also testing hiding teens from these accounts in people’s follower, following and like lists, and making it harder for them to find teen accounts in Search results,” it added.

It’s worth noting the company is under increasing scrutiny in Europe over child safety risks on Instagram, and enforcers have questioned its approach since the bloc’s Digital Services Act (DSA) came into force last summer.

A long, slow creep towards safety

Meta has announced measures to combat sextortion before — most recently in February, when it expanded access to Take It Down. The third-party tool lets people generate a hash of an intimate image locally on their own device and share it with the National Center for Missing and Exploited Children, helping to create a repository of non-consensual image hashes that companies can use to search for and remove revenge porn.
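The privacy property at the core of Take It Down can be sketched in a few lines: only a hash of the image leaves the device, never the image itself. Note that SHA-256 below is a simplified stand-in; a real system uses perceptual hashes (such as Meta’s open-source PDQ) that survive resizing and re-encoding, which a cryptographic hash does not.

```python
import hashlib

def local_hash(image_bytes: bytes) -> str:
    """Computed on the user's own device; the image is never uploaded.
    SHA-256 here is a toy stand-in for a perceptual hash like PDQ."""
    return hashlib.sha256(image_bytes).hexdigest()

def matches_repository(candidate_bytes: bytes, hash_repository: set) -> bool:
    """Platform-side check: companies compare hashes, never the images."""
    return local_hash(candidate_bytes) in hash_repository

# Only the hash is submitted to the shared repository.
repo = {local_hash(b"intimate-image-bytes")}
```

This is why the approach answered the criticism of earlier schemes: the young person never has to upload the sensitive image anywhere.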

The company’s previous approaches to tackle that problem had been criticized, as they required young people to upload their nudes. In the absence of hard laws regulating how social networks need to protect children, Meta was left to self-regulate for years — with patchy results.

However, some requirements have landed on platforms in recent years — such as the U.K.’s Children Code (which came into force in 2021) and the more recent DSA in the EU — and tech giants like Meta are finally having to pay more attention to protecting minors.

For example, in July 2021, Meta started defaulting young people’s Instagram accounts to private just ahead of the U.K. compliance deadline. Even tighter privacy settings for teens on Instagram and Facebook followed in November 2022.

This January, the company announced it would set stricter messaging settings for teens on Facebook and Instagram by default, shortly before the full compliance deadline for the DSA kicked in in February.

This slow and iterative feature creep at Meta when it comes to protective measures for young users raises questions about what took the company so long to apply stronger safeguards. It suggests Meta opted for a cynical minimum in safeguarding in a bid to manage the impact on usage and to prioritize engagement over safety. That is exactly what Meta whistleblower Frances Haugen repeatedly denounced her former employer for.

Asked why the company is not also rolling out these new protections to Facebook, a spokeswoman for Meta told TechCrunch, “We want to respond to where we see the biggest need and relevance — which, when it comes to unwanted nudity and educating teens on the risks of sharing sensitive images — we think is on Instagram DMs, so that’s where we’re focusing first.”


Meta AI is restricting election-related responses in India | TechCrunch

Last week, Meta started testing its AI chatbot in India across WhatsApp, Instagram, and Messenger. But with the Indian general elections beginning today, the company is already blocking specific queries in its chatbot.

Meta confirmed that it is restricting certain election-related keywords for AI in the test phase. It also said that it is working to improve the AI response system.

“This is a new technology, and it may not always return the response we intend, which is the same for all generative AI systems. Since we launched, we’ve constantly released updates and improvements to our models, and we’re continuing to work on making them better,” a company spokesperson told TechCrunch.

The move makes the social media giant the latest Big Tech company to proactively curtail the scope of its generative AI services as it gears up for a major set of elections.

One of the big concerns from critics has been that genAI could provide misleading or outright false information to users, playing an illegal and unwelcome role in the democratic process.

Last month, Google started blocking election-related queries in its Gemini chatbot experience in India and other markets where elections are taking place this year.

Meta’s approach follows a bigger effort the company has announced around what it allows and does not allow on its platform leading up to elections. It pledged to block political ads in the week leading up to an election in any country, and it is working to identify and disclose when images in ads or other content have been created with AI.

Meta’s handling of genAI queries appears to be based around a blocklist. When you ask Meta AI about specific politicians, candidates, officeholders, and certain other terms, it will redirect you to the Election Commission’s website.

“This question may pertain to a political figure during general elections. Please refer to the link,” the response says.
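The blocklist behavior described above can be sketched as a simple pre-filter in front of the model. This is an assumed mechanism (Meta has not published its implementation), and the terms below are invented for illustration:

```python
# Hypothetical blocked terms; Meta's actual list is undisclosed.
ELECTION_BLOCKLIST = {"lok sabha", "candidate", "chief election commissioner"}

REDIRECT = ("This question may pertain to a political figure during general "
            "elections. Please refer to the link.")

def answer(query: str, generate) -> str:
    """Return the boilerplate redirect for blocked election terms,
    otherwise fall through to the normal model response."""
    q = query.lower()
    if any(term in q for term in ELECTION_BLOCKLIST):
        return REDIRECT
    return generate(query)
```

A substring blocklist like this also explains the inconsistencies noted below: queries that merely mention a party can slip through, while queries naming a candidate hit the filter.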

Image Credits: Screenshot by TechCrunch

Notably, the company is not strictly blocking responses to questions containing party names. However, if a query includes the names of candidates or other terms, you might see the boilerplate answer cited above.

But just like other AI-powered systems, Meta AI has some inconsistencies. For instance, when TechCrunch asked for information about “Indi Alliance” — a political alliance of multiple parties that is fighting against the incumbent Bharatiya Janata Party (BJP) — it responded with information containing a politician’s name. However, when we asked about that politician in a separate query, the chatbot didn’t respond with any information.

Image credits: Screenshot by TechCrunch

This week, the company rolled out a new Llama-3-powered Meta AI chatbot in more than a dozen countries, including the U.S., but India was missing from the list. Meta said that the chatbot will be in the test phase in the country for now.

“We continue to learn from our user tests in India. As we do with many of our AI products and features, we test them publicly in varying phases and in a limited capacity,” a company spokesperson told TechCrunch in a statement.

Currently, Meta AI is not blocking queries about elections for U.S.-related terms such as “Tell me about Joe Biden.” We have asked Meta if the company plans to restrict these queries to the U.S. elections or other markets. We will update the story if we hear back.

If you want to talk about your experience with Meta AI you can reach out to Ivan Mehta at [email protected] by email and through this link on Signal.


Meta adds its AI chatbot, powered by Llama 3, to the search bar across its apps | TechCrunch

Meta’s making several big moves today to promote its AI services across its platform. The company has upgraded its AI chatbot with its newest large language model, Llama 3, and it is now running it in the search bar of its four major apps (Facebook, Messenger, Instagram and WhatsApp) across multiple countries. Alongside this, the company launched other new features, such as faster image generation and access to web search results.

This confirms and extends a test that TechCrunch reported on last week, when we spotted that the company had started testing Meta AI on Instagram’s search bar.

Additionally, the company is also launching a new site for users to access the chatbot.

The news underscores Meta’s efforts to stake out a position as a mover and shaker amid the current hype for generative AI tools among consumers. Chasing after other popular services in the market such as those from OpenAI, Mark Zuckerberg claimed today that Meta AI is possibly the “most intelligent AI assistant that you can freely use.”

Meta first rolled out Meta AI in the U.S. last year. It is now expanding the chatbot in the English language in over a dozen countries, including Australia, Canada, Ghana, Jamaica, Malawi, New Zealand, Nigeria, Pakistan, Singapore, South Africa, Uganda, Zambia and Zimbabwe.

The company last week started testing Meta AI in countries like India and Nigeria, but notably, India was missing from today’s announcement. Meta said that it plans to keep Meta AI in test mode in the country at the moment.

“We continue to learn from our user tests in India. As we do with many of our AI products and features, we test them publicly in varying phases and in a limited capacity,” a company spokesperson said in a statement.

New features

Users could already ask Meta AI for writing or recipe suggestions. Now, they can also ask for web-related results powered by Google and Bing.

Image Credits: Meta

The company said that it is also making image generation faster. Plus, users can ask Meta AI to animate an image or turn an image into a GIF. Users can see the AI tool modifying the image in real time as they type. The company has also worked on making image quality of AI-generated photos better.

Image Credits: Meta

AI-powered image-generation tools have been bad at spelling out words. Meta claims that its new model has also shown improvements in this area.

All AI things everywhere at once

Meta is adopting the approach of having Meta AI available in as many places as it can. It is making the bot available on the search bar, in individual and group chats and even in the feed.

Image Credits: Meta

The company said that you can ask questions about posts in your Facebook feed. For example, if you see a photo of the aurora borealis, you could ask Meta AI for the best time to visit Iceland to see the northern lights.

Image Credits: Meta

Meta AI is already available on the Ray-Ban smart glasses, and the company said that soon it will be available on the Meta Quest headset, too.

There are downsides to having AI in so many places. Specifically, the models can “hallucinate” and make up random, often nonsensical responses, so using them across multiple platforms could end up presenting a content moderation nightmare. Earlier this week, 404 Media reported that Meta AI, chatting in a parents group, said that it had a gifted and academically challenged child who attended a particular school in New York. (Parents spotted the odd message, and Meta eventually also weighed in and removed the answer, saying that the company would continue to work on improving these systems.)

“We share information within the features themselves to help people understand that AI might return inaccurate or inappropriate outputs. Since we launched, we’ve constantly released updates and improvements to our models, and we’re continuing to work on making them better,” Meta told 404 Media.


Meta unveils its newest custom AI chip as it races to catch up | TechCrunch

Meta, hell-bent on catching up to rivals in the generative AI space, is spending billions on its own AI efforts. A portion of those billions is going toward recruiting AI researchers. But an even larger chunk is being spent developing hardware, specifically chips to run and train Meta’s AI models.

Meta unveiled the newest fruit of its chip dev efforts today, conspicuously a day after Intel announced its latest AI accelerator hardware. Called the “next-gen” Meta Training and Inference Accelerator (MTIA), the successor to last year’s MTIA v1, the chip runs models including for ranking and recommending display ads on Meta’s properties (e.g. Facebook).

Compared to MTIA v1, which was built on a 7nm process, the next-gen MTIA is 5nm. (In chip manufacturing, “process” refers to the size of the smallest component that can be built on the chip.) The next-gen MTIA is a physically larger design, packed with more processing cores than its predecessor. And while it consumes more power — 90W versus 25W — it also boasts more internal memory (128MB versus 64MB) and runs at a higher average clock speed (1.35GHz up from 800MHz).
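The generational comparison above can be expressed as simple ratios, using only the numbers reported in this article:

```python
# Specs as stated above for MTIA v1 and the next-gen MTIA.
mtia_v1  = {"process_nm": 7, "power_w": 25, "sram_mb": 64,  "clock_ghz": 0.8}
next_gen = {"process_nm": 5, "power_w": 90, "sram_mb": 128, "clock_ghz": 1.35}

# Next-gen relative to v1 for each spec: power 3.6x, memory 2x, clock ~1.69x.
ratios = {k: next_gen[k] / mtia_v1[k] for k in mtia_v1}
```

Laid out this way, the trade-off is clear: the claimed 3x overall performance gain comes at 3.6x the power draw, so per-watt efficiency gains depend on workload.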

Meta says the next-gen MTIA is currently live in 16 of its data center regions and delivering up to 3x overall better performance compared to MTIA v1. If that “3x” claim sounds a bit vague, you’re not wrong — we thought so too. But Meta would only volunteer that the figure came from testing the performance of “four key models” across both chips.

“Because we control the whole stack, we can achieve greater efficiency compared to commercially available GPUs,” Meta writes in a blog post shared with TechCrunch.

Meta’s hardware showcase — which comes a mere 24 hours after a press briefing on the company’s various ongoing generative AI initiatives — is unusual for several reasons.

One, Meta reveals in the blog post that it’s not using the next-gen MTIA for generative AI training workloads at the moment, although the company claims it has “several programs underway” exploring this. Two, Meta admits that the next-gen MTIA won’t replace GPUs for running or training models — but instead will complement them.

Reading between the lines, Meta is moving slowly — perhaps more slowly than it’d like.

Meta’s AI teams are almost certainly under pressure to cut costs. The company’s set to spend an estimated $18 billion by the end of 2024 on GPUs for training and running generative AI models, and — with training costs for cutting-edge generative models ranging in the tens of millions of dollars — in-house hardware presents an attractive alternative.

And while Meta’s hardware drags, rivals are pulling ahead, much to the consternation of Meta’s leadership, I’d suspect.

Google this week made its fifth-generation custom chip for training AI models, TPU v5p, generally available to Google Cloud customers, and revealed its first dedicated chip for running models, Axion. Amazon has several custom AI chip families under its belt. And Microsoft last year jumped into the fray with the Azure Maia AI Accelerator and the Azure Cobalt 100 CPU.

In the blog post, Meta says it took fewer than nine months to “go from first silicon to production models” of the next-gen MTIA, which, to be fair, is shorter than the typical window between generations of Google’s TPUs. But Meta has a lot of catching up to do if it hopes to achieve a measure of independence from third-party GPUs — and match its stiff competition.


Meta releases Llama 3, claims it's among the best open models available | TechCrunch

Meta has released the latest entry in its Llama series of open source generative AI models: Llama 3. Or, more accurately, the company has open sourced two models in its new Llama 3 family, with the rest to come at an unspecified future date.

Meta describes the new models — Llama 3 8B, which contains 8 billion parameters, and Llama 3 70B, which contains 70 billion parameters — as a “major leap” compared to the previous-gen Llama models, Llama 2 7B and Llama 2 70B, performance-wise. (Parameters essentially define the skill of an AI model on a problem, like analyzing and generating text; higher-parameter-count models are, generally speaking, more capable than lower-parameter-count models.) In fact, Meta says that, for their respective parameter counts, Llama 3 8B and Llama 3 70B — trained on two custom-built 24,000 GPU clusters — are among the best-performing generative AI models available today.

That’s quite a claim to make. So how is Meta supporting it? Well, the company points to the Llama 3 models’ scores on popular AI benchmarks like MMLU (which attempts to measure knowledge), ARC (which attempts to measure skill acquisition) and DROP (which tests a model’s reasoning over chunks of text). As we’ve written about before, the usefulness — and validity — of these benchmarks is up for debate. But for better or worse, they remain one of the few standardized ways by which AI players like Meta evaluate their models.

Llama 3 8B bests other open source models like Mistral’s Mistral 7B and Google’s Gemma 7B, both of which contain 7 billion parameters, on at least nine benchmarks: MMLU, ARC, DROP, GPQA (a set of biology-, physics- and chemistry-related questions), HumanEval (a code generation test), GSM-8K (math word problems), MATH (another mathematics benchmark), AGIEval (a problem-solving test set) and BIG-Bench Hard (a commonsense reasoning evaluation).

Now, Mistral 7B and Gemma 7B aren’t exactly on the bleeding edge (Mistral 7B was released last September), and in a few of the benchmarks Meta cites, Llama 3 8B scores only a few percentage points higher than either. But Meta also makes the claim that the larger-parameter-count Llama 3 model, Llama 3 70B, is competitive with flagship generative AI models, including Gemini 1.5 Pro, the latest in Google’s Gemini series.

Image Credits: Meta

Llama 3 70B beats Gemini 1.5 Pro on MMLU, HumanEval and GSM-8K, and — while it doesn’t rival Anthropic’s most performant model, Claude 3 Opus — Llama 3 70B scores better than the weakest model in the Claude 3 series, Claude 3 Sonnet, on five benchmarks (MMLU, GPQA, HumanEval, GSM-8K and MATH).

Image Credits: Meta

For what it’s worth, Meta also developed its own test set covering use cases ranging from coding and creative writing to reasoning to summarization, and — surprise! — Llama 3 70B came out on top against Mistral’s Mistral Medium model, OpenAI’s GPT-3.5 and Claude Sonnet. Meta says that it gated its modeling teams from accessing the set to maintain objectivity, but obviously — given that Meta itself devised the test — the results have to be taken with a grain of salt.

Image Credits: Meta

More qualitatively, Meta says that users of the new Llama models should expect more “steerability,” a lower likelihood to refuse to answer questions, and higher accuracy on trivia questions, questions pertaining to history and STEM fields such as engineering and science and general coding recommendations. That’s in part thanks to a much larger data set: a collection of 15 trillion tokens, or a mind-boggling ~750,000,000,000 words — seven times the size of the Llama 2 training set. (In the AI field, “tokens” refers to subdivided bits of raw data, like the syllables “fan,” “tas” and “tic” in the word “fantastic.”)
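The subword idea in the parenthetical above can be made concrete with a toy tokenizer. The greedy splitter below is hand-rolled for illustration and is not Meta’s actual tokenizer (Llama models use a trained BPE-style vocabulary):

```python
def toy_tokenize(word: str, vocab: set) -> list:
    """Greedy longest-prefix split of a word against a tiny subword
    vocabulary, falling back to single characters for unknown pieces."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in vocab or j == i + 1:
                tokens.append(word[i:j])
                i = j
                break
    return tokens

# The article's own example: "fantastic" -> "fan", "tas", "tic"
vocab = {"fan", "tas", "tic"}
```

Because tokens are fragments like these rather than whole words, a token count (15 trillion here) and a word count measure the same corpus on different scales.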

Where did this data come from? Good question. Meta wouldn’t say, revealing only that it drew from “publicly available sources,” that the set included four times more code than the Llama 2 training data set, and that 5% of the set is non-English data (covering ~30 languages), intended to improve performance on languages other than English. Meta also said it used synthetic data — i.e., AI-generated data — to create longer documents for the Llama 3 models to train on, a somewhat controversial approach due to the potential performance drawbacks.

“While the models we’re releasing today are only fine tuned for English outputs, the increased data diversity helps the models better recognize nuances and patterns, and perform strongly across a variety of tasks,” Meta writes in a blog post shared with TechCrunch.

Many generative AI vendors see training data as a competitive advantage and thus keep it and info pertaining to it close to the chest. But training data details are also a potential source of IP-related lawsuits, another disincentive to reveal much. Recent reporting revealed that Meta, in its quest to maintain pace with AI rivals, at one point used copyrighted ebooks for AI training despite the company’s own lawyers’ warnings; Meta and OpenAI are the subject of an ongoing lawsuit brought by authors including comedian Sarah Silverman over the vendors’ alleged unauthorized use of copyrighted data for training.

So what about toxicity and bias, two other common problems with generative AI models (including Llama 2)? Does Llama 3 improve in those areas? Yes, claims Meta.

Meta says that it developed new data-filtering pipelines to boost the quality of its model training data, and that it’s updated its pair of generative AI safety suites, Llama Guard and CybersecEval, to attempt to prevent the misuse of and unwanted text generations from Llama 3 models and others. The company’s also releasing a new tool, Code Shield, designed to detect code from generative AI models that might introduce security vulnerabilities.

Filtering isn’t foolproof, though — and tools like Llama Guard, CybersecEval and Code Shield only go so far. (See: Llama 2’s tendency to make up answers to questions and leak private health and financial information.) We’ll have to wait and see how the Llama 3 models perform in the wild, inclusive of testing from academics on alternative benchmarks.

Meta says that the Llama 3 models — which are available for download now, and powering Meta’s Meta AI assistant on Facebook, Instagram, WhatsApp, Messenger and the web — will soon be hosted in managed form across a wide range of cloud platforms including AWS, Databricks, Google Cloud, Hugging Face, Kaggle, IBM’s WatsonX, Microsoft Azure, Nvidia’s NIM and Snowflake. In the future, versions of the models optimized for hardware from AMD, AWS, Dell, Intel, Nvidia and Qualcomm will also be made available.

And more capable models are on the horizon.

Meta says that it’s currently training Llama 3 models over 400 billion parameters in size — models with the ability to “converse in multiple languages,” take more data in and understand images and other modalities as well as text, which would bring the Llama 3 series in line with open releases like Hugging Face’s Idefics2.

Image Credits: Meta

“Our goal in the near future is to make Llama 3 multilingual and multimodal, have longer context and continue to improve overall performance across core [large language model] capabilities such as reasoning and coding,” Meta writes in a blog post. “There’s a lot more to come.”



Adtech giants like Meta must give EU users real privacy choice, says EDPB | TechCrunch

The European Data Protection Board (EDPB) has published new guidance which has major implications for adtech giants like Meta and other large platforms.

The guidance, which was confirmed as incoming on Wednesday, as we reported earlier, will steer how privacy regulators interpret the bloc’s General Data Protection Regulation (GDPR) in a critical area. The EDPB’s full opinion on so-called “consent or pay” models runs to 42 pages.

Other large ad-funded platforms should also take note of the granular guidance. But Meta looks first in line to feel any resultant regulatory chill falling on its surveillance-based business model.

This is because — since November 2023 — the owner of Facebook and Instagram has forced users in the European Union to agree to being tracked and profiled for its ad targeting business, or else pay a monthly subscription to access ad-free versions of the services. However, a market leader imposing that kind of binary choice looks unviable, per the EDPB, an expert body made up of representatives of data protection authorities from around the EU.

“The EDPB notes that negative consequences are likely to occur when large online platforms use a ‘consent or pay’ model to obtain consent for the processing,” the Board opines, underscoring the risk of “an imbalance of power” between the individual and the data controller, such as in cases where “an individual relies on the service and the main audience of the service”.

In a press release accompanying publication of the opinion, the Board’s chair, Anu Talus, also emphasized the need for platforms to provide users with a “real choice” over their privacy.

“Online platforms should give users a real choice when employing ‘consent or pay’ models,” Talus wrote. “The models we have today usually require individuals to either give away all their data or to pay. As a result most users consent to the processing in order to use a service, and they do not understand the full implications of their choices.”

“Controllers should take care at all times to avoid transforming the fundamental right to data protection into a feature that individuals have to pay to enjoy. Individuals should be made fully aware of the value and the consequences of their choices,” she added.

In a summary of its opinion, the EDPB writes in the press release that “in most cases” it will “not be possible” for “large online platforms” that implement consent or pay models to comply with the GDPR’s requirement for “valid consent” — if they “confront users only with a choice between consenting to processing of personal data for behavioural advertising purposes and paying a fee” (i.e. as Meta currently is).

The opinion defines large platforms, non-exhaustively, as entities designated as very large online platforms under the EU’s Digital Services Act or gatekeepers under the Digital Markets Act (DMA) — again, as Meta is (Facebook and Instagram are regulated under both laws).

“The EDPB considers that offering only a paid alternative to services which involve the processing of personal data for behavioural advertising purposes should not be the default way forward for controllers,” the Board goes on. “When developing alternatives, large online platforms should consider providing individuals with an ‘equivalent alternative’ that does not entail the payment of a fee.

If controllers do opt to charge a fee for access to the ‘equivalent alternative’, they should give significant consideration to offering an additional alternative. This free alternative should be without behavioural advertising, e.g. with a form of advertising involving the processing of less or no personal data. This is a particularly important factor in the assessment of valid consent under the GDPR.”

The EDPB takes care to stress that other core principles of the GDPR — such as purpose limitation, data minimisation and fairness — continue to apply around consent mechanisms, adding: “In addition, large online platforms should also consider compliance with the principles of necessity and proportionality, and they are responsible for demonstrating that their processing is generally in line with the GDPR.”

Given the detail of the EDPB’s opinion on this contentious and knotty topic — and the suggestion that lots of case-by-case analysis will be needed to make compliance assessments — Meta may feel confident nothing will change in the short term. Clearly it will take time for EU regulators to analyze, ingest and act on the Board’s advice.

Contacted for comment, Meta spokesman Matthew Pollard emailed a brief statement playing down the guidance: “Last year, the Court of Justice of the European Union [CJEU] ruled that the subscriptions model is a legally valid way for companies to seek people’s consent for personalised advertising. Today’s EDPB Opinion does not alter that judgment and Subscription for no ads complies with EU laws.”

Ireland’s Data Protection Commission, which oversees Meta’s GDPR compliance and has been reviewing its consent model since last year, declined to comment on whether it will be taking any action in light of the EDPB guidance as it said the case is ongoing.

Ever since Meta launched the “subscription for no-ads” offer last year it has continued to claim it complies with all relevant EU regulations — seizing on a line in the July 2023 ruling by the EU’s top court in which judges did not explicitly rule out the possibility of charging for a non-tracking alternative but instead stipulated that any such payment must be “necessary” and “appropriate”.

Commenting on this aspect of the CJEU’s decision in its opinion, the Board notes — in stark contrast to Meta’s repeated assertions that the CJEU essentially sanctioned its subscription model in advance — that this was “not central to the Court’s determination”.

“The EDPB considers that certain circumstances should be present for a fee to be imposed, taking into account both possible alternatives to behavioural advertising that entail the processing of less personal data and the data subjects’ position,” it goes on with emphasis. “This is suggested by the words ‘necessary’ and ‘appropriate’, which should, however, not be read as requiring the imposition of a fee to be ‘necessary’ in the meaning of Article 52(1) of the Charter and EU data protection law.”

At the same time, the Board’s opinion does not entirely deny large platforms the possibility of charging for a non-tracking alternative — so Meta and its tracking-ad-funded ilk may feel confident they’ll be able to find some succour in 42 pages of granular discussion of the intersecting demands of data protection law. (Or, at least, that this intervention will keep regulators busy trying to wrap their heads around case-by-case complexities.)

However, the guidance does — notably — encourage platforms towards offering free alternatives to tracking ads, including privacy-safe(r) ad-supported offerings.

The EDPB gives examples of contextual, “general advertising” or “advertising based on topics the data subject selected from a list of topics of interests”. (It’s also worth noting that the European Commission has suggested Meta could use contextual ads instead of forcing users to consent to tracking ads, as part of its oversight of the tech giant’s compliance with the DMA.)

“While there is no obligation for large online platforms to always offer services free of charge, making this further alternative available to the data subjects enhances their freedom of choice,” the Board goes on, adding: “This makes it easier for controllers to demonstrate that consent is freely given.”

While the Board has delivered rather more discursive nuance than instant clarity on a pivotal topic, the intervention is important and does look set to make it harder for mainstream adtech giants like Meta to frame and operate under false, privacy-hostile binary choices over the long run.

Armed with this guidance, EU data protection regulators should be asking why such platforms aren’t offering far less privacy-hostile alternatives — and asking that question, if not literally today, then very, very soon.

For a tech giant as dominant and well-resourced as Meta, it’s hard to see how it can dodge that question for long. That said, it will surely stick to its usual GDPR playbook of spinning things out for as long as it possibly can and appealing every final decision within reach.

Privacy rights nonprofit noyb, which has been at the forefront of fighting the creep of consent-or-pay tactics in the region in recent years, argues the EDPB opinion makes it clear Meta cannot rely on the “pay or okay” trick any more. However, its founder and chairman, Max Schrems, told TechCrunch he’s concerned the Board hasn’t gone far enough in skewering this divisive forced-consent mechanism.

“The EDPB recalls all the relevant elements, but does not unequivocally state the obvious consequence, which is that ‘pay or okay’ is not legal,” he told us. “It names all the elements why it’s illegal for Meta, but there is thousands of other pages where there is no answer yet.”

As if 42 pages of guidance on this knotty topic weren’t enough already, the Board has more in the works, too: Talus says it intends to develop guidelines on consent or pay models “with a broader scope”, adding that it will “engage with stakeholders on these upcoming guidelines”.

European news publishers were the earliest adopters of the controversial consent tactic so the forthcoming “broader” EDPB opinion is likely to be keenly watched by players in the media industry.


Meta confirms that its Llama 3 open source LLM is coming in the next month | TechCrunch

At an event in London on Tuesday, Meta confirmed that it plans an initial release of Llama 3 — the next generation of its large language model used to power generative AI assistants — within the next month.

This confirms a report published on Monday by The Information that Meta was getting close to launch.

“Within the next month, actually less, hopefully in a very short period of time, we hope to start rolling out our new suite of next-generation foundation models, Llama 3,” said Nick Clegg, Meta’s president of global affairs. He described what sounds like the release of several different iterations or versions of the product. “There will be a number of different models with different capabilities, different versatilities [released] during the course of this year, starting really very soon.”

The plan, Meta Chief Product Officer Chris Cox added, will be to power multiple products across Meta with Llama 3.

Meta has been scrambling to catch up to OpenAI, which took it and other big tech companies like Google by surprise when it launched ChatGPT over a year ago and the app went viral, turning generative AI questions and answers into everyday, mainstream experiences.

Meta has largely taken a very cautious approach with AI, but that hasn’t gone over well with the public, with previous versions of Llama criticized as too limited. (Llama 2 was released publicly in July 2023. The first version of Llama was not released to the public, yet it still leaked online.)

Llama 3, which is bigger in scope than its predecessors, is expected to address this, with capabilities not just to answer questions more accurately but also to field a wider range of questions that might include more controversial topics. Meta hopes this will make the product catch on with users.

“Our goal over time is to make a Llama-powered Meta AI be the most useful assistant in the world,” said Joelle Pineau, Meta’s vice president of AI research. “There’s quite a bit of work remaining to get there.” The company did not talk about the size of the parameters it’s using in Llama 3, nor did it offer any demos of how it would work. It’s expected to have about 140 billion parameters, compared to 70 billion for the biggest Llama 2 model.

Most notably, Meta’s Llama families, released as open source, represent a different philosophical approach to how AI should develop as a wider technology. Meta hopes this will win it broader favor with developers than more proprietary models enjoy.

But Meta is also playing it more cautiously, it seems, especially when it comes to other generative AI beyond text generation. The company is not yet releasing Emu, its image generation tool, Pineau said.

“Latency matters a lot along with safety along with ease of use, to generate images that you’re proud of and that represent whatever your creative context is,” Cox said.

Ironically — or perhaps predictably (heh) — even as Meta works to launch Llama 3, it does have some significant generative AI skeptics in the house.

Yann LeCun, the celebrated AI academic who is also Meta’s chief AI scientist, took a swipe at the limitations of generative AI overall and said his bet is on what comes after it. He predicts that will be joint embedding predictive architecture (JEPA), a different approach both to training models and producing results, which Meta has been using to build more accurate predictive AI in the area of image generation.

“The future of AI is JEPA. It’s not generative AI,” he said. “We’re going to have to change the name of Chris’s product division.”


WhatsApp is adding filters to easily find messages | TechCrunch

For those who use WhatsApp more like an inbox, the app is about to become more useful. WhatsApp on Tuesday announced a handful of new chat filters that make certain types of messages easier to find: All, Unread and Groups.

The “All” filter is selected by default and shows an unfiltered view of your inbox. The “Unread” filter surfaces messages you might not have seen yet, helping you reach inbox zero and clear away nagging unread-chat indicators.

Notably, WhatsApp already had a way to look at unread messages through a filter in the search bar. But with the new filter bubbles on top of the chat screen, the option is easily available.

Meta said that the “Groups” filter was one of the most sought-after features for quickly scrolling through all your group chats. This filter will also show conversations in subgroups that are part of Communities — WhatsApp’s discussion group feature.
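Conceptually, these filters are just predicates applied to the chat list. The sketch below is purely illustrative — WhatsApp’s actual implementation is not public, and the `Chat`, `FILTERS` and `apply_filter` names are hypothetical — but it shows how an “All”/“Unread”/“Groups” scheme could work, including treating Community subgroups as groups:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

# Hypothetical model of a chat list entry. WhatsApp's real data
# structures are not public; this is a sketch of the filtering idea only.
@dataclass
class Chat:
    name: str
    unread_count: int = 0
    is_group: bool = False
    community: Optional[str] = None  # parent Community, if this is a subgroup

# Each filter is a predicate over a Chat.
FILTERS: Dict[str, Callable[[Chat], bool]] = {
    "All": lambda c: True,                       # default: unfiltered inbox
    "Unread": lambda c: c.unread_count > 0,      # chats with unseen messages
    # "Groups" matches group chats, including subgroups inside Communities
    "Groups": lambda c: c.is_group or c.community is not None,
}

def apply_filter(chats: List[Chat], filter_name: str) -> List[Chat]:
    """Return the chats matching the named filter, preserving order."""
    predicate = FILTERS[filter_name]
    return [c for c in chats if predicate(c)]
```

With an inbox of one unread direct chat, one group and one Community subgroup, "Unread" would return only the direct chat while "Groups" would return the other two.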

Gmail users might find these filter bubbles familiar as the Google-owned email service introduced a similar feature in 2020 to make search simpler.

This first set of filters might be just the beginning, though. As multiple reports from WABetaInfo have suggested, WhatsApp has been working on other filters such as “Contacts” to filter out messages from unknown people and businesses, “Favorites” to mark frequently used contacts and even custom chat filters in various beta versions of the app.

WhatsApp said the filter options will start rolling out today and will reach all users in the coming weeks.
