From Digital Age to Nano Age. WorldWide.


Robotic Automations

Google will now show labels in Play Store to denote official government apps | TechCrunch


After months of testing, Google is today rolling out labels in the Play Store to denote official state and federal government apps in more than 14 countries. The new label should help users avoid apps that masquerade as official ones to steal money or data.

The company said that these badges currently cover more than 2,000 apps in countries including Australia, Canada, Germany, France, the United Kingdom, Japan, South Korea, the United States, Brazil, Indonesia, India, and Mexico. Last November, the company teased this feature when it announced new rules for app developers.

Users will be able to see a new “Government” badge on official apps. If they tap on the badge, a pop-up displays a message saying, “Play verified this app is affiliated with a government entity.” The badge also shows up in lists like “Top Charts” for apps.


Google said it has worked with governments and their developer partners to onboard apps with badges. In India, the company has faced problems with numerous fake central and state government apps popping up on the Play Store to dupe people.

The company noted that Play Store’s policy already doesn’t allow apps with false descriptions, misleading icons, or screenshots — especially the ones claiming to be official apps.

Google’s rule on deceptive behavior prohibits “apps that falsely claim affiliation with a government entity or to provide or facilitate government services for which they are not properly authorized.”

Google said it asks developers to submit proof that they have sufficient permission to process government documents, for safety reasons. It also encourages governments to use official email addresses to create developer accounts on Google Play and publish their apps.




Software Development in Sri Lanka


Meta's new AI deepfake playbook: More labels, fewer takedowns | TechCrunch


Meta has announced changes to its rules on AI-generated content and manipulated media following criticism from its Oversight Board. Starting next month, the company said, it will label a wider range of such content, including by applying a “Made with AI” badge to deepfakes. Additional contextual information may be shown when content has been manipulated in other ways that pose a high risk of deceiving the public on an important issue.

The move could lead to the social networking giant labeling more pieces of content that have the potential to be misleading — important in a year of many elections taking place around the world. However, for deepfakes, Meta is only going to apply labels where the content in question has “industry standard AI image indicators,” or where the uploader has disclosed it’s AI-generated content.

AI-generated content that falls outside those bounds will, presumably, escape unlabeled. 

The policy change is also likely to leave more AI-generated content and manipulated media on Meta’s platforms, since the company is shifting to favor an approach focused on “providing transparency and additional context” as the “better way to address this content,” rather than removing manipulated media, given the associated risks to free speech.

So, for AI-generated or otherwise manipulated media on Meta platforms like Facebook and Instagram, the playbook appears to be: more labels, fewer takedowns.

Meta said it will stop removing content solely on the basis of its current manipulated video policy in July, adding in a blog post published Friday that: “This timeline gives people time to understand the self-disclosure process before we stop removing the smaller subset of manipulated media.”

The change of approach may be intended to respond to rising legal demands on Meta around content moderation and systemic risk, such as the European Union’s Digital Services Act. Since last August, the EU law has applied a set of rules to its two main social networks that require Meta to walk a fine line between purging illegal content, mitigating systemic risks and protecting free speech. The bloc is also applying extra pressure on platforms ahead of elections to the European Parliament this June, including urging tech giants to watermark deepfakes where technically feasible.

The upcoming U.S. presidential election in November is also likely on Meta’s mind.

Oversight Board criticism

Meta’s advisory board, which the tech giant funds but permits to run at arm’s length, reviews a tiny percentage of its content moderation decisions but can also make policy recommendations. Meta is not bound to accept the board’s suggestions, but in this instance it has agreed to amend its approach.

In a blog post published Friday, Monika Bickert, Meta’s VP of content policy, said the company is amending its policies on AI-generated content and manipulated media based on the board’s feedback. “We agree with the Oversight Board’s argument that our existing approach is too narrow since it only covers videos that are created or altered by AI to make a person appear to say something they didn’t say,” she wrote.

Back in February, the Oversight Board urged Meta to rethink its approach to AI-generated content after taking on the case of a doctored video of President Biden that had been edited to imply a sexual motive to a platonic kiss he gave his granddaughter.

While the board agreed with Meta’s decision to leave the specific content up, it attacked Meta’s policy on manipulated media as “incoherent,” pointing out, for example, that it only applies to video created through AI, letting other fake content (such as more basically doctored video or audio) off the hook.

Meta appears to have taken the critical feedback on board.

“In the last four years, and particularly in the last year, people have developed other kinds of realistic AI-generated content like audio and photos, and this technology is quickly evolving,” Bickert wrote. “As the Board noted, it’s equally important to address manipulation that shows a person doing something they didn’t do.

“The Board also argued that we unnecessarily risk restricting freedom of expression when we remove manipulated media that does not otherwise violate our Community Standards. It recommended a ‘less restrictive’ approach to manipulated media like labels with context.”

Earlier this year, Meta announced it was working with others in the industry on developing common technical standards for identifying AI content, including video and audio. It’s leaning on that effort to expand labeling of synthetic media now.

“Our ‘Made with AI’ labels on AI-generated video, audio and images will be based on our detection of industry-shared signals of AI images or people self-disclosing that they’re uploading AI-generated content,” said Bickert, noting the company already applies “Imagined with AI” labels to photorealistic images created using its own Meta AI feature.

The expanded policy will cover “a broader range of content in addition to the manipulated content that the Oversight Board recommended labeling,” per Bickert.

“If we determine that digitally-created or altered images, video or audio create a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label so people have more information and context,” she wrote. “This overall approach gives people more information about the content so they can better assess it and so they will have context if they see the same content elsewhere.”

Meta said it won’t remove manipulated content — whether AI-based or otherwise doctored — unless it violates other policies (such as voter interference, bullying and harassment, violence and incitement, or other Community Standards issues). Instead, as noted above, it may add “informational labels and context” in certain scenarios of high public interest.
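Taken together, Meta’s stated rules amount to an ordered set of checks: removal only for other policy violations, a more prominent label for high-risk deception, and a “Made with AI” label when industry signals or self-disclosure identify synthetic content. The sketch below is a hypothetical illustration of that decision order; the field names and function are my own, not Meta’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class MediaItem:
    has_ai_indicators: bool       # industry-standard AI signals detected in the file
    self_disclosed_ai: bool       # uploader flagged the content as AI-generated
    high_deception_risk: bool     # risk of materially deceiving the public on an important matter
    violates_other_policy: bool   # e.g. voter interference, harassment, incitement

def moderate(item: MediaItem) -> str:
    # Removal is reserved for content that breaks other Community Standards.
    if item.violates_other_policy:
        return "remove"
    # High-risk manipulated media gets a more prominent contextual label.
    if item.high_deception_risk:
        return "prominent label"
    # Otherwise, label only when signals or self-disclosure identify AI content.
    if item.has_ai_indicators or item.self_disclosed_ai:
        return "made with AI label"
    # Anything else escapes unlabeled, as the article notes.
    return "no label"
```

Note how the ordering captures the “more labels, fewer takedowns” shift: only the first branch removes anything, and everything else resolves to context rather than deletion.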

Meta’s blog post highlights a network of nearly 100 independent fact-checkers, which it says it’s engaged with to help identify risks related to manipulated content.

These external entities will continue to review false and misleading AI-generated content, per Meta. When they rate content as “False or Altered,” Meta said it will respond by applying algorithm changes that reduce the content’s reach — meaning stuff will appear lower in feeds so fewer people see it, in addition to Meta slapping an overlay label with additional information for those eyeballs that do land on it.

These third-party fact-checkers look set to face an increasing workload as synthetic content proliferates, driven by the boom in generative AI tools, especially since more of this content looks set to remain on Meta’s platforms as a result of the policy shift.




India, grappling with election misinfo, weighs up labels and its own AI safety coalition | TechCrunch


India, long accustomed to tech being co-opted to persuade the public, has become a global hot spot for how AI is being used, and abused, in political discourse, and specifically the democratic process. The tech companies that built those tools in the first place are making trips to the country to push solutions.

Earlier this year, Andy Parsons, a senior director at Adobe who oversees its involvement in the cross-industry Content Authenticity Initiative (CAI), stepped into the fray when he traveled to India to meet with media and tech organizations in the country and promote tools that can be integrated into content workflows to identify and flag AI-generated content.

“Instead of detecting what’s fake or manipulated, we as a society, and this is an international concern, should start to declare authenticity, meaning saying if something is generated by AI that should be known to consumers,” he said in an interview.

Parsons added that some Indian companies — currently not part of a Munich AI election safety accord signed by OpenAI, Adobe, Google and Amazon in February — intended to construct a similar alliance in the country.

“Legislation is a very tricky thing. To assume that the government will legislate correctly and rapidly enough in any jurisdiction is something hard to rely on. It’s better for the government to take a very steady approach and take its time,” he said.

Detection tools are famously inconsistent, but they are a start in fixing some of the problems, or so the argument goes.

“The concept is already well understood,” he said during his Delhi trip. “I’m helping raise awareness that the tools are also ready. It’s not just an idea. This is something that’s already deployed.”

Andy Parsons, senior director at Adobe. Image Credits: Adobe

The CAI — which promotes royalty-free, open standards for identifying whether digital content was generated by a machine or a human — predates the current hype around generative AI: It was founded in 2019 and now has 2,500 members, including Microsoft, Meta, Google, The New York Times, The Wall Street Journal and the BBC.

Just as there is an industry growing around the business of leveraging AI to create media, there is a smaller one being created to try to course-correct some of the more nefarious applications of that.

In February 2021, Adobe went a step further, co-founding the Coalition for Content Provenance and Authenticity (C2PA) with ARM, BBC, Intel, Microsoft and Truepic. The coalition aims to develop an open standard that taps the metadata of images, videos, text and other media to highlight their provenance and tell people about a file’s origins, the location and time of its generation, and whether it was altered before it reached the user. The CAI works with the C2PA to promote the standard and make it available to the masses.

Now it is actively engaging with governments like India’s to widen the adoption of that standard to highlight the provenance of AI content and participate with authorities in developing guidelines for AI’s advancement.

Adobe has nothing but also everything to lose by playing an active role in this game. It’s not — yet — acquiring or building large language models (LLMs) of its own, but as the home of apps like Photoshop and Lightroom, it’s the market leader in tools for the creative community, and so not only is it building new products like Firefly to generate AI content natively, but it is also infusing legacy products with AI. If the market develops as some believe it will, AI will be a must-have in the mix if Adobe wants to stay on top. If regulators (or common sense) have their way, Adobe’s future may well be contingent on how successful it is in making sure what it sells does not contribute to the mess.

The bigger picture in India in any case is indeed a mess.

Google focused on India as a test bed for how it will bar use of its generative AI tool Gemini when it comes to election content; parties are weaponizing AI to create memes with likenesses of opponents; Meta has set up a deepfake “helpline” for WhatsApp, such is the popularity of the messaging platform in spreading AI-powered missives; and at a time when countries are sounding increasingly alarmed about AI safety and what they have to do to ensure it, we’ll have to see what the impact will be of India’s government deciding in March to relax rules on how new AI models are built, tested and deployed. It’s certainly meant to spur more AI activity, at any rate.

Using its open standard, the C2PA has developed a digital nutrition label for content called Content Credentials. The CAI members are working to deploy the digital watermark on their content to let users know its origin and whether it is AI-generated. Adobe has Content Credentials across its creative tools, including Photoshop and Lightroom. It also automatically attaches to AI content generated by Adobe’s AI model Firefly. Last year, Leica launched its camera with Content Credentials built in, and Microsoft added Content Credentials to all AI-generated images created using Bing Image Creator.
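The core idea behind Content Credentials is a signed manifest bound to the content itself: tamper with either the media or the metadata, and verification fails. The following is a deliberately simplified conceptual sketch of that mechanism, not the actual C2PA format; real Content Credentials use X.509 certificate signatures and a binary manifest embedded in the file, whereas this toy version uses an HMAC over a JSON manifest purely for illustration.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in; real C2PA uses certificate-based signatures

def attach_credentials(content: bytes, origin: str, generator: str) -> dict:
    """Build a simplified provenance manifest bound to the content's hash."""
    manifest = {
        "origin": origin,          # who produced or released the file
        "generator": generator,    # e.g. an AI model name, or a camera
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_credentials(content: bytes, manifest: dict) -> bool:
    """Check the signature, and that the manifest really describes this content."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())
```

Binding the manifest to a hash of the content is what lets a viewer surface the “CR” pin with confidence: swapping in different pixels, or editing the claimed origin, invalidates the credential.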

Image Credits: Content Credentials

Parsons told TechCrunch the CAI is talking with governments around the world on two fronts: promoting Content Credentials as an international standard, and encouraging those governments to adopt it.

“In an election year, it’s especially critical for candidates, parties, incumbent offices and administrations who release material to the media and to the public all the time to make sure that it is knowable that if something is released from PM [Narendra] Modi’s office, it is actually from PM Modi’s office. There have been many incidents where that’s not the case. So, understanding that something is truly authentic for consumers, fact-checkers, platforms and intermediaries is very important,” he said.

India’s large population and vast linguistic and demographic diversity make misinformation challenging to curb, he added, which argues for simple labels that can cut through that complexity.

“That’s a little ‘CR’ … it’s two western letters like most Adobe tools, but this indicates there’s more context to be shown,” he said.

Controversy continues to surround what the real point might be behind tech companies supporting any kind of AI safety measure: Is it really about existential concern, or just having a seat at the table to give the impression of existential concern, all the while making sure their interests get safeguarded in the process of rule making?

“It’s generally not controversial with the companies who are involved, and all the companies who signed the recent Munich accord, including Adobe, who came together, dropped competitive pressures because these ideas are something that we all need to do,” he said in defense of the work.


