From Digital Age to Nano Age. WorldWide.

Tag: images

Meta's Ray-Ban smart glasses now let you share images directly to your Instagram Story | TechCrunch


Meta is updating its Ray-Ban smart glasses with new hands-free functionality, the company announced on Wednesday. Most notably, users can now share an image from their smart glasses directly to their Instagram Story without needing to take out their phone. After you take a photo with the smart glasses, you can say, “Hey Meta, share […]


Meta's AI tools for advertisers can now create full new images, not just new backgrounds | TechCrunch


Meta is rolling out an expanded set of generative AI tools for advertisers, after first announcing a set of AI features last October. Now, instead of only being able to create different backgrounds for a product image, advertisers can also request full image variations, which offer AI-inspired ideas for the overall photo, including riffs that update the photo’s subject or product being advertised.

In one example, Meta shows how an existing ad creative showing a cup of coffee sitting outdoors next to coffee beans could be modified to present the cup, from a different angle, in front of lush greenery and coffee beans, evoking imagery reminiscent of a coffee farm.

This may not be a big deal if the image is only meant to encourage someone to visit a local coffee shop. But if it were the coffee cup itself that was for sale, the AI variations Meta offers could depict versions of the product that don’t exist in real life.

The feature could be abused by advertisers who wanted to dupe consumers into buying products that don’t actually exist.

Meta admits this is a possible use case, saying that an advertiser could use the upcoming text prompt feature to tailor the generated output, showing their product in different colors, from different angles, and in different scenarios. Even now, the “different colors” option could be used to dupe customers into thinking a product looks different than it does in real life.

As Meta’s example demonstrates, the coffee cup itself could be transformed into different colors, or could be shown from different angles, where each cup has its own distinct swirl of foaming milk mixed in with the hot beverage.

However, Meta claims it has strong guardrails in place to prevent its system from generating inappropriate ad content or low-quality images. These include “pre-guardrails” that filter out images its gen AI models don’t support and “post-guardrails” that filter out generated text and image content that doesn’t meet its quality bar or that it deems inappropriate. Plus, Meta said it stress-tested the feature’s image generation models with both internal and external experts to find unexpected ways it could be used, then addressed any vulnerabilities found.
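That pre-/post-filter description maps onto a simple generate-then-filter pipeline. Below is a minimal sketch of that shape in Python; the function names, the quality threshold, and the stubbed checks are all assumptions for illustration, since Meta hasn’t published its implementation.

```python
# Hypothetical sketch of a pre-/post-guardrail pipeline. All names and
# thresholds here are illustrative stand-ins, not Meta's actual system.

QUALITY_BAR = 0.7  # assumed quality threshold

def passes_pre_guardrails(source: dict) -> bool:
    # "Pre-guardrail": reject inputs the generative models don't support.
    # The category field stands in for a real upstream classifier.
    return source.get("category") != "unsupported"

def passes_post_guardrails(candidate: dict) -> bool:
    # "Post-guardrails": drop generated output that misses the quality bar
    # or is flagged as inappropriate (both signals stubbed as plain fields).
    return candidate["quality"] >= QUALITY_BAR and not candidate["flagged"]

def generate_variations(source: dict, generate, n: int = 4) -> list:
    # Pre-filter the input, sample n candidates from the model callable,
    # then post-filter what it produced.
    if not passes_pre_guardrails(source):
        return []
    return [c for c in (generate(source) for _ in range(n))
            if passes_post_guardrails(c)]
```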

Meta says this feature has already begun to roll out, and in the months ahead, advertisers will be able to provide text prompts to tailor the image’s variations, too.

Image Credits: Meta

Plus, Meta will now allow advertisers to add text overlays to their AI-generated images, with a dozen popular typefaces to choose from.

Another feature, image expansion, also introduced in October 2023, will now be available for Reels in addition to Feed, across both Facebook and Instagram. The option uses AI to adjust advertisers’ image assets to fit different aspect ratios, like those of Reels and Feed, so advertisers can spend less time repurposing their creative assets for different surfaces. Meta says text overlay will work together with image expansion, too.
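To make the aspect-ratio problem concrete, here is a small sketch (not Meta’s code) of the bookkeeping involved: given a source image and a target ratio, it computes the enlarged canvas and the margins a generative model would then have to fill in.

```python
def expansion_canvas(width: int, height: int, ratio_w: int, ratio_h: int):
    """Return (new_w, new_h, pad_x, pad_y) for outpainting to a target ratio."""
    target = ratio_w / ratio_h
    if width / height < target:
        # Source is too narrow for the target ratio: widen the canvas.
        new_w, new_h = round(height * target), height
    else:
        # Source is too wide: add vertical headroom instead.
        new_w, new_h = width, round(width / target)
    return new_w, new_h, (new_w - width) // 2, (new_h - height) // 2

# A square 1080x1080 Feed asset expanded for 9:16 Reels:
print(expansion_canvas(1080, 1080, 9, 16))  # -> (1080, 1920, 0, 420)
```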

One advertiser, smartphone case maker Casetify, said that using Meta’s GenAI Background Generation feature led to a 13% increase in return on its ad spend. The company had tested the option with its Advantage+ shopping campaigns, where the AI features first became available in the fall. The updated AI features will also be available through Ads Manager via Advantage+ creative, as before.

Image Credits: Meta

Beyond images, Meta’s AI can now generate alternate versions of an ad’s headline, in addition to its primary text, which was already supported using the original copy as a starting point. Meta says it’s testing the ability for this text to match the brand’s voice and tone, using previous campaigns as reference material. Text generation capabilities will be moved to Meta’s next-generation large language model, Meta Llama 3.

All the generative AI features will become available globally to advertisers by the end of the year.

Outside of the AI updates, Meta also announced it would expand its subscription service, Meta Verified for businesses, to new markets including Argentina, Mexico, Chile, Peru, France, and Italy. The service began testing last year in Australia, New Zealand and Canada. 

Meta Verified will now offer four subscription tiers, all with the base features of a verified badge, account support, and impersonation monitoring. Higher tiers add profile enhancements, tools for creating connections, and more ways to access customer support.

Meta Verified will be expanded to WhatsApp soon, the company also said.



Meta AI is obsessed with turbans when generating images of Indian men | TechCrunch


Bias in AI image generators is a well-studied and well-reported phenomenon, but consumer tools continue to exhibit glaring cultural biases. The latest culprit in this area is Meta’s AI chatbot, which, for some reason, really wants to add turbans to any image of an Indian man.

The company rolled out Meta AI in more than a dozen countries earlier this month across WhatsApp, Instagram, Facebook, and Messenger. In India, one of its biggest markets worldwide, however, the rollout so far covers only select users.

TechCrunch runs various culture-specific queries as part of our AI testing process, which is how we found, for instance, that Meta is blocking election-related queries in India during the country’s ongoing general elections. But Imagine, Meta AI’s new image generator, also displayed a peculiar predisposition toward generating Indian men wearing a turban, among other biases.

We tested different prompts and generated more than 50 images to see how the system represented different cultures; all of them are included here, minus a couple (like “a German driver”). There was no scientific method behind the generation, and we didn’t consider inaccuracies in object or scene representation beyond the cultural lens.
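For what it’s worth, this kind of informal audit is easy to script. The sketch below assumes two stand-ins you would supply yourself: generate_image, a call to whichever image model is being tested, and has_attribute, a human label or classifier; neither is a real Meta API.

```python
def attribute_rate(generate_image, has_attribute, prompts, samples=5):
    # For each prompt, sample several images and record the fraction that
    # shows the attribute being probed (a turban, in our tests).
    return {p: sum(bool(has_attribute(generate_image(p)))
                   for _ in range(samples)) / samples
            for p in prompts}

# Example prompt grid, mirroring the professions we tried:
prompts = [f"An Indian {job}" for job in
           ("man", "architect", "politician", "doctor", "teacher")]
```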

Plenty of men in India wear a turban, but the ratio is nowhere near as high as Meta AI’s tool suggests. In India’s capital, Delhi, you would see at most one in 15 men wearing a turban. Yet in the images generated by Meta’s AI, roughly three to four out of every five images of Indian men showed them wearing one.

We started with the prompt “An Indian walking on the street,” and all the images were of men wearing turbans.

Next, we tried generating images with prompts like “An Indian man,” “An Indian man playing chess,” “An Indian man cooking,” and “An Indian man swimming.” Meta AI generated only one image of a man without a turban.

 

Even with the non-gendered prompts, Meta AI didn’t display much diversity in terms of gender and cultural differences. We tried prompts with different professions and settings, including an architect, a politician, a badminton player, an archer, a writer, a painter, a doctor, a teacher, a balloon seller, and a sculptor.

As you can see, despite the diversity in settings and clothing, all the men were generated wearing turbans. While turbans can be found in any job or region, it’s strange for Meta AI to present them as so ubiquitous.

We generated images of an Indian photographer, and most of them showed an outdated camera, except for one image in which a monkey somehow also has a DSLR.

We also generated images of an Indian driver. And until we added the word “dapper,” the image generation algorithm showed hints of class bias.

 

We also tried generating two images each for similar prompts. Here are some examples: “An Indian coder in an office,” “An Indian man in a field operating a tractor,” and “Two Indian men sitting next to each other.”

Additionally, we tried generating a collage of images with prompts such as “an Indian man with different hairstyles.” This seemed to produce the diversity we expected.

Meta AI’s Imagine also has a perplexing habit of generating one kind of image for similar prompts. For instance, it constantly generated an image of an old-school Indian house with vibrant colors, wooden columns, and styled roofs. A quick Google image search will tell you this is not what the majority of Indian houses look like.

Another prompt we tried was “Indian content creator,” and it repeatedly generated an image of a female creator. In the gallery below, we have included images of a content creator on a beach, on a hill, on a mountain, at a zoo, in a restaurant, and in a shoe store.

As with any image generator, the biases we see here are likely the result of inadequate training data and, beyond that, an inadequate testing process. While you can’t test for all possible outcomes, common stereotypes ought to be easy to spot. Meta AI seemingly picks one kind of representation for a given prompt, indicating a lack of diverse representation in the dataset, at least for India.

In response to questions TechCrunch sent to Meta about training data and biases, the company said it is working on making its generative AI tech better, but didn’t provide much detail about the process.

“This is new technology and it may not always return the response we intend, which is the same for all generative AI systems. Since we launched, we’ve constantly released updates and improvements to our models and we’re continuing to work on making them better,” a spokesperson said in a statement.

Meta AI’s biggest draw is that it is free and easily available across multiple surfaces, so millions of people from different cultures will be using it in different ways. While companies like Meta are always working to improve the accuracy with which their image generation models render objects and humans, it’s equally important that they stop these tools from playing into stereotypes.

Meta will likely want creators and users to use this tool to post content on its platforms. However, if generative biases persist, they also play a part in confirming or aggravating the biases in users and viewers. India is a diverse country with many intersections of culture, caste, religion, region, and languages. Companies working on AI tools will need to be better at representing different people.

If you have found AI models generating unusual or biased output, you can reach out to me by email at im@ivanmehta.com and via this link on Signal.



Snap plans to add watermarks to images created with its AI-powered tools | TechCrunch


Social media company Snap said Tuesday that it plans to add watermarks to AI-generated images on its platform.

The platform is adding a small ghost-with-sparkles logo to denote AI-generated images. The company said the watermark will appear when an image is exported or saved to the camera roll.

Snap plans to show a Ghost logo with sparkle on AI-generated images using its tools. Image Credits: Snap
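Mechanically, export-time watermarking amounts to compositing a logo onto the image as it leaves the app. Here is a minimal sketch using Pillow; the logo file and the corner placement are assumptions, as Snap hasn’t published its implementation.

```python
from PIL import Image

def watermark_on_export(image_path: str, logo_path: str, out_path: str,
                        margin: int = 24) -> None:
    base = Image.open(image_path).convert("RGBA")
    logo = Image.open(logo_path).convert("RGBA")  # assumed ghost-logo PNG
    # Bottom-right placement is an assumption; Snap hasn't specified it.
    pos = (base.width - logo.width - margin,
           base.height - logo.height - margin)
    base.alpha_composite(logo, pos)
    base.convert("RGB").save(out_path)  # watermark baked in at export time
```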

On its support page, the company said removing Snap’s ghost-with-sparkles watermark violates its terms. It’s unclear how Snap plans to detect watermark removal. We have asked the company for more details and will update the story when we hear back.

Other tech giants such as Microsoft, Meta, and Google have also taken steps to label or identify images created with AI-powered tools.

Currently, Snap lets users create or edit AI-generated images through Snap AI, which is available to paid subscribers, and through a selfie-focused feature called Dreams.

In its blog post outlining its safety and transparency practices around AI, the company explained that it marks gen-AI-powered features such as Lenses with visual indicators like a sparkle logo.

Snap lists indicators for features powered by generative AI. Image credits: Snap

The company has also added context cards to AI-generated images from tools like Dreams selfies to better inform users.

In February, Snap partnered with HackerOne to stress-test its AI image-generation tools through a bug bounty program. The company said it has also created a review process to remove problematic content while AI-powered Lenses are in development.

“We want Snapchatters from all walks of life to have equitable access and expectations when using all features within our app, particularly our AI-powered experiences. With this in mind, we’re implementing additional testing to minimize potentially biased AI results,” the company said on its blog.

Snapchat landed in hot water soon after introducing the “My AI” chatbot last year. A Washington Post report noted the bot was returning inappropriate responses to users. Later, the company rolled out controls in the Family Center for parents and guardians to monitor and restrict their teen’s interactions with AI.



Meta's Oversight Board probes explicit AI-generated images posted on Instagram and Facebook | TechCrunch


The Oversight Board, Meta’s semi-independent policy council, is turning its attention to how the company’s social platforms are handling explicit, AI-generated images. On Tuesday, it announced investigations into two separate cases over how Instagram in India and Facebook in the U.S. handled AI-generated images of public figures after Meta’s systems fell short in detecting and responding to the explicit content.

In both cases, the sites have now taken down the media. The board is not naming the individuals targeted by the AI images “to avoid gender-based harassment,” according to an e-mail Meta sent to TechCrunch.

The board takes up cases about Meta’s moderation decisions. Users have to appeal to Meta first about a moderation move before approaching the Oversight Board. The board is due to publish its full findings and conclusions in the future.

The cases

Describing the first case, the board said that a user reported an AI-generated nude of a public figure from India on Instagram as pornography. The image was posted by an account that exclusively posts images of Indian women created by AI, and the majority of users who react to these images are based in India.

Meta failed to take down the image after the first report, and the ticket for the report was closed automatically after 48 hours when the company didn’t review it further. When the original complainant appealed the decision, the report was again closed automatically without any oversight from Meta. In other words, after two reports, the explicit AI-generated image remained on Instagram.

The user then appealed to the board. Only at that point did the company act, removing the image for breaching its community standards on bullying and harassment.

The second case relates to Facebook, where a user posted an explicit, AI-generated image resembling a U.S. public figure in a group focused on AI creations. In this case, the social network took the image down automatically because another user had posted it earlier, and Meta had already added it to a Media Matching Service bank under the “derogatory sexualized photoshop or drawings” category.
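Meta hasn’t detailed the internals of its Media Matching Service banks, but systems like this are generally built on perceptual hashing: known violating images are hashed once, and every new upload is compared against the bank. Here is a rough sketch of that idea using the open-source imagehash library as a stand-in; Meta’s own open-sourced PDQ hasher plays a similar role in practice.

```python
import imagehash
from PIL import Image

bank = set()  # perceptual hashes of known violating images

def add_to_bank(path: str) -> None:
    # Hash the known-bad image once and store it.
    bank.add(imagehash.phash(Image.open(path)))

def matches_bank(path: str, max_distance: int = 8) -> bool:
    # Compare a new upload against every banked hash; a small Hamming
    # distance tolerance catches re-encodes, resizes, and minor edits.
    h = imagehash.phash(Image.open(path))
    return any(h - known <= max_distance for known in bank)
```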

When TechCrunch asked why the board selected a case in which the company successfully took down an explicit AI-generated image, the board said it selects cases “that are emblematic of broader issues across Meta’s platforms.” It added that these cases help the advisory board look at the global effectiveness of Meta’s policies and processes on various topics.

“We know that Meta is quicker and more effective at moderating content in some markets and languages than others. By taking one case from the US and one from India, we want to look at whether Meta is protecting all women globally in a fair way,” Oversight Board Co-Chair Helle Thorning-Schmidt said in a statement.

“The Board believes it’s important to explore whether Meta’s policies and enforcement practices are effective at addressing this problem.”

The problem of deepfake porn and online gender-based violence

Some — not all — generative AI tools in recent years have expanded to allow users to generate porn. As TechCrunch reported previously, groups like Unstable Diffusion are trying to monetize AI porn with murky ethical lines and bias in data.

In regions like India, deepfakes have also become an issue of concern. Last year, a report from the BBC noted that the number of deepfaked videos of Indian actresses has soared recently. Data suggests that women are more commonly the subjects of deepfaked videos.

Earlier this year, Deputy IT Minister Rajeev Chandrasekhar expressed dissatisfaction with tech companies’ approach to countering deepfakes.

“If a platform thinks that they can get away without taking down deepfake videos, or merely maintain a casual approach to it, we have the power to protect our citizens by blocking such platforms,” Chandrasekhar said in a press conference at that time.

While India has mulled bringing specific deepfake-related rules into the law, nothing is set in stone yet.

While the country has legal provisions for reporting online gender-based violence, experts note that the process can be tedious, and there is often little support. In a study published last year, the Indian advocacy group IT for Change noted that courts in India need robust processes to address online gender-based violence and should not trivialize these cases.

There are currently only a few laws globally that address the production and distribution of porn generated using AI tools. A handful of U.S. states have laws against deepfakes. The UK introduced a law this week to criminalize the creation of sexually explicit AI-powered imagery.

Meta’s response and the next steps

In response to the Oversight Board’s cases, Meta said it took down both pieces of content. However, the social media company didn’t address its failure to remove the Instagram content after the initial user reports, nor how long the content stayed up on the platform.

Meta said that it uses a mix of artificial intelligence and human review to detect sexually suggestive content. The social media giant said that it doesn’t recommend this kind of content in places like Instagram Explore or Reels recommendations.

The Oversight Board has sought public comments, with a deadline of April 30, addressing the harms of deepfake porn, contextual information about the proliferation of such content in regions like the U.S. and India, and possible pitfalls in Meta’s approach to detecting AI-generated explicit imagery.

The board will investigate the cases and public comments and post the decision on the site in a few weeks.

These cases indicate that large platforms are still grappling with older moderation processes while AI-powered tools have enabled users to create and distribute different types of content quickly and easily. Companies like Meta are experimenting with tools that use AI for content generation, with some efforts to detect such imagery. However, perpetrators are constantly finding ways to escape these detection systems and post problematic content on social platforms.

