From Digital Age to Nano Age. WorldWide.


Robotic Automations

Meta AI is obsessed with turbans when generating images of Indian men | TechCrunch


Bias in AI image generators is a well-studied and well-reported phenomenon, but consumer tools continue to exhibit glaring cultural biases. The latest culprit in this area is Meta’s AI chatbot, which, for some reason, really wants to add turbans to any image of an Indian man.

The company rolled out Meta AI in more than a dozen countries earlier this month across WhatsApp, Instagram, Facebook, and Messenger. However, in India, one of the biggest markets in the world, it has so far rolled out Meta AI only to select users.

TechCrunch looks at various culture-specific queries as part of our AI testing process, which is how we found out, for instance, that Meta is blocking election-related queries in India because of the country’s ongoing general elections. But Imagine, Meta AI’s new image generator, also displayed a peculiar predisposition toward generating Indian men wearing turbans, among other biases.

We tested different prompts and generated more than 50 images to see how the system represented different cultures; they’re all here, minus a couple (like “a German driver”). There is no scientific method behind the generation, and we didn’t take inaccuracies in object or scene representation beyond the cultural lens into consideration.

There are a lot of men in India who wear a turban, but the ratio is not nearly as high as Meta AI’s tool would suggest. In India’s capital, Delhi, you would see one in 15 men wearing a turban at most. However, in images generated by Meta’s AI, roughly three to four out of five images of Indian men would be wearing one.

We started with the prompt “An Indian walking on the street,” and all the images were of men wearing turbans.

Next, we tried generating images with prompts like “An Indian man,” “An Indian man playing chess,” “An Indian man cooking,” and “An Indian man swimming.” Meta AI generated only one image of a man without a turban.

 

Even with the non-gendered prompts, Meta AI didn’t display much diversity in terms of gender and cultural differences. We tried prompts with different professions and settings, including an architect, a politician, a badminton player, an archer, a writer, a painter, a doctor, a teacher, a balloon seller, and a sculptor.

As you can see, despite the diversity in settings and clothing, all the men were generated wearing turbans. Again, while turbans are common among some communities, it’s strange for Meta AI to consider them so ubiquitous across every job and region.

We generated images of an Indian photographer, and most of them used an outdated camera, except in one image where a monkey also somehow had a DSLR.

We also generated images of an Indian driver, and until we added the word “dapper,” the image generation algorithm showed hints of class bias.

 

We also tried generating two images with similar prompts. Here are some examples: An Indian coder in an office.

An Indian man in a field operating a tractor.

Two Indian men sitting next to each other:

Additionally, we tried generating a collage of images with prompts, such as an Indian man with different hairstyles. This seemed to produce the diversity we expected.

Meta AI’s Imagine also has a perplexing habit of generating one kind of image for similar prompts. For instance, it constantly generated an image of an old-school Indian house with vibrant colors, wooden columns, and styled roofs. A quick Google image search will tell you this is not the case with the majority of Indian houses.

Another prompt we tried was “Indian content creator,” and it repeatedly generated an image of a female creator. In the gallery below, we have included images of a content creator on a beach, on a hill, on a mountain, at a zoo, in a restaurant, and in a shoe store.

Like any image generator, the biases we see here are likely due to inadequate training data, followed by an inadequate testing process. While you can’t test for all possible outcomes, common stereotypes ought to be easy to spot. Meta AI seemingly picks one kind of representation for a given prompt, indicating a lack of diverse representation in the dataset, at least for India.
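To make the kind of informal audit described above concrete, here is a minimal sketch of the tally we effectively performed by hand: count how often a visual attribute (here, a turban) appears across generations for a prompt. The attribute labels are assumed to come from manual review of the generated images; this is illustrative code, not anything Meta or TechCrunch actually runs.

```python
# Hypothetical bias-audit tally: given manual true/false labels for whether
# each generated image shows the attribute, compute the attribute's rate.
from collections import Counter

def attribute_rate(labels: list[bool]) -> float:
    """Fraction of generated images in which reviewers flagged the attribute."""
    counts = Counter(labels)
    return counts[True] / len(labels) if labels else 0.0

# e.g., four of five images for "An Indian man" showing a turban:
rate = attribute_rate([True, True, True, True, False])  # 0.8
```

Comparing such a rate against a real-world baseline (roughly one in 15 men in Delhi) is what makes the overrepresentation measurable rather than anecdotal.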

In response to questions TechCrunch sent to Meta about training data and biases, the company said it is working on making its generative AI tech better, but didn’t provide much detail about the process.

“This is new technology and it may not always return the response we intend, which is the same for all generative AI systems. Since we launched, we’ve constantly released updates and improvements to our models and we’re continuing to work on making them better,” a spokesperson said in a statement.

Meta AI’s biggest draw is that it is free and easily available across multiple surfaces. So millions of people from different cultures would be using it in different ways. While companies like Meta are always working on improving image generation models in terms of the accuracy of how they generate objects and humans, it’s also important that they work on these tools to stop them from playing into stereotypes.

Meta will likely want creators and users to use this tool to post content on its platforms. However, if generative biases persist, they also play a part in confirming or aggravating the biases in users and viewers. India is a diverse country with many intersections of culture, caste, religion, region, and languages. Companies working on AI tools will need to be better at representing different people.

If you have found AI models generating unusual or biased output, you can reach out to me at [email protected] by email and through this link on Signal.


Software Development in Sri Lanka


Meta AI tested: Doesn't quite justify its own existence, but free is free | TechCrunch


Meta’s new large language model, Llama 3, powers the imaginatively named “Meta AI,” a newish chatbot that the social media and advertising company has installed in as many of its apps and interfaces as possible. How does this model stack up against other all-purpose conversational AIs? It tends to regurgitate a lot of web search results, and it doesn’t excel at anything, but hey — the price is right.

You can currently access Meta AI for free on the web at Meta.ai, on Instagram, Facebook, WhatsApp, and probably a few other places if those aren’t enough. It was available before now, but the releases of Llama 3 and the new Imagine image generator (not to be confused with Google’s Imagen) have led Meta to promote it as a first stop for the AI-curious. After all, you’ll probably use it by accident since they replaced your search box with it!

Mark Zuckerberg even said he expects Meta AI to be “the most used and best AI assistant in the world.” It’s important to have goals.

A quick reminder about our “review” process: this is a very informal evaluation of the model, not with synthetic benchmarks but just asking ordinary questions that normal people might, and comparing the results to our experience with other models, or just to what you would hope to get from one. It’s the farthest thing from comprehensive, but it’s something anyone can understand and replicate.

You can read about our method, such as it is, here:

We’re always changing and adjusting our approach, and will sometimes include something odd we found or exclude stuff that didn’t really seem relevant. For instance, this time, although it’s our general policy not to try to evaluate media generation (it’s a whole other can of worms), my colleague Ivan noticed that the Imagine model was demonstrating a set of biases around Indian people. We’ll have that article up shortly (Meta might already be on to us).

Also, as a PSA at the start, you should be aware that an apparent bug on Instagram prevented me from deleting the queries I’d sent. So I would avoid asking anything you wouldn’t want showing up in your search history. Also, the web version didn’t work in Firefox for me.

News and current events

First up, I asked Meta AI about what’s going on between Israel and Iran. It responded with a concise, bulleted list, helpfully including dates, though it only cited a single CNN article. Like many other prompts I tried, this one ends in a link to a Bing search when on the web interface and a Google search in Instagram. I asked Meta and a spokesperson said that these are basically search promotion partnerships.

(Images in this post are just for reference, and don’t necessarily show the entire response.)

Image Credits: Meta/TechCrunch

To check whether Meta AI was somehow piggybacking on Bing’s own AI model (which Microsoft in turn borrows from OpenAI), I clicked through and looked at the Copilot answer to the suggested query. It also had a bulleted list with roughly the same info but better in-line links and more citations. Definitely different.

Meta AI’s response was factual and up to date, if not particularly eloquent. The mobile response was considerably more compressed, and harder to get at the sources of, so be aware you’re getting a truncated answer there.

Next, I asked if there were any recent trends on TikTok that a parent should be aware of. It replied with a high-level summary of what creators do on the social network, but nothing recent. Yes, I’m aware that people do “Comedy skits: Humorous, relatable, or parody content” on TikTok, thank you.

Image Credits: Meta/TechCrunch

Interestingly, when I asked a similar question about trends on Instagram, I got an upbeat response using marketing-type phrases like “Replying with Reels creates conversations” and “AI generates new opportunities” and “Text posts thrive on the ‘gram.” I thought maybe it was being unfairly positive about its creator’s platforms, but no — turns out it was just regurgitating, word for word, an SEO bait Instagram trends post from Hootsuite.

If I ask Meta’s AI on Instagram about trends on Instagram, I would hope for something a little more interesting. If I wanted to read chum I would just search for it.

History and context

I asked Meta AI to help me find some primary sources for some research I’m supposedly doing on Supreme Court decisions in the late 19th century.

Image Credits: Meta/TechCrunch

Its response relied heavily on an inoffensive but primary-source-free, SEO’d-up post listing a number of notable 19th-century decisions. Not exactly what I asked for, and then at the end it also listed an 1896 founding document for the People’s Party, a left-leaning party from that era. It doesn’t really have anything to do with the Supreme Court, but Meta AI cites this page, which describes some justices as holding opposite views to the party. A strange and irrelevant inclusion.

Other models provided context and summaries of the trends of the era. I wouldn’t use Meta AI as a research assistant.

Some basic trivia questions, like who won the most medals in the 1984 Olympics and what notable events occurred that year, were answered and cited sufficiently.

Image Credits: Meta/TechCrunch

It’s a little annoying that it gathers its citation numbers at the top and then the links at the bottom. What’s the point of numbering them unless the numbers pertain to certain claims or facts? Some other models will cite in-line, which for research or fact-checking is much more convenient.

Controversy

I asked Meta AI why Donald Trump’s supporters are predominantly older and white. It’s the kind of question that is factual in a sense but obviously a bit more sensitive than asking about medal counts. The response was pretty even-handed, even pushing back on the assertion inherent to the question:

Image Credits: Meta/TechCrunch

Unfortunately, it didn’t provide any sources or links to searches for this one. Too bad, since this kind of interaction is a great opportunity for people to learn something new.

I asked about the rise of white nationalism as well and got a pretty solid list of reasons why we’re seeing the things we are around the world. Meta AI did say that “It’s crucial to address these factors through education, empathy, and inclusive policies to combat the rise of white nationalism and promote a more equitable society.” So it didn’t adopt one of those aggressively neutral stances you sometimes see. No links or sources on this one either — I suspect they are avoiding citations for now on certain topics, which I kind of understand but also… this is where citations are most needed?

Medical

I told Meta AI that my (fictitious) 9-year-old was developing a rash after eating a cupcake and asked what I should do. Interestingly, it wrote out a whole response and then deleted it, saying “Sorry, I can’t help you with this request right now,” and told me that I had stopped it from completing the response. Sir, no.

Image Credits: Meta/TechCrunch

So I asked it again and it gave me a similar answer (which you see above), consisting of perfectly reasonable and general advice for someone looking to handle a potential allergic reaction. This was likely one of these retrospective “whoops, maybe I shouldn’t have said that” type rollbacks where the model only realizes what it’s done too late.

Same for a question about supplements: it gave an even-handed and reasonably well sourced answer, including common dosages, costs, and questions around efficacy.

In mental health, its advice around anxiety and medication was predictably straightforward and safe (basically, “I’m not a doctor, consult a pro”), and when I asked who I should call if having serious trouble, it listed the National Suicide Prevention Lifeline (1-800-273-TALK) and a crisis text line (741741) among other things. No links or sources, though. A reasonable and humane response.

Summary

When I had Meta AI summarize a recent article I wrote, it kind of just picked important-sounding sentences to repeat from it, mostly word for word. That’s not wrong, exactly, but I wouldn’t say it’s a summary.

And when it did tweak a sentence, it slightly changed the meaning: originally I wrote that Intuitive Machines “will almost certainly be firing on all cylinders to take on what could be a multi-billion-dollar contract.” The summary says the company “will almost certainly be taking on what could be…” which could mislead some to think that IM is a shoo-in rather than a company that will strive for that contract. In fairness, I could have phrased it more clearly myself.

When I asked it to do the same thing in under 100 words, it did better — kind of.

The summary was more concise, but it brought in a claim from Musk that Starship could return samples from Mars in five years — something I specifically didn’t include in my article because he makes lots of unsubstantiated claims like that. The AI must have gotten this from the other four articles it cited, for some reason, in the “summary” of mine. If I ask for a summary I don’t expect a model to bring in outside information to add onto it.

Content

Marketing copy suggestions for an imaginary clothing brand I claimed to be promoting were exactly what you’d expect, which goes to show how rote this task is becoming. No doubt Meta in particular has an endless supply of captions like this to train on. Pity the marketers — their hard work penning millions of pithy pitches for their brands has defined a style at which AI has become quite adept.

Image Credits: Meta/TechCrunch

When asked for some farmer jokes, it gave some real stinkers.

Why did the farmer’s chicken go to the doctor?
Because it had a fowl cough!
And here’s another one:
Why did the farmer take his pig to the movie theater?
To see “Babe”!
Wow… awful. But we can’t expect much better from these models. Mostly this kind of question is just to see if it does something weird or repeats something from a particular community anyway — I’m not looking for material (currently).

Conclusion

Meta has positioned its AI as a first layer for casual questions, and it does work. But for the most part it seemed to just be doing a search for what you ask about and quoting liberally from the top results. And half the time it included the search at the end anyway. So why not just use Google or Bing in the first place?

Some of the “suggested” queries I tried, like tips to overcome writer’s block, produced results that didn’t quote directly from (or source) anyone. But they were also totally unoriginal. Again, a normal internet search not powered by a huge language model, inside a social media app, accomplishes more or less the same thing with less cruft.

Meta AI produced highly straightforward, almost minimal answers. I don’t necessarily expect an AI to go beyond the scope of my original query, and in some cases that would be a bad thing. But when I ask what ingredients are needed for a recipe, isn’t the point of having a conversation with an AI that it intuits my intention and offers something more than literally scraping the list from the top Bing result?

I’m not a big user of these platforms to begin with, but Meta AI didn’t convince me it’s useful for anything in particular. To be fair it is one of the few models that’s both free and stays up to date with current events by searching online. In comparing it now and then to the free Copilot model on Bing, the latter usually worked better, but I hit my daily “conversation limit” after just a few exchanges. (It’s not clear what if any usage limits Meta will place on Meta AI.)

If you can’t be bothered to open a browser to search for “lunar new year” or “quinoa water ratio,” you can probably ask Meta AI if you’re already in one of the company’s apps (and often, you are). You can’t ask TikTok that! Yet.



Watch: Meta's new Llama 3 models give open-source AI a boost


New AI models from Meta are making waves in technology circles. The two new models, part of the Facebook parent company’s Llama line of artificial intelligence tools, are both open-source, helping them stand apart from competing offerings from OpenAI and other well-known names.

Meta’s new Llama models come in different sizes, with the Llama 3 8B model featuring eight billion parameters and the Llama 3 70B model some seventy billion. Generally, the more parameters, the more powerful the model, but not every AI task needs the largest possible one.

The company’s new models, which were trained on clusters of 24,000 GPUs, perform well across the benchmarks Meta put them up against, besting some rivals’ models that were already in the market. For those of us not competing to build and release the largest or most capable AI models, what matters is that they are still getting better with time. And work. And a lot of compute.

While Meta takes an open-source approach to AI work, its competitors often prefer more closed-source work. OpenAI, despite its name and history, offers access to its models, but not their source code. There’s a healthy debate in the world of AI regarding which approach is better, for both speed of development and safety. After all, some technologists — and some computing doomers, to be clear — are worried that AI tech is developing too fast and could prove dangerous to democracies and more.

For now, Meta is keeping the AI fires alight, offering a new challenge to its peers and rivals to best their latest. Hit play, and let’s talk about it!



Meta AI is restricting election-related responses in India | TechCrunch


Last week, Meta started testing its AI chatbot in India across WhatsApp, Instagram, and Messenger. But with the Indian general elections beginning today, the company is already blocking specific queries in its chatbot.

Meta confirmed that it is restricting certain election-related keywords for AI in the test phase. It also said that it is working to improve the AI response system.

“This is a new technology, and it may not always return the response we intend, which is the same for all generative AI systems. Since we launched, we’ve constantly released updates and improvements to our models, and we’re continuing to work on making them better,” a company spokesperson told TechCrunch.

The move makes the social media giant the latest Big Tech company to proactively curtail the scope of its generative AI services as it gears up for a major set of elections.

One of the big concerns from critics has been that genAI could provide misleading or outright false information to users, playing an illegal and unwelcome role in the democratic process.

Last month, Google started blocking election-related queries in its Gemini chatbot experience in India and other markets where elections are taking place this year.

Meta’s approach follows a bigger effort the company has announced around what it allows and does not allow on its platform leading up to elections. It pledged to block political ads in the week leading up to an election in any country, and it is working to identify and disclose when images in ads or other content have been created with AI.

Meta’s handling of genAI queries appears to be based on a blocklist. When you ask Meta AI about specific politicians, candidates, officeholders, and certain other terms, it will redirect you to the Election Commission’s website.

“This question may pertain to a political figure during general elections. Please refer to the link https://elections24.eci.gov.in,” the response says.
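The observed behavior is consistent with simple keyword matching rather than any deeper understanding of the query. A minimal sketch of what such blocklist routing might look like is below; the term list and routing logic are purely illustrative assumptions, since Meta’s actual implementation is not public. Only the redirect text is taken from the observed response.

```python
# Hypothetical sketch of blocklist-style query filtering, consistent with
# the behavior TechCrunch observed. BLOCKED_TERMS entries are placeholders;
# Meta's real term list and matching logic are unknown.
ECI_REDIRECT = (
    "This question may pertain to a political figure during general "
    "elections. Please refer to the link https://elections24.eci.gov.in"
)

# Candidate and officeholder names appear to be blocked; party names alone
# often are not, which matches the inconsistencies described in the article.
BLOCKED_TERMS = {"candidate name", "officeholder name"}  # placeholders

def route_query(query: str) -> str:
    """Return the boilerplate redirect if the query contains a blocked term,
    otherwise pass it through to the model (stubbed out here)."""
    lowered = query.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return ECI_REDIRECT
    return "<model response>"
```

A substring-based approach like this would also explain why rephrased queries or indirect questions (like asking about an alliance rather than a politician) can slip past the filter.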

Image Credits: Screenshot by TechCrunch

Notably, the company is not strictly blocking responses to questions containing party names. However, if a query includes the names of candidates or other terms, you might see the boilerplate answer cited above.

But just like other AI-powered systems, Meta AI has some inconsistencies. For instance, when TechCrunch asked for information about the “Indi Alliance” — a political alliance of multiple parties fighting the incumbent Bharatiya Janata Party (BJP) — it responded with information containing a politician’s name. However, when we asked about that politician in a separate query, the chatbot didn’t respond with any information.

Image credits: Screenshot by TechCrunch

This week, the company rolled out a new Llama-3-powered Meta AI chatbot in more than a dozen countries, including the U.S., but India was missing from the list. Meta said that the chatbot will be in the test phase in the country for now.

“We continue to learn from our users’ tests in India. As we do with many of our AI products and features, we test them publicly in varying phases and in a limited capacity,” a company spokesperson told TechCrunch in a statement.

Currently, Meta AI is not blocking queries about elections for U.S.-related terms such as “Tell me about Joe Biden.” We have asked Meta if the company plans to restrict these queries to the U.S. elections or other markets. We will update the story if we hear back.

If you want to talk about your experience with Meta AI you can reach out to Ivan Mehta at [email protected] by email and through this link on Signal.



Meta confirms that its Llama 3 open source LLM is coming in the next month | TechCrunch


At an event in London on Tuesday, Meta confirmed that it plans an initial release of Llama 3 — the next generation of its large language model used to power generative AI assistants — within the next month.

This confirms a report published on Monday by The Information that Meta was getting close to launch.

“Within the next month, actually less, hopefully in a very short period of time, we hope to start rolling out our new suite of next-generation foundation models, Llama 3,” said Nick Clegg, Meta’s president of global affairs. He described what sounds like the release of several different iterations or versions of the product. “There will be a number of different models with different capabilities, different versatilities [released] during the course of this year, starting really very soon.”

The plan, Meta Chief Product Officer Chris Cox added, will be to power multiple products across Meta with Llama 3.

Meta has been scrambling to catch up to OpenAI, which took it and other big tech companies like Google by surprise when it launched ChatGPT over a year ago and the app went viral, turning generative AI questions and answers into everyday, mainstream experiences.

Meta has largely taken a very cautious approach with AI, but that hasn’t gone over well with the public, with previous versions of Llama criticized as too limited. (Llama 2 was released publicly in July 2023. The first version of Llama was not released to the public, yet it still leaked online.)

Llama 3, which is bigger in scope than its predecessors, is expected to address this, with capabilities not just to answer questions more accurately but also to field a wider range of questions that might include more controversial topics. It hopes this will make the product catch on with users.

“Our goal over time is to make a Llama-powered Meta AI be the most useful assistant in the world,” said Joelle Pineau, vice president of AI Research. “There’s quite a bit of work remaining to get there.” The company did not talk about the size of the parameters it’s using in Llama 3, nor did it offer any demos of how it would work. It’s expected to have about 140 billion parameters, compared to 70 billion for the biggest Llama 2 model.

Most notably, Meta’s Llama family, built as open source, represents a different philosophical approach to how AI should develop as a wider technology. In taking it, Meta is hoping to win wider favor with developers versus more proprietary models.

But Meta is also playing it more cautiously, it seems, especially when it comes to other generative AI beyond text generation. The company is not yet releasing Emu, its image generation tool, Pineau said.

“Latency matters a lot along with safety along with ease of use, to generate images that you’re proud of and that represent whatever your creative context is,” Cox said.

Ironically — or perhaps predictably (heh) — even as Meta works to launch Llama 3, it does have some significant generative AI skeptics in the house.

Yann LeCun, the celebrated AI academic who is also Meta’s chief AI scientist, took a swipe at the limitations of generative AI overall and said his bet is on what comes after it. He predicts that will be joint embedding predicting architecture (JEPA), a different approach both to training models and producing results, which Meta has been using to build more accurate predictive AI in the area of image generation.

“The future of AI is JEPA. It’s not generative AI,” he said. “We’re going to have to change the name of Chris’s product division.”



Meta is testing an AI-powered search bar in Instagram | TechCrunch


Meta is pushing ahead with its efforts to make its generative AI-powered products available to more users. Apart from testing its Meta AI chatbot with users in countries like India on WhatsApp, the company is also experimenting with putting Meta AI in the Instagram search bar for both AI chat and content discovery.

A search query in the search bar leads you to a conversation in DMs with Meta AI, where you can ask questions or use one of the pre-loaded prompts. The design of the prompt screen prompted Perplexity AI’s CEO, Aravind Srinivas, to point out that the interface resembles the startup’s search screen.

But beyond that, it could also help you discover new content on Instagram. For instance, a video on Threads posted by a user indicates that you can tap on a prompt like “Beautiful Maui sunset Reels” to search for Reels related to that topic.

Separately, a few users TechCrunch talked to were able to ask Meta AI to search for Reels suggestions.

Screenshot

This means that Meta plans to tap the power of generative AI beyond text generation and use it to surface new content on networks like Instagram.

Meta confirmed its Meta AI experiment on Instagram with TechCrunch. However, the company didn’t specify if it is using generative AI tech in search.

“Our generative AI-powered experiences are under development in varying phases, and we’re testing a range of them publicly in a limited capacity,” a Meta spokesperson told TechCrunch.

You can find a ton of posts complaining about the quality of Instagram search, so it would not be surprising if Meta wants to use generative AI to improve it.

Also, Meta would want Instagram to have better discoverability than TikTok. Last year, Google introduced a new perspectives feature to surface results from Reddit and TikTok. Earlier this week, reverse engineer Alessandro Paluzzi noted on X that Instagram is working on an option called “Visibility off Instagram” to possibly show posts as part of search engine results.





WhatsApp trials Meta AI chatbot in India, more markets | TechCrunch


WhatsApp is testing Meta AI, its large language model-powered chatbot, with users in India and some other markets, signalling its intentions to tap the massive user base to scale its AI offerings.

The company recently began testing the AI chatbot, until now available only in the U.S., with some users in India, several of whom confirmed the tests. India, home to more than 500 million WhatsApp users, is the instant messaging service’s largest market.

Meta confirmed the move in a statement. “Our generative AI-powered experiences are under development in varying phases, and we’re testing a range of them publicly in a limited capacity,” a Meta spokesperson told TechCrunch.

Meta unveiled Meta AI, its general-purpose assistant, in late September. The AI chatbot is designed to answer user queries directly within chats as well as offer them the ability to generate photorealistic images from text prompts.

WhatsApp’s massive global user base of over 2 billion monthly active users presents Meta with a unique opportunity to scale its AI offerings. By integrating Meta AI into WhatsApp, the Facebook parent can expose its advanced language model and image generation capabilities to an enormous audience, potentially dwarfing the reach of its competitors.

The company separately confirmed earlier this week that it will be launching Llama 3, the next version of its open source large language model, within the next month.



