From Digital Age to Nano Age. WorldWide.


Robotic Automations

UK probes Amazon and Microsoft over AI partnerships with Mistral, Anthropic, and Inflection | TechCrunch


The U.K.’s Competition and Markets Authority (CMA) is launching preliminary enquiries into whether the close-knit tie-ups and hiring practices involving Microsoft, Amazon and a trio of AI startups fall within the scope of its merger rules — and whether the arrangements could impact competition in the U.K. market.

The announcement comes amid growing scrutiny of Big Tech’s fresh approach to M&A in the world of artificial intelligence (AI), where the so-called “quasi-merger” has emerged as the flavor of the day as a means of — apparently — bypassing regulatory oversight.

Microsoft’s investment in, and close partnership with, ChatGPT-maker OpenAI attracted the CMA’s scrutiny late last year, with the regulator launching a formal “invitation to comment,” aimed at relevant stakeholders in the AI and business spheres. Since then, Microsoft hired the core team behind Inflection AI, a U.S.-based OpenAI rival it had previously invested in, and earlier this month Microsoft launched a new London AI hub fronted by former Inflection and DeepMind scientist Jordan Hoffmann.

Elsewhere, Microsoft also recently invested in Mistral AI, a French AI startup working on foundational models that could be construed as rivalling OpenAI.

And then there’s Amazon, which recently completed its $4 billion investment in Anthropic — another U.S.-based AI company working on large language models.

Collectively, these latest deals have now drawn the CMA’s attention.

The CMA’s executive director of mergers, Joel Bamford, said that it’s merely inviting comments from relevant parties as it assesses whether these various partnerships are tantamount to mergers, and whether they might impact competition in the U.K.’s fast-growing AI industry.

“Foundation models have the potential to fundamentally impact the way we all live and work, including products and services across so many U.K. sectors – healthcare, energy, transport, finance and more,” Bamford said in a statement. “So open, fair, and effective competition in foundation model markets is critical to making sure the full benefits of this transformation are realised by people and businesses in the UK, as well as our wider economy where technology has a huge role to play in growth and productivity.”

This is a developing story; refresh for updates.


Software Development in Sri Lanka


Meta's Oversight Board probes explicit AI-generated images posted on Instagram and Facebook | TechCrunch


The Oversight Board, Meta’s semi-independent policy council, is turning its attention to how the company’s social platforms are handling explicit, AI-generated images. On Tuesday, it announced investigations into two separate cases over how Instagram in India and Facebook in the U.S. handled AI-generated images of public figures after Meta’s systems fell short on detecting and responding to the explicit content.

In both cases, the sites have now taken down the media. The board is not naming the individuals targeted by the AI images “to avoid gender-based harassment,” according to an e-mail Meta sent to TechCrunch.

The board takes up cases about Meta’s moderation decisions. Users have to appeal to Meta first about a moderation move before approaching the Oversight Board. The board is due to publish its full findings and conclusions in the future.

The cases

Describing the first case, the board said that a user reported an AI-generated nude of a public figure from India on Instagram as pornography. The image was posted by an account that exclusively posts images of Indian women created by AI, and the majority of users who react to these images are based in India.

Meta failed to take down the image after the first report, and the ticket for the report was closed automatically after 48 hours when the company didn’t review it further. When the original complainant appealed the decision, the report was again closed automatically without any oversight from Meta. In other words, after two reports, the explicit AI-generated image remained on Instagram.

The user then finally appealed to the board. Only at that point did the company act, removing the image for breaching its community standards on bullying and harassment.

The second case relates to Facebook, where a user posted an explicit, AI-generated image resembling a U.S. public figure in a Group focused on AI creations. In this case, the social network took the image down because another user had posted it earlier, and Meta had added it to a Media Matching Service Bank under the “derogatory sexualized photoshop or drawings” category.

When TechCrunch asked about why the board selected a case where the company successfully took down an explicit AI-generated image, the board said it selects cases “that are emblematic of broader issues across Meta’s platforms.” It added that these cases help the advisory board to look at the global effectiveness of Meta’s policy and processes for various topics.

“We know that Meta is quicker and more effective at moderating content in some markets and languages than others. By taking one case from the US and one from India, we want to look at whether Meta is protecting all women globally in a fair way,” Oversight Board Co-Chair Helle Thorning-Schmidt said in a statement.

“The Board believes it’s important to explore whether Meta’s policies and enforcement practices are effective at addressing this problem.”

The problem of deepfake porn and online gender-based violence

Some — not all — generative AI tools in recent years have expanded to allow users to generate porn. As TechCrunch reported previously, groups like Unstable Diffusion are trying to monetize AI porn with murky ethical lines and bias in data.

In regions like India, deepfakes have also become an issue of concern. Last year, a report from the BBC noted that the number of deepfaked videos of Indian actresses has soared in recent times. Data suggests that women are more commonly subjects for deepfaked videos.

Earlier this year, Deputy IT Minister Rajeev Chandrasekhar expressed dissatisfaction with tech companies’ approach to countering deepfakes.

“If a platform thinks that they can get away without taking down deepfake videos, or merely maintain a casual approach to it, we have the power to protect our citizens by blocking such platforms,” Chandrasekhar said in a press conference at that time.

While India has mulled bringing specific deepfake-related rules into the law, nothing is set in stone yet.

While the country has legal provisions for reporting online gender-based violence, experts note that the process can be tedious and that there is often little support. In a study published last year, the Indian advocacy group IT for Change noted that courts in India need robust processes to address online gender-based violence and should not trivialize these cases.

There are currently only a few laws globally that address the production and distribution of porn generated using AI tools. A handful of U.S. states have laws against deepfakes. The U.K. introduced a law this week to criminalize the creation of sexually explicit AI-generated imagery.

Meta’s response and the next steps

In response to the Oversight Board’s cases, Meta said it took down both pieces of content. However, the social media company didn’t address the fact that it failed to remove content on Instagram after initial reports by users or for how long the content was up on the platform.

Meta said that it uses a mix of artificial intelligence and human review to detect sexually suggestive content. The social media giant said that it doesn’t recommend this kind of content in places like Instagram Explore or Reels recommendations.

The Oversight Board has sought public comments — with a deadline of April 30 — on the harms of deepfake porn, contextual information about the proliferation of such content in regions like the U.S. and India, and possible pitfalls in Meta’s approach to detecting AI-generated explicit imagery.

The board will investigate the cases and public comments and post the decision on the site in a few weeks.

These cases indicate that large platforms are still grappling with older moderation processes while AI-powered tools have enabled users to create and distribute different types of content quickly and easily. Companies like Meta are experimenting with tools that use AI for content generation, with some efforts to detect such imagery. However, perpetrators are constantly finding ways to escape these detection systems and post problematic content on social platforms.


