From Digital Age to Nano Age. WorldWide.

Tag: ai safety

Robotic Automations

In Seoul summit, heads of state and companies commit to AI safety | TechCrunch


Government officials and AI industry executives agreed on Tuesday to apply elementary safety measures in the fast-moving field and establish an international safety research network. Nearly six months after the inaugural global summit on AI safety at Bletchley Park in England, Britain and South Korea are hosting the AI safety summit this week in Seoul. The […]

© 2024 TechCrunch. All rights reserved. For personal use only.


Software Development in Sri Lanka


Meta's Oversight Board probes explicit AI-generated images posted on Instagram and Facebook | TechCrunch


The Oversight Board, Meta’s semi-independent policy council, is turning its attention to how the company’s social platforms are handling explicit, AI-generated images. On Tuesday, it announced investigations into two separate cases concerning how Instagram in India and Facebook in the U.S. handled AI-generated images of public figures after Meta’s systems fell short in detecting and responding to the explicit content.

In both cases, the sites have now taken down the media. The board is not naming the individuals targeted by the AI images “to avoid gender-based harassment,” according to an e-mail Meta sent to TechCrunch.

The board takes up cases about Meta’s moderation decisions. Users have to appeal to Meta first about a moderation move before approaching the Oversight Board. The board is due to publish its full findings and conclusions in the future.

The cases

Describing the first case, the board said that a user reported an AI-generated nude of a public figure from India on Instagram as pornography. The image was posted by an account that exclusively posts images of Indian women created by AI, and the majority of users who react to these images are based in India.

Meta failed to take down the image after the first report, and the ticket for the report was closed automatically after 48 hours when the company didn’t review it further. When the original complainant appealed the decision, the report was again closed automatically without any review from Meta. In other words, after two reports, the explicit AI-generated image remained on Instagram.

The user then appealed to the board. Only at that point did the company act, removing the image for breaching its community standards on bullying and harassment.

The second case relates to Facebook, where a user posted an explicit, AI-generated image resembling a U.S. public figure in a Group focused on AI creations. In this case, the social network took the image down quickly because it had been posted by another user earlier, and Meta had added it to a Media Matching Service Bank under the “derogatory sexualized photoshop or drawings” category.
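Meta’s Media Matching Service Banks are internal systems, but the general technique — fingerprinting known-violating media and matching re-uploads against the stored fingerprints — can be sketched. The following is an illustrative sketch only, not Meta’s actual implementation: it assumes a toy “average hash” over an 8x8 grayscale grid and a hypothetical `MediaBank` class invented for this example.

```python
# Illustrative sketch of hash-based media matching. NOT Meta's actual
# system: the hash, thresholds, and MediaBank class are assumptions
# made for this example.

def average_hash(pixels):
    """pixels: 64 grayscale values (an 8x8 downscaled image).
    Returns a 64-bit hash: bit is 1 where the pixel exceeds the mean."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

class MediaBank:
    """Maps fingerprints of banned media to a policy label, e.g.
    'derogatory sexualized photoshop or drawings'."""
    def __init__(self):
        self.entries = {}  # hash -> policy label

    def add(self, pixels, label):
        self.entries[average_hash(pixels)] = label

    def match(self, pixels, max_distance=5):
        """Return the label of a banked near-duplicate, else None."""
        h = average_hash(pixels)
        for banked, label in self.entries.items():
            if hamming(h, banked) <= max_distance:
                return label
        return None

# Usage: bank an image once; a slightly altered re-upload still matches.
bank = MediaBank()
original = [10 * i % 256 for i in range(64)]
bank.add(original, "derogatory sexualized photoshop or drawings")
reupload = list(original)
reupload[0] += 3  # slight alteration
print(bank.match(reupload))  # prints the banked policy label
```

Because matching is by Hamming distance rather than exact equality, lightly altered re-uploads of banked media can still be caught — consistent with why the previously banked Facebook image in the second case came down quickly.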

When TechCrunch asked why the board selected a case in which the company successfully took down an explicit AI-generated image, the board said it selects cases “that are emblematic of broader issues across Meta’s platforms.” It added that such cases help the advisory board assess the global effectiveness of Meta’s policies and processes across various topics.

“We know that Meta is quicker and more effective at moderating content in some markets and languages than others. By taking one case from the US and one from India, we want to look at whether Meta is protecting all women globally in a fair way,” Oversight Board Co-Chair Helle Thorning-Schmidt said in a statement.

“The Board believes it’s important to explore whether Meta’s policies and enforcement practices are effective at addressing this problem.”

The problem of deepfake porn and online gender-based violence

Some — not all — generative AI tools in recent years have expanded to allow users to generate porn. As TechCrunch reported previously, groups like Unstable Diffusion are trying to monetize AI porn with murky ethical lines and bias in data.

In regions like India, deepfakes have also become an issue of concern. Last year, a report from the BBC noted that the number of deepfaked videos of Indian actresses has soared in recent times. Data suggests that women are more commonly subjects for deepfaked videos.

Earlier this year, Deputy IT Minister Rajeev Chandrasekhar expressed dissatisfaction with tech companies’ approach to countering deepfakes.

“If a platform thinks that they can get away without taking down deepfake videos, or merely maintain a casual approach to it, we have the power to protect our citizens by blocking such platforms,” Chandrasekhar said in a press conference at that time.

While India has mulled bringing specific deepfake-related rules into the law, nothing is set in stone yet.

While the country has provisions for reporting online gender-based violence under law, experts note that the process can be tedious and that there is often little support. In a study published last year, the Indian advocacy group IT for Change noted that courts in India need to have robust processes to address online gender-based violence and not trivialize these cases.

There are currently only a few laws globally that address the production and distribution of porn generated using AI tools. A handful of U.S. states have laws against deepfakes. The U.K. this week introduced a law criminalizing the creation of sexually explicit AI-generated imagery.

Meta’s response and the next steps

In response to the Oversight Board’s cases, Meta said it took down both pieces of content. However, the social media company didn’t address its failure to remove the content on Instagram after the initial user reports, or how long the content stayed up on the platform.

Meta said that it uses a mix of artificial intelligence and human review to detect sexually suggestive content. The social media giant said that it doesn’t recommend this kind of content in places like Instagram Explore or Reels recommendations.

The Oversight Board has sought public comments — with a deadline of April 30 — on the harms posed by deepfake porn, contextual information about the proliferation of such content in regions like the U.S. and India, and possible pitfalls of Meta’s approach to detecting AI-generated explicit imagery.

The board will review the cases and public comments and post its decision on its site in a few weeks.

These cases indicate that large platforms are still grappling with older moderation processes while AI-powered tools have enabled users to create and distribute different types of content quickly and easily. Companies like Meta are experimenting with tools that use AI for content generation, with some efforts to detect such imagery. However, perpetrators are constantly finding ways to escape these detection systems and post problematic content on social platforms.



US and EU commit to links aimed at boosting AI safety and risk research | TechCrunch


The European Union and United States put out a joint statement Friday affirming a desire to increase cooperation over artificial intelligence. The agreement covers AI safety and governance, but also, more broadly, an intent to collaborate across a number of other tech issues, such as developing digital identity standards and applying pressure on platforms to defend human rights.

As we reported Wednesday, this is the fruit of the sixth (and possibly last) meeting of the EU-U.S. Trade and Technology Council (TTC). The TTC has been meeting since 2021 in a bid to rebuild transatlantic relations battered by the Trump presidency.

Given the possibility of Donald Trump returning to the White House in the U.S. presidential elections taking place later this year, it’s not clear how much EU-U.S. cooperation on AI or any other strategic tech area will actually happen in the near future.

But, under the current political make-up across the Atlantic, the will to push for closer alignment across a range of tech issues has gained in strength. There is also a mutual desire to get this message heard — hence today’s joint statement — which is itself, perhaps, also a wider appeal aimed at each side’s voters to opt for a collaborative program, rather than a destructive opposite, come election time.

An AI dialogue

In a section of the joint statement focused on AI, filed under a heading of “Advancing Transatlantic Leadership on Critical and Emerging Technologies”, the pair write that they “reaffirm our commitment to a risk-based approach to artificial intelligence… and to advancing safe, secure, and trustworthy AI technologies.”

“We encourage advanced AI developers in the United States and Europe to further the application of the Hiroshima Process International Code of Conduct for Organisations Developing Advanced AI Systems which complements our respective governance and regulatory systems,” the statement also reads, referencing a set of risk-based recommendations that came out of G7 discussions on AI last year.

The main development out of the sixth TTC meeting appears to be a commitment from EU and U.S. AI oversight bodies, the European AI Office and the U.S. AI Safety Institute, to set up what’s couched as “a Dialogue.” The aim is a deeper collaboration between the AI institutions, with a particular focus on encouraging the sharing of scientific information among respective AI research ecosystems.

Topics highlighted here include benchmarks, potential risks and future technological trends.

“This cooperation will contribute to making progress with the implementation of the Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management, which is essential to minimise divergence as appropriate in our respective emerging AI governance and regulatory systems, and to cooperate on interoperable and international standards,” the two sides go on to suggest.

The statement also flags an updated version of a list of key AI terms, with “mutually accepted joint definitions” as another outcome from ongoing stakeholder talks flowing from the TTC.

Agreement on definitions will be a key piece of the puzzle to support work toward AI standardization.

A third element of what’s been agreed by the EU and U.S. on AI shoots for collaboration to drive research aimed at applying machine learning technologies to beneficial use cases, such as advancing healthcare outcomes, boosting agriculture and tackling climate change, with a particular focus on sustainable development. In a briefing with journalists earlier this week, a senior commission official suggested this element of the joint working will focus on bringing AI advancements to developing countries and the global south.

“We are advancing on the promise of AI for sustainable development in our bilateral relationship through joint research cooperation as part of the Administrative Arrangement on Artificial Intelligence and computing to address global challenges for the public good,” the joint statement reads. “Working groups jointly staffed by United States science agencies and European Commission departments and agencies have achieved substantial progress by defining critical milestones for deliverables in the areas of extreme weather, energy, emergency response, and reconstruction. We are also making constructive progress in health and agriculture.”

In addition, an overview document on the collaboration around AI for the public good was published Friday. Per the document, multidisciplinary teams from the EU and U.S. have spent more than 100 hours in scientific meetings over the past half-year “discussing how to advance applications of AI in on-going projects and workstreams”.

“The collaboration is making positive strides in a number of areas in relation to challenges like energy optimisation, emergency response, urban reconstruction, and extreme weather and climate forecasting,” it continues, adding: “In the coming months, scientific experts and ecosystems in the EU and the United States intend to continue to advance their collaboration and present innovative research worldwide. This will unlock the power of AI to address global challenges.”

According to the joint statement, there is a desire to expand collaboration efforts in this area by adding more global partners.

“We will continue to explore opportunities with our partners in the United Kingdom, Canada, and Germany in the AI for Development Donor Partnership to accelerate and align our foreign assistance in Africa to support educators, entrepreneurs, and ordinary citizens to harness the promise of AI,” the EU and U.S. note.

On platforms, an area where the EU is enforcing recently passed, wide-ranging legislation — including laws like the Digital Services Act (DSA) and Digital Markets Act — the two sides are united in calling for Big Tech to take protecting “information integrity” seriously.

The joint statement refers to 2024 as “a Pivotal Year for Democratic Resilience”, on account of the number of elections being held around the world. It includes an explicit warning about threats posed by AI-generated information, saying the two sides “share the concern that malign use of AI applications, such as the creation of harmful ‘deepfakes,’ poses new risks, including to further the spread and targeting of foreign information manipulation and interference”.

It goes on to discuss a number of areas of ongoing EU-U.S. cooperation on platform governance and includes a joint call for platforms to do more to support researchers’ access to data — especially for the study of societal risks (something the EU’s DSA makes a legal requirement for larger platforms).

On e-identity, the statement refers to ongoing collaboration on standards work, adding: “The next phase of this project will focus on identifying potential use cases for transatlantic interoperability and cooperation with a view toward enabling the cross-border use of digital identities and wallets.”

Other areas of cooperation the statement covers include clean energy, quantum and 6G.



EU and US set to announce joint working on AI safety, standards & R&D | TechCrunch


The European Union and the U.S. expect to announce a cooperation on AI at a meeting of the EU-U.S. Trade and Technology Council (TTC) on Friday, according to a senior commission official who was briefing journalists on background ahead of the confab.

The mood music points to growing cooperation between lawmakers on both sides of the Atlantic when it comes to devising strategies to respond to challenges and opportunities posed by powerful AI technologies — in spite of what remains a very skewed commercial picture where U.S. giants like OpenAI continue to dominate developments in cutting-edge AI.

The TTC was set up a few years ago, post-Trump, to provide a forum where EU and U.S. lawmakers could meet to discuss transatlantic cooperation on trade and tech policy issues. Friday’s meeting, the sixth since the forum started operating in 2021, will be the last before elections in both regions. The prospect of a second Trump presidency derailing future EU-U.S. cooperation may well be concentrating lawmakers’ minds on maximizing opportunities for joint working now.

“There will be certainly an announcement at the TTC around the AI Office and the [U.S.] AI Safety Institute,” the senior commission official said, referencing an EU oversight body that’s in the process of being set up as part of the incoming EU AI Act, a comprehensive risk-based framework for regulating AI apps that will start to apply across the bloc later this year.

This element of the incoming accord — seemingly set to be focused on AI safety or oversight — is being framed as a “collaboration or dialogue” between the respective AI oversight bodies in the EU and the U.S. to bolster implementation of regulatory powers on AI, per the official.

A second area of focus for the expected EU-U.S. AI agreement will be around standardization, they said. This will take the form of joint working aimed at developing standards that can underpin developments by establishing an “AI roadmap.”

The EU-U.S. partnership will also have a third element — “AI for public good”. This concerns joint work on fostering research activities but with a focus on implementing AI technologies in developing countries and the global south, per the commission.

The official suggested there’s a shared perspective that AI technologies will be able to bring “very quantifiable” benefits to developing regions — in areas like healthcare, agriculture and energy. So this is also set to be an area of focus for transatlantic collaboration on fostering uptake of AI in the near term.

‘AI’ stands for aligned interests?

AI is no longer being seen as a trade issue by the U.S., as the EU tells it. “Through the TTC, we have been able to explain our policies, and also to show to the Americans that, in fact, we have the same goals,” the commission official suggested. “Through the AI Act and through the [AI safety- and security-focused] Executive Order — which is to mitigate the risks of AI technologies while supporting their uptake in our economies.”

Earlier this week the U.S. and the U.K. signed a partnership agreement on AI safety. The EU-U.S. collaboration appears to be more wide-ranging, however, as it’s slated to cover not just shared safety and standardization goals but also joint support for “public good” research.

The commission official teased additional areas of collaboration on emerging technologies — including standardization work in the area of electronic identity (where the EU has been developing an e-ID proposal for several years) that they suggested will also be announced Friday. “Electronic identity is a very strong area of cooperation with a lot of potential,” they said, claiming the U.S. is interested in “vast new business opportunities” created by the EU’s electronic identity wallet.

The official also suggested there is growing accord between the EU and U.S. on how to handle platform power — another area where the EU has targeted lawmaking in recent years. “We see a lot of commonalities [between EU laws like the DMA, aka Digital Markets Act] with the recent antitrust cases that are being launched also in the United States,” said the official. “I think in many of these areas there is no doubt that there is a win-win opportunity.”

Meanwhile, the U.S.-U.K. AI memorandum of understanding signed Monday in Washington by U.S. Commerce Secretary Gina Raimondo and the U.K.’s secretary of state for technology, Michelle Donelan, states the pair will aim to accelerate joint working on a range of AI safety issues, including in the area of national security as well as broader societal AI safety concerns.

The U.S.-U.K. agreement mentions at least one joint testing exercise on a publicly accessible AI model, the U.K.’s Department for Science, Innovation and Technology (DSIT) said in a press release. It also suggested there could be personnel exchanges between the two countries’ respective AI safety institutes to collaborate on expertise-sharing.

Wider information-sharing is envisaged under the U.S.-U.K. agreement — about “capabilities and risks” associated with AI models and systems, and on “fundamental technical research on AI safety and security”. “This will work to underpin a common approach to AI safety testing, allowing researchers on both sides of the Atlantic — and around the world — to coalesce around a common scientific foundation,” DSIT’s PR continued.

Last summer, ahead of hosting a global AI summit, the U.K. government said it had obtained a commitment from U.S. AI giants Anthropic, DeepMind and OpenAI to provide “early or priority access” to their AI models to support research into evaluation and safety. It also announced a plan to spend £100 million on an AI safety taskforce which it said would be focused on so-called foundational AI models.

At the U.K. AI Summit last November, Raimondo announced the creation of a U.S. AI safety institute on the heels of the U.S. executive order on AI. This new institute will be housed within her department, under the National Institute of Standards and Technology, which she said would aim to work closely with other AI safety groups set up by other governments.

Neither the U.S. nor the U.K. has proposed comprehensive legislation on AI safety as yet — with the EU remaining ahead of the pack when it comes to legislating on AI safety. But more cross-border joint working looks like a given.

