From Digital Age to Nano Age. WorldWide.

Tag: safety


EU opens child safety probes of Facebook and Instagram, citing addictive design concerns | TechCrunch


Facebook and Instagram are under formal investigation in the European Union over child protection concerns, the Commission announced Thursday. The proceedings follow a raft of requests for information to parent entity Meta since the bloc’s online governance regime, the Digital Services Act (DSA), started applying last August. The development could be significant as the formal […]




U.K. agency releases tools to test AI model safety | TechCrunch


The U.K. AI Safety Institute, the U.K.’s recently established AI safety body, has released a toolset designed to “strengthen AI safety” by making it easier for industry, research organizations and academia to develop AI evaluations. Called Inspect, the toolset — which is available under an open source license, specifically an MIT License — aims to assess certain […]




Ofcom to push for better age verification, filters and 40 other checks in new online child safety code | TechCrunch


Ofcom is cracking down on Instagram, YouTube and 150,000 other web services to improve child safety online. A new Children’s Safety Code from the U.K. Internet regulator will push tech firms to run better age checks, filter and downrank content, and apply around 40 other steps to assess harmful content around subjects like suicide, self-harm and pornography, in order to reduce under-18s’ access to it. The Code is currently in draft form and open for feedback until July 17; enforcement is expected to kick in next year, after Ofcom publishes the final version in the spring. Firms will have three months to complete their inaugural child safety risk assessments once the final Children’s Safety Code is published.

The Code is significant because it could force a step-change in how Internet companies approach online safety. The government has repeatedly said it wants the U.K. to be the safest place to go online in the world. Whether it will be any more successful at preventing digital slurry from pouring into kids’ eyeballs than it has actual shit from polluting the country’s waterways remains to be seen. Critics of the approach suggest the law will burden tech firms with crippling compliance costs and make it harder for citizens to access certain types of information.

Meanwhile, failure to comply with the Online Safety Act can have serious consequences for in-scope web services large and small, with fines of up to 10% of global annual turnover for violations, and even criminal liability for senior managers in certain scenarios.

The guidance puts a big focus on stronger age verification. Following on from last year’s draft guidance on age assurance for porn sites, age verification and estimation technologies deemed “accurate, robust, reliable and fair” will be applied to a wider range of services as part of the plan. Photo-ID matching, facial age estimation and reusable digital identity services are in; self-declaration of age and contractual restrictions on the use of services by children are out.

That suggests Brits may need to get accustomed to proving their age before they access a range of online content — though how exactly platforms and services will respond to their legal duty to protect children will be for private companies to decide: that’s the nature of the guidance here.

The draft proposal also sets out specific rules on how content is handled. Suicide, self-harm and pornography content — deemed the most harmful — will have to be actively filtered (i.e. removed) so minors do not see it. Ofcom wants other types of content such as violence to be downranked and made far less visible in children’s feeds. Ofcom also said it may expect services to act on potentially harmful content (e.g. depression content). The regulator told TechCrunch it will encourage firms to pay particular attention to the “volume and intensity” of what kids are exposed to as they design safety interventions. All of this demands services be able to identify child users — again pushing robust age checks to the fore.

Ofcom previously named child safety as its first priority in enforcing the UK’s Online Safety Act — a sweeping content moderation and governance rulebook that touches on harms as diverse as online fraud and scam ads; cyberflashing and deepfake revenge porn; animal cruelty; and cyberbullying and trolling, as well as regulating how services tackle illegal content like terrorism and child sexual abuse material (CSAM).

The Online Safety Bill passed last fall, and now the regulator is busy with the process of implementation, which includes designing and consulting on detailed guidance ahead of its enforcement powers kicking in once parliament approves Codes of Practice it’s cooking up.

With Ofcom estimating around 150,000 web services in scope of the Online Safety Act, scores of tech firms will, at the least, have to assess whether children are accessing their services and, if so, take steps to identify and mitigate a range of safety risks. The regulator said it’s already working with some larger social media platforms where safety risks are likely to be greatest, such as Facebook and Instagram, to help them design their compliance plans.

Consultation on the Children’s Safety Code

In all, Ofcom’s draft Children’s Safety Code contains more than 40 “practical steps” the regulator wants web services to take to ensure child protection is enshrined in their operations. A wide range of apps and services are likely to fall in-scope — including popular social media sites, games and search engines.

“Services must prevent children from encountering the most harmful content relating to suicide, self-harm, eating disorders, and pornography. Services must also minimise children’s exposure to other serious harms, including violent, hateful or abusive material, bullying content, and content promoting dangerous challenges,” Ofcom wrote in a summary of the consultation.

“In practice, this means that all services which do not ban harmful content, and those at higher risk of it being shared on their service, will be expected to implement highly effective age-checks to prevent children from seeing it,” it added in a press release Monday. “In some cases, this will mean preventing children from accessing the entire site or app. In others it might mean age-restricting parts of their site or app for adults-only access, or restricting children’s access to identified harmful content.”

Ofcom’s current proposal suggests that almost all services will have to take mitigation measures to protect children. Only those deploying age verification or age estimation technology that is “highly effective” and used to prevent children from accessing the service (or the parts of it where content poses risks to kids) will not be subject to the children’s safety duties.

Those who find — on the contrary — that children can access their service will need to carry out a follow-on assessment known as the “child user condition”. This requires them to assess whether “a significant number” of kids are using the service and/or are likely to be attracted to it. Those that are likely to be accessed by children must then take steps to protect minors from harm, including conducting a Children’s Risk Assessment and implementing safety measures (such as age assurance, governance measures, safer design choices and so on) — as well as applying an ongoing review of their approach to ensure they keep up with changing risks and patterns of use. 

Ofcom does not define what “a significant number” means in this context — but says “even a relatively small number of children could be significant in terms of the risk of harm. We suggest service providers should err on the side of caution in making their assessment.” In other words, tech firms may not be able to eschew child safety measures by arguing there aren’t many minors using their stuff.
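Read schematically, the decision flow the draft describes might look something like the sketch below. This is an illustrative interpretation of the consultation text, not legal guidance or Ofcom's wording; the class, function and field names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class ServiceAssessment:
    # Illustrative fields only; the actual criteria are set out in Ofcom's draft Code.
    highly_effective_age_assurance: bool     # e.g. photo-ID matching or facial age estimation
    children_blocked_from_risky_parts: bool  # age checks gate the service, or its risky sections
    significant_child_user_base: bool        # "child user condition": a significant number of kids...
    likely_to_attract_children: bool         # ...or the service is likely to appeal to them

def childrens_safety_duties_apply(svc: ServiceAssessment) -> bool:
    """Rough reading of the draft: only services that keep children out using
    highly effective age assurance escape the children's safety duties."""
    if svc.highly_effective_age_assurance and svc.children_blocked_from_risky_parts:
        return False  # exempt from the children's safety duties
    # Otherwise the "child user condition" is assessed. Ofcom advises erring on
    # the side of caution, so either limb is treated here as enough to trigger
    # the duties (a Children's Risk Assessment plus mitigation measures).
    return svc.significant_child_user_base or svc.likely_to_attract_children

# A service without robust age checks and with even a modest child audience is in scope.
print(childrens_safety_duties_apply(ServiceAssessment(False, False, True, False)))  # True
```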

Nor is there a simple one-shot fix for services that fall in scope of the child safety duty. Multiple measures are likely to be needed, combined with ongoing assessment of efficacy.

“There is no single fix-all measure that services can take to protect children online. Safety measures need to work together to help create an overall safer experience for children,” Ofcom wrote in an overview of the consultation, adding: “We have proposed a set of safety measures within our draft Children’s Safety Codes, that will work together to achieve safer experiences for children online.” 

Recommender systems, reconfigured

Under the draft Code, any service that operates a recommender system — a form of algorithmic content sorting driven by tracking of user activity — and is at “higher risk” of showing harmful content, must use “highly effective” age assurance to identify who their child users are. They must then configure their recommender algorithms to filter out the most harmful content (i.e. suicide, self-harm, porn) from the feeds of users it has identified as children, and reduce the “visibility and prominence” of other harmful content.

Under the Online Safety Act, suicide, self harm, eating disorders and pornography are classed “primary priority content”. Harmful challenges and substances; abuse and harassment targeted at people with protected characteristics; real or realistic violence against people or animals; and instructions for acts of serious violence are all classified “priority content.” Web services may also identify other content risks they feel they need to act on as part of their risk assessments.

In the proposed guidance, Ofcom wants children to be able to provide negative feedback directly to the recommender feed, so it can better learn what content they don’t want to see.
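To make the expectation concrete, here is a minimal, hypothetical sketch of how a feed service might combine an age-assurance flag with the Act's content classifications: excluding “primary priority content” from identified child users' feeds, demoting “priority content”, and applying a child's negative feedback as a further downweight. The categories follow the draft Code; the function, weights and data shapes are assumptions made for the illustration.

```python
PRIMARY_PRIORITY = {"suicide", "self_harm", "eating_disorder", "pornography"}   # filter out entirely
PRIORITY = {"dangerous_challenge", "hate", "harassment", "violence"}            # reduce prominence

def rank_for_child(items, negative_feedback_labels, priority_penalty=0.2, feedback_penalty=0.5):
    """Re-rank candidate feed items for a user identified as a child.

    `items` is a list of (item_id, score, labels) tuples, where `labels` is the
    set of harm classifications attached by content moderation. Hypothetical
    structure, for illustration only.
    """
    ranked = []
    for item_id, score, labels in items:
        if labels & PRIMARY_PRIORITY:
            continue  # primary priority content never reaches a child's feed
        weight = 1.0
        if labels & PRIORITY:
            weight *= priority_penalty   # reduce "visibility and prominence"
        if labels & negative_feedback_labels:
            weight *= feedback_penalty   # honour the child's "don't show me this" signal
        ranked.append((item_id, score * weight))
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

candidates = [("a", 0.9, {"violence"}), ("b", 0.8, set()), ("c", 0.95, {"pornography"})]
print(rank_for_child(candidates, negative_feedback_labels={"violence"}))
# [('b', 0.8), ('a', ~0.09)] -- 'c' is filtered out entirely
```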

Content moderation is another big focus in the draft Code, with the regulator highlighting research showing that content harmful to children is available at scale on many services, which it said suggests firms’ current moderation efforts are insufficient.

Its proposal recommends all “user-to-user” services (i.e. those allowing users to connect with each other, such as via chat functions or through exposure to content uploads) must have content moderation systems and processes that ensure “swift action” is taken against content harmful to children. Ofcom’s proposal does not contain any expectations that automated tools are used to detect and review content. But the regulator writes that it’s aware large platforms often use AI for content moderation at scale and says it’s “exploring” how to incorporate measures on automated tools into its Codes in the future.

“Search engines are expected to take similar action,” Ofcom also suggested. “And where a user is believed to be a child, large search services must implement a ‘safe search’ setting which cannot be turned off and must filter out the most harmful content.”

“Other broader measures require clear policies from services on what kind of content is allowed, how content is prioritised for review, and for content moderation teams to be well-resourced and trained,” it added.

The draft Code also includes measures it hopes will ensure “strong governance and accountability” around children’s safety inside tech firms. “These include having a named person accountable for compliance with the children’s safety duties; an annual senior-body review of all risk management activities relating to children’s safety; and an employee Code of Conduct that sets standards for employees around protecting children,” Ofcom wrote.

Facebook- and Instagram-owner Meta was frequently singled out by ministers during the drafting of the law for having a lax attitude to child protection. The largest platforms may be likely to pose the greatest safety risks — and therefore have “the most extensive expectations” when it comes to compliance — but there’s no free pass based on size.

“Services cannot decline to take steps to protect children merely because it is too expensive or inconvenient — protecting children is a priority and all services, even the smallest, will have to take action as a result of our proposals,” it warned.

Other proposed safety measures Ofcom highlights include suggesting services provide more choice and support for children and the adults who care for them — such as by having “clear and accessible” terms of service; and making sure children can easily report content or make complaints.

The draft guidance also suggests children are provided with support tools that enable them to have more control over their interactions online — such as an option to decline group invites; block and mute user accounts; or disable comments on their own posts.

The UK’s data protection authority, the Information Commissioner’s Office, has expected compliance with its own age-appropriate design code for children (the Children’s Code) since September 2021, so there may be some overlap between the two regimes. Ofcom, for instance, notes that service providers may already have assessed children’s access for a data protection compliance purpose — adding they “may be able to draw on the same evidence and analysis for both.”

Flipping the child safety script?

The regulator is urging tech firms to be proactive about safety issues, saying it won’t hesitate to use its full range of enforcement powers once they’re in place. The underlying message to tech firms is get your house in order sooner rather than later or risk costly consequences.

“We are clear that companies who fall short of their legal duties can expect to face enforcement action, including sizeable fines,” it warned in a press release.

The government is rowing hard behind Ofcom’s call for a proactive response, too. Commenting in a statement today, the technology secretary Michelle Donelan said: “To platforms, my message is engage with us and prepare. Do not wait for enforcement and hefty fines — step up to meet your responsibilities and act now.”

“The government assigned Ofcom to deliver the Act and today the regulator has been clear; platforms must introduce the kinds of age-checks young people experience in the real world and address algorithms which too readily mean they come across harmful material online,” she added. “Once in place these measures will bring in a fundamental change in how children in the UK experience the online world.

“I want to assure parents that protecting children is our number one priority and these laws will help keep their families safe.”

Ofcom said it wants its enforcement of the Online Safety Act to deliver what it couches as a “reset” for children’s safety online — saying it believes the approach it’s designing, with input from multiple stakeholders (including thousands of children and young people), will make a “significant difference” to kids’ online experiences.

Fleshing out its expectations, it said it wants the rulebook to flip the script on online safety so children will “not normally” be able to access porn and will be protected from “seeing, and being recommended, potentially harmful content”.

Beyond identity verification and content management, it also wants the law to ensure kids won’t be added to group chats without their consent; and wants it to make it easier for children to complain when they see harmful content, and be “more confident” that their complaints will be acted on.

As it stands, the opposite looks closer to what UK kids currently experience online, with Ofcom citing research over a four-week period in which a majority (62%) of children aged 13-17 reported encountering online harm and many saying they consider it an “unavoidable” part of their lives online.

Exposure to violent content begins in primary school, Ofcom found, with children who encounter content promoting suicide or self-harm characterizing it as “prolific” on social media; and frequent exposure contributing to a “collective normalisation and desensitisation”, as it put it. So there’s a huge job ahead for the regulator to reshape the online landscape kids encounter.

As well as the Children’s Safety Code, its guidance for services includes a draft Children’s Register of Risk, which it said sets out more information on how risks of harm to children manifest online; and draft Harms Guidance which sets out examples and the kind of content it considers to be harmful to children. Final versions of all its guidance will follow the consultation process, a legal duty on Ofcom. It also told TechCrunch that it will be providing more information and launching some digital tools to further support services’ compliance ahead of enforcement kicking in.

“Children’s voices have been at the heart of our approach in designing the Codes,” Ofcom added. “Over the last 12 months, we’ve heard from over 15,000 youngsters about their lives online and spoken with over 7,000 parents, as well as professionals who work with children.

“As part of our consultation process, we are holding a series of focused discussions with children from across the UK, to explore their views on our proposals in a safe environment. We also want to hear from other groups including parents and carers, the tech industry and civil society organisations — such as charities and expert professionals involved in protecting and promoting children’s interests.”

The regulator recently announced plans to launch an additional consultation later this year which it said will look at how automated tools, aka AI technologies, could be deployed in content moderation processes to proactively detect illegal content and content most harmful to children — such as previously undetected CSAM and content encouraging suicide and self-harm.

However, there is no clear evidence today that AI will be able to improve detection efficacy of such content without causing large volumes of (harmful) false positives. It thus remains to be seen whether Ofcom will push for greater use of such tech tools given the risks that leaning on automation in this context could backfire.

In recent years, a multi-year push by the Home Office geared towards fostering the development of so-called “safety tech” AI tools — specifically to scan end-to-end encrypted messages for CSAM — culminated in a damning independent assessment which warned such technologies aren’t fit for purpose and pose an existential threat to people’s privacy and the confidentiality of communications.

One question parents might have is what happens on a kid’s 18th birthday, when the Code no longer applies. If all these protections wrapping kids’ online experiences end overnight, there could be a risk of (still) young people being overwhelmed by sudden exposure to harmful content they’ve been shielded from until then. That sort of shocking content transition could itself create a new online coming-of-age risk for teens.

Ofcom told us future proposals for larger platforms could be introduced to mitigate this sort of risk.

“Children are accepting this harmful content as a normal part of the online experience — by protecting them from this content while they are children, we are also changing their expectations for what’s an appropriate experience online,” an Ofcom spokeswoman responded when we asked about this. “No user, regardless of their age, should accept to have their feed flooded with harmful content. Our phase 3 consultation will include further proposals on how the largest and riskiest services can empower all users to take more control of the content they see online. We plan to launch that consultation early next year.”



Hinge adds a way to mute requests containing words you specify | TechCrunch


Hinge is adding a “Hidden Words” feature to its app, which will filter out likes with comments containing words or phrases that the user specifies. It works much like a mute filter on social media apps.

On Hinge, users can send a message when they like a profile — that message can contain any text. Rivals such as Bumble introduced a similar feature, called Compliments, in 2022. Match Group-owned Hinge is trying to cut down on online harassment sent through those messages with this new ability to hide words.

If a like with a comment contains any of the words you added to the mute list, the profile doesn’t appear in your usual like list. Instead, it shows up in a separate “Hidden Likes” section. This is reminiscent of the message request inbox on Twitter/X or Instagram. You can review these comments or delete these requests directly.

Image Credits: Hinge

Users can add words, phrases, or emojis to their hidden words list. Hinge said users can add up to 1,000 hidden words to their profile.
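Functionally, that amounts to a keyword mute filter applied to incoming likes-with-comments. The sketch below is a guess at the general shape of such a filter, not Hinge's actual implementation; the class, the matching rule (simple case-insensitive substring) and the routing names are assumptions.

```python
class HiddenWordsFilter:
    MAX_TERMS = 1000  # Hinge says users can add up to 1,000 hidden words

    def __init__(self, terms):
        # Terms can be words, phrases or emoji; matching here is case-insensitive.
        self.terms = [t.strip().lower() for t in terms[: self.MAX_TERMS] if t.strip()]

    def is_hidden(self, comment: str) -> bool:
        text = comment.lower()
        return any(term in text for term in self.terms)

def route_like(like: dict, hidden_filter: HiddenWordsFilter) -> str:
    """Send a like either to the main likes list or to the 'Hidden Likes' section."""
    if like.get("comment") and hidden_filter.is_hidden(like["comment"]):
        return "hidden_likes"  # reviewable or deletable, like a message-request inbox
    return "likes"

f = HiddenWordsFilter(["gym", "🔥"])
print(route_like({"comment": "nice 🔥 pics"}, f))   # hidden_likes
print(route_like({"comment": "love your dog"}, f))  # likes
```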

Hinge’s Jeff Dunn told TechCrunch over a call that the company has been testing a bunch of features for its safety toolkit, and “Hidden Words” proved to be the first choice for a public release. Dunn hinted that this feature will have an expanded scope in the future and maybe even include filtering for chats.

“We have a roadmap for Hidden Words that involves expanding its abilities, flexibility, and coverage. We are currently researching how we can improve the feature while also understanding what people want out of it with the first release,” he said.



Meta will auto-blur nudity in Instagram DMs in latest teen safety step | TechCrunch


Meta said on Thursday that it is testing new features on Instagram intended to help safeguard young people from unwanted nudity or sextortion scams. This includes a feature called “Nudity Protection in DMs,” which automatically blurs images detected as containing nudity.

The tech giant said it will also nudge teens to protect themselves by serving a warning encouraging them to think twice about sharing intimate images. Meta hopes this will boost protection against scammers who may send nude images to trick people into sending their own images in return.

The company said it is also implementing changes that will make it more difficult for potential scammers and criminals to find and interact with teens. Meta said it is developing new technology to identify accounts that are “potentially” involved in sextortion scams, and will apply limits on how these suspect accounts can interact with other users.

In another step announced on Thursday, Meta said it has increased the data it is sharing with the cross-platform online child safety program, Lantern, to include more “sextortion-specific signals.”

The social networking giant has had long-standing policies that ban people from sending unwanted nudes or seeking to coerce others into sharing intimate images. However, that doesn’t stop these problems from occurring and causing misery for scores of teens and young people — sometimes with extremely tragic results.

We’ve rounded up the latest crop of changes in more detail below.

Nudity screens

Nudity Protection in DMs aims to protect teen users of Instagram from cyberflashing by putting nude images behind a safety screen. Users will be able to choose whether or not to view such images.

“We’ll also show them a message encouraging them not to feel pressure to respond, with an option to block the sender and report the chat,” said Meta.

The nudity safety screen will be turned on by default for users under 18 globally. Older users will see a notification encouraging them to turn the feature on.

“When nudity protection is turned on, people sending images containing nudity will see a message reminding them to be cautious when sending sensitive photos, and that they can unsend these photos if they’ve changed their mind,” the company added.

Anyone trying to forward a nude image will see the same warning encouraging them to reconsider.

The feature is powered by on-device machine learning, so Meta said it will work within end-to-end encrypted chats because the image analysis is carried out on the user’s own device.
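Conceptually, the flow is simple: score the image with a classifier running on the recipient's device and, if it is predicted to contain nudity, render it behind a blurred safety screen instead of sending it anywhere for analysis, which is why it can coexist with end-to-end encryption. The sketch below illustrates that pattern only; it is not Meta's code, and the model interface, threshold and field names are assumptions.

```python
NUDITY_THRESHOLD = 0.8  # illustrative confidence cut-off, not Meta's

def handle_incoming_image(image_bytes, recipient, on_device_model):
    """Decide locally whether to put an incoming DM image behind a safety screen.

    `on_device_model` is assumed to be a local classifier that returns the
    probability an image contains nudity. Because scoring happens on the
    recipient's device, the plaintext image never leaves the E2EE chat.
    """
    # On by default for under-18s; adults are nudged to opt in.
    protection_on = recipient.age < 18 or recipient.opted_in_to_nudity_protection
    if not protection_on:
        return {"display": "normal"}

    if on_device_model.predict_nudity(image_bytes) >= NUDITY_THRESHOLD:
        return {
            "display": "blurred_safety_screen",
            "actions": ["view_anyway", "block_sender", "report_chat"],
            "notice": "You don't have to respond.",
        }
    return {"display": "normal"}
```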

The nudity filter has been in development for nearly two years.

Safety tips

In another safeguarding measure, Instagram users who send or receive nudes will be directed to safety tips (with information about the potential risks involved), which, according to Meta, have been developed with guidance from experts.

“These tips include reminders that people may screenshot or forward images without your knowledge, that your relationship to the person may change in the future, and that you should review profiles carefully in case they’re not who they say they are,” the company wrote in a statement. “They also link to a range of resources, including Meta’s Safety Center, support helplines, StopNCII.org for those over 18, and Take It Down for those under 18.”

The company is also testing showing pop-up messages to people who may have interacted with an account that has been removed for sextortion. These pop-ups will also direct users to relevant resources.

“We’re also adding new child safety helplines from around the world into our in-app reporting flows. This means when teens report relevant issues — such as nudity, threats to share private images or sexual exploitation or solicitation — we’ll direct them to local child safety helplines where available,” the company said.

Tech to spot sextortionists

While Meta says it removes sextortionists’ accounts when it becomes aware of them, it first needs to spot bad actors to shut them down. So, the company is trying to go further by “developing technology to help identify where accounts may potentially be engaging in sextortion scams, based on a range of signals that could indicate sextortion behavior.”

“While these signals aren’t necessarily evidence that an account has broken our rules, we’re taking precautionary steps to help prevent these accounts from finding and interacting with teen accounts,” the company said. “This builds on the work we already do to prevent other potentially suspicious accounts from finding and interacting with teens.”

It’s not clear what technology Meta is using to do this analysis, nor which signals might denote a potential sextortionist (we’ve asked for more details). Presumably, the company may analyze patterns of communication to try to detect bad actors.

Accounts that get flagged by Meta as potential sextortionists will face restrictions on messaging or interacting with other users.

“[A]ny message requests potential sextortion accounts try to send will go straight to the recipient’s hidden requests folder, meaning they won’t be notified of the message and never have to see it,” the company wrote.
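Meta hasn't disclosed which signals feed this system, so any code can only gesture at the general pattern: score an account against a set of behavioral signals and, above a threshold, route its message requests straight to the hidden folder. Everything in the sketch below (the signals, weights and threshold) is invented for illustration.

```python
# Entirely hypothetical signals and weights; Meta has not said what it actually uses.
SIGNAL_WEIGHTS = {
    "mass_messaging_of_teen_accounts": 0.5,
    "recently_created_account": 0.2,
    "prior_reports_for_image_threats": 0.6,
    "rapid_follow_unfollow_of_minors": 0.3,
}
FLAG_THRESHOLD = 0.7

def sextortion_risk(signals: set) -> float:
    return min(1.0, sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in signals))

def deliver_message_request(sender_signals, message, recipient_inbox):
    """Route a message request from a suspected sextortion account to the hidden folder."""
    if sextortion_risk(sender_signals) >= FLAG_THRESHOLD:
        recipient_inbox["hidden_requests"].append(message)  # no notification is shown
    else:
        recipient_inbox["requests"].append(message)

inbox = {"requests": [], "hidden_requests": []}
deliver_message_request({"mass_messaging_of_teen_accounts", "prior_reports_for_image_threats"}, "hi", inbox)
print(inbox["hidden_requests"])  # ['hi']
```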

Users who are already chatting with potential scam or sextortion accounts will not have their chats shut down, but will be shown Safety Notices “encouraging them to report any threats to share their private images, and reminding them that they can say ‘no’ to anything that makes them feel uncomfortable,” according to the company.

Teen users are already protected from receiving DMs from adults they are not connected with on Instagram (and also from other teens, in some cases). But Meta is taking this a step further: The company said it is testing a feature that hides the “Message” button on teenagers’ profiles for potential sextortion accounts — even if they’re connected.

“We’re also testing hiding teens from these accounts in people’s follower, following and like lists, and making it harder for them to find teen accounts in Search results,” it added.

It’s worth noting the company is under increasing scrutiny in Europe over child safety risks on Instagram, and enforcers have questioned its approach since the bloc’s Digital Services Act (DSA) came into force last summer.

A long, slow creep towards safety

Meta has announced measures to combat sextortion before — most recently in February, when it expanded access to Take It Down. The third-party tool lets people generate a hash of an intimate image locally on their own device and share it with the National Center for Missing and Exploited Children, helping to create a repository of non-consensual image hashes that companies can use to search for and remove revenge porn.
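The important property of that design is that only a fingerprint of the image ever leaves the device, never the image itself. In practice the service relies on robust perceptual hashing so resized or re-encoded copies still match; the sketch below uses a plain cryptographic hash purely to keep the illustration self-contained, and the submission function is a placeholder rather than a real API.

```python
import hashlib
from pathlib import Path

def fingerprint_image(path: str) -> str:
    """Hash an image locally. Take It Down uses perceptual hashing in practice;
    SHA-256 stands in here only to keep the example dependency-free."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def submit_hash(image_hash: str) -> None:
    # Placeholder: in the real flow the hash (not the image) goes to the
    # NCMEC-operated service, and participating platforms match uploads
    # against the repository to find and remove the content.
    print(f"Submitting hash {image_hash[:16]}... to the hash repository")

# The image stays on the user's device; only its fingerprint is shared.
# submit_hash(fingerprint_image("private_photo.jpg"))
```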

The company’s previous approaches to tackle that problem had been criticized, as they required young people to upload their nudes. In the absence of hard laws regulating how social networks need to protect children, Meta was left to self-regulate for years — with patchy results.

However, some requirements have landed on platforms in recent years — such as the U.K.’s Children’s Code (which came into force in 2021) and the more recent DSA in the EU — and tech giants like Meta are finally having to pay more attention to protecting minors.

For example, in July 2021, Meta started defaulting young people’s Instagram accounts to private just ahead of the U.K. compliance deadline. Even tighter privacy settings for teens on Instagram and Facebook followed in November 2022.

This January, the company announced it would set stricter messaging settings for teens on Facebook and Instagram by default, shortly before the full compliance deadline for the DSA kicked in in February.

This slow, iterative rollout of protective measures for young users raises questions about what took Meta so long to apply stronger safeguards. It suggests the company opted for a cynical minimum of safeguarding in a bid to manage the impact on usage and prioritize engagement over safety. That is exactly what Meta whistleblower Frances Haugen repeatedly denounced her former employer for.

Asked why the company is not also rolling out these new protections to Facebook, a spokeswoman for Meta told TechCrunch, “We want to respond to where we see the biggest need and relevance — which, when it comes to unwanted nudity and educating teens on the risks of sharing sensitive images — we think is on Instagram DMs, so that’s where we’re focusing first.”



Exclusive: Life360 launches flight landing notifications to alert friends and family


Family location services company Life360 has launched a new notification for its apps to automatically alert friends and family when you reach a destination after taking a flight.

Life360 said that the feature uses phone sensors to measure location, altitude and speed to determine if you are taking a flight. Plus, its algorithms can detect takeoff and landing times, and alert family members when you connect to the network post-landing.
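A crude version of that kind of heuristic, combining speed, altitude and connectivity, might look like the sketch below. It is a guess at the general approach rather than Life360's actual algorithm, and the thresholds are invented.

```python
CRUISE_SPEED_KMH = 300    # hypothetical: well above any ground transport
CRUISE_ALTITUDE_M = 3000  # hypothetical: well above normal terrain changes

class FlightDetector:
    def __init__(self):
        self.in_flight = False

    def update(self, speed_kmh: float, altitude_m: float, has_network: bool):
        """Feed periodic sensor samples; returns an event name or None."""
        if not self.in_flight and speed_kmh > CRUISE_SPEED_KMH and altitude_m > CRUISE_ALTITUDE_M:
            self.in_flight = True
            return "takeoff_detected"      # circle members see the plane movement icon
        if self.in_flight and speed_kmh < 50 and has_network:
            self.in_flight = False
            return "landed_notify_circle"  # send the landing alert once back on the network
        return None

d = FlightDetector()
print(d.update(speed_kmh=850, altitude_m=10500, has_network=False))  # takeoff_detected
print(d.update(speed_kmh=10, altitude_m=20, has_network=True))       # landed_notify_circle
```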

The company said the landing notification feature is a useful alternative to online flight trackers or waiting for the traveler to send updates to their circle. The feature is enabled for all users with the latest app update and can be turned off through the Flight Detection toggle in the settings.

Life360, which has more than 66 million active users on its platform, said that the people in a flight-taking user’s circle can see a plane icon as a movement indicator — adding to the current set of activities such as walking, running, biking and driving.

The company’s CEO Chris Hulls told TechCrunch that the company wants to focus on safety and protection updates for users’ inner circle. He said that comparatively, Apple’s solution is very generic. Additionally, he noted that Life360 has the advantage of being on both iOS and Android.

Hulls said the company is looking to launch a new lineup of Tile devices this year (Life360 acquired Tile in 2021 for $205 million), though he did not provide more detail.

“We are going to make a unified hardware lineup that is far more robust than Apple, which is one size fits all. We will have Bluetooth tags and GPS devices with LTE connections, and it will be a more holistic service,” he said.

Life360 launched its premium membership in Canada in 2022 and in the U.K. in 2023. This year, the company aims to expand the paid tier to Australia.



US and EU commit to links aimed at boosting AI safety and risk research | TechCrunch


The European Union and United States put out a joint statement Friday affirming a desire to increase cooperation over artificial intelligence. The agreement covers AI safety and governance, but also, more broadly, an intent to collaborate across a number of other tech issues, such as developing digital identity standards and applying pressure on platforms to defend human rights.

As we reported Wednesday, this is the fruit of the sixth (and possibly last) meeting of the EU-U.S. Trade and Technology Council (TTC). The TTC has been meeting since 2021 in a bid to rebuild transatlantic relations battered by the Trump presidency.

Given the possibility of Donald Trump returning to the White House in the U.S. presidential elections taking place later this year, it’s not clear how much EU-U.S. cooperation on AI or any other strategic tech area will actually happen in the near future.

But, under the current political make-up across the Atlantic, the will to push for closer alignment across a range of tech issues has gained in strength. There is also a mutual desire to get this message heard — hence today’s joint statement — which is itself, perhaps, also a wider appeal aimed at each side’s voters to opt for a collaborative program, rather than a destructive opposite, come election time.

An AI dialogue

In a section of the joint statement focused on AI, filed under a heading of “Advancing Transatlantic Leadership on Critical and Emerging Technologies”, the pair write that they “reaffirm our commitment to a risk-based approach to artificial intelligence… and to advancing safe, secure, and trustworthy AI technologies.”

“We encourage advanced AI developers in the United States and Europe to further the application of the Hiroshima Process International Code of Conduct for Organisations Developing Advanced AI Systems which complements our respective governance and regulatory systems,” the statement also reads, referencing a set of risk-based recommendations that came out of G7 discussions on AI last year.

The main development out of the sixth TTC meeting appears to be a commitment from EU and U.S. AI oversight bodies, the European AI Office and the U.S. AI Safety Institute, to set up what’s couched as “a Dialogue.” The aim is a deeper collaboration between the AI institutions, with a particular focus on encouraging the sharing of scientific information among respective AI research ecosystems.

Topics highlighted here include benchmarks, potential risks and future technological trends.

“This cooperation will contribute to making progress with the implementation of the Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management, which is essential to minimise divergence as appropriate in our respective emerging AI governance and regulatory systems, and to cooperate on interoperable and international standards,” the two sides go on to suggest.

The statement also flags an updated version of a list of key AI terms, with “mutually accepted joint definitions” as another outcome from ongoing stakeholder talks flowing from the TTC.

Agreement on definitions will be a key piece of the puzzle to support work toward AI standardization.

A third element of what’s been agreed by the EU and U.S. on AI shoots for collaboration to drive research aimed at applying machine learning technologies for beneficial use cases, such as advancing healthcare outcomes, boosting agriculture and tackling climate change, with a particular focus on sustainable development. In a briefing with journalists earlier this week a senior commission official suggested this element of the joint working will focus on bringing AI advancements to developing countries and the global south.

“We are advancing on the promise of AI for sustainable development in our bilateral relationship through joint research cooperation as part of the Administrative Arrangement on Artificial Intelligence and computing to address global challenges for the public good,” the joint statement reads. “Working groups jointly staffed by United States science agencies and European Commission departments and agencies have achieved substantial progress by defining critical milestones for deliverables in the areas of extreme weather, energy, emergency response, and reconstruction. We are also making constructive progress in health and agriculture.”

In addition, an overview document on the collaboration around AI for the public good was published Friday. Per the document, multidisciplinary teams from the EU and U.S. have spent more than 100 hours in scientific meetings over the past half-year “discussing how to advance applications of AI in on-going projects and workstreams”.

“The collaboration is making positive strides in a number of areas in relation to challenges like energy optimisation, emergency response, urban reconstruction, and extreme weather and climate forecasting,” it continues, adding: “In the coming months, scientific experts and ecosystems in the EU and the United States intend to continue to advance their collaboration and present innovative research worldwide. This will unlock the power of AI to address global challenges.”

According to the joint statement, there is a desire to expand collaboration efforts in this area by adding more global partners.

“We will continue to explore opportunities with our partners in the United Kingdom, Canada, and Germany in the AI for Development Donor Partnership to accelerate and align our foreign assistance in Africa to support educators, entrepreneurs, and ordinary citizens to harness the promise of AI,” the EU and U.S. note.

On platforms, an area where the EU is enforcing recently passed, wide-ranging legislation — including laws like the Digital Services Act (DSA) and Digital Markets Act — the two sides are united in calling for Big Tech to take protecting “information integrity” seriously.

The joint statement refers to 2024 as “a Pivotal Year for Democratic Resilience”, on account of the number of elections being held around the world. It includes an explicit warning about threats posed by AI-generated information, saying the two sides “share the concern that malign use of AI applications, such as the creation of harmful ‘deepfakes,’ poses new risks, including to further the spread and targeting of foreign information manipulation and interference”.

It goes on to discuss a number of areas of ongoing EU-U.S. cooperation on platform governance and includes a joint call for platforms to do more to support researchers’ access to data — especially for the study of societal risks (something the EU’s DSA makes a legal requirement for larger platforms).

On e-identity, the statement refers to ongoing collaboration on standards work, adding: “The next phase of this project will focus on identifying potential use cases for transatlantic interoperability and cooperation with a view toward enabling the cross-border use of digital identities and wallets.”

Other areas of cooperation the statement covers include clean energy, quantum and 6G.



India, grappling with election misinfo, weighs up labels and its own AI safety coalition | TechCrunch


India, long in the tooth when it comes to co-opting tech to persuade the public, has become a global hot spot for how AI is being used, and abused, in political discourse, and specifically the democratic process. Tech companies, which built the tools in the first place, are making trips to the country to push solutions.

Earlier this year, Andy Parsons, a senior director at Adobe who oversees its involvement in the cross-industry Content Authenticity Initiative (CAI), stepped into the whirlpool when he made a trip to India to visit with media and tech organizations in the country to promote tools that can be integrated into content workflows to identify and flag AI content.

“Instead of detecting what’s fake or manipulated, we as a society, and this is an international concern, should start to declare authenticity, meaning saying if something is generated by AI that should be known to consumers,” he said in an interview.

Parsons added that some Indian companies — currently not part of a Munich AI election safety accord signed by OpenAI, Adobe, Google and Amazon in February — intended to construct a similar alliance in the country.

“Legislation is a very tricky thing. To assume that the government will legislate correctly and rapidly enough in any jurisdiction is something hard to rely on. It’s better for the government to take a very steady approach and take its time,” he said.

Detection tools are famously inconsistent, but they are a start in fixing some of the problems, or so the argument goes.

“The concept is already well understood,” he said during his Delhi trip. “I’m helping raise awareness that the tools are also ready. It’s not just an idea. This is something that’s already deployed.”

Andy Parsons, senior director at Adobe. Image Credits: Adobe

The CAI — which promotes royalty-free, open standards for identifying if digital content was generated by a machine or a human — predates the current hype around generative AI: It was founded in 2019 and now has 2,500 members, including Microsoft, Meta, Google, The New York Times, The Wall Street Journal and the BBC.

Just as there is an industry growing around the business of leveraging AI to create media, there is a smaller one being created to try to course-correct some of the more nefarious applications of that.

So in February 2021, Adobe went a step further toward building one of those standards itself, co-founding the Coalition for Content Provenance and Authenticity (C2PA) with ARM, BBC, Intel, Microsoft and Truepic. The coalition aims to develop an open standard, which taps the metadata of images, videos, text and other media to highlight their provenance and tell people about the file’s origins, the location and time of its generation, and whether it was altered before it reached the user. The CAI works with C2PA to promote the standard and make it available to the masses.

Now it is actively engaging with governments like India’s to widen the adoption of that standard to highlight the provenance of AI content and participate with authorities in developing guidelines for AI’s advancement.

Adobe has nothing but also everything to lose by playing an active role in this game. It’s not — yet — acquiring or building large language models (LLMs) of its own, but as the home of apps like Photoshop and Lightroom, it’s the market leader in tools for the creative community, and so not only is it building new products like Firefly to generate AI content natively, but it is also infusing legacy products with AI. If the market develops as some believe it will, AI will be a must-have in the mix if Adobe wants to stay on top. If regulators (or common sense) have their way, Adobe’s future may well be contingent on how successful it is in making sure what it sells does not contribute to the mess.

The bigger picture in India in any case is indeed a mess.

Google focused on India as a test bed for how it will bar use of its generative AI tool Gemini when it comes to election content; parties are weaponizing AI to create memes with likenesses of opponents; Meta has set up a deepfake “helpline” for WhatsApp, such is the popularity of the messaging platform in spreading AI-powered missives; and at a time when countries are sounding increasingly alarmed about AI safety and what they have to do to ensure it, we’ll have to see what the impact will be of India’s government deciding in March to relax rules on how new AI models are built, tested and deployed. It’s certainly meant to spur more AI activity, at any rate.

Using its open standard, the C2PA has developed a digital nutrition label for content called Content Credentials. The CAI members are working to deploy the digital watermark on their content to let users know its origin and whether it is AI-generated. Adobe has Content Credentials across its creative tools, including Photoshop and Lightroom. It also automatically attaches to AI content generated by Adobe’s AI model Firefly. Last year, Leica launched its camera with Content Credentials built in, and Microsoft added Content Credentials to all AI-generated images created using Bing Image Creator.

Image Credits: Content Credentials
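At a high level, a Content Credential is signed provenance metadata bound to a media file: what produced it, when, whether AI was involved and what edits were applied. The sketch below assembles a simplified, JSON-style record to show the idea; real C2PA manifests are embedded in the file and cryptographically signed using the standard's own tooling, and the field names here are illustrative rather than taken from the specification.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_provenance_record(media_bytes: bytes, generator: str, ai_generated: bool, edits: list) -> str:
    """Assemble a simplified provenance record bound to the media by its hash.

    This flat JSON structure only illustrates the kind of information the
    'CR' marker surfaces to viewers; it is not the C2PA manifest format.
    """
    record = {
        "content_hash": hashlib.sha256(media_bytes).hexdigest(),  # binds the claim to this exact file
        "generator": generator,                                   # e.g. a camera model or an AI tool
        "ai_generated": ai_generated,
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "edit_history": edits,
        "signature": "<issuer signature would go here>",          # placeholder, no real signing
    }
    return json.dumps(record, indent=2)

print(build_provenance_record(b"...image bytes...", generator="ExampleAI image model",
                              ai_generated=True, edits=["generated", "cropped"]))
```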

Parsons told TechCrunch the CAI is talking with global governments on two areas: one is to help promote the standard as an international standard, and the other is to adopt it.

“In an election year, it’s especially critical for candidates, parties, incumbent offices and administrations who release material to the media and to the public all the time to make sure that it is knowable that if something is released from PM [Narendra] Modi’s office, it is actually from PM Modi’s office. There have been many incidents where that’s not the case. So, understanding that something is truly authentic for consumers, fact-checkers, platforms and intermediaries is very important,” he said.

India’s large population and its vast linguistic and demographic diversity make it challenging to curb misinformation, he added, which he sees as an argument for simple labels that can cut through the noise.

“That’s a little ‘CR’ … it’s two western letters like most Adobe tools, but this indicates there’s more context to be shown,” he said.

Controversy continues to surround what the real point might be behind tech companies supporting any kind of AI safety measure: Is it really about existential concern, or just having a seat at the table to give the impression of existential concern, all the while making sure their interests get safeguarded in the process of rule making?

“It’s generally not controversial with the companies who are involved, and all the companies who signed the recent Munich accord, including Adobe, who came together, dropped competitive pressures because these ideas are something that we all need to do,” he said in defense of the work.



EU and US set to announce joint working on AI safety, standards & R&D | TechCrunch


The European Union and the U.S. expect to announce a cooperation on AI at a meeting of the EU-U.S. Trade and Technology Council (TTC) on Friday, according to a senior commission official who was briefing journalists on background ahead of the confab.

The mood music points to growing cooperation between lawmakers on both sides of the Atlantic when it comes to devising strategies to respond to challenges and opportunities posed by powerful AI technologies — in spite of what remains a very skewed commercial picture where U.S. giants like OpenAI continue to dominate developments in cutting-edge AI.

The TTC was set up a few years ago, post-Trump, to provide a forum where EU and U.S. lawmakers could meet to discuss transatlantic cooperation on trade and tech policy issues. Friday’s meeting, the sixth since the forum started operating in 2021, will be the last before elections in both regions. The prospect of a second Trump presidency derailing future EU-U.S. cooperation may well be concentrating lawmakers’ minds on maximizing opportunities for joint working now.

“There will be certainly an announcement at the TTC around the AI Office and the [U.S.] AI Safety Institute,” the senior commission official said, referencing an EU oversight body that’s in the process of being set up as part of the incoming EU AI Act, a comprehensive risk-based framework for regulating AI apps that will start to apply across the bloc later this year.

This element of the incoming accord — seemingly set to be focused on AI safety or oversight — is being framed as a “collaboration or dialogue” between the respective AI oversight bodies in the EU and the U.S. to bolster implementation of regulatory powers on AI, per the official.

A second area of focus for the expected EU-U.S. AI agreement will be around standardization, they said. This will take the form of joint working aimed at developing standards that can underpin developments by establishing an “AI roadmap.”

The EU-U.S. partnership will also have a third element — “AI for public good”. This concerns joint work on fostering research activities but with a focus on implementing AI technologies in developing countries and the global south, per the commission.

The official suggested there’s a shared perspective that AI technologies will be able to bring “very quantifiable” benefits to developing regions — in areas like healthcare, agriculture and energy. So this is also set to be an area of focus for transatlantic collaboration on fostering uptake of AI in the near term.

‘AI’ stands for aligned interests?

AI is no longer being seen as a trade issue by the U.S., as the EU tells it. “Through the TTC, we have been able to explain our policies, and also to show to the Americans that, in fact, we have the same goals,” the commission official suggested. “Through the AI Act and through the [AI safety- and security-focused] Executive Order — which is to mitigate the risks of AI technologies while supporting their uptake in our economies.”

Earlier this week the U.S. and the U.K. signed a partnership agreement on AI safety. The EU-U.S. collaboration appears to be wider ranging, though, as it’s slated to cover not only shared safety and standardization goals but also joint support for “public good” research.

The commission official teased additional areas of collaboration on emerging technologies — including standardization work in the area of electronic identity (where the EU has been developing an e-ID proposal for several years) that they suggested will also be announced Friday. “Electronic identity is a very strong area of cooperation with a lot of potential,” they said, claiming the U.S. is interested in “vast new business opportunities” created by the EU’s electronic identity wallet.

The official also suggested there is growing accord between the EU and U.S. on how to handle platform power — another area where the EU has targeted lawmaking in recent years. “We see a lot of commonalities [between EU laws like the DMA, aka Digital Markets Act] with the recent antitrust cases that are being launched also in the United States,” said the official. “I think in many of these areas there is no doubt that there is a win-win opportunity.”

Meanwhile, the U.S.-U.K. AI memorandum of understanding signed Monday in Washington by U.S. Commerce Secretary Gina Raimondo and the U.K.’s secretary of state for technology, Michelle Donelan, states the pair will aim to accelerate joint working on a range of AI safety issues, including in the area of national security as well as broader societal AI safety concerns.

The U.S.-U.K. agreement mentions at least one joint testing exercise on a publicly accessible AI model, the U.K.’s Department for Science, Innovation and Technology (DSIT) said in a press release. It also suggested there could be personnel exchanges between the two countries’ respective AI safety institutes to collaborate on expertise-sharing.

Wider information-sharing is envisaged under the U.S.-U.K. agreement — about “capabilities and risks” associated with AI models and systems, and on “fundamental technical research on AI safety and security”. “This will work to underpin a common approach to AI safety testing, allowing researchers on both sides of the Atlantic — and around the world — to coalesce around a common scientific foundation,” DSIT’s PR continued.

Last summer, ahead of hosting a global AI summit, the U.K. government said it had obtained a commitment from U.S. AI giants Anthropic, DeepMind and OpenAI to provide “early or priority access” to their AI models to support research into evaluation and safety. It also announced a plan to spend £100 million on an AI safety taskforce which it said would be focused on so-called foundational AI models.

At the U.K. AI Summit last November, Raimondo announced the creation of a U.S. AI safety institute on the heels of the U.S. executive order on AI. This new institute will be housed within her department, under the National Institute of Standards and Technology, which she said would aim to work closely with other AI safety groups set up by other governments.

Neither the U.S. nor the U.K. has proposed comprehensive legislation on AI safety as yet — with the EU remaining ahead of the pack when it comes to legislating on AI safety. But more cross-border joint working looks like a given.

