Tag: safety

Hinge adds a way to mute requests containing words you specify | TechCrunch


Hinge is adding a “Hidden Words” feature to its app that filters out likes with comments containing words or phrases you specify. It works much like a mute filter on a social media app.

On Hinge, users can send a message when they like a profile, and that message can contain any text. Rivals have similar features; Bumble, for example, introduced Compliments in 2022. With this new ability to hide words, Match Group-owned Hinge is trying to cut down on harassment delivered through those messages.

If a like with a comment contains any of the words you added to the mute list, the profile doesn’t appear in your usual like list. Instead, it shows up in a separate “Hidden Likes” section. This is reminiscent of the message request inbox on Twitter/X or Instagram. You can review these comments or delete these requests directly.

Image Credits: Hinge

Users can add words, phrases, or emojis to their hidden words list. Hinge said users can add up to 1,000 hidden words to their profile.
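
Hinge hasn’t shared implementation details, but the behavior it describes maps onto a simple screening step: check each incoming like’s comment against the recipient’s hidden-words list and route any match to the “Hidden Likes” queue instead of the main list. A minimal sketch in Python, with hypothetical names that are not Hinge’s API, might look like this:

import re
from dataclasses import dataclass, field

@dataclass
class LikeWithComment:
    sender: str
    comment: str

@dataclass
class LikesInbox:
    hidden_words: set[str]  # words, phrases, or emojis the user has muted
    likes: list[LikeWithComment] = field(default_factory=list)
    hidden_likes: list[LikeWithComment] = field(default_factory=list)

    def receive(self, like: LikeWithComment) -> None:
        text = like.comment.lower()
        for term in (t.lower() for t in self.hidden_words):
            if term.isalnum():
                # Single words match on word boundaries to avoid substring
                # false positives (the classic Scunthorpe problem).
                hit = re.search(rf"\b{re.escape(term)}\b", text) is not None
            else:
                # Phrases and emojis match anywhere in the comment.
                hit = term in text
            if hit:
                self.hidden_likes.append(like)  # routed to "Hidden Likes"
                return
        self.likes.append(like)  # shown in the usual likes list

inbox = LikesInbox(hidden_words={"gym", "😘"})
inbox.receive(LikeWithComment("a", "hey gym buddy"))
inbox.receive(LikeWithComment("b", "loved your hiking photo"))
assert len(inbox.hidden_likes) == 1 and len(inbox.likes) == 1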

Hinge’s Jeff Dunn told TechCrunch over a call that the company has been testing a bunch of features for its safety toolkit, and “Hidden Words” proved to be the first choice for a public release. Dunn hinted that this feature will have an expanded scope in the future and maybe even include filtering for chats.

“We have a roadmap for Hidden Words that involves expanding its abilities, flexibility, and coverage. We are currently researching how we can improve the feature while also understanding what people want out of it with the first release,” he said.



Meta will auto-blur nudity in Instagram DMs in latest teen safety step | TechCrunch


Meta said on Thursday that it is testing new features on Instagram intended to help safeguard young people from unwanted nudity or sextortion scams. This includes a feature called “Nudity Protection in DMs,” which automatically blurs images detected as containing nudity.

The tech giant said it will also nudge teens to protect themselves by serving a warning encouraging them to think twice about sharing intimate images. Meta hopes this will boost protection against scammers who may send nude images to trick people into sending their own images in return.

The company said it is also implementing changes that will make it more difficult for potential scammers and criminals to find and interact with teens. Meta said it is developing new technology to identify accounts that are “potentially” involved in sextortion scams, and will apply limits on how these suspect accounts can interact with other users.

In another step announced on Thursday, Meta said it has increased the data it is sharing with the cross-platform online child safety program, Lantern, to include more “sextortion-specific signals.”

The social networking giant has had long-standing policies that ban people from sending unwanted nudes or seeking to coerce others into sharing intimate images. However, that doesn’t stop these problems from occurring and causing misery for scores of teens and young people — sometimes with extremely tragic results.

We’ve rounded up the latest crop of changes in more detail below.

Nudity screens

Nudity Protection in DMs aims to protect teen users of Instagram from cyberflashing by putting nude images behind a safety screen. Users will be able to choose whether or not to view such images.

“We’ll also show them a message encouraging them not to feel pressure to respond, with an option to block the sender and report the chat,” said Meta.

The nudity safety screen will be turned on by default for users under 18 globally. Older users will see a notification encouraging them to turn the feature on.

“When nudity protection is turned on, people sending images containing nudity will see a message reminding them to be cautious when sending sensitive photos, and that they can unsend these photos if they’ve changed their mind,” the company added.

Anyone trying to forward a nude image will see the same warning encouraging them to reconsider.

The feature is powered by on-device machine learning, so Meta said it will work within end-to-end encrypted chats because the image analysis is carried out on the user’s own device.
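
Meta hasn’t described the model itself, but the design it outlines amounts to scoring each incoming image on the device and only then deciding whether to render it behind a blur. A rough, hypothetical sketch of that client-side flow, with a stand-in classifier and an illustrative threshold (the real values aren’t public):

from dataclasses import dataclass
from typing import Callable, Optional

NUDITY_THRESHOLD = 0.8  # illustrative only; Meta hasn't published a threshold

@dataclass
class ScreenedImage:
    data: bytes
    blurred: bool = False
    warning: Optional[str] = None

def screen_incoming(image_bytes: bytes,
                    classifier: Callable[[bytes], float]) -> ScreenedImage:
    """Score the image with the on-device classifier before rendering it.
    Because scoring happens locally, the plaintext never has to leave the
    end-to-end encrypted chat."""
    image = ScreenedImage(data=image_bytes)
    if classifier(image_bytes) >= NUDITY_THRESHOLD:
        image.blurred = True  # shown behind the safety screen
        image.warning = ("This photo may contain nudity. You can view it, "
                         "block the sender, or report the chat.")
    return image

# Usage with a dummy classifier standing in for the real on-device model:
result = screen_incoming(b"...", classifier=lambda _: 0.93)
assert result.blurred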

The nudity filter has been in development for nearly two years.

Safety tips

In another safeguarding measure, Instagram users who send or receive nudes will be directed to safety tips (with information about the potential risks involved), which, according to Meta, have been developed with guidance from experts.

“These tips include reminders that people may screenshot or forward images without your knowledge, that your relationship to the person may change in the future, and that you should review profiles carefully in case they’re not who they say they are,” the company wrote in a statement. “They also link to a range of resources, including Meta’s Safety Center, support helplines, StopNCII.org for those over 18, and Take It Down for those under 18.”

The company is also testing showing pop-up messages to people who may have interacted with an account that has been removed for sextortion. These pop-ups will also direct users to relevant resources.

“We’re also adding new child safety helplines from around the world into our in-app reporting flows. This means when teens report relevant issues — such as nudity, threats to share private images or sexual exploitation or solicitation — we’ll direct them to local child safety helplines where available,” the company said.

Tech to spot sextortionists

While Meta says it removes sextortionists’ accounts when it becomes aware of them, it first needs to spot bad actors to shut them down. So, the company is trying to go further by “developing technology to help identify where accounts may potentially be engaging in sextortion scams, based on a range of signals that could indicate sextortion behavior.”

“While these signals aren’t necessarily evidence that an account has broken our rules, we’re taking precautionary steps to help prevent these accounts from finding and interacting with teen accounts,” the company said. “This builds on the work we already do to prevent other potentially suspicious accounts from finding and interacting with teens.”

It’s not clear what technology Meta is using to do this analysis, nor which signals might denote a potential sextortionist (we’ve asked for more details). Presumably, the company may analyze patterns of communication to try to detect bad actors.

Accounts that get flagged by Meta as potential sextortionists will face restrictions on messaging or interacting with other users.

“[A]ny message requests potential sextortion accounts try to send will go straight to the recipient’s hidden requests folder, meaning they won’t be notified of the message and never have to see it,” the company wrote.

Users who are already chatting with potential scam or sextortion accounts will not have their chats shut down, but will be shown Safety Notices “encouraging them to report any threats to share their private images, and reminding them that they can say ‘no’ to anything that makes them feel uncomfortable,” according to the company.

Teen users are already protected from receiving DMs from adults they are not connected with on Instagram (and also from other teens, in some cases). But Meta is taking this a step further: The company said it is testing a feature that hides the “Message” button on teenagers’ profiles for potential sextortion accounts — even if they’re connected.

“We’re also testing hiding teens from these accounts in people’s follower, following and like lists, and making it harder for them to find teen accounts in Search results,” it added.

It’s worth noting the company is under increasing scrutiny in Europe over child safety risks on Instagram, and enforcers have questioned its approach since the bloc’s Digital Services Act (DSA) came into force last summer.

A long, slow creep towards safety

Meta has announced measures to combat sextortion before — most recently in February, when it expanded access to Take It Down. The third-party tool lets people generate a hash of an intimate image locally on their own device and share it with the National Center for Missing and Exploited Children, helping to create a repository of non-consensual image hashes that companies can use to search for and remove revenge porn.
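
The key property of Take It Down is that only a fingerprint of the image ever leaves the device. A rough sketch of the idea, using SHA-256 purely for illustration (the real service defines its own hashing scheme, and production matching typically also relies on perceptual hashes so re-encoded copies still match):

import hashlib

def local_hash(image_bytes: bytes) -> str:
    """Computed on the user's own device; only this fingerprint is shared
    with the registry, never the image itself."""
    return hashlib.sha256(image_bytes).hexdigest()

# A participating platform compares hashes of content it hosts against the
# shared registry and removes matches without ever seeing the original image.
registry = {local_hash(b"<bytes of the image the user wants taken down>")}

def should_remove(candidate: bytes) -> bool:
    return local_hash(candidate) in registry

assert should_remove(b"<bytes of the image the user wants taken down>")
assert not should_remove(b"some unrelated photo")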

The company’s previous approaches to tackle that problem had been criticized, as they required young people to upload their nudes. In the absence of hard laws regulating how social networks need to protect children, Meta was left to self-regulate for years — with patchy results.

However, some requirements have landed on platforms in recent years — such as the U.K.’s Children Code (which came into force in 2021) and the more recent DSA in the EU — and tech giants like Meta are finally having to pay more attention to protecting minors.

For example, in July 2021, Meta started defaulting young people’s Instagram accounts to private just ahead of the U.K. compliance deadline. Even tighter privacy settings for teens on Instagram and Facebook followed in November 2022.

This January, the company announced it would set stricter default messaging settings for teens on Facebook and Instagram, shortly before the DSA’s full compliance deadline arrived in February.

This slow and iterative feature creep at Meta concerning protective measures for young users raises questions about why it took the company so long to apply stronger safeguards. It suggests Meta opted for a cynical minimum of safeguarding in a bid to manage the impact on usage and to prioritize engagement over safety. That is exactly what Meta whistleblower Frances Haugen repeatedly denounced her former employer for.

Asked why the company is not also rolling out these new protections to Facebook, a spokeswoman for Meta told TechCrunch, “We want to respond to where we see the biggest need and relevance — which, when it comes to unwanted nudity and educating teens on the risks of sharing sensitive images — we think is on Instagram DMs, so that’s where we’re focusing first.”



Exclusive: Life360 launches flight landing notifications to alert friends and family


Family location services company Life360 has launched a new notification for its apps to automatically alert friends and family when you reach a destination after taking a flight.

Life360 said that the feature uses phone sensors to measure location, altitude and speed to determine if you are taking a flight. Plus, its algorithms can detect takeoff and landing times, and alert family members when you connect to the network post-landing.
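
Life360 hasn’t published its detection logic, but the description above (location, altitude and speed from phone sensors, plus network reconnection after landing) points to a threshold-style heuristic along these lines. The values and names below are illustrative guesses, not Life360’s actual algorithm:

from dataclasses import dataclass
from typing import Iterable, Optional, Tuple

# Illustrative thresholds; commercial aircraft cruise far above any ground travel.
CRUISE_SPEED_MPS = 150.0     # ~540 km/h
CRUISE_ALTITUDE_M = 3000.0

@dataclass
class SensorSample:
    timestamp: float   # seconds since epoch
    speed_mps: float
    altitude_m: float
    has_network: bool  # phones reconnect shortly after landing

def detect_flight(samples: Iterable[SensorSample]) -> Optional[Tuple[float, float]]:
    """Return (takeoff_time, landing_time) if the stream looks like a flight:
    sustained high speed and altitude, then a return to the ground with the
    network back, which is when the landing alert would go out to the circle."""
    takeoff = None
    in_flight = False
    for s in samples:
        airborne = s.speed_mps > CRUISE_SPEED_MPS and s.altitude_m > CRUISE_ALTITUDE_M
        if airborne and not in_flight:
            in_flight, takeoff = True, s.timestamp
        elif in_flight and not airborne and s.has_network:
            return takeoff, s.timestamp
    return None

samples = [
    SensorSample(0, 3.0, 20.0, True),         # at the gate
    SensorSample(600, 220.0, 9000.0, False),  # cruising
    SensorSample(7200, 5.0, 15.0, True),      # landed, back online
]
assert detect_flight(samples) == (600, 7200)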

The company said the landing notification feature is a useful alternative to online flight trackers or waiting for the traveler to send updates to their circle. The feature is enabled for all users with the latest app update and can be turned off through the Flight Detection toggle in the settings.

Life360, which has more than 66 million active users on its platform, said that people in a traveling user’s circle will see a plane icon as a movement indicator, joining the existing set of activity indicators such as walking, running, biking and driving.

The company’s CEO Chris Hulls told TechCrunch that the company wants to focus on safety and protection updates for users’ inner circle. He said that comparatively, Apple’s solution is very generic. Additionally, he noted that Life360 has the advantage of being on both iOS and Android.

Hulls said the company is looking to launch a new lineup of Tile devices this year (Life360 acquired Tile in 2021 for $205 million), though he did not share further details.

“We are going to make a unified hardware lineup that is far more robust than Apple, which is one size fits all. We will have Bluetooth tags and GPS devices with LTE connections, and it will be a more holistic service,” he said.

Life360 launched its premium membership in Canada in 2022 and in the U.K. in 2023. This year, the company aims to expand the paid tier to Australia.



US and EU commit to links aimed at boosting AI safety and risk research | TechCrunch


The European Union and United States put out a joint statement Friday affirming a desire to increase cooperation over artificial intelligence. The agreement covers AI safety and governance, but also, more broadly, an intent to collaborate across a number of other tech issues, such as developing digital identity standards and applying pressure on platforms to defend human rights.

As we reported Wednesday, this is the fruit of the sixth (and possibly last) meeting of the EU-U.S. Trade and Technology Council (TTC). The TTC has been meeting since 2021 in a bid to rebuild transatlantic relations battered by the Trump presidency.

Given the possibility of Donald Trump returning to the White House in the U.S. presidential elections taking place later this year, it’s not clear how much EU-U.S. cooperation on AI or any other strategic tech area will actually happen in the near future.

But, under the current political make-up across the Atlantic, the will to push for closer alignment across a range of tech issues has gained in strength. There is also a mutual desire to get this message heard — hence today’s joint statement — which is itself, perhaps, also a wider appeal aimed at each side’s voters to opt for a collaborative program, rather than a destructive opposite, come election time.

An AI dialogue

In a section of the joint statement focused on AI, filed under a heading of “Advancing Transatlantic Leadership on Critical and Emerging Technologies”, the pair write that they “reaffirm our commitment to a risk-based approach to artificial intelligence… and to advancing safe, secure, and trustworthy AI technologies.”

“We encourage advanced AI developers in the United States and Europe to further the application of the Hiroshima Process International Code of Conduct for Organisations Developing Advanced AI Systems which complements our respective governance and regulatory systems,” the statement also reads, referencing a set of risk-based recommendations that came out of G7 discussions on AI last year.

The main development out of the sixth TTC meeting appears to be a commitment from EU and U.S. AI oversight bodies, the European AI Office and the U.S. AI Safety Institute, to set up what’s couched as “a Dialogue.” The aim is a deeper collaboration between the AI institutions, with a particular focus on encouraging the sharing of scientific information among respective AI research ecosystems.

Topics highlighted here include benchmarks, potential risks and future technological trends.

“This cooperation will contribute to making progress with the implementation of the Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management, which is essential to minimise divergence as appropriate in our respective emerging AI governance and regulatory systems, and to cooperate on interoperable and international standards,” the two sides go on to suggest.

The statement also flags an updated version of a list of key AI terms, with “mutually accepted joint definitions” as another outcome from ongoing stakeholder talks flowing from the TTC.

Agreement on definitions will be a key piece of the puzzle to support work toward AI standardization.

A third element of what’s been agreed by the EU and U.S. on AI shoots for collaboration to drive research aimed at applying machine learning technologies to beneficial use cases, such as advancing healthcare outcomes, boosting agriculture and tackling climate change, with a particular focus on sustainable development. In a briefing with journalists earlier this week, a senior commission official suggested this element of the joint working will focus on bringing AI advancements to developing countries and the global south.

“We are advancing on the promise of AI for sustainable development in our bilateral relationship through joint research cooperation as part of the Administrative Arrangement on Artificial Intelligence and computing to address global challenges for the public good,” the joint statement reads. “Working groups jointly staffed by United States science agencies and European Commission departments and agencies have achieved substantial progress by defining critical milestones for deliverables in the areas of extreme weather, energy, emergency response, and reconstruction. We are also making constructive progress in health and agriculture.”

In addition, an overview document on the collaboration around AI for the public good was published Friday. Per the document, multidisciplinary teams from the EU and U.S. have spent more than 100 hours in scientific meetings over the past half-year “discussing how to advance applications of AI in on-going projects and workstreams”.

“The collaboration is making positive strides in a number of areas in relation to challenges like energy optimisation, emergency response, urban reconstruction, and extreme weather and climate forecasting,” it continues, adding: “In the coming months, scientific experts and ecosystems in the EU and the United States intend to continue to advance their collaboration and present innovative research worldwide. This will unlock the power of AI to address global challenges.”

According to the joint statement, there is a desire to expand collaboration efforts in this area by adding more global partners.

“We will continue to explore opportunities with our partners in the United Kingdom, Canada, and Germany in the AI for Development Donor Partnership to accelerate and align our foreign assistance in Africa to support educators, entrepreneurs, and ordinary citizens to harness the promise of AI,” the EU and U.S. note.

On platforms, an area where the EU is enforcing recently passed, wide-ranging legislation — including laws like the Digital Services Act (DSA) and Digital Markets Act — the two sides are united in calling for Big Tech to take protecting “information integrity” seriously.

The joint statement refers to 2024 as “a Pivotal Year for Democratic Resilience”, on account of the number of elections being held around the world. It includes an explicit warning about threats posed by AI-generated information, saying the two sides “share the concern that malign use of AI applications, such as the creation of harmful ‘deepfakes,’ poses new risks, including to further the spread and targeting of foreign information manipulation and interference”.

It goes on to discuss a number of areas of ongoing EU-U.S. cooperation on platform governance and includes a joint call for platforms to do more to support researchers’ access to data — especially for the study of societal risks (something the EU’s DSA makes a legal requirement for larger platforms).

On e-identity, the statement refers to ongoing collaboration on standards work, adding: “The next phase of this project will focus on identifying potential use cases for transatlantic interoperability and cooperation with a view toward enabling the cross-border use of digital identities and wallets.”

Other areas of cooperation the statement covers include clean energy, quantum and 6G.



India, grappling with election misinfo, weighs up labels and its own AI safety coalition | TechCrunch


India, which has a long history of co-opting tech to persuade the public, has become a global hot spot for how AI is being used, and abused, in political discourse, and specifically in the democratic process. Tech companies, which built the tools in the first place, are making trips to the country to push solutions.

Earlier this year, Andy Parsons, a senior director at Adobe who oversees its involvement in the cross-industry Content Authenticity Initiative (CAI), stepped into the whirlpool when he made a trip to India to visit with media and tech organizations in the country to promote tools that can be integrated into content workflows to identify and flag AI content.

“Instead of detecting what’s fake or manipulated, we as a society, and this is an international concern, should start to declare authenticity, meaning saying if something is generated by AI that should be known to consumers,” he said in an interview.

Parsons added that some Indian companies — currently not part of a Munich AI election safety accord signed by OpenAI, Adobe, Google and Amazon in February — intended to construct a similar alliance in the country.

“Legislation is a very tricky thing. To assume that the government will legislate correctly and rapidly enough in any jurisdiction is something hard to rely on. It’s better for the government to take a very steady approach and take its time,” he said.

Detection tools are famously inconsistent, but they are a start in fixing some of the problems, or so the argument goes.

“The concept is already well understood,” he said during his Delhi trip. “I’m helping raise awareness that the tools are also ready. It’s not just an idea. This is something that’s already deployed.”

Andy Parsons, senior director at Adobe. Image Credits: Adobe

The CAI — which promotes royalty-free, open standards for identifying whether digital content was generated by a machine or a human — predates the current hype around generative AI: It was founded in 2019 and now has 2,500 members, including Microsoft, Meta and Google, as well as The New York Times, The Wall Street Journal and the BBC.

Just as there is an industry growing around the business of leveraging AI to create media, there is a smaller one being created to try to course-correct some of the more nefarious applications of that.

So, in February 2021, Adobe went a step further and helped build one of those standards itself, co-founding the Coalition for Content Provenance and Authenticity (C2PA) with ARM, BBC, Intel, Microsoft and Truepic. The coalition aims to develop an open standard that taps the metadata of images, videos, text and other media to highlight their provenance and tell people about a file’s origins, the location and time of its generation, and whether it was altered before it reached the user. The CAI works with C2PA to promote the standard and make it available to the masses.
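
Real Content Credentials take the form of a cryptographically signed C2PA manifest embedded in the file; the sketch below is not that format, just an illustration of the kinds of fields the standard is designed to carry (issuer, generating tool, time of creation, whether AI was involved, and any later alterations). All names here are hypothetical:

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class ProvenanceRecord:
    issuer: str            # who signs the claim, e.g. a newsroom, camera maker or app
    generator: str         # the tool that produced the asset
    created_at: str        # time of generation
    ai_generated: bool     # the "made with AI" disclosure surfaced to viewers
    edits: List[str] = field(default_factory=list)  # alterations applied after creation

record = ProvenanceRecord(
    issuer="Example News Org",   # hypothetical signer
    generator="Adobe Firefly",   # Firefly output gets credentials attached automatically
    created_at=datetime.now(timezone.utc).isoformat(),
    ai_generated=True,
)
record.edits.append("cropped")   # later edits are appended so the history stays visible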

Now it is actively engaging with governments like India’s to widen the adoption of that standard to highlight the provenance of AI content and participate with authorities in developing guidelines for AI’s advancement.

Adobe has nothing but also everything to lose by playing an active role in this game. It’s not — yet — acquiring or building large language models (LLMs) of its own, but as the home of apps like Photoshop and Lightroom, it’s the market leader in tools for the creative community, and so not only is it building new products like Firefly to generate AI content natively, but it is also infusing legacy products with AI. If the market develops as some believe it will, AI will be a must-have in the mix if Adobe wants to stay on top. If regulators (or common sense) have their way, Adobe’s future may well be contingent on how successful it is in making sure what it sells does not contribute to the mess.

The bigger picture in India in any case is indeed a mess.

Google focused on India as a test bed for how it will bar use of its generative AI tool Gemini when it comes to election content; parties are weaponizing AI to create memes with likenesses of opponents; Meta has set up a deepfake “helpline” for WhatsApp, such is the popularity of the messaging platform in spreading AI-powered missives; and at a time when countries are sounding increasingly alarmed about AI safety and what they have to do to ensure it, we’ll have to see what the impact will be of India’s government deciding in March to relax rules on how new AI models are built, tested and deployed. It’s certainly meant to spur more AI activity, at any rate.

Using its open standard, the C2PA has developed a digital nutrition label for content called Content Credentials. The CAI members are working to deploy the digital watermark on their content to let users know its origin and whether it is AI-generated. Adobe has Content Credentials across its creative tools, including Photoshop and Lightroom. It also automatically attaches to AI content generated by Adobe’s AI model Firefly. Last year, Leica launched its camera with Content Credentials built in, and Microsoft added Content Credentials to all AI-generated images created using Bing Image Creator.

Image Credits: Content Credentials

Parsons told TechCrunch the CAI is talking with global governments on two areas: one is to help promote the standard as an international standard, and the other is to adopt it.

“In an election year, it’s especially critical for candidates, parties, incumbent offices and administrations who release material to the media and to the public all the time to make sure that it is knowable that if something is released from PM [Narendra] Modi’s office, it is actually from PM Modi’s office. There have been many incidents where that’s not the case. So, understanding that something is truly authentic for consumers, fact-checkers, platforms and intermediaries is very important,” he said.

India’s large population and vast linguistic and demographic diversity make misinformation challenging to curb, he added, which he sees as an argument for simple labels that can cut through the noise.

“That’s a little ‘CR’ … it’s two western letters like most Adobe tools, but this indicates there’s more context to be shown,” he said.

Controversy continues to surround what the real point might be behind tech companies supporting any kind of AI safety measure: Is it really about existential concern, or just having a seat at the table to give the impression of existential concern, all the while making sure their interests get safeguarded in the process of rule making?

“It’s generally not controversial with the companies who are involved, and all the companies who signed the recent Munich accord, including Adobe, who came together, dropped competitive pressures because these ideas are something that we all need to do,” he said in defense of the work.



EU and US set to announce joint working on AI safety, standards & R&D | TechCrunch


The European Union and the U.S. expect to announce a cooperation on AI at a meeting of the EU-U.S. Trade and Technology Council (TTC) on Friday, according to a senior commission official who was briefing journalists on background ahead of the confab.

The mood music points to growing cooperation between lawmakers on both sides of the Atlantic when it comes to devising strategies to respond to challenges and opportunities posed by powerful AI technologies — in spite of what remains a very skewed commercial picture where U.S. giants like OpenAI continue to dominate developments in cutting-edge AI.

The TTC was set up a few years ago, post-Trump, to provide a forum where EU and U.S. lawmakers could meet to discuss transatlantic cooperation on trade and tech policy issues. Friday’s meeting, the sixth since the forum started operating in 2021, will be the last before elections in both regions. The prospect of a second Trump presidency derailing future EU-U.S. cooperation may well be concentrating lawmakers’ minds on maximizing opportunities for joint working now.

“There will be certainly an announcement at the TTC around the AI Office and the [U.S.] AI Safety Institute,” the senior commission official said, referencing an EU oversight body that’s in the process of being set up as part of the incoming EU AI Act, a comprehensive risk-based framework for regulating AI apps that will start to apply across the bloc later this year.

This element of the incoming accord — seemingly set to be focused on AI safety or oversight — is being framed as a “collaboration or dialogue” between the respective AI oversight bodies in the EU and the U.S. to bolster implementation of regulatory powers on AI, per the official.

A second area of focus for the expected EU-U.S. AI agreement will be around standardization, they said. This will take the form of joint working aimed at developing standards that can underpin developments by establishing an “AI roadmap.”

The EU-U.S. partnership will also have a third element — “AI for public good”. This concerns joint work on fostering research activities but with a focus on implementing AI technologies in developing countries and the global south, per the commission.

The official suggested there’s a shared perspective that AI technologies will be able to bring “very quantifiable” benefits to developing regions — in areas like healthcare, agriculture and energy. So this is also set to be an area of focus for transatlantic collaboration on fostering uptake of AI in the near term.

‘AI’ stands for aligned interests?

AI is no longer being seen as a trade issue by the U.S., as the EU tells it. “Through the TTC, we have been able to explain our policies, and also to show to the Americans that, in fact, we have the same goals,” the commission official suggested. “Through the AI Act and through the [AI safety- and security-focused] Executive Order — which is to mitigate the risks of AI technologies while supporting their uptake in our economies.”

Earlier this week, the U.S. and the U.K. signed a partnership agreement on AI safety, although the EU-U.S. collaboration appears to be more wide-ranging, as it is slated to cover not just shared safety and standardization goals but also joint support for “public good” research.

The commission official teased additional areas of collaboration on emerging technologies — including standardization work in the area of electronic identity (where the EU has been developing an e-ID proposal for several years) that they suggested will also be announced Friday. “Electronic identity is a very strong area of cooperation with a lot of potential,” they said, claiming the U.S. is interested in “vast new business opportunities” created by the EU’s electronic identity wallet.

The official also suggested there is growing accord between the EU and U.S. on how to handle platform power — another area where the EU has targeted lawmaking in recent years. “We see a lot of commonalities [between EU laws like the DMA, aka Digital Markets Act] with the recent antitrust cases that are being launched also in the United States,” said the official. “I think in many of these areas there is no doubt that there is a win-win opportunity.”

Meanwhile, the U.S.-U.K. AI memorandum of understanding signed Monday in Washington by U.S. Commerce Secretary Gina Raimondo and the U.K.’s secretary of state for technology, Michelle Donelan, states the pair will aim to accelerate joint working on a range of AI safety issues, including in the area of national security as well as broader societal AI safety concerns.

The U.S.-U.K. agreement mentions at least one joint testing exercise on a publicly accessible AI model, the U.K.’s Department for Science, Innovation and Technology (DSIT) said in a press release. It also suggested there could be personnel exchanges between the two countries’ respective AI safety institutes to collaborate on expertise sharing.

Wider information-sharing is envisaged under the U.S.-U.K. agreement — about “capabilities and risks” associated with AI models and systems, and on “fundamental technical research on AI safety and security”. “This will work to underpin a common approach to AI safety testing, allowing researchers on both sides of the Atlantic — and around the world — to coalesce around a common scientific foundation,” DSIT’s PR continued.

Last summer, ahead of hosting a global AI summit, the U.K. government said it had obtained a commitment from U.S. AI giants Anthropic, DeepMind and OpenAI to provide “early or priority access” to their AI models to support research into evaluation and safety. It also announced a plan to spend £100 million on an AI safety taskforce which it said would be focused on so-called foundational AI models.

At the U.K. AI Summit last November, Raimondo announced the creation of a U.S. AI safety institute on the heels of the U.S. executive order on AI. This new institute will be housed within her department, under the National Institute of Standards and Technology, which she said would aim to work closely with other AI safety groups set up by other governments.

Neither the U.S. nor the U.K. has proposed comprehensive legislation on AI safety as yet — with the EU remaining ahead of the pack when it comes to legislating in this area. But more cross-border joint working looks like a given.

