From Digital Age to Nano Age. WorldWide.


Ofcom to push for better age verification, filters and 40 other checks in new online child safety code | TechCrunch


Ofcom is cracking down on Instagram, YouTube and 150,000 other web services to improve child safety online. A new Children’s Safety Code from the U.K. Internet regulator will push tech firms to run better age checks, filter and downrank content, and apply around 40 other measures to assess and mitigate harmful content on subjects like suicide, self-harm and pornography, in order to reduce under-18s’ access to it. Currently in draft form and open for feedback until July 17, enforcement of the Code is expected to kick in next year, after Ofcom publishes the final version in the spring. Firms will then have three months to complete their inaugural child safety risk assessments once the final Children’s Safety Code is published.

The Code is significant because it could force a step-change in how Internet companies approach online safety. The government has repeatedly said it wants the U.K. to be the safest place in the world to go online. Whether it will be any more successful at preventing digital slurry from pouring into kids’ eyeballs than it has been at stopping actual shit from polluting the country’s waterways remains to be seen. Critics of the approach suggest the law will burden tech firms with crippling compliance costs and make it harder for citizens to access certain types of information.

Meanwhile, failure to comply with the Online Safety Act can have serious consequences for web services large and small with U.K. users, with fines of up to 10% of global annual turnover for violations, and even criminal liability for senior managers in certain scenarios.

The guidance puts a big focus on stronger age verification. Following on from last year’s draft guidance on age assurance for porn sites, age verification and estimation technologies deemed “accurate, robust, reliable and fair” will be applied to a wider range of services as part of the plan. Photo-ID matching, facial age estimation and reusable digital identity services are in; self-declaration of age and contractual restrictions on the use of services by children are out.

That suggests Brits may need to get accustomed to proving their age before they access a range of online content — though how exactly platforms and services will respond to their legal duty to protect children will be for private companies to decide: that’s the nature of the guidance here.

The draft proposal also sets out specific rules on how content is handled. Suicide, self-harm and pornography content — deemed the most harmful — will have to be actively filtered (i.e. removed) so minors do not see it. Ofcom wants other types of content such as violence to be downranked and made far less visible in children’s feeds. Ofcom also said it may expect services to act on potentially harmful content (e.g. depression content). The regulator told TechCrunch it will encourage firms to pay particular attention to the “volume and intensity” of what kids are exposed to as they design safety interventions. All of this demands services be able to identify child users — again pushing robust age checks to the fore.

Ofcom previously named child safety as its first priority in enforcing the UK’s Online Safety Act — a sweeping content moderation and governance rulebook that touches on harms as diverse as online fraud and scam ads; cyberflashing and deepfake revenge porn; animal cruelty; and cyberbullying and trolling, as well as regulating how services tackle illegal content like terrorism and child sexual abuse material (CSAM).

The Online Safety Bill passed last fall, and now the regulator is busy with the process of implementation, which includes designing and consulting on detailed guidance ahead of its enforcement powers kicking in once parliament approves Codes of Practice it’s cooking up.

With Ofcom estimating around 150,000 web services in scope of the Online Safety Act, scores of tech firms will, at the least, have to assess whether children are accessing their services and, if so, take steps to identify and mitigate a range of safety risks. The regulator said it’s already working with some larger social media platforms where safety risks are likely to be greatest, such as Facebook and Instagram, to help them design their compliance plans.

Consultation on the Children’s Safety Code

In all, Ofcom’s draft Children’s Safety Code contains more than 40 “practical steps” the regulator wants web services to take to ensure child protection is enshrined in their operations. A wide range of apps and services are likely to fall in-scope — including popular social media sites, games and search engines.

“Services must prevent children from encountering the most harmful content relating to suicide, self-harm, eating disorders, and pornography. Services must also minimise children’s exposure to other serious harms, including violent, hateful or abusive material, bullying content, and content promoting dangerous challenges,” Ofcom wrote in a summary of the consultation.

“In practice, this means that all services which do not ban harmful content, and those at higher risk of it being shared on their service, will be expected to implement highly effective age-checks to prevent children from seeing it,” it added in a press release Monday. “In some cases, this will mean preventing children from accessing the entire site or app. In others it might mean age-restricting parts of their site or app for adults-only access, or restricting children’s access to identified harmful content.”

Ofcom’s current proposal suggests that almost all services will have to take mitigation measures to protect children. Only those deploying age verification or age estimation technology that is “highly effective” and used to prevent children from accessing the service (or the parts of it where content poses risks to kids) will not be subject to the children’s safety duties.

Those who find — on the contrary — that children can access their service will need to carry out a follow-on assessment known as the “child user condition”. This requires them to assess whether “a significant number” of kids are using the service and/or are likely to be attracted to it. Those that are likely to be accessed by children must then take steps to protect minors from harm, including conducting a Children’s Risk Assessment and implementing safety measures (such as age assurance, governance measures, safer design choices and so on) — as well as applying an ongoing review of their approach to ensure they keep up with changing risks and patterns of use. 

Ofcom does not define what “a significant number” means in this context — but it notes that “even a relatively small number of children could be significant in terms of the risk of harm. We suggest service providers should err on the side of caution in making their assessment.” In other words, tech firms may not be able to eschew child safety measures by arguing there aren’t many minors using their stuff.

Nor is there a simple one-shot fix for services that fall in scope of the child safety duty. Multiple measures are likely to be needed, combined with ongoing assessment of efficacy.

“There is no single fix-all measure that services can take to protect children online. Safety measures need to work together to help create an overall safer experience for children,” Ofcom wrote in an overview of the consultation, adding: “We have proposed a set of safety measures within our draft Children’s Safety Codes, that will work together to achieve safer experiences for children online.” 

Recommender systems, reconfigured

Under the draft Code, any service that operates a recommender system — a form of algorithmic content sorting based on tracking user activity — and is at “higher risk” of showing harmful content must use “highly effective” age assurance to identify who their child users are. They must then configure their recommender algorithms to filter out the most harmful content (i.e. suicide, self-harm, porn) from the feeds of users they have identified as children, and reduce the “visibility and prominence” of other harmful content.
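Neither Ofcom nor the draft Code prescribes an implementation, but as a minimal sketch of the idea, the final ranking stage of a recommender could drop “primary priority” items and heavily downrank “priority” items for users identified as children. The harm labels, scores and downrank factor below are hypothetical placeholders for whatever a platform’s own classifiers and ranking model would produce — they are not part of Ofcom’s proposal.

```python
# Illustrative sketch only -- not Ofcom's specification nor any platform's actual code.
# Assumes hypothetical upstream classifiers that attach a harm_label to each candidate item.
from dataclasses import dataclass

PRIMARY_PRIORITY = {"suicide", "self_harm", "eating_disorders", "pornography"}  # filter out entirely
PRIORITY = {"violence", "hateful_abuse", "bullying", "dangerous_challenges"}    # downrank

@dataclass
class Candidate:
    item_id: str
    relevance: float          # base score from the recommender model
    harm_label: str | None    # label from upstream classifiers, or None

def rank_for_child(candidates: list[Candidate], downrank_factor: float = 0.1) -> list[Candidate]:
    """Re-rank a candidate slate for a user identified as a child."""
    scored = []
    for c in candidates:
        if c.harm_label in PRIMARY_PRIORITY:
            continue  # primary priority content: removed from the feed entirely
        score = c.relevance * downrank_factor if c.harm_label in PRIORITY else c.relevance
        scored.append((score, c))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for _, c in scored]

# Example: the self-harm item is dropped; the violent item keeps only 10% of its score.
slate = [
    Candidate("a", 0.9, None),
    Candidate("b", 0.8, "self_harm"),
    Candidate("c", 0.7, "violence"),
]
print([c.item_id for c in rank_for_child(slate)])  # ['a', 'c']
```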

Under the Online Safety Act, suicide, self harm, eating disorders and pornography are classed “primary priority content”. Harmful challenges and substances; abuse and harassment targeted at people with protected characteristics; real or realistic violence against people or animals; and instructions for acts of serious violence are all classified “priority content.” Web services may also identify other content risks they feel they need to act on as part of their risk assessments.

In the proposed guidance, Ofcom wants children to be able to provide negative feedback directly to the recommender feed, so that it can better learn what content they don’t want to see.

Content moderation is another big focus in the draft Code, with the regulator highlighting research showing that content harmful to children is available at scale on many services — which, it said, suggests services’ current efforts are insufficient.

Its proposal recommends all “user-to-user” services (i.e. those allowing users to connect with each other, such as via chat functions or through exposure to content uploads) must have content moderation systems and processes that ensure “swift action” is taken against content harmful to children. Ofcom’s proposal does not contain any expectations that automated tools are used to detect and review content. But the regulator writes that it’s aware large platforms often use AI for content moderation at scale and says it’s “exploring” how to incorporate measures on automated tools into its Codes in the future.

“Search engines are expected to take similar action,” Ofcom also suggested. “And where a user is believed to be a child, large search services must implement a ‘safe search’ setting which cannot be turned off and must filter out the most harmful content.”

“Other broader measures require clear policies from services on what kind of content is allowed, how content is prioritised for review, and for content moderation teams to be well-resourced and trained,” it added.

The draft Code also includes measures it hopes will ensure “strong governance and accountability” around children’s safety inside tech firms. “These include having a named person accountable for compliance with the children’s safety duties; an annual senior-body review of all risk management activities relating to children’s safety; and an employee Code of Conduct that sets standards for employees around protecting children,” Ofcom wrote.

Facebook- and Instagram-owner Meta was frequently singled out by ministers during the drafting of the law for having a lax attitude to child protection. The largest platforms are likely to pose the greatest safety risks — and therefore face “the most extensive expectations” when it comes to compliance — but there’s no free pass based on size.

“Services cannot decline to take steps to protect children merely because it is too expensive or inconvenient — protecting children is a priority and all services, even the smallest, will have to take action as a result of our proposals,” it warned.

Other proposed safety measures Ofcom highlights include suggesting services provide more choice and support for children and the adults who care for them — such as by having “clear and accessible” terms of service; and making sure children can easily report content or make complaints.

The draft guidance also suggests children are provided with support tools that enable them to have more control over their interactions online — such as an option to decline group invites; block and mute user accounts; or disable comments on their own posts.

The UK’s data protection authority, the Information Commissioner’s Office, has expected compliance with its own age-appropriate design code for children since September 2021, so it’s possible there may be some overlap. Ofcom, for instance, notes that service providers may already have assessed children’s access for data protection compliance purposes — adding they “may be able to draw on the same evidence and analysis for both.”

Flipping the child safety script?

The regulator is urging tech firms to be proactive about safety issues, saying it won’t hesitate to use its full range of enforcement powers once they’re in place. The underlying message to tech firms is get your house in order sooner rather than later or risk costly consequences.

“We are clear that companies who fall short of their legal duties can expect to face enforcement action, including sizeable fines,” it warned in a press release.

The government is rowing hard behind Ofcom’s call for a proactive response, too. Commenting in a statement today, the technology secretary Michelle Donelan said: “To platforms, my message is engage with us and prepare. Do not wait for enforcement and hefty fines — step up to meet your responsibilities and act now.”

“The government assigned Ofcom to deliver the Act and today the regulator has been clear: platforms must introduce the kinds of age-checks young people experience in the real world and address algorithms which too readily mean they come across harmful material online,” she added. “Once in place, these measures will bring in a fundamental change in how children in the UK experience the online world.

“I want to assure parents that protecting children is our number one priority and these laws will help keep their families safe.”

Ofcom said it wants its enforcement of the Online Safety Act to deliver what it couches as a “reset” for children’s safety online — saying it believes the approach it’s designing, with input from multiple stakeholders (including thousands of children and young people), will make a “significant difference” to kids’ online experiences.

Fleshing out its expectations, it said it wants the rulebook to flip the script on online safety so children will “not normally” be able to access porn and will be protected from “seeing, and being recommended, potentially harmful content”.

Beyond identity verification and content management, it also wants the law to ensure kids won’t be added to group chats without their consent; and wants it to make it easier for children to complain when they see harmful content, and be “more confident” that their complaints will be acted on.

As it stands, the opposite looks closer to what UK kids currently experience online, with Ofcom citing research in which a majority (62%) of children aged 13-17 reported encountering online harm over a four-week period, and many saying they consider it an “unavoidable” part of their lives online.

Exposure to violent content begins in primary school, Ofcom found, with children who encounter content promoting suicide or self-harm characterizing it as “prolific” on social media; and frequent exposure contributing to a “collective normalisation and desensitisation”, as it put it. So there’s a huge job ahead for the regulator to reshape the online landscape kids encounter.

As well as the Children’s Safety Code, its guidance for services includes a draft Children’s Register of Risk, which it said sets out more information on how risks of harm to children manifest online; and draft Harms Guidance, which sets out examples of the kind of content it considers harmful to children. Final versions of all its guidance will follow the consultation process, which Ofcom is legally required to carry out. It also told TechCrunch that it will be providing more information and launching some digital tools to further support services’ compliance ahead of enforcement kicking in.

“Children’s voices have been at the heart of our approach in designing the Codes,” Ofcom added. “Over the last 12 months, we’ve heard from over 15,000 youngsters about their lives online and spoken with over 7,000 parents, as well as professionals who work with children.

“As part of our consultation process, we are holding a series of focused discussions with children from across the UK, to explore their views on our proposals in a safe environment. We also want to hear from other groups including parents and carers, the tech industry and civil society organisations — such as charities and expert professionals involved in protecting and promoting children’s interests.”

The regulator recently announced plans to launch an additional consultation later this year, which it said will look at how automated tools, aka AI technologies, could be deployed in content moderation processes to proactively detect illegal content and the content most harmful to children — such as previously undetected CSAM and content encouraging suicide and self-harm.

However, there is no clear evidence today that AI will be able to improve detection efficacy of such content without causing large volumes of (harmful) false positives. It thus remains to be seen whether Ofcom will push for greater use of such tech tools given the risks that leaning on automation in this context could backfire.

In recent years, a multi-year push by the Home Office geared towards fostering the development of so-called “safety tech” AI tools — specifically to scan end-to-end encrypted messages for CSAM — culminated in a damning independent assessment which warned such technologies aren’t fit for purpose and pose an existential threat to people’s privacy and the confidentiality of communications.

One question parents might have is what happens on a kid’s 18th birthday, when the Code no longer applies. If all the protections wrapping kids’ online experiences end overnight, there is a risk of (still) young people being overwhelmed by sudden exposure to harmful content they’ve been shielded from until then. That sort of shock transition could itself create a new online coming-of-age risk for teens.

Ofcom told us future proposals for larger platforms could be introduced to mitigate this sort of risk.

“Children are accepting this harmful content as a normal part of the online experience — by protecting them from this content while they are children, we are also changing their expectations for what’s an appropriate experience online,” an Ofcom spokeswoman responded when we asked about this. “No user, regardless of their age, should accept to have their feed flooded with harmful content. Our phase 3 consultation will include further proposals on how the largest and riskiest services can empower all users to take more control of the content they see online. We plan to launch that consultation early next year.”



Anon is building an automated authentication layer for the GenAI age | TechCrunch


As the notion of the AI agent begins to take hold, and more tasks are completed without a human involved, a new kind of authentication will be required to make sure only agents with the proper approval can access particular resources. Anon, an early-stage startup, is helping developers add automated authentication in a safe and secure way.

On Wednesday, the company announced a $6.5 million investment — and that the product is now generally available to all.

The founders came up with the idea for this company out of necessity. Their first idea was actually building an AI agent, but CEO Daniel Mason says they quickly came up against a problem around authentication — simply put, how to enter a username and password automatically and safely. “We kept running into this hard edge of customers wanting us to do X, but we couldn’t do X unless we had this delegated authentication system,” Mason told TechCrunch.

He began asking around about how other AI startups were handling authentication, and found there weren’t really any good answers. “In fact, a lot of the solutions people were using were actually quite a bit less secure. They were mostly inheriting authentication credentials from a user’s local machine or browser-based permissions,” he said.

And as they explored this problem in more depth, they realized that it was in fact a better idea for a company than their original AI agent. At this point, they pivoted to becoming a developer tool for building an automated authentication layer designed for AI-driven applications and workflows. The solution is delivered in the form of a software development kit (SDK), and lets developers incorporate authentication for a specific service with a few lines of code. “We want to sit at that authentication level and really build access permissioning, and our customers are specifically the developers,” Mason said.

The company is addressing security concerns about an automated authentication tool by working toward a zero-trust architecture that protects credentials in a few key ways. For starters, Anon never controls the credentials itself; those are held by the end user. There is also an encryption layer, where half the key is held by the user and half by Anon, and both halves are required to unlock the encryption. Finally, the user always has ultimate control.
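Anon hasn’t published its implementation here, but the half-and-half key arrangement described above resembles simple 2-of-2 secret sharing, in which neither share on its own reveals anything about the key. A minimal, purely illustrative sketch of that idea (not Anon’s actual code):

```python
# Illustrative 2-of-2 key split: both shares are needed to reconstruct the key,
# and either share alone is indistinguishable from random bytes.
import os

def split_key(key: bytes) -> tuple[bytes, bytes]:
    user_share = os.urandom(len(key))                              # held by the end user
    service_share = bytes(a ^ b for a, b in zip(key, user_share))  # held by the service
    return user_share, service_share

def combine(share_a: bytes, share_b: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share_a, share_b))

session_key = os.urandom(32)                  # e.g. a symmetric key protecting stored credentials
user_share, service_share = split_key(session_key)
assert combine(user_share, service_share) == session_key
```

A compromise of either party alone therefore exposes only one share, which by itself says nothing about the underlying key — the property the founders are describing.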

“Our platform is such that as a user, when I grant access, I still maintain control of that session — I’m the ultimate holder of the password, username, 2FA — and so even in the event of our system, or even a customer system, getting compromised, they do not have access to those root credentials,” company co-founder and CTO Kai Aichholz said.

The founders recognize that other companies, large and small, will probably take a stab at solving this problem, but they are banking on a head start and a broad vision to help them stave off eventual competitors. “We’re looking to basically become a one-stop integration platform where you can come and build these actions and build the automation and know that you’re doing it in a way that’s secure and your end user credentials are secure and the automations are going to happen,” Mason said.

The $6.5 million investment breaks down into two tranches: a pre-seed of around $2 million at launch and a seed that closed at the end of last year for around $4.5 million. Investors include Union Square Ventures and Abstract Ventures, which led the seed, and Impatient Ventures and ex/ante, which led the pre-seed, along with several industry angels.



Meta now requires users to verify their age to use its Quest VR headsets | TechCrunch


During the congressional online safety hearing in January, Meta CEO Mark Zuckerberg argued that mobile app store providers like Apple and Google should be the ones to implement parental controls for social media. Now, it appears Meta is using its Quest VR store to demonstrate how it thinks devices with app stores should approach online age verification.

Meta announced today that it’s prompting Quest 2 and 3 users to confirm their age by reentering their birthdays so it can provide the “right experience, settings, and protections for teens and preteens,” the company explained. For instance, teenagers aged 13 to 17 will have their profile automatically set to private, and guardians can use parental supervision tools to tailor their teens’ experiences. Meanwhile, parents are required to set up an account for preteens aged 10 to 12. In that case, parents can control which apps the preteen can download.
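Meta hasn’t described its internal logic, but the age buckets above map to a simple classification of a user’s birthdate. The sketch below is an illustration of that bucketing only, not Meta’s implementation; the thresholds (10-12 preteen, 13-17 teen, 18+ adult) come from the article.

```python
# Illustrative age-bucket logic based on the thresholds described in the article.
from datetime import date

def age_on(birthdate: date, today: date) -> int:
    """Whole years of age on a given date."""
    had_birthday = (today.month, today.day) >= (birthdate.month, birthdate.day)
    return today.year - birthdate.year - (0 if had_birthday else 1)

def age_group(birthdate: date, today: date | None = None) -> str:
    age = age_on(birthdate, today or date.today())
    if age >= 18:
        return "adult"
    if age >= 13:
        return "teen"      # profile defaults to private; parental supervision tools available
    if age >= 10:
        return "preteen"   # requires a parent-managed account
    return "unsupported"

print(age_group(date(2009, 5, 1)))  # "teen" as of 2024, for example
```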

Image Credits: Meta

Users have a 30-day window to confirm their age. If they fail to do so within this period, their account will be temporarily blocked until they provide their birthdate. Since it’s easy to lie about one’s age when entering only a birthdate, Meta says it’ll require people who accidentally enter a wrong birthdate to verify the correct one with an ID or credit card.

Meta previously told developers that, starting in March 2024, it would require them to identify their app’s intended age group (preteens, teens or adults). Its user age group APIs, which allow developers to report to Meta if a user is too young to use their app, officially launched last month.

Meta first added parental supervision tools to its VR headset in 2022. The company released parent-managed accounts for preteens last year.



Your cut-out-and-keep guide to Big Tech talking points in a new age of antitrust | TechCrunch


With tech giants facing new laws and enforcements aimed at cutting their empires down to size, a lobbying frenzy replete with wildly binary claims is underway.

As the likes of Amazon, Apple, Google, Meta, Microsoft and TikTok face unprecedented (yes, actually!) scrutiny from lawmakers and law enforcers around the world, lobbyists are working overtime to put a self-serving spin on entrenched, profit-extracting machinery.

Their job? Apply high-gloss, pro-competition narratives to cloak accusations of naked monopoly. The goal? Seek to bend new rules, such as the EU’s Digital Markets Act, to fit existing operations and business models to avoid as much commercial damage as possible.

It’s all about fending off wrecking-ball enforcement — and new, targeted laws — which could force the world’s most valuable companies to dismantle the chokepoints they’ve built to make money, ingest data and capture attention.

But there’s an even greater nightmare for Big Tech: The breakup of established empires may be on the cards.

Platform PR ops — which you can trace through official blog posts, user-facing messaging, regulatory filings and more — seek to reframe Big Tech’s actions as beneficent and stain-free. As such, their contortions can be highly gymnastic. It’s fair to say commercial juggernauts are long practiced in the dark art of doublespeak, with accusations of unfair behavior dating back decades in some cases.

This may explain why some of the defensive claims put out in response to dialed-up regulatory attention are so familiar. But it’s possible to spot newer concoctions, too — such as talk of muscular new EU market contestability laws demanding “difficult trade-offs.” (Rough translation: “Our compliance will degrade the service in a way that’s intended to annoy you because we want you to complain about the law.”)

Amid all the noise, one thing looks clear: The regulatory risk is finally real.

As the world’s most valuable companies pay flacks to come up with semantic tactics to paint their market power as nothing-to-see-here, good ol’ business-as-usual, we present some plain English translations of commonly seen Big Tech talking points…

Our platform is essential for small businesses to reach consumers.
Gatekeeping is our line of business.

Our interests are aligned with thousands of small and medium-sized businesses.
We’re also in rent collection.

We have built a safe and trusted place for users.
Rent’s due!

We create a magical experience for our users.
Don’t touch our rents.

We believe in the free market.
We’ll do whatever we want until we’re made to stop.

We compete with a wide variety of services.
We crush as much competition as we can, as fast as we can.

We face intense competition.
Sometimes it takes us longer than we’d like to crush the competition.

We believe competition is good for our economy.
Baby, we ARE the economy!

We’re taking a compliance-first approach.
We’re looking out for No.1.

We take your privacy seriously.
We’re using your information.

We take the security of your information seriously.
We want exclusive access to your information.

We are committed to keeping people’s information private and secure.
We want exclusive access to everyone’s information — and, btw, if you use the web, we’re tracking you.

Privacy fundamentalists.
Literally anyone who cares about privacy; typically denotes a European.

We offer unprecedented choice.
You get no choice.

We’re offering a clear choice.
You definitely get no choice.

You can easily switch your default.
Good luck finding the setting!

Manage your consent choices.
We make it really hard/impossible for you to stop us tracking you.

There’s a lack of clear regulatory guidance.
We’ll do whatever we want until we’re made to stop.

We need more clarity about how to comply.
We’ll do whatever we want until we’re made to stop.

We’re complying with the law.
We’re not — but make us stop, punk.

The regulatory landscape is evolving.
We’re breaking the law.

We remain committed to complying with the law.
We broke the law.

It addresses the latest regulatory developments, guidance and judgments.
We’re breaking the law — but make us stop, punk.

New ways to manage your data.
We got caught breaking the law.

Subscription for no ads.
We found a new way to ignore the law.

Opt-out process.
We track you by default.

Help center.
Unhelpful by default.

The new rules involve difficult trade-offs.
Our compliance will degrade the service in a way that’s intended to annoy you because we want you to complain about the law.

We believe in a free, ad-supported internet.
We intend to keep tracking you, profiling you and selling your attention to anyone who pays us.

Personalized advertising.
Surveillance advertising, aka tracking.

Relevant ads.
Tracking.

Personalized products.
Tracking.

Relevant content.
Tracking.

Personalization.
Tracking.

Personal data that is collected about your interaction can be shared across linked services.
Tracking.

An inclusive internet where everyone can access online content and services for free.
Our business model requires privacy to be an unaffordable luxury because you’re the product.

Free services.
In this context just another way of saying we’re tracking you.

A way for people to consent to data processing for personalized advertising.
A mechanism for tracking so fiendishly simple to activate that a child already has.

The validity of our approach has been validated by numerous authorities.
We’re breaking the law in a new way so regulators haven’t caught up yet.

Information sharing.
Yep, that’s us, normalizing how we’re taking your private information and doing what we want with it again!

We do not sell your information.
We sell your attention.

Manage how your data is used to inform ads.
There’s no way to stop us abusing your privacy.

Ad preferences.
There’s no way to stop us abusing your privacy.

Privacy center.
Srsly, there’s no way to stop us abusing your privacy and we’re just trolling you now!

Why am I seeing this ad?
Because we tracked you.

Why are we doing this?
To keep tracking you for 🤑 

Publicly available information.
Stuff we stole.

