
Biden signs bill to protect children from online sexual abuse and exploitation | TechCrunch


On April 29, Senators Jon Ossoff (D-GA) and Marsha Blackburn (R-TN) proposed a bipartisan bill to protect children from online sexual exploitation.

President Biden officially signed the REPORT Act into law on Tuesday. For the first time, websites and social media platforms are legally obligated to report federal crimes involving the trafficking, grooming, or enticement of children to the National Center for Missing and Exploited Children’s (NCMEC) CyberTipline.

Under the new law, companies that knowingly fail to report child sexual abuse material on their sites face hefty fines. For platforms with over 100 million users, for example, a first-time offense carries a fine of $850,000. And to ensure law enforcement can investigate urgent threats of child sexual exploitation carefully and thoroughly, the law requires evidence to be preserved for up to a year, instead of only 90 days.

The NCMEC struggles to investigate the millions of child sexual abuse reports it receives each year because it is understaffed and relies on outdated technology. The new law won’t solve that problem entirely, but it is expected to make the assessment of reports more efficient, for instance by allowing reports to be stored legally on commercial cloud computing services.

“Children are increasingly looking at screens, and the reality is that this leaves more innocent kids at risk of online exploitation,” said Senator Blackburn in a statement. “I’m honored to champion this bipartisan solution alongside Senator Ossoff and Representative Laurel Lee to protect vulnerable children and hold perpetrators of these heinous crimes accountable. I also appreciate the National Center for Missing and Exploited Children’s unwavering partnership to get this across the finish line.”



Bill to strengthen national tipline for missing and exploited children heads to Biden's desk | TechCrunch


A bipartisan bill designed to protect children from online sexual exploitation is headed to President Biden’s desk.

Proposed by Senators Jon Ossoff (D-GA) and Marsha Blackburn (R-TN), the bill aims to strengthen the National Center for Missing and Exploited Children’s (NCMEC) CyberTipline. When an online service provider detects child sexual abuse material (CSAM), the platform is legally required to report it to the CyberTipline; NCMEC then works with law enforcement to investigate the crime.
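
Neither article details the detection step itself, but in practice platforms typically find known CSAM by matching uploads against hash lists that NCMEC and others distribute, using perceptual hashes such as Microsoft’s PhotoDNA. A simplified, hypothetical Python sketch of that matching idea, with an ordinary SHA-256 standing in for a perceptual hash:

    # Simplified illustration of hash-list matching, not any platform's real pipeline.
    # Production systems use perceptual hashes (e.g., PhotoDNA) that survive resizing
    # and re-encoding; SHA-256 here only catches byte-identical files.
    import hashlib

    KNOWN_BAD_HASHES: set[str] = set()  # hypothetical hash list supplied by NCMEC

    def file_hash(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def should_report(upload: bytes) -> bool:
        """True if the upload matches known material and must go to the CyberTipline."""
        return file_hash(upload) in KNOWN_BAD_HASHES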

The problem is that NCMEC is understaffed and running on outdated tech. According to a report from The Wall Street Journal and the Stanford Internet Observatory, platforms mail CDs and thumb drives containing CSAM to NCMEC, where it’s manually uploaded into the nonprofit’s database. And as AI-generated CSAM becomes increasingly prevalent, the deluge of reports will only make it more difficult for NCMEC to investigate urgent threats of child sexual exploitation in a timely manner. Currently, per Stanford’s research, only 5 to 8% of reports lead to arrests, due to funding shortages, inefficient technology, and other constraints. That’s especially staggering considering that the CyberTipline received over 36 million reports last year — when the tipline was created in 1998, it handled 4,450 reports.

“NCMEC faces resource constraints and lower salaries, leading to difficulties in retaining personnel who are often poached by industry trust and safety teams,” Stanford’s report reads. “While there has been progress in report deconfliction — identifying connections between reports, such as identical offenders — the pace of improvement has been considered slow.”

This bill won’t solve all of these issues, but it will allow providers to preserve the contents of reports for up to a year, rather than just 90 days, giving law enforcement more time to investigate crimes. Instead of relying on decades-old storage methods, the bill also carves out a way for NCMEC to legally store data using commercial cloud computing services, which could make the process of assessing reports more efficient. Providers will also face steeper fines if they don’t report suspected violations to NCMEC: for platforms with over 100 million users, a first-time offense yields a fine of $850,000, up from $150,000. In addition to reporting CSAM, platforms will also be obligated to report the enticement of children.

“At a time of such division in Congress, we are bringing Republicans and Democrats together to protect kids on the internet,” said Senator Ossoff in a statement.



Langdock raises $3M with General Catalyst to help companies avoid vendor lock-in with LLMs | TechCrunch


Plenty of large corporations want to join the AI revolution, but many feel it’s too early to lock themselves into a single foundation model. That means there’s a market for a layer between companies and large language models (LLMs): something companies can use to switch between LLMs easily, without committing indefinitely to one platform.

That’s the market Langdock is targeting with its chat interface, which sits between a company and the LLMs it uses. Based in Germany, the startup recently raised a $3 million seed round led by General Catalyst and its European seed-stage partner, La Famiglia.

“Companies don’t want to have a vendor lock-in on just one of those LLM providers,” Lennard Schmidt, co-founder and CEO of Langdock, told TechCrunch. “So we’ve kind of abstracted that away in an interface that allows a company to choose which of the underlying models from different vendors can be used by employees.”

Langdock’s chat interface lets companies tap foundation models, use open-source models, or host their own models and make them accessible to employees, Schmidt said.
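
Langdock hasn’t published its internals, but the abstraction Schmidt describes is a familiar pattern: wrap each vendor behind one interface so that an admin-controlled allowlist, not application code, decides which models employees can reach. A minimal, hypothetical Python sketch (not Langdock’s code), assuming the official openai and anthropic SDKs; the model names and the ask() helper are placeholders:

    # Hypothetical sketch of a vendor-agnostic chat layer.
    from abc import ABC, abstractmethod

    class ChatModel(ABC):
        @abstractmethod
        def complete(self, prompt: str) -> str:
            """Return the model's reply to a single user prompt."""

    class OpenAIChat(ChatModel):
        def __init__(self, model: str = "gpt-4o"):  # placeholder model name
            from openai import OpenAI  # assumes the official openai SDK is installed
            self.client, self.model = OpenAI(), model

        def complete(self, prompt: str) -> str:
            resp = self.client.chat.completions.create(
                model=self.model, messages=[{"role": "user", "content": prompt}]
            )
            return resp.choices[0].message.content

    class AnthropicChat(ChatModel):
        def __init__(self, model: str = "claude-3-5-sonnet-latest"):  # placeholder
            import anthropic  # assumes the official anthropic SDK is installed
            self.client, self.model = anthropic.Anthropic(), model

        def complete(self, prompt: str) -> str:
            resp = self.client.messages.create(
                model=self.model, max_tokens=1024,
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.content[0].text

    # Which vendors are enabled is workspace policy, not a code change.
    ALLOWED = {"openai": OpenAIChat, "anthropic": AnthropicChat}

    def ask(vendor: str, prompt: str) -> str:
        if vendor not in ALLOWED:
            raise PermissionError(f"{vendor!r} is not enabled for this workspace")
        return ALLOWED[vendor]().complete(prompt)

Adding a self-hosted or open-source model would then mean registering one more ChatModel subclass, leaving employee-facing code untouched.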

The funding round also saw participation from Y Combinator and some noted German founders, including Rolf Schrömgens (Trivago), Hanno Renner (Personio), Johannes Reck (GetYourGuide), and Erik Muttersbach (Forto), along with around 25 other angel investors.

In particular, there is a European play here: Langdock is “going heavy” into the idea that companies in the EU will want to safely and securely integrate LLMs in a manner that’s compliant with regulation.

That means employees can operate in a slightly more closed environment, where they can, for instance, create prompt libraries, use more than one LLM, and add sensitive documents.

In addition to the chat interface, the company also offers security features and both cloud and on-premises deployment.

Langdock claims to have a number of customers including Merck, GetYourGuide, HeyJobs, and Forto. Merck has rolled out the startup’s interface to its 63,000 employees. Walid Mehanna, chief data and AI officer at Merck, said in a statement: “We are early adopters of GenAI and see a paradigm shift in how technology can enable our employees to become more effective and efficient in their daily work life.”

Langdock is not the only company to tackle this space.

Dust, based out of Paris, has raised €5 million to date and is backed by Sequoia. The company is building an interface that companies can use to leverage LLMs for various use cases like customer service, internal reports, research, and more. In contrast, Langdock’s chat interface works for a broader range of use cases and can be used by any kind of staff.



Internet users are getting younger; now the UK is weighing up if AI can help protect them | TechCrunch


Artificial intelligence has been in the crosshairs of governments concerned about how it might be misused for fraud, disinformation and other malicious online activity. Now a U.K. regulator is preparing to explore how AI can be used to fight some of those same harms, specifically content that is harmful to children.

Ofcom, the regulator charged with enforcing the U.K.’s Online Safety Act, announced that it plans to launch a consultation on how AI and other automated tools are used today, and could be used in the future, to proactively detect and remove illegal content online, specifically to protect children from harmful content and to identify child sexual abuse material that was previously hard to detect.

The tools would be part of a wider set of proposals Ofcom is putting together focused on online child safety. Consultations for the comprehensive proposals will start in the coming weeks with the AI consultation coming later this year, Ofcom said.

Mark Bunting, a director in Ofcom’s Online Safety Group, says the regulator’s interest in AI starts with a look at how well it is used as a screening tool today.

“Some services do already use those tools to identify and shield children from this content,” he said in an interview with TechCrunch. “But there isn’t much information about how accurate and effective those tools are. We want to look at ways in which we can ensure that industry is assessing [that] when they’re using them, making sure that risks to free expression and privacy are being managed.”

One likely result is that Ofcom will recommend how and what platforms should assess. That could lead not only to platforms adopting more sophisticated tooling, but also to fines if they fail to deliver improvements in blocking content or in keeping younger users from seeing it.

“As with a lot of online safety regulation, the responsibility sits with the firms to make sure that they’re taking appropriate steps and using appropriate tools to protect users,” he said.
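
Ofcom hasn’t said which measures it will ask platforms to report, but the trade-off Bunting describes, catching more harmful content without over-blocking legitimate speech, maps onto standard classifier metrics. A toy, hypothetical Python illustration (not from Ofcom):

    # Hypothetical assessment of a harmful-content screening tool on labeled data.
    # predicted: did the tool flag each item; actual: was the item truly harmful.
    def screening_metrics(predicted: list[bool], actual: list[bool]) -> dict[str, float]:
        tp = sum(p and a for p, a in zip(predicted, actual))      # correctly flagged
        fp = sum(p and not a for p, a in zip(predicted, actual))  # over-blocking: free-expression risk
        fn = sum(a and not p for p, a in zip(predicted, actual))  # missed harm: child-safety risk
        tn = sum(not p and not a for p, a in zip(predicted, actual))
        return {
            "precision": tp / (tp + fp) if tp + fp else 0.0,
            "recall": tp / (tp + fn) if tp + fn else 0.0,
            "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,
        }

Numbers like these would turn “how accurate and effective those tools are” from a vendor claim into something a regulator can audit.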

There will be both critics and supporters of the moves. AI researchers are finding ever more sophisticated ways of using AI to detect, for example, deepfakes, as well as to verify users online. Yet there are just as many skeptics who note that AI detection is far from foolproof.

Ofcom announced the consultation on AI tools at the same time as it published its latest research into how children engage online in the U.K. It found that more young children are connected than ever before, so much so that Ofcom is now breaking out activity among ever-younger age brackets.

Nearly one-quarter, 24%, of all 5-7 year-olds now own their own smartphones, and when you include tablets the figure rises to 76%, according to a survey of U.K. parents. That same age bracket is also using media a lot more on those devices: 65% have made voice and video calls (versus 59% just a year ago), and half of the kids (versus 39% a year ago) are watching streamed media.

Age restrictions on some mainstream social media apps have been getting lower, but whatever the limits, in the U.K. they do not appear to be heeded anyway. Some 38% of 5-7 year-olds use social media, Ofcom found. Meta’s WhatsApp, at 37%, is the most popular app among them. And in possibly the first instance of Meta’s flagship image app being relieved to trail ByteDance’s viral sensation, TikTok was found to be used by 30% of 5-7 year-olds, with Instagram at ‘just’ 22%. Discord rounded out the list, but is significantly less popular, at only 4%.

Around one-third, 32%, of kids of this age go online on their own, and 30% of parents said they were fine with their underage children having social media profiles. YouTube Kids remains the most popular network for younger users, at 48%.

Gaming, a perennial favorite with children, is now played by 41% of 5-7 year-olds, with 15% of this age bracket playing shooter games.

While 76% of parents surveyed said they had talked to their young children about staying safe online, Ofcom points to a gap between what a child sees and what that child might report. In researching older children aged 8-17, Ofcom interviewed them directly: it found that 32% reported having seen worrying content online, but only 20% of their parents said their child had reported anything.

Even accounting for some reporting inconsistencies, “The research suggests a disconnect between older children’s exposure to potentially harmful content online, and what they share with their parents about their online experiences,” Ofcom writes. And worrying content is just one challenge: deepfakes are also an issue. Among children aged 16-17, Ofcom said, 25% said they were not confident about distinguishing fake from real online.

