
Biden signs bill to protect children from online sexual abuse and exploitation | TechCrunch


On April 29, Senators Jon Ossoff (D-GA) and Marsha Blackburn (R-TN) proposed a bipartisan bill to protect children from online sexual exploitation.

President Biden officially signed the REPORT Act into law on Tuesday. This marks the first time that websites and social media platforms are legally obligated to report crimes related to federal trafficking, grooming, and enticement of children to the National Center for Missing and Exploited Children’s (NCMEC) CyberTipline.

Under the new law, companies that knowingly fail to report child sex abuse material on their platforms face hefty fines. For platforms with more than 100 million users, for example, a first-time offense carries a fine of $850,000. And to give law enforcement time to investigate urgent threats of child sexual exploitation carefully and thoroughly, the law extends how long platforms must preserve evidence, from 90 days to up to a year.

The NCMEC is understaffed and relies on outdated technology, which makes it a challenge to process the millions of child sex abuse reports it receives each year. The new law cannot solve that problem entirely, but it is expected to make the assessment of reports more efficient by allowing for things like legal storage of report data on commercial cloud computing services.

“Children are increasingly looking at screens, and the reality is that this leaves more innocent kids at risk of online exploitation,” said Senator Blackburn in a statement. “I’m honored to champion this bipartisan solution alongside Senator Ossoff and Representative Laurel Lee to protect vulnerable children and hold perpetrators of these heinous crimes accountable. I also appreciate the National Center for Missing and Exploited Children’s unwavering partnership to get this across the finish line.”



Ex-NSA hacker and ex-Apple researcher launch startup to protect Apple devices | TechCrunch


Two veteran security experts are launching a startup that aims to help other makers of cybersecurity products up their game in protecting Apple devices.

Their startup is called DoubleYou, the name taken from the initial of its co-founder, Patrick Wardle, who worked at the U.S. National Security Agency between 2006 and 2008. Wardle then worked as an offensive security researcher for years before switching to independently researching Apple macOS defensive security. Since 2015, Wardle has developed free and open-source macOS security tools under the umbrella of his Objective-See Foundation, which also organizes the Apple-centric Objective By The Sea conference.

His co-founder is Mikhail Sosonkin, who was also an offensive cybersecurity researcher for years before working at Apple between 2019 and 2021. Wardle, who described himself as “the mad scientist in the lab,” said Sosonkin is the “right partner” he needed to make his ideas a reality.

“Mike might not hype himself up, but he is an incredible software engineer,” Wardle said.

The idea behind DoubleYou is that, compared to Windows, there are still only a few good security products for macOS and iPhones. And that’s a problem, because Macs are becoming a more popular choice for companies all over the world, meaning malicious hackers are also increasingly targeting Apple computers. Wardle and Sosonkin said there aren’t as many talented macOS and iOS security researchers to go around, which means companies struggle to develop their products.

Wardle and Sosonkin’s idea is to take a page out of the playbook of hackers who specialize in attacking systems, and apply it to defense. Several offensive cybersecurity companies offer modular products, capable of delivering a full chain of exploits or just one component of it. The DoubleYou team wants to do just that, but with defensive tools.

“Instead of building, for example, a whole product from scratch, we really took a step back, and we said ‘hey, how do the offensive adversaries do this?’” Wardle said in an interview with TechCrunch. “Can we basically take that same model of essentially democratizing security but from a defensive point of view, where we develop individual capabilities that then we can license out and have other companies integrate into their security products?”

Wardle and Sosonkin believe that they can.

And while the co-founders haven’t decided on the full list of modules they want to offer, they said their product will certainly include a core offering: analyzing all newly launched processes to detect and block untrusted code (which on macOS means code that is not “notarized” by Apple), and monitoring for and blocking anomalous DNS network traffic, which can uncover malware when it connects to domains known to be associated with hacking groups. Wardle said that these, at least for now, will be primarily for macOS.
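Neither check requires exotic technology; both lean on signals the operating system already exposes. As a rough, hypothetical sketch (DoubleYou hasn’t published code), the Python below shells out to macOS’s built-in Gatekeeper assessment tool, `spctl`, to test whether a binary is trusted, and screens DNS query names against a blocklist; the blocked domains are invented placeholders for a real threat-intelligence feed.

```python
import subprocess

# Hypothetical sketch only; DoubleYou's implementation is unpublished.

def passes_gatekeeper(path: str) -> bool:
    """Ask macOS Gatekeeper (via spctl) to assess an executable.

    spctl exits non-zero for code that isn't signed/notarized to Apple's
    satisfaction -- the "untrusted code" signal described above.
    """
    result = subprocess.run(
        ["spctl", "--assess", "--type", "execute", path],
        capture_output=True,
    )
    return result.returncode == 0

# Invented placeholder domains standing in for a curated threat-intel feed.
BLOCKED_DOMAINS = {"evil-c2.example", "exfil.example"}

def dns_query_is_suspicious(name: str) -> bool:
    """Flag a DNS query if the domain, or any parent domain, is blocklisted."""
    labels = name.lower().rstrip(".").split(".")
    return any(".".join(labels[i:]) in BLOCKED_DOMAINS for i in range(len(labels)))

if __name__ == "__main__":
    print(passes_gatekeeper("/Applications/Safari.app"))      # True on stock macOS
    print(dns_query_is_suspicious("beacon.evil-c2.example"))  # True
```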

Also, the founders want to develop tools to monitor software that tries to become persistent (a hallmark of malware), to detect cryptocurrency miners and ransomware based on their behavior, and to detect when software tries to get permission to use the webcam and microphone.
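At its simplest, persistence monitoring means watching the handful of locations where macOS auto-starts software. The hypothetical sketch below just polls the LaunchAgents and LaunchDaemons folders and reports new items; a real product of the kind described would hook file-system events and cover many more persistence mechanisms.

```python
import os
import time

# Hypothetical sketch: poll common macOS persistence locations for new items.
WATCHED_DIRS = [
    os.path.expanduser("~/Library/LaunchAgents"),
    "/Library/LaunchAgents",
    "/Library/LaunchDaemons",
]

def snapshot() -> set:
    """Return the set of files currently present in the watched folders."""
    files = set()
    for d in WATCHED_DIRS:
        if os.path.isdir(d):
            files.update(os.path.join(d, f) for f in os.listdir(d))
    return files

def watch(interval: float = 5.0) -> None:
    """Report any file that newly appears in a persistence location."""
    known = snapshot()
    while True:
        time.sleep(interval)
        current = snapshot()
        for new_item in sorted(current - known):
            print(f"[!] new persistence item: {new_item}")
        known = current

if __name__ == "__main__":
    watch()
```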

Sosonkin described it as “an off-the-shelf catalog approach,” where every customer can pick and choose which components they need to implement in their product. Wardle likened it to being a supplier of car parts rather than the maker of the whole car. This approach, Wardle added, is similar to the one he took in developing his various Objective-See tools, such as OverSight, which monitors microphone and webcam usage, and KnockKnock, which monitors whether an app wants to become persistent.

“We don’t need to use new technology to make this work. What we need is to actually take the tools available and put them in the right place,” Sosonkin said.

Wardle and Sosonkin’s plan, for now, is not to take any outside investment. The co-founders said they want to remain independent and avoid some of the pitfalls of outside funding, namely the pressure to scale too much and too fast, so that they can focus on developing their technology.

“Maybe in a way, we are kind of like foolish idealists,” Sosonkin said. “We just want to catch some malware. I hope we can make some money in the process.”



Google's Gradient backs Patlytics to help companies protect their intellectual property | TechCrunch


Patlytics, an AI-powered patent analytics platform, wants to help enterprises, IP professionals and law firms speed up their patent workflows, from discovery, analytics, comparisons and prosecution to litigation. 

The fledgling startup secured $4.5 million in an oversubscribed seed round that closed in a few days, led by Google’s AI-focused venture arm, Gradient Ventures.

Patlytics was co-founded by CEO Paul Lee, a former venture capitalist at Tribe, and CTO Arthur Jen, a serial entrepreneur who co-founded and served as CTO of the web3 wallet platform Magic. Their shared vision and complementary skills laid the foundation for Patlytics, driven by their firsthand experiences and a deep understanding of the industry’s pain points. 

The co-founders told TechCrunch they saw many opportunities in the IP space. Lee, who spent most of his previous career investing in vertical SaaS, AI and a few legal tech startups, came across many IP companies that used antiquated techniques in a workflow that (he thought) should be digitized. While working at Magic, Jen dealt intensively with filing and defending patents to protect the company’s technology.

“The AI revolution in patent intelligence is not just about efficiency; it’s about transforming how patent professionals strategize and engage with the entire patent lifecycle,” Lee said in an exclusive interview with TechCrunch. “Recognizing the intricate blend of technical and legal expertise required for patent work, we’ve developed our platform to be an indispensable ally for patent professionals.” 

Traditional patent prosecution and litigation workflows, which rely heavily on manual input, are complex and time-consuming, Lee continued. The research and discovery phase, which involves searching and analyzing large volumes of patent data, demands significant effort, encompassing internet searches, piecemeal manual investigations and inherently inefficient procedures.
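To make the discovery step concrete: the core task is ranking a large corpus of prior art by relevance to an invention disclosure. The toy sketch below does this with TF-IDF cosine similarity over a few invented abstracts (it assumes scikit-learn is installed); this is a baseline illustration only, not Patlytics’ method, which the company hasn’t detailed.

```python
# Toy prior-art ranking with TF-IDF; abstracts are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "A wallet system for managing cryptographic keys in a web browser.",
    "A method for charging electric vehicles from rooftop solar arrays.",
    "Key management for decentralized applications via delegated signing.",
]
query = "Browser-based key custody for blockchain wallets."

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(abstracts + [query])

# Compare the query (last row) against every abstract, most similar first.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.2f}  {abstracts[idx]}")
```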

What sets the startup apart from industry peers like Anaqua, Clarivate and Patsnap, Lee explained, is that Patlytics is “the sole provider offering end-to-end drafts and extensive claim chart solutions” with its AI-first approach to insights and analytics.

Another difference is that the platform doesn’t rely entirely on software; it leaves a place for human participation in the process.

Image Credits: Patlytics / Co-founders: Arthur Jen (CTO) and Paul Lee (CEO)

The outfit recently launched its product, which is SOC 2 certified, and already counts some top-tier law firms and a few in-house legal teams at enterprises as customers. The company did not disclose its number of clients, citing confidentiality agreements. Its target users include IP law firms and companies with several patents.

“Protecting intellectual property remains a major priority and business requirement for information technology, physical product and biotechnology companies. As companies incorporate AI into their new products, companies from the automobile to the pharmaceutical industry are keen to protect new inventions and watch for infringement from competitors,” said Gradient’s general partner, Darian Shirazi. “We’re excited to partner with the team at Patlytics as they leverage the recent transformative innovations in AI to reinvent the intellectual property protection industry.” 

The outfit will use the proceeds to invest in product and AI development and its go-to-market function, aiming to cover all relevant workflows for patent prosecution and litigation. It also plans to bolster its engineering team; the company currently has 11 employees.

“Knowing that navigating the intricate landscape of intellectual property can be laborious, our AI-integrated patent workflow aims to enhance the efficiency and provide insights, transforming IP protection into a dynamic force shaping the future technological landscape,” Jen said. “We build our technology with data security and privacy in mind, safeguarding sensitive information throughout the patent lifecycle.” 

Other participants in the round included 8VC, Alumni Ventures, Gaingels, Joe Montana’s Liquid 2 Ventures, Position Ventures, Tribe Capital and Vermilion Ventures. Notably, the round also attracted a host of angel backers, including partners at premier law firms, Datadog president Amit Agarwal, FiscalNote founder Tim Hwang and Tapas Media founder Chang Kim.



Internet users are getting younger; now the UK is weighing up if AI can help protect them | TechCrunch


Artificial intelligence has been in the crosshairs of governments concerned about how it might be misused for fraud, disinformation and other malicious online activity. Now a regulator in the U.K. is preparing to explore how AI can be used in the fight against some of the same, specifically as it relates to content that is harmful to children.

Ofcom, the regulator charged with enforcing the U.K.’s Online Safety Act, announced that it plans to launch a consultation on how AI and other automated tools are used today, and could be used in the future, to proactively detect and remove illegal content online, specifically to protect children from harmful content and to identify child sex abuse material that was previously hard to detect.

The tools would be part of a wider set of proposals Ofcom is putting together focused on online child safety. Consultations for the comprehensive proposals will start in the coming weeks with the AI consultation coming later this year, Ofcom said.

Mark Bunting, a director in Ofcom’s Online Safety Group, says that its interest in AI is starting with a look at how well it’s used as a screening tool today.

“Some services do already use those tools to identify and shield children from this content,” he said in an interview with TechCrunch. “But there isn’t much information about how accurate and effective those tools are. We want to look at ways in which we can ensure that industry is assessing [that] when they’re using them, making sure that risks to free expression and privacy are being managed.”
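The assessment Bunting describes comes down to classic classifier metrics: on a hand-labeled sample, how often is a flag correct (precision), and how much of the harmful content is caught (recall)? A minimal sketch, with invented labels standing in for a real audit sample:

```python
# Minimal illustration of assessing a screening tool on a labeled sample.
def precision_recall(labels, flags):
    """labels: human ground truth (True = harmful); flags: tool's verdicts."""
    tp = sum(l and f for l, f in zip(labels, flags))      # correctly flagged
    fp = sum(f and not l for l, f in zip(labels, flags))  # benign but flagged
    fn = sum(l and not f for l, f in zip(labels, flags))  # harmful but missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

labels = [True, True, False, False, True, False]  # invented audit sample
flags  = [True, False, False, True, True, False]  # hypothetical tool output

p, r = precision_recall(labels, flags)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.67 recall=0.67
```

A regulator-grade assessment would also track false positives, which map onto the free-expression and privacy risks Bunting flags, but the shape of the measurement is the same.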

One likely outcome is that Ofcom will recommend how and what platforms should assess, which could lead not only to platforms adopting more sophisticated tooling, but also to fines if they fail to deliver improvements in blocking harmful content or in keeping younger users from seeing it.

“As with a lot of online safety regulation, the responsibility sits with the firms to make sure that they’re taking appropriate steps and using appropriate tools to protect users,” he said.

There will be both critics and supporters of the moves. AI researchers are finding ever more sophisticated ways of using AI to detect, for example, deepfakes, as well as to verify users online. Yet there are just as many skeptics who note that AI detection is far from foolproof.

Ofcom announced the consultation on AI tools at the same time as it published its latest research into how children are engaging online in the U.K. It found that, overall, there are more young children online than ever before, so much so that Ofcom is now breaking out activity among ever-younger age brackets.

Nearly one-quarter (24%) of all 5-7 year-olds now own their own smartphone, and when tablets are included the figure rises to 76%, according to a survey of U.K. parents. That same age bracket is also using media a lot more on those devices: 65% have made voice and video calls (versus 59% just a year ago), and half of the kids (versus 39% a year ago) are watching streamed media.

Age restrictions around some mainstream social media apps are getting lower, yet whatever the limits, in the U.K. they do not appear to be heeded anyway. Some 38% of 5-7 year-olds are using social media, Ofcom found. Meta’s WhatsApp, at 37%, is the most popular app among them. And in possibly the first instance of Meta’s flagship image app being relieved to be less popular than ByteDance’s viral sensation, TikTok was found to be used by 30% of 5-7 year-olds with Instagram at ‘just’ 22%. Discord rounded out the list but is significantly less popular at only 4%.

Around one-third, 32%, of kids of this age are going online on their own, and 30% of parents said that they were fine with their underaged children having social media profiles. YouTube Kids remains the most popular network for younger users, at 48%.

Gaming, a perennial favorite with children, has grown to be used by 41% of 5-7 year-olds, with 15% of kids of this age bracket playing shooter games.

While 76% of parents surveyed said that they talked to their young children about staying safe online, there are question marks, Ofcom points out, over the gap between what a child sees and what that child might report. In researching older children aged 8-17, Ofcom interviewed them directly. It found that 32% of those kids reported that they’d seen worrying content online, but only 20% of their parents said their child had reported anything.

Even accounting for some reporting inconsistencies, “The research suggests a disconnect between older children’s exposure to potentially harmful content online, and what they share with their parents about their online experiences,” Ofcom writes. And worrying content is just one challenge: deepfakes are also an issue. Among children aged 16-17, Ofcom said, 25% said they were not confident about distinguishing fake from real online.

