From Digital Age to Nano Age. WorldWide.

Tag: law enforcement

Robotic Automations

FBI seizes hacking forum BreachForums — again | TechCrunch


The FBI, along with a coalition of international law enforcement agencies, seized the notorious cybercrime forum BreachForums on Wednesday. For years, BreachForums has been a popular English-language forum for hackers and cybercriminals who wanted to advertise, sell, and trade stolen data. Just recently, a threat actor advertised Dell customers’ personal information and data stolen from […]

© 2024 TechCrunch. All rights reserved. For personal use only.


Software Development in Sri Lanka


Controversial drone company Xtend leans into defense with new $40M round | TechCrunch


Close to a decade ago, brothers Aviv and Matteo Shapira co-founded Replay, a company that created a video format for 360-degree replays — the sorts of replays that have become part and parcel of major sports broadcasts.

Replay caught the attention of Intel, which acquired the company in 2016 for a reported $175 million, and led Aviv and Matteo to a chance meeting with Rubi Liani, the founder of Israel’s official drone racing league (FRIL).

Liani turned the brothers on to drone racing and planted the seed of the idea for their next startup, Xtend, which he helped found.

“As founders, we saw an opportunity to bridge the gap between our experiences,” Aviv told TechCrunch. “We recognized the exceptional skills required to control advanced robots, particularly drones. Our vision was to develop technology that would make controlling these robots intuitive and accessible, like how users interact with smartphones without needing in-depth technical knowledge.”

Xtend provides a platform, Xos, that lets operators manage drones and robots developed both in-house by Xtend and by third-party vendors. With Xtend’s platform, operators can directly control drones and robots — optionally with a VR headset — or train AI models to be deployed on drones that identify objects and help navigate indoor and outdoor environments. Today, the company announced a $40 million funding round led by Chartered Group at a post-money valuation of around $110 million.

“Our platform empowers drones and robots to handle specific tasks autonomously, like entering buildings and scanning floors,” Aviv said. “Crucially, it allows the ‘common sense’ decisions — like judging situations or adapting to unforeseen circumstances — to remain in the hands of human supervisors.”

Xtend allows operators to orchestrate teams of drones and robots — not just individual machines — and have them perform certain tasks autonomously, like moving from waypoint to waypoint. All the while, Xtend analyzes data from past deployments to recommend actions that an operator might take.

Xtend’s Wolverine drone.

“Xos empowers a single supervisor to oversee a team of robots performing tasks at various locations simultaneously,” Aviv said. “We believe complete autonomy isn’t the ultimate goal, but rather a subset of capabilities.”

Xtend pitches its technology as general-purpose, aimed at customers in industries ranging from public safety to logistics. But the company leans heavily into military, defense and law enforcement applications.

Xtend has contracts with the Israel Defense Forces (IDF) and the U.S. Department of Defense to “develop and deliver its systems,” including drone interceptor systems, for “operational evaluation” — including a $9 million deal with the Pentagon’s irregular warfare office. And Aviv isn’t shy about the company’s ambitions to move into what he calls “new civil market opportunities,” like private and public security.

“Imagine a police officer coordinating drones to search a large area for a suspect,” Aviv said. “Xos can empower these professionals to leverage robotic assistance.”

This could be problematic, given that regulations governing police use of drones are still largely lacking, and drones have already been used to surveil legal demonstrations. For instance, in 2020, Congressional Democrats raised the alarm that drones and spy planes had been used by the administration of then-President Donald Trump to watch demonstrations in Las Vegas, Minneapolis and Washington, D.C., according to Al Jazeera.

In addition, Xtend has recently found itself in the crosshairs of international monitors.

Statewatch and Informationsstelle Militarisierung (IMI) found in an analysis that Xtend, among other Israeli military companies and institutions involved in drone deployment, received an R&D grant from the EU’s Horizon Europe fund despite a prohibition on EU funding for military and defense projects.

Aviv has taken a strongly pro-Israel stance in the country’s ongoing war with Hamas, telling Ctech that Xtend has “redirected energies to supporting the IDF 100%.” On its website, which features testimonials from Israeli troops in Gaza, Xtend says that it enables “soldiers to perform accurate manoeuvres in complex combat scenarios.”

In an interview with The Wall Street Journal, Aviv said that Xtend has been working with the IDF for some time — initially to take down incendiary balloons originating from the Gaza Strip. Since then, its drones have been used to map and scout out subterranean tunnels dug by Hamas in Gaza — and, far more alarmingly, sent on reconnaissance missions equipped with explosive payloads like grenades.

Controversial as it may be, the strategy appears to be working for Xtend’s business. The company says it’s won $50 million in contracts to date across its customer base of “over 50” organizations, including government defense agencies.

“We’re unlocking the true potential of robotics in complex scenarios, including first response, search and rescue and critical infrastructure inspection,” Aviv said. “Hundreds of Xtend’s drone and robotics systems are already operationally deployed worldwide, and we are continuously developing Xos and those platforms to deliver the future of human-machine teaming.”

With the new funding, which brings Xtend’s total raised to $65 million, Xtend plans to grow its 110-person workforce by 50% across the U.S., Israel and Singapore by the end of the year as it shifts to a combination of platform-as-a-service and software-as-a-service sales models. On the roadmap is international expansion, with a specific focus on Japan.



Microsoft bans U.S. police departments from using enterprise AI tool for facial recognition | TechCrunch


Microsoft has changed its policy to ban U.S. police departments from using generative AI for facial recognition through the Azure OpenAI Service, the company’s fully managed, enterprise-focused wrapper around OpenAI technologies.

Language added Wednesday to the terms of service for Azure OpenAI Service prohibits integrations with Azure OpenAI Service from being used “by or for” police departments for facial recognition in the U.S., including integrations with OpenAI’s text- and speech-analyzing models.

A separate new bullet point covers “any law enforcement globally,” and explicitly bars the use of “real-time facial recognition technology” on mobile cameras, like body cameras and dashcams, to attempt to identify a person in “uncontrolled, in-the-wild” environments.

The changes in terms come a week after Axon, a maker of tech and weapons products for military and law enforcement, announced a new product that leverages OpenAI’s GPT-4 generative text model to summarize audio from body cameras. Critics were quick to point out the potential pitfalls, like hallucinations (even the best generative AI models today invent facts) and racial biases introduced from the training data (which is especially concerning given that people of color are far more likely to be stopped by police than their white peers).

It’s unclear whether Axon was using GPT-4 via Azure OpenAI Service, and, if so, whether the updated policy was in response to Axon’s product launch. OpenAI had previously restricted the use of its models for facial recognition through its APIs. We’ve reached out to Axon, Microsoft and OpenAI and will update this post if we hear back.

The new terms leave wiggle room for Microsoft.

The complete ban on Azure OpenAI Service usage pertains only to U.S., not international, police. And it doesn’t cover facial recognition performed with stationary cameras in controlled environments, like a back office (although the terms prohibit any use of facial recognition by U.S. police).

That tracks with Microsoft’s and close partner OpenAI’s recent approach to AI-related law enforcement and defense contracts.

In January, reporting by Bloomberg revealed that OpenAI is working with the Pentagon on a number of projects including cybersecurity capabilities — a departure from the startup’s earlier ban on providing its AI to militaries. Elsewhere, Microsoft has pitched using OpenAI’s image generation tool, DALL-E, to help the Department of Defense (DoD) build software to execute military operations, per The Intercept.

Azure OpenAI Service became available in Microsoft’s Azure Government product in February, adding additional compliance and management features geared toward government agencies including law enforcement. In a blog post, Candice Ling, SVP of Microsoft’s government-focused division Microsoft Federal, pledged that Azure OpenAI Service would be “submitted for additional authorization” to the DoD for workloads supporting DoD missions.

Update: After publication, Microsoft said its original change to the terms of service contained an error, and in fact the ban applies only to facial recognition in the U.S. It is not a blanket ban on police departments using the service. 
