From Digital Age to Nano Age. WorldWide.


Robotic Automations

AWS confirms European 'sovereign cloud' to launch in Germany by 2025, plans €7.8B investment over 15 years | TechCrunch


Amazon Web Services (AWS), Amazon’s cloud computing business, has confirmed further details of its European “sovereign cloud” which is designed to enable greater data residency across the region. The company said that the first AWS sovereign cloud region will be in the German state of Brandenburg, and will go live by the end of 2025. […]

© 2024 TechCrunch. All rights reserved. For personal use only.


Software Development in Sri Lanka


How European disability tech startups are leveraging AI | TechCrunch


Making life better for people with disabilities is a laudable goal, but accessibility tech hasn’t traditionally been popular among VCs. In 2022, disability tech companies attracted around $4 billion in early-stage investments, which was a fraction of fintech’s intake, for example.

One reason is that disability tech startups are often considered too niche to attain business viability — at least on the scale that venture capital demands. By definition, they are assumed to be building for a minority. However, some startups in the space have also begun serving the wider population — and throwing in some AI always helps.

Both cases are a balancing act: The wider business case needs to make sense without losing sight of the startup’s mission statement. AI, meanwhile, needs to be leveraged in a non-gimmicky way to pass the due diligence sniff test.

Some accessibility-focused startups understand these necessities, and their strategies are worth a look. Here are four European startups doing just that. 

Visualfy

Image Credits: Visualfy

Visualfy leverages AI to improve the lives of people with hearing loss. The Spanish startup is focused on safety and autonomy — this includes a sound recognition AI that recognizes fire alarms and the sound of a baby crying at home. “AI is crucial for our business,” CEO Manel Alcaide told TechCrunch last month.
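Production systems like Visualfy’s rely on trained audio classifiers, but the basic shape of sound recognition can be sketched with a simple frequency heuristic. The sketch below flags a tone near 3.1 kHz, a frequency many smoke alarms beep at by convention; the function names, threshold and the heuristic itself are illustrative assumptions, not details Visualfy has disclosed.

```python
import numpy as np

def dominant_freq(signal: np.ndarray, sr: int) -> float:
    """Return the frequency (Hz) of the strongest spectral peak."""
    windowed = signal * np.hanning(len(signal))  # taper to reduce leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return float(freqs[np.argmax(spectrum)])

def looks_like_smoke_alarm(signal: np.ndarray, sr: int,
                           target: float = 3100.0, tol: float = 200.0) -> bool:
    # Many smoke alarms beep near 3.1 kHz; a real product would use a
    # learned classifier rather than a single-peak test like this.
    return abs(dominant_freq(signal, sr) - target) < tol
```

A real detector would also have to cope with overlapping sounds, reverberation and alarm patterns over time, which is where the learned models come in.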

The firm offers consumers an app that also serves as a companion to Visualfy Home, its hardware suite consisting of three detectors and a main device. It also entered the public sector with Visualfy Places — it’s no coincidence the startup recently raised funding from Spain’s national state-owned railway company, Renfe.

One reason Visualfy is gaining traction on the B2B side is that public venues are required to provide accessibility, especially when health and safety are on the line.

In an interview, Alcaide explained that the devices and PA systems Visualfy will install in places like stadiums could also monitor air quality and other metrics. In the EU, meeting these other goals could help companies get subsidies while doing the right thing for deaf people. 

The latter is still very much top of mind for Visualfy, which is set up as a B Corp and employs both hearing and non-hearing people. Incorporating deaf individuals at all steps is a moral stance — “nothing for us without us.” But it is also common sense for better design, Alcaide said.

Knisper

Image Credits: Audus Technologies

People with total hearing loss are a smaller segment of a large and growing group. By 2050, 2.5 billion people are projected to have some degree of hearing loss. Due to a mix of reasons, including stigma and cost, many won’t wear hearing aids. That’s the audience Dutch B2B startup Audus Technologies is targeting with its product, Knisper.

Knisper uses AI to make speech more intelligible in environments such as cinemas, museums, public transportation and work calls. In practice, this means splitting the audio and mixing it back into a clearer track. It does so without increasing background noise (something not every hearing aid company can say), which makes it comfortable for anyone to listen to, even without hearing loss.
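Knisper’s pipeline is built on trained models, but the split-and-remix idea it describes can be illustrated with a crude spectral mask that boosts the classic 300–3400 Hz speech band and leaves everything else, background noise included, at its original level. This is a conceptual sketch under those assumptions, not Audus’s actual method.

```python
import numpy as np

def emphasize_speech(signal: np.ndarray, sr: int, gain: float = 2.0) -> np.ndarray:
    """Split the signal into speech-band and non-speech-band energy via an
    FFT mask, boost only the speech band, and mix back together."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    mask = np.where((freqs >= 300.0) & (freqs <= 3400.0), gain, 1.0)
    return np.fft.irfft(spectrum * mask, n=len(signal))
```

Because the out-of-band spectrum is multiplied by 1.0, the background level is untouched, which is the property that makes such a track comfortable for listeners without hearing loss too.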

Audus founder Marciano Ferrier, a former ENT doctor, explained that results like these weren’t achievable before AI. Knisper was trained on thousands of videos in multiple languages, with variations such as background noise and distorted speech. This took work, but Audus is now leaving the development stage and focusing on adoption, managing director Joost Taverne told TechCrunch in February.

“We are already working with a number of museums, including the Museum of Fine Arts in Boston,” said Taverne, a former MP and diplomat who spent time in the U.S. “We also do audiobooks with a Dutch publishing house, where we make the audio book of Anne Frank’s diary accessible for people with hearing loss. And we now have the solution for the workspace.”

B2B go-to-market is not an easy route, so it makes sense for Audus to focus on clients like museums. They are often noisy, which can make audio guides hard for anyone to hear. Using Knisper’s technology to make them more intelligible brings benefits to the general public, not just those with hearing loss, which makes adoption easier.

Whispp

Image Credits: Whispp

Fellow Dutch startup Whispp also focuses on speech, but from a different angle. As TechCrunch reported from CES earlier this year, its technology converts whispered speech into a natural voice in real time.

Whispp’s core target audience is “a currently underserved group of worldwide 300 million people with voice disabilities who lost their voice but still have good articulation,” its site explains. 

For instance, individuals with voice disorders that leave them able only to whisper or use their esophageal voice; or people who stutter, like CEO Joris Castermans, who knows all too well that his speech is less affected when he whispers.

For those with reduced articulation due to ALS, MS, Parkinson’s or strokes, there are already solutions like text-to-speech apps — but these have downsides such as high latency. For people who are still able to articulate, that can be too much of a tradeoff.

Thanks to audio-to-audio AI, Whispp is able to provide them with a voice that can be produced in real time, is language agnostic and sounds real and natural. If users are able to provide a sample, it can even sound like their own voice.

Since there’s no text in the middle, Whispp is also more secure than alternatives, Castermans told TechCrunch. This could open up use cases for non-silent patients who need to have confidential conversations, he said. 

How much users without voice issues would be willing to pay for Whispp’s technology is unclear, but it also has several monetization routes to explore with its core audience, such as the subscription it charges for its voice calling app.

Acapela

Image Credits: Acapela Group

Whispp highlights the need some have to store their voice for later use. Known as voice banking, this process is what Acapela hopes to facilitate with a service it launched last year.

Acapela Group, which was bought by Swedish tech accessibility company Tobii Dynavox for €9.8 million in 2022, has been in the text-to-speech space for several decades, but it is only recently that AI changed the picture for voice cloning.

The results are much better and the process is faster, too. This will lower the bar for voice banking; while not everyone will do it, there may be demand from individuals who know they are at risk of losing their voice after being diagnosed with certain conditions.

Acapela doesn’t charge for the initial phase of the service, which consists of recording 50 sentences. It is only when and if they need to install the voices on their devices that users have to buy it, either directly through Acapela or via a third party (partner, reseller, a national health insurance program or other).

Besides the new potential unlocked by AI, the above examples show some routes that startups are exploring to expand beyond a core target of users with disabilities. 

Part of the thinking is that a larger addressable market can increase their prospective revenue and spread out the costs. But for their customers and partners, it is also a way to stay true to the definition of accessibility as “the quality of being able to be entered or used by everyone, including people who have a disability.” 



European police chiefs target E2EE in latest demand for 'lawful access' | TechCrunch


In the latest iteration of the neverending (and always head-scratching) crypto wars, Graeme Biggar, the director general of the UK’s National Crime Agency (NCA), has called on Instagram-owner Meta to rethink its continued rollout of end-to-end encryption (E2EE) — with web users’ privacy and security pulled into the frame yet again.

The call follows a joint declaration by European police chiefs, including the UK’s own, published Sunday — expressing “concern” at how E2EE is being rolled out by the tech industry and calling for platforms to design security systems in such a way that they can still identify illegal activity and send reports on message content to law enforcement.

In remarks to the BBC today, the NCA chief suggested Meta’s current plan to beef up the security around Instagram users’ private chats by rolling out so-called “zero access” encryption, where only the message sender and recipient can access the content, poses a threat to child safety. The social networking giant also kicked off a long-planned rollout of default E2EE on Facebook Messenger back in December.

“Pass us the information”

Speaking to BBC Radio 4’s Today program on Monday morning, Biggar told interviewer Nick Robinson: “Our responsibility as law enforcement… is to protect the public from organised crime, from serious crime, and we need information to be able to do that.

“What is happening is the tech companies are putting a lot of the information on to end-to-end encrypted. We have no problem with encryption, I’ve got a responsibility to try and protect the public from cybercrime, too — so strong encryption is a good thing — but what we need is for the companies to still be able to pass us the information we need to keep the public safe.”

Currently, as a result of being able to scan message content where E2EE has not been rolled out, Biggar said platforms are sending tens of millions of child-safety-related reports a year to police forces around the world — adding a further claim that “on the back of that information we typically safeguard 1,200 children a month and arrest 800 people”. The implication is that those reports will dry up if Meta proceeds with expanding its use of E2EE to Instagram.

Pointing out that Meta-owned WhatsApp has had the gold-standard encryption as its default for years (E2EE was fully implemented across the messaging platform by April 2016), Robinson wondered whether this wasn’t a case of the crime agency trying to close the stable door after the horse had bolted.

To which he got no straight answer — just more head-scratching equivocation.

Biggar: “It is a trend. We are not trying to stop encryption. As I said, we completely support encryption and privacy and even end-to-end encryption can be absolutely fine. What we want is the industry to find ways to still provide us with the information that we need.”

His intervention follows a joint declaration of around 30 European police chiefs, published Sunday, in which the law enforcement heads urge platforms to adopt unspecified “technical solutions” that they suggest can enable them to offer users robust security and privacy at the same time as maintaining the ability to spot illegal activity and report decrypted content to police forces.

“Companies will not be able to respond effectively to a lawful authority,” the police chiefs suggest, raising concerns that E2EE is being deployed in ways that undermine platforms’ abilities to identify illegal activity themselves and also their ability to send content reports to police.

“As a result, we will simply not be able to keep the public safe,” they claim, adding: “We therefore call on the technology industry to build in security by design, to ensure they maintain the ability to both identify and report harmful and illegal activities, such as child sexual exploitation, and to lawfully and exceptionally act on a lawful authority.”

A similar “lawful access” mandate was adopted on encryption by the European Council back in a December 2020 resolution.

Client-side scanning?

The European police chiefs declaration does not explain which technologies they want platforms to deploy in order to enable CSAM-scanning and law enforcement to be sent decrypted content. But, most likely, it’s some form of client-side scanning technology they’re lobbying for — such as the system Apple had been poised to roll out in 2021, for detecting child sexual abuse material (CSAM) on users’ own devices, before a privacy backlash forced it to shelve and later quietly drop the plan. (Though Apple did roll out CSAM-scanning for iCloud Photos.)
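Client-side scanning proposals generally hinge on perceptual hashing: the device computes a compact fingerprint of each image that survives small edits, then compares it against a blocklist of known-bad hashes. Apple’s shelved system used a learned hash (NeuralHash); the sketch below uses a much simpler “average hash” purely to illustrate the matching mechanic, and all names here are illustrative.

```python
import numpy as np

def average_hash(image: np.ndarray, hash_size: int = 8) -> np.ndarray:
    """Perceptual 'average hash': block-average a grayscale image down to
    hash_size x hash_size, then threshold each cell at the overall mean."""
    h, w = image.shape
    cropped = image[: h - h % hash_size, : w - w % hash_size]
    bh, bw = cropped.shape[0] // hash_size, cropped.shape[1] // hash_size
    blocks = cropped.reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (blocks > blocks.mean()).flatten()

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    # Number of differing bits; small distances indicate a likely match.
    return int(np.count_nonzero(a != b))
```

The privacy debate is precisely about where such comparisons run and who controls the blocklist: once the matching happens on-device against an opaque list, the same mechanism could in principle be repurposed for other content.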

European Union lawmakers, meanwhile, still have a controversial message-scanning CSAM legislative plan on the table. Privacy and legal experts, including the bloc’s own data protection supervisor, have warned the draft law poses an existential threat to democratic freedoms, as well as wreaking havoc with cybersecurity. Critics of the plan also argue it’s a flawed approach to child safeguarding, suggesting it’s likely to cause more harm than good by generating lots of false positives.

Last October parliamentarians pushed back against the Commission proposal, backing a substantially revised approach that aims to limit the scope of so-called CSAM “detection orders”. However the European Council has yet to agree on its position, so where the controversial legislation will end up remains to be seen. This month scores of civil society groups and privacy experts warned the proposed “mass surveillance” law remains a threat to E2EE. (In the meantime, EU lawmakers have agreed to extend a temporary derogation from the bloc’s ePrivacy rules that allows platforms to carry out voluntary CSAM-scanning — but which the planned law is intended to replace.)

The timing of the joint declaration by European police chiefs suggests it’s intended to amp up pressure on EU lawmakers to stick with the CSAM-scanning plan despite trenchant opposition from the parliament. (Hence they also write: “We call on our democratic governments to put in place frameworks that give us the information we need to keep our publics safe.”)

The EU proposal does not prescribe particular technologies that platforms must use to scan message content for CSAM either, but critics warn it is likely to force the adoption of client-side scanning — despite the nascent technology being, as they see it, immature, unproven and simply not ready for mainstream use, which is another reason they are sounding the alarm so loudly.

Robinson didn’t ask Biggar if police chiefs are lobbying for client-side scanning specifically but he did ask whether they want Meta to “backdoor” encryption. Again, the answer was fuzzy.

“We wouldn’t call it a backdoor — and exactly how it happens is for industry to determine. They are the experts in this,” he demurred, without specifying exactly what they do want, as if finding a way to circumvent strong encryption is a simple case of techies needing to nerd harder.

A confused Robinson pressed the UK police chief for clarification, pointing out information is either robustly encrypted (and so private) or it’s not. But Biggar danced even further away from the point — arguing “every platform is on a spectrum”, i.e. of information security vs information visibility. “Almost nothing is at the absolutely completely secure end,” he suggested. “Customers don’t want that for usability reasons [such as] their ability to get their data back if they’ve lost a phone.

“What we’re saying is being absolute on either side doesn’t work. Of course we don’t want everything to be absolutely open. But also we don’t want everything to be absolutely closed. So we want the company to find a way of making sure that they can provide security and encryption for the public but still provide us with the information that we need to protect the public.”

Non-existent safety tech

In recent years the UK Home Office has been pushing the notion of so-called “safety tech” that would allow for scanning of E2EE content to detect CSAM without impacting user privacy. However, a 2021 “Safety Tech” challenge it ran, in a bid to deliver proofs of concept for such a technology, produced poor results. The cybersecurity professor appointed to independently evaluate the projects, the University of Bristol’s Awais Rashid, warned last year that none of the technology developed for the challenge is fit for purpose, writing: “Our evaluation shows that the solutions under consideration will compromise privacy at large and have no built-in safeguards to stop repurposing of such technologies for monitoring any personal communications.”

If technology does exist to allow law enforcement to access E2EE data in the plain without harming users’ privacy, as Biggar appears to be claiming, one very basic question is why can’t police forces explain exactly what they want platforms to implement? (Reminder: Last year reports suggested government ministers had privately acknowledged no such privacy-safe E2EE-scanning technology currently exists.)

TechCrunch contacted Meta for a response to Biggar’s remarks and to the broader joint declaration. In an emailed statement a company spokesperson repeated its defence of expanding access to E2EE, writing: “The overwhelming majority of Brits already rely on apps that use encryption to keep them safe from hackers, fraudsters, and criminals. We don’t think people want us reading their private messages so have spent the last five years developing robust safety measures to prevent, detect and combat abuse while maintaining online security.

“We recently published an updated report setting out these measures, such as restricting people over 19 from messaging teens who don’t follow them and using technology to identify and take action against malicious behaviour. As we roll out end-to-end encryption, we expect to continue providing more reports to law enforcement than our peers due to our industry leading work on keeping people safe.” 

The company has weathered a string of similar calls from successive UK Home Secretaries over the Conservative governments’ decade-plus run. Just last September the then Home Secretary, Suella Braverman, warned Meta it must deploy unspecified “safety measures” alongside E2EE — and said the government could use powers in the Online Safety Bill (now Act) to sanction the company if it failed to play ball.

Asked by Robinson if the government could (and should) act if Meta does not change course on E2EE, Biggar both invoked the Online Safety Act and pointed to another (older) piece of legislation, the surveillance-enabling Investigatory Powers Act (IPA), saying: “Government can act and government should act and it has strong powers under the Investigatory Powers Act and also the Online Safety Act to do so.”

Penalties for breaches of the Online Safety Act can be substantial — with Ofcom empowered to issue fines of up to 10% of worldwide annual turnover.

In another concerning step for people’s security and privacy, the government is in the process of beefing up the IPA with more powers targeted at messaging platforms, including a requirement that messaging services clear security features with the Home Office before releasing them.

The controversial plan to further expand IPA’s scope has triggered concern across the UK tech industry — which has suggested citizens’ security and privacy will be put at risk by the additional measures. Last summer Apple also warned it could be forced to shut down mainstream services like iMessage and FaceTime in the UK if the government did not rethink the expansion of surveillance powers.

There’s some irony in the latest law enforcement-led lobbying campaign aimed at derailing the onward march of E2EE across mainstream digital services hinging on a plea by police chiefs against binary arguments in favor of privacy — given there has almost certainly never been more signals intelligence available for law enforcement and security services to scoop up to feed their investigations, even factoring in the rise of E2EE. So the idea that improved web security will suddenly spell the end of child safeguarding efforts is itself a distinctly binary claim.

However, anyone familiar with the decades-long crypto wars won’t be surprised to see double-standard pleas being deployed in a bid to weaken online security, as that’s how this propaganda war has always been waged.



European car manufacturer will pilot Sanctuary AI's humanoid robot | TechCrunch


Sanctuary AI announced that it will be delivering its humanoid robot to a Magna manufacturing facility. Based in Canada, with auto manufacturing facilities in Austria, Magna manufactures and assembles cars for a number of Europe’s top automakers, including Mercedes, Jaguar and BMW. As is often the nature of these deals, the parties have not disclosed how many of Sanctuary AI’s robots will be deployed.

The news follows similar deals announced by Figure and Apptronik, which are piloting their own humanoid systems with BMW and Mercedes, respectively. Agility also announced a deal with Ford at CES in January 2020, though that agreement found the American carmaker exploring the use of Digit units for last-mile deliveries. Agility has since put that functionality on the back burner, focusing on warehouse deployments through partners like Amazon.

For its part, Magna invested in Sanctuary AI back in 2021 — right around the time Elon Musk announced plans to build a humanoid robot to work in Tesla factories. The company would later dub the system “Optimus.” Vancouver-based Sanctuary unveiled its own system, Phoenix, back in May of last year. The system stands 5’7” (a pretty standard height for these machines) and weighs 155 pounds.

Phoenix isn’t Sanctuary’s first humanoid (an early model had been deployed at a Canadian retailer), but it is the first to walk on legs — this is in spite of the fact that most available videos only highlight the system’s torso. The company has also focused some of its efforts on creating dexterous hands — an important addition if the system is expected to expand functionality beyond moving around totes.

Sanctuary calls the pilot “a multi-disciplinary assessment of improving cost and scalability of robots using Magna’s automotive product portfolio, engineering and manufacturing capabilities; and a strategic equity investment by Magna.”

As ever, these agreements should be taken as what they are: pilots. They’re not exactly validation of the form factor and systems — that comes later, if Magna gets what it’s looking for from the deal, which comes down to three big letters: ROI.

The company isn’t disclosing specifics with regard to the number of robots, the length of the pilot or even the specific factory where they will be deployed.

