From Digital Age to Nano Age. WorldWide.


Robotic Automations

TikTok sues the US government over law that could ban the app | TechCrunch


TikTok is suing the United States government in an effort to block a law that would ban the app if its parent company, ByteDance, fails to sell it within a year. The lawsuit, filed on Tuesday, argues that the law violates the U.S. Constitution’s commitment to “both free speech and individual liberty.”

“For the first time in history, Congress has enacted a law that subjects a single, named speech platform to a permanent, nationwide ban, and bars every American from participating in a unique online community with more than 1 billion people worldwide,” the lawsuit reads. “That law — the Protecting Americans From Foreign Adversary Controlled Applications Act (the “Act”) — is unconstitutional.”

The lawsuit comes two weeks after President Biden signed the bill, which also included aid for Ukraine and Israel. The bill gives ByteDance until January 19, 2025, to sell the app or face a ban, bringing the possibility of a TikTok ban closer to reality than ever before.

TikTok argues that the U.S. government has not offered evidence to support its claims that the app poses risks to national security.

“The statements of congressional committees and individual Members of Congress during the hasty, closed-door legislative process preceding the Act’s enactment confirm that there is at most speculation, not ‘evidence,’ as the First Amendment requires,” the lawsuit reads.

TikTok goes on to say that the law is effectively seeking to ban the app, arguing that it is not possible to sell TikTok within the 270-day timeline it has been given.

“Petitioners have repeatedly explained this to the U.S. government, and sponsors of the Act were aware that divestment is not possible,” the lawsuit states. “There is no question: the Act will force a shutdown of TikTok by January 19, 2025, silencing the 170 million Americans who use the platform to communicate in ways that cannot be replicated elsewhere.”

Even if ByteDance wanted to sell the app, the Chinese government would likely block a sale because it would need to approve the transfer of TikTok’s algorithms. TikTok goes on to state that a sale would be technologically impossible, as “millions of lines of software code” would need to be moved to a new owner.

The lawsuit follows four years of allegations from the U.S. government that TikTok’s ties to China pose a national security risk and expose Americans’ sensitive information to the Chinese government. TikTok has denied these allegations and said it has spent $2 billion to protect the data of U.S. users.

Lawmakers have also argued that TikTok has the potential to sway public opinion by deciding what it shows to users in its ‘For You’ feed.

When the U.S. government was seeking to ban TikTok under the Trump administration, TikTok considered selling its U.S. operations to an American company. Potential candidates included Oracle, Microsoft and Walmart, but none of these deals came to fruition. This time around, reports have indicated that ByteDance would prefer to shut down TikTok rather than sell it.


Software Development in Sri Lanka


TikTok ban signed into law by President Biden: How we got here, and what comes next | TechCrunch


TikTok faces an uncertain fate in the U.S. once again. A bill giving TikTok parent company ByteDance nine months to divest, or face a ban on U.S. app stores distributing the app, was signed by President Joe Biden on Wednesday as part of broader legislation that includes military aid for Israel and Ukraine. The White House’s approval comes swiftly after strong bipartisan approval in the House and a 79-18 Senate vote Tuesday in favor of moving the bill forward.

TikTok is based in Los Angeles and Singapore but is owned by Chinese tech giant ByteDance. That relationship has raised eyebrows among U.S. officials, who warn that the app could be leveraged to further the interests of an adversary. The bill’s critics argue that the U.S. is unfairly targeting a well-loved social network when the government could be dealing with household issues that directly benefit Americans.

What happened in the Senate?

Senate Majority Leader Chuck Schumer, who has the power to set the chamber’s priorities and round up Democrats for a unified vote, initially said that the Senate “will review the legislation when it comes over from the House.”

The Senate at first seemed far from presenting a united front against TikTok. Some Republican China hawks like Sens. Josh Hawley and Marsha Blackburn were pushing their chamber of Congress to take up the bill. On the Democratic side, Senate Intelligence Committee Chairman Mark Warner issued a joint statement with his Republican committee counterpart, Marco Rubio, in support of a forced sale or ban for TikTok.

“We are united in our concern about the national security threat posed by TikTok — a platform with enormous power to influence and divide Americans whose parent company ByteDance remains legally required to do the bidding of the Chinese Communist Party,” Warner and Rubio said in an emailed statement. Their Senate committee, which is frequently briefed on national security matters, is particularly relevant given the nature of the concerns expressed by TikTok’s critics in Congress.

Late Tuesday, the Senate approved the $95 billion aid package — including aid for Taiwan and humanitarian aid for Gaza — that also contained the much-debated TikTok ban.

What happened in the House?

In March, the House Energy and Commerce Committee introduced a new bill designed to pressure ByteDance into selling TikTok. The bill marked a fresh push by the U.S. government to separate the company from its Chinese ownership or force it out of the country.

The bill, known as the Protecting Americans from Foreign Adversary Controlled Applications Act, would make it illegal for software with ties to U.S. adversaries to be distributed by U.S. app stores or supported by U.S. web hosts. Within the bill’s definitions, ownership by an entity based in an adversary country, like ByteDance in China, counts.

In the language of the bill, which goes on to name TikTok explicitly, “it shall be unlawful for an entity to distribute, maintain, or update (or enable the distribution, maintenance, or updating of) a foreign adversary controlled application.” If the bill became law, Apple’s App Store and Google Play could not legally distribute the app in the U.S.

The bill, which many of its detractors reasonably describe as a “ban,” would force ByteDance to sell TikTok within six months for the app to continue operating here. It also empowers the president to have oversight of this process to ensure that it results in the company in question “no longer being controlled by a foreign adversary.”

After getting wind of the bill’s swift and sudden progress in Congress, TikTok pushed back with a mass in-app message to U.S. users, complete with a button for calling their representatives.

“Speak up now — before your government strips 170 million Americans of their Constitutional right to free expression,” the message read. “Let Congress know what TikTok means to you and tell them to vote NO.”

In spite of TikTok’s decision to rile up its users — or perhaps because of it — the bill to force ByteDance to sell TikTok passed through the House Energy and Commerce Committee with a 50-0 vote. The fast-tracked bill passed a full vote in the House on March 13.

Prior to the vote, subcommittee members had a classified briefing with the FBI, the Justice Department and Office of the Director of National Intelligence at the behest of the Biden administration, Punchbowl News reported.

President Biden also explicitly said that he would sign the bill if it reaches his desk. “If they pass it, I’ll sign it,” Biden told a group of reporters. And Biden followed through with that statement in signing the bill Wednesday.

Why does the U.S. say TikTok is a threat?

To be clear, there is currently no public evidence that China has ever tapped into TikTok’s stores of data on Americans or otherwise compromised the app.

Still, that fact hasn’t stopped the U.S. government from highlighting the possibility that China could if it wanted to. The Chinese government hasn’t been shy about going hands-on with companies in the country or keeping critics from its business community in line.

FBI director Chris Wray once cautioned that users might not see “outward signs” if China were ever to meddle with TikTok. “Something that’s very sacred in our country — the difference between the private sector and the public sector — that’s a line that is nonexistent in the way the CCP operates,” Wray said in a Senate hearing last year.

TikTok has vehemently denied these accusations. “Let me state this unequivocally: ByteDance is not an agent of China or any other country,” TikTok CEO Shou Zi Chew said last year during a separate hearing with the House Energy and Commerce Committee.

To TikTok’s credit, if China wanted to get its hands on information about U.S. users, Beijing could easily turn to data brokers who openly sell troves of user data around the globe with little oversight.

Because the U.S. has not produced any public evidence to back up its serious claims, there’s a major disconnect between how politicians feel about TikTok and how most Americans do. For many TikTok users, the U.S. crackdown is just one more way that politicians are out of touch with young people and don’t understand how they use the internet. For them — and other skeptics of the U.S. government’s claims — the situation looks like pure political posturing between two countries with bad blood, sometimes with a dash of racism.

Where did this idea come from?

The campaign to force ByteDance to sell TikTok to a U.S. company originated with an executive order during the Trump administration. Trump’s threats against the company culminated in a plan to force TikTok to sell its U.S. operations to Oracle in late 2020. In the process, TikTok rejected an acquisition offer from Microsoft but ultimately didn’t sell to Oracle, either, in spite of Trump’s efforts to steer the acquisition to benefit close ally and Republican mega donor Larry Ellison.

The executive action ultimately fizzled in 2021 after Biden took office. But last year, the Biden administration picked up the baton, escalating a pressure campaign against the app along with Congress. Now that campaign looks to be back on track.

Oddly, former President Donald Trump, who himself initiated the idea of a forced TikTok sale four years ago, is no longer in support of a TikTok crackdown. Trump explained his abrupt about-face on TikTok by highlighting the benefit a ban or forced sale could have on Meta, which suspended the former president’s account over his role in inciting violence on January 6.

“Without TikTok, you can make Facebook bigger, and I consider Facebook to be an enemy of the people,” Trump told CNBC. Trump’s tune on TikTok may have changed following a recent meeting with billionaire Republican donor Jeffrey Yass, who owns a 15% stake in TikTok’s Chinese parent company ByteDance.

What’s TikTok’s response to the potential ban?

There is some strong bipartisan congressional support for regulating TikTok, but things are still pretty complex. The most obvious complication: TikTok is enormously popular and we’re in an election year. TikTok has 170 million users in the U.S. and they aren’t likely to quietly watch as Congress effectively bans their favorite source of entertainment and information.

TikTok’s creators and their followers likely won’t go quietly. TikTok accounts with millions of followers have a built-in platform for organizing against the threat to the app that connects them to their communities and facilitates brand deals and advertising income.

TikTok itself would also surely mount a strong legal challenge against the forced sale, much as it did when the Trump administration previously tried to accomplish the same thing through executive action. TikTok also sued when Montana attempted to enact its own ban at the state level, which ultimately resulted in a federal judge issuing an injunction and blocking the effort as unconstitutional.

“This legislation has a predetermined outcome: a total ban of TikTok in the United States,” TikTok spokesperson Alex Haurek told TechCrunch in an emailed statement. “The government is attempting to strip 170 million Americans of their Constitutional right to free expression,” Haurek said, foreshadowing the massive public outcry that could result.

The cultural reach of TikTok is so great that Biden is campaigning on TikTok, even as the White House calls the app a national security threat.

Even though the White House has now signed off on the legislation, the U.S. scheme to force ByteDance to sell TikTok could still fail — an outcome that may or may not result in a ban. China has previously stated that it would oppose a forced sale of TikTok, which is well within the Chinese government’s rights following an update to the country’s export rules in late 2020.

Beyond Congress and the courts, TikTok holds a direct line to a massive chunk of the American electorate and a fleet of creators who command many millions of loyal followers. Those levers of power shouldn’t be underestimated in the fight to come.

Still, it’s difficult for TikTok to more effectively organize these millions. Though the X platform, when it operated as Twitter, was highly efficient as a mechanism to share breaking news, TikTok’s algorithms make it less effective as a means of understanding what is happening minute by minute. Though TikTok users say it has become a source of news — among adults, those ages 18 to 29 are most likely to say they receive their news regularly on TikTok — that information tends to be highly targeted and asynchronous. While many users may know something is brewing in Washington, it’s likely they are less aware of the steps required to fight it, making it harder for TikTok to mobilize them.

This post was originally posted March 13, and has been updated as the legislation moves forward.





Lawmakers vote to reauthorize US spying law that critics say expands government surveillance | TechCrunch


Lawmakers passed legislation early Saturday reauthorizing and expanding a controversial U.S. surveillance law shortly after its powers expired at midnight, overriding opposition from privacy advocates and some lawmakers.

The bill, which passed on a 60-34 vote, reauthorizes powers known as Section 702 under the Foreign Intelligence Surveillance Act (FISA), which allows the government to collect the communications of foreign individuals by accessing records from tech and phone providers. Critics, including lawmakers who voted against the reauthorization, say FISA also sweeps up the communications of Americans while spying on its foreign targets.

White House officials and spy chiefs rallied behind efforts to reauthorize FISA, arguing the law prevents terrorist and cyber attacks and that a lapse in powers would harm the U.S. government’s ability to gather intelligence. The Biden administration claims the majority of the classified information in the president’s daily intelligence briefing derives from the Section 702 program.

Privacy advocates and rights groups rejected the reauthorization of FISA, which does not require the FBI or the NSA to obtain a warrant before searching the Section 702 database for Americans’ communications. Accusations that the FBI and the NSA abused their authority to conduct warrantless searches on Americans’ communications became a key challenge for some Republicans initially seeking greater privacy protections.

Bipartisan efforts aimed to require the government to obtain a warrant before searching its databases for Americans’ communications, but these failed ahead of the final vote on the Senate floor.

Following the passage in the early hours of today, Senator Mark Warner, who chairs the Senate Intelligence Committee, said that FISA was “indispensable” to the U.S. intelligence community.

The bill now goes to the president’s desk, where it will almost certainly be signed into law.

FISA became law in 1978 prior to the advent of the modern internet. It started to come under increased public scrutiny in 2013 after a massive leak of classified documents exposed the U.S. government’s global wiretapping program under FISA, which implicated several major U.S. tech companies and phone companies as unwilling participants.

The Senate was broadly expected to pass the surveillance bill into law, but it faced fresh opposition after the House last week passed its version of the legislation, which critics said would extend the reach of FISA to smaller companies and telecom providers not previously subject to the surveillance law.

Communications providers largely opposed the House’s expanded definition of an “electronic communications service provider,” which they said would unintentionally include companies beyond the big tech companies and telecom providers who are already compelled to hand over users’ data.

An amendment, introduced by Sen. Ron Wyden, to remove the expanded measure from the bill failed to pass in a vote.

Wyden, a Democratic privacy hawk and member of the Senate Intelligence Committee, accused senators of waiting “until the 11th hour to ram through renewal of warrantless surveillance in the dead of night.”

“Time after time anti-reformers pledge that their band-aid changes to the law will curb abuses, and yet every time, the public learns about fresh abuses by officials who face little meaningful oversight,” said Wyden in a statement.

In the end, the bill passed soon after midnight.

Despite the last-minute rush to pass the bill, a key provision in FISA prevents the government’s programs under Section 702 from suddenly shutting down in the event of lapsed legal powers. FISA requires the government to seek an annual certification from the secretive FISA Court, which oversees and approves the government’s surveillance programs. The FISA Court last certified the government’s surveillance program under Section 702 in early April, allowing the government to use its lapsed authority until at least April 2025.

FISA will now expire at the end of 2026, setting up a similar legislative showdown midway through the next U.S. administration.



Could Congress actually pass a data privacy law? | TechCrunch


Hello, and welcome back to Equity, a podcast about the business of startups, where we unpack the numbers and nuance behind the headlines. This is our Monday show, where we dig into the weekend and take a peek at the week that is to come.

Now that we are finally past Y Combinator’s demo day — though our Friday show is worth listening to if you haven’t had a chance yet — we can dive into the latest news. So, this morning on Equity Monday we got into the chance that the United States might pass a real data privacy law. There’s movement to report, but we’re still very, very far from anything becoming law.

Elsewhere, the U.S. and TSMC have a new deal, there’s gaming news to consider (and a venture tie-in) and Spotify’s latest AI plans, which I am sure will delight some and annoy others. Hit play, and let’s talk about the news!

Oh, and on the crypto front, I forgot to mention that trading volume of digital tokens seems to have partially arrested its free fall, which should help some exchanges breathe a bit more easily.

Equity is TechCrunch’s flagship podcast and posts every Monday, Wednesday and Friday, and you can subscribe to us on Apple Podcasts, Overcast, Spotify and all the casts.

You also can follow Equity on X and Threads, at @EquityPod.

For the full interview transcript, for those who prefer reading over listening, read on, or check out our full archive of episodes over at Simplecast.





Lawhive raises $12M to expand its legaltech AI platform for small firms | TechCrunch


UK-based legaltech company Lawhive, which offers an AI-based in-house ‘lawyer’ through a software-as-a-service platform targeted at small law firms, has raised £9.5 million ($11.9M) in a seed round to expand the reach of AI-driven services for ‘main street’ law firms.

To date, most legaltech startups deploying AI have concentrated on the big, juicy market of ‘Big Law’ — meaning large, either country-wide or global, law firms that are keenly pushing AI into their workflows. These include Harvey (US-based; raised $106M), Robin AI (UK-based; raised $43.4M) and Spellbook (Canada-based; raised $32.4M). But startups have paid scant attention to the thousands of ‘main street’ lawyers, who have far smaller budgets and are harder to monetize.

Lawhive targets its platform at small law firms or solo lawyers running their own shop. Lawyers can use its software to onboard and manage their own clients or be matched with consumers and small businesses through a marketplace feature.

The startup applies a variety of foundational AI models, plus its own in-house model, to summarise documents and speed up the legal process for both lawyer and client across repetitive administrative tasks such as KYC/AML, client onboarding and document collection. Lawhive says its in-house AI lawyer, “Lawrence”, is built on top of its own large language model (LLM), which it claims has passed the Solicitors Qualifying Examination (SQE), scoring 81% against a pass mark of 55%.
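To make the division of labour concrete, here is a minimal, hypothetical sketch of how such a platform might route tasks between an in-house model and general-purpose foundation models; the task names, model labels and routing table are illustrative assumptions, not Lawhive’s actual architecture:

```python
# Hypothetical task router: repetitive, well-scoped admin tasks go to a
# fine-tuned in-house model, while anything open-ended falls back to a
# general-purpose foundation model. All names here are illustrative.

ROUTES = {
    "document_summary": "in-house-llm",
    "kyc_aml_check": "in-house-llm",
    "client_onboarding": "in-house-llm",
    "document_collection": "in-house-llm",
}

def route_task(task_type: str) -> str:
    # Unknown or open-ended task types default to a foundation model.
    return ROUTES.get(task_type, "foundation-model")
```

In practice the routing decision would likely weigh cost, latency and confidence rather than a static table, but the principle of reserving the specialised model for high-volume, repetitive work is the same.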

Speaking to TechCrunch over a call, Pierre Proner, CEO and co-founder of Lawhive, said: “Pretty much all of the existing legaltech — AI companies like Harvey or Robin AI, or Spellbook — all go after the corporate market. That’s a very small number of big law firms in the US and in the UK. We’re trying to solve the problem in the consumer legal space, which is totally different and a separate market, both in the UK and globally. It’s served at the moment by — in the UK — 10,000 small law firms.”

He said small firms have faced higher costs and a shrinking market: “They’ve got all of these high costs of staffing and paralegals and junior lawyers, trainees, etc, etc. And they only have one to three actual senior lawyers who are earning any money. So the model doesn’t work. There’s this huge exodus of like mid-career lawyers from the main-street/high street model, and a lot of them are going freelance self employed, and that’s where we’ve sort of seen a lot of traction through our platform of self-employed lawyers who use our AI lawyer.”

Although the UK consumer legal market is worth an estimated £25BN, like most legal markets, it’s groaning under the weight of its own costs. This means around 3.6 million people have an unmet legal need involving a dispute each year and around a million small businesses handle their legal issues on their own. So there’s a strong opportunity for automation to help the sector dial up productivity.

Proner added: “We do combine with foundational models from OpenAI and Anthropic, and as well as open source models. But it is our own model, which has been trained on the data that we’ve been able to gather from 1,000s of cases.”

The startup plans to use the seed round to enter other markets, per Proner: “We have our eyes on other markets yet to be publicly disclosed.”

It might be possible to infer where the planned market expansion will focus by looking at Lawhive’s lead investor: The seed round was led by GV, the venture capital investment arm of Alphabet, the US-based parent of Google. Also participating is London’s Episode 1 Ventures, following a £1.5M investment in April 2022.

In a statement, Vidu Shanmugarajah, partner at GV, said: “As a lawyer by training, I have experienced firsthand how needed technology-driven innovation is in the legal sector. Lawhive represents a transformative shift for both lawyers and consumers.”



Uber Eats courier's fight against AI bias shows justice under UK law is hard won | TechCrunch


On Tuesday, the BBC reported that Uber Eats courier Pa Edrissa Manjang, who is Black, had received a payout from Uber after “racially discriminatory” facial recognition checks prevented him from accessing the app, which he had been using since November 2019 to pick up jobs delivering food on Uber’s platform.

The news raises questions about how fit U.K. law is to deal with the rising use of AI systems. In particular, it highlights the lack of transparency around automated systems that are rushed to market with promises of boosting user safety and/or service efficiency: systems that risk scaling individual harms rapidly, even as achieving redress for those affected by AI-driven bias can take years.

The lawsuit followed a number of complaints about failed facial recognition checks since Uber implemented the Real Time ID Check system in the U.K. in April 2020. Uber’s facial recognition system — based on Microsoft’s facial recognition technology — requires the account holder to submit a live selfie checked against a photo of them held on file to verify their identity.
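As a rough illustration of the flow described above, here is a minimal sketch of an embedding-based selfie check with a human-review fallback. The threshold value, function names and escalation policy are assumptions for illustration only, not Uber’s or Microsoft’s actual implementation:

```python
import math

MATCH_THRESHOLD = 0.90  # assumed cut-off; real systems tune this empirically

def cosine_similarity(a, b):
    # Similarity between two face-embedding vectors (e.g. from a vision model).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def check_selfie(live_embedding, on_file_embedding):
    # Verify automatically only on a confident match; anything else is
    # escalated to a human reviewer rather than triggering account action.
    score = cosine_similarity(live_embedding, on_file_embedding)
    return "verified" if score >= MATCH_THRESHOLD else "human_review"
```

The point of the sketch is the escalation path: the failures described in this case suggest that both the automated comparison and the human-review backstop can go wrong, so neither step should be treated as infallible.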

Failed ID checks

Per Manjang’s complaint, Uber suspended and then terminated his account following a failed ID check and subsequent automated process, claiming to find “continued mismatches” in the photos of his face he had taken for the purpose of accessing the platform. Manjang filed legal claims against Uber in October 2021, supported by the Equality and Human Rights Commission (EHRC) and the App Drivers & Couriers Union (ADCU).

Years of litigation followed, with Uber unsuccessfully seeking to have Manjang’s claim struck out or to have a deposit ordered as a condition of continuing with the case. The tactic appears to have contributed to stringing out the litigation, with the EHRC describing the case as still in “preliminary stages” in fall 2023, and noting that it shows “the complexity of a claim dealing with AI technology”. A final hearing had been scheduled for 17 days in November 2024.

That hearing won’t take place after Uber offered — and Manjang accepted — a payment to settle, meaning fuller details of what exactly went wrong and why won’t be made public. Terms of the financial settlement have not been disclosed, either. Uber did not provide details when we asked, nor did it offer comment on exactly what went wrong.

We also contacted Microsoft for a response to the case outcome, but the company declined comment.

Despite settling with Manjang, Uber is not publicly accepting that its systems or processes were at fault. Its statement about the settlement denies courier accounts can be terminated as a result of AI assessments alone, as it claims facial recognition checks are back-stopped with “robust human review.”

“Our Real Time ID check is designed to help keep everyone who uses our app safe, and includes robust human review to make sure that we’re not making decisions about someone’s livelihood in a vacuum, without oversight,” the company said in a statement. “Automated facial verification was not the reason for Mr Manjang’s temporary loss of access to his courier account.”

Clearly, though, something went very wrong with Uber’s ID checks in Manjang’s case.

Pa Edrissa Manjang (Photo: Courtesy of ADCU)

Worker Info Exchange (WIE), a platform workers’ digital rights advocacy organization which also supported Manjang’s complaint, managed to obtain all his selfies from Uber, via a Subject Access Request under U.K. data protection law, and was able to show that all the photos he had submitted to its facial recognition check were indeed photos of himself.

“Following his dismissal, Pa sent numerous messages to Uber to rectify the problem, specifically asking for a human to review his submissions. Each time Pa was told ‘we were not able to confirm that the provided photos were actually of you and because of continued mismatches, we have made the final decision on ending our partnership with you’,” WIE recounts in discussion of his case in a wider report looking at “data-driven exploitation in the gig economy”.

Based on details of Manjang’s complaint that have been made public, it looks clear that both Uber’s facial recognition checks and the system of human review it had set up as a claimed safety net for automated decisions failed in this case.

Equality law plus data protection

The case calls into question how fit for purpose U.K. law is when it comes to governing the use of AI.

Manjang was finally able to get a settlement from Uber via a legal process based on equality law — specifically, a discrimination claim under the U.K.’s Equality Act 2010, which lists race as a protected characteristic.

Baroness Kishwer Falkner, chairwoman of the EHRC, wrote in a statement that she was critical of the fact that the Uber Eats courier had to bring a legal claim “in order to understand the opaque processes that affected his work.”

“AI is complex, and presents unique challenges for employers, lawyers and regulators. It is important to understand that as AI usage increases, the technology can lead to discrimination and human rights abuses,” she wrote. “We are particularly concerned that Mr Manjang was not made aware that his account was in the process of deactivation, nor provided any clear and effective route to challenge the technology. More needs to be done to ensure employers are transparent and open with their workforces about when and how they use AI.”

U.K. data protection law is the other relevant piece of legislation here. On paper, it should be providing powerful protections against opaque AI processes.

The selfie data relevant to Manjang’s claim was obtained using data access rights contained in the U.K. GDPR. If he had not been able to obtain such clear evidence that Uber’s ID checks had failed, the company might not have opted to settle at all. Proving a proprietary system is flawed without letting individuals access relevant personal data would further stack the odds in favor of the much richer resourced platforms.

Enforcement gaps

Beyond data access rights, powers in the U.K. GDPR are supposed to provide individuals with additional safeguards, including against automated decisions with a legal or similarly significant effect. The law also demands a lawful basis for processing personal data, and encourages system deployers to be proactive in assessing potential harms by conducting a data protection impact assessment. That should force further checks against harmful AI systems.

However, enforcement is needed for these protections to have effect — including a deterrent effect against the rollout of biased AIs.

In the U.K.’s case, the relevant enforcer, the Information Commissioner’s Office (ICO), failed to step in and investigate complaints against Uber, despite complaints about its misfiring ID checks dating back to 2021.

Jon Baines, a senior data protection specialist at the law firm Mishcon de Reya, suggests “a lack of proper enforcement” by the ICO has undermined legal protections for individuals.

“We shouldn’t assume that existing legal and regulatory frameworks are incapable of dealing with some of the potential harms from AI systems,” he tells TechCrunch. “In this example, it strikes me…that the Information Commissioner would certainly have jurisdiction to consider both in the individual case, but also more broadly, whether the processing being undertaken was lawful under the U.K. GDPR.

“Things like — is the processing fair? Is there a lawful basis? Is there an Article 9 condition (given that special categories of personal data are being processed)? But also, and crucially, was there a solid Data Protection Impact Assessment prior to the implementation of the verification app?”

“So, yes, the ICO should absolutely be more proactive,” he adds, querying the lack of intervention by the regulator.

We contacted the ICO about Manjang’s case, asking it to confirm whether or not it’s looking into Uber’s use of AI for ID checks in light of complaints. A spokesperson for the watchdog did not directly respond to our questions but sent a general statement emphasizing the need for organizations to “know how to use biometric technology in a way that doesn’t interfere with people’s rights”.

“Our latest biometric guidance is clear that organisations must mitigate risks that come with using biometric data, such as errors identifying people accurately and bias within the system,” its statement also said, adding: “If anyone has concerns about how their data has been handled, they can report these concerns to the ICO.”

Meanwhile, the government is in the process of diluting data protection law via a post-Brexit data reform bill.

In addition, the government confirmed earlier this year that it will not introduce dedicated AI safety legislation at this time, despite Prime Minister Rishi Sunak making eye-catching claims about AI safety being a priority area for his administration.

Instead, it affirmed a proposal — set out in its March 2023 whitepaper on AI — in which it intends to rely on existing laws and regulatory bodies extending oversight activity to cover AI risks that might arise on their patch. One tweak to the approach it announced in February was a tiny amount of extra funding (£10 million) for regulators, which the government suggested could be used to research AI risks and develop tools to help them examine AI systems.

No timeline was provided for disbursing this small pot of extra funds. Multiple regulators are in the frame here: last month the U.K. secretary of state wrote to 13 regulators and departments asking them to publish an update on their “strategic approach to AI”. If the cash were split equally between bodies such as the ICO, the EHRC and the Medicines and Healthcare products Regulatory Agency, to name just three, each could receive less than £1 million to top up budgets to tackle fast-scaling AI risks.

Frankly, it looks like an incredibly low level of additional resource for already overstretched regulators if AI safety is actually a government priority. It also means there’s still zero cash or active oversight for AI harms that fall between the cracks of the U.K.’s existing regulatory patchwork, as critics of the government’s approach have pointed out before.

A new AI safety law might send a stronger signal of priority — akin to the EU’s risk-based AI harms framework that’s speeding toward being adopted as hard law by the bloc. But there would also need to be a will to actually enforce it. And that signal must come from the top.


Oregon signs right to repair into law | TechCrunch


Oregon Governor Tina Kotek on Tuesday signed Senate Bill 1596 into law, joining California, Colorado, Maine, Massachusetts and Minnesota in a growing list of states embracing a right to repair for citizens. The law is set to go into effect January 1.

The bill’s coauthors, state Senator Janeen Sollman and Representative Courtney Neron, took inspiration from California’s Senate Bill 244, which passed toward the tail end of 2023. The lawmakers did, however, add a key provision that split industry representatives. Apple, in particular, has taken issue with the bill’s aggressive approach to outlawing parts pairing, a practice that requires the use of proprietary components in the repair process.

The iPhone maker, which had previously issued an unprecedented open letter in favor of the California bill, has said that it is mostly in favor of Oregon’s bill, with the above caveat.

“Apple agrees with the vast majority of Senate Bill 1596,” John Perry, Apple senior manager, Secure System Design, said in testimony to state lawmakers in February. “I have met with Senator Sollman several times and appreciate her willingness to engage in an open dialogue. Senate Bill 1596 is a step forward in making sure that the people of Oregon, myself included, can get their devices repaired easily and cost effectively.”

Apple has cited security concerns around opening the repair process to unauthorized parts — in particular biometric elements like fingerprint scanners. In a conversation with TechCrunch last month, Sollman expressed frustration over attempts to work with Apple on crafting the bill.

“People were coming to me with potential changes, and I felt like I was playing the game of operator, like I was being the one that was having to bring forward the changes, and not Apple themselves,” she said at the time. “That’s very frustrating. We entertained many of the changes that Apple brought forward that are in the California bill. There were two remaining items that were concerning to them. We’ve addressed one of them, because that was providing some ambiguity to the bill. And so, I think the one part that . . . they will stand on the hill on is the parts pairing.”

Google first stated its own approval of the bill back in January, calling it “a compelling model for other states to follow.” Repair groups have also championed the legislation.

“By eliminating manufacturer restrictions, the Right to Repair will make it easier for Oregonians to keep their personal electronics running. That will conserve precious natural resources and prevent waste,” OSPIRG (Oregon State Public Interest Research Group) director Charlie Fisher noted in a statement following the news. “It’s a refreshing alternative to a ‘throwaway’ system that treats everything as disposable.”

Apple declined to comment on the news.

