From Digital Age to Nano Age. WorldWide.


Robotic Automations

Shein to face EU's strictest rules for online marketplaces | TechCrunch


Ultra-fast fashion ecommerce giant Shein will be subject to an additional layer of governance rules targeted at very large online platforms (VLOPs) under the European Union’s Digital Services Act (DSA), the Commission announced Friday.

Shein had reported passing an average of 45 million monthly users in the region — which is the threshold for the EU to designate VLOPs under the DSA.

The designation is important as it means the Singapore-headquartered marketplace will soon have to comply with the strictest level of online governance — requiring it to take steps to identify and mitigate systemic risks, such as those related to the sale of counterfeit or illegal goods, or other types of content that could harm consumers’ well-being.

Other DSA obligations for VLOPs include a requirement to publish an ads library, as well as providing access to platform data to external researchers studying systemic risk.

Shein joins roughly two dozen platforms already designated as VLOPs or VLOSEs (very large online search engines) by the EU. Other VLOP marketplaces include the likes of AliExpress, which is already under investigation by the Commission for suspected breaches of the DSA; Amazon, which has challenged its designation (but remains subject to the rules in the meantime); Booking.com; and Zalando.

The DSA’s general obligations already applied to Shein, as one of likely thousands of online services in scope of the general rules. But being named a VLOP amps up the regulatory risk for the fast-fashion giant. The EU will expect Shein’s first risk assessment report to be submitted in four months’ time.

Penalties for failing to comply with the DSA, meanwhile, can reach up to 6% of global annual turnover. The maximum fine does not increase for VLOPs, but with more obligations piled on them, the level of regulatory risk they’re subject to certainly rises.

So far, no platforms or services have been found to have breached the DSA, so it remains to be seen how penalties might be meted out in practice. But it’s logical that larger platforms could also face stiffer fines for any compliance failures.
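To make the scale of that exposure concrete, here is a minimal sketch of the penalty ceiling arithmetic. The 6% cap comes from the DSA itself; the turnover figure used in the example is purely hypothetical, not Shein's actual revenue.

```python
# Illustrative only: the 6% cap is the DSA's stated maximum;
# the turnover figure below is hypothetical.
DSA_MAX_FINE_RATE = 0.06  # up to 6% of global annual turnover


def max_dsa_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound on a DSA penalty for a given global annual turnover."""
    return DSA_MAX_FINE_RATE * global_annual_turnover_eur


# For a hypothetical 30 billion euros in global annual turnover:
print(f"Maximum fine: EUR {max_dsa_fine(30e9) / 1e9:.1f}B")  # EUR 1.8B
```

Because the cap is a percentage rather than a fixed sum, the theoretical maximum fine grows in line with a platform's revenue, which is one reason designation matters more to the largest players.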

While fashion was Shein’s initial product focus, the ecommerce giant has been rapidly expanding its inventory into a far broader marketplace, covering a growing range of lifestyle and homeware categories (such as cosmetics, school supplies and pet products).

Its tactic of offering a vast range of fashion-focused goods, typically at bargain-basement prices, means the marketplace is especially popular with young users. However, it’s a dynamic that could amp up the regulatory risk for Shein, as the Commission has said its priorities in enforcing the DSA include homing in on risks related to child protection and marketplace safety. Cheap goods may also not meet the highest safety standards.

“The Commission services will carefully monitor the application of the DSA rules and obligations by the platform, especially concerning measures to guarantee consumer protection and address the dissemination of illegal products,” the EU wrote in a press release accompanying Shein’s designation. It added that it is “ready to engage closely with Shein to ensure these are properly addressed”.

Prior to Shein being designated a VLOP, oversight of its compliance with the DSA fell to the Irish Digital Services Coordinator (IDSC), as its EMEA HQ is located in Dublin. But the Commission enforces the subset of DSA rules that apply to VLOPs, so it will be taking up the oversight baton on the marketplace — alongside the IDSC’s ongoing supervision of Shein’s compliance with the rulebook’s general obligations.


Software Development in Sri Lanka


Senate passes a bill forcing TikTok to face a ban if ByteDance doesn't sell it | TechCrunch


The Senate passed a bill, included in the foreign aid package, that will ban TikTok if its owner, ByteDance, doesn’t sell it within a year. Senators passed the bill 79-18 Tuesday, after the House passed it with an overwhelming majority over the weekend.

President Joe Biden will have to sign the bill to make it law, and as per a statement released by the White House, he intends to do so on Wednesday.

Notably, in March, the House passed a similar standalone bill to ban TikTok or force its sale with a six-month time limit. However, the Senate never took that bill up. This time, as the bill was tied with critical foreign aid to Ukraine, Israel, and Taiwan, the Senate had to make a decision.

TikTok didn’t immediately release a statement. However, Michael Beckerman, the company’s head of public policy for the Americas, said the company plans to challenge the move in the courts, according to Bloomberg.

“This is an unprecedented deal worked out between the Republican Speaker and President Biden. At the stage that the bill is signed, we will move to the courts for a legal challenge,” he said in a memo to TikTok’s US staff earlier this week.

The bill gives ByteDance nine months to complete a sale, with a possible 90-day extension, so effectively a year to complete the deal.

Last week, when the House passed the bill, TikTok said it was “unfortunate” that the House was using the cover of important foreign and humanitarian assistance to jam through a ban bill that restricts the “free speech rights of 170 million Americans.”

While TikTok operates out of Singapore, the U.S. has been concerned about its citizens’ data, given the Chinese ownership of the social media platform. TikTok has repeatedly tried to assure the government, through various campaigns, that it doesn’t hand over U.S. user data to China.






Hugging Face releases a benchmark for testing generative AI on health tasks | TechCrunch


Generative AI models are increasingly being brought to healthcare settings — in some cases prematurely, perhaps. Early adopters believe that they’ll unlock increased efficiency while revealing insights that’d otherwise be missed. Critics, meanwhile, point out that these models have flaws and biases that could contribute to worse health outcomes.

But is there a quantitative way to know how helpful, or harmful, a model might be when tasked with things like summarizing patient records or answering health-related questions?

Hugging Face, the AI startup, proposes a solution in a newly released benchmark test called Open Medical-LLM. Created in partnership with researchers at the nonprofit Open Life Science AI and the University of Edinburgh’s Natural Language Processing Group, Open Medical-LLM aims to standardize evaluating the performance of generative AI models on a range of medical-related tasks.

Open Medical-LLM isn’t a from-scratch benchmark, per se, but rather a stitching-together of existing test sets — MedQA, PubMedQA, MedMCQA and so on — designed to probe models for general medical knowledge and related fields, such as anatomy, pharmacology, genetics and clinical practice. The benchmark contains multiple choice and open-ended questions that require medical reasoning and understanding, drawing from material including U.S. and Indian medical licensing exams and college biology test question banks.
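The test sets Open Medical-LLM aggregates are largely multiple-choice, so the core scoring metric is simple accuracy over the model's chosen options. Below is a minimal sketch of that scoring step; the answer keys and model choices are invented for illustration and are not drawn from MedQA, PubMedQA or MedMCQA.

```python
# Minimal sketch of how multiple-choice medical QA benchmarks are typically
# scored: accuracy over the model's selected options. The sample data below
# is hypothetical, not taken from any of the aggregated test sets.


def accuracy(predictions: list[str], gold: list[str]) -> float:
    """Fraction of questions where the predicted option matches the answer key."""
    assert len(predictions) == len(gold), "prediction/key length mismatch"
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)


# Hypothetical answer keys and model outputs for four questions (options A-D).
gold_answers = ["B", "A", "D", "C"]
model_choices = ["B", "A", "C", "C"]

print(f"Accuracy: {accuracy(model_choices, gold_answers):.2f}")  # 0.75
```

A leaderboard like Open Medical-LLM's then averages such per-dataset accuracies across its constituent test sets — which is exactly why, as the critics quoted below note, a single aggregate number can mask how a model behaves on any one clinical task.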

“[Open Medical-LLM] enables researchers and practitioners to identify the strengths and weaknesses of different approaches, drive further advancements in the field and ultimately contribute to better patient care and outcome,” Hugging Face wrote in a blog post.

Image Credits: Hugging Face

Hugging Face is positioning the benchmark as a “robust assessment” of healthcare-bound generative AI models. But some medical experts on social media cautioned against putting too much stock into Open Medical-LLM, lest it lead to ill-informed deployments.

On X, Liam McCoy, a resident physician in neurology at the University of Alberta, pointed out that the gap between the “contrived environment” of medical question-answering and actual clinical practice can be quite large.

Hugging Face research scientist Clémentine Fourrier, who co-authored the blog post, agreed.

“These leaderboards should only be used as a first approximation of which [generative AI model] to explore for a given use case, but then a deeper phase of testing is always needed to examine the model’s limits and relevance in real conditions,” Fourrier replied on X. “Medical [models] should absolutely not be used on their own by patients, but instead should be trained to become support tools for MDs.”

It brings to mind Google’s experience when it tried to bring an AI screening tool for diabetic retinopathy to healthcare systems in Thailand.

Google created a deep learning system that scanned images of the eye, looking for evidence of retinopathy, a leading cause of vision loss. But despite high theoretical accuracy, the tool proved impractical in real-world testing, frustrating both patients and nurses with inconsistent results and a general lack of harmony with on-the-ground practices.

It’s telling that of the 139 AI-related medical devices the U.S. Food and Drug Administration has approved to date, none use generative AI. It’s exceptionally difficult to test how a generative AI tool’s performance in the lab will translate to hospitals and outpatient clinics, and, perhaps more importantly, how the outcomes might trend over time.

That’s not to suggest Open Medical-LLM isn’t useful or informative. The results leaderboard, if nothing else, serves as a reminder of just how poorly models answer basic health questions. But neither Open Medical-LLM nor any other benchmark, for that matter, is a substitute for carefully thought-out real-world testing.




