
Vinod Khosla is coming to Disrupt to discuss how AI might change the future | TechCrunch


Few figures in the tech industry have earned the storied reputation of Vinod Khosla, founder and partner at Khosla Ventures. For over 40 years, he has been at the center of Silicon Valley, building or backing some of the biggest, most influential tech companies. His career stems from his days as a co-founder of Sun Microsystems […]




Google's 3D video conferencing platform, Project Starline, is coming in 2025 with help from HP | TechCrunch


In 2021, Google kicked off work on Project Starline, a corporate-focused teleconferencing platform that uses 3D imaging, cameras and a custom-designed screen to let people converse with someone as if they were in the same room — more or less. Now, after years of testing and private technical previews, Google’s bringing Starline to customers in […]




Meta confirms that its Llama 3 open source LLM is coming in the next month | TechCrunch


At an event in London on Tuesday, Meta confirmed that it plans an initial release of Llama 3 — the next generation of its large language model used to power generative AI assistants — within the next month.

This confirms a report published on Monday by The Information that Meta was getting close to launch.

“Within the next month, actually less, hopefully in a very short period of time, we hope to start rolling out our new suite of next-generation foundation models, Llama 3,” said Nick Clegg, Meta’s president of global affairs. He described what sounds like the release of several different iterations or versions of the product. “There will be a number of different models with different capabilities, different versatilities [released] during the course of this year, starting really very soon.”

The plan, Meta Chief Product Officer Chris Cox added, will be to power multiple products across Meta with Llama 3.

Meta has been scrambling to catch up to OpenAI, which took it and other big tech companies like Google by surprise when it launched ChatGPT over a year ago and the app went viral, turning generative AI question-and-answer into an everyday, mainstream experience.

Meta has largely taken a very cautious approach with AI, but that hasn’t gone over well with the public: previous versions of Llama were criticized as too limited. (Llama 2 was released publicly in July 2023. The first version of Llama was not released to the public, yet it still leaked online.)

Llama 3, which is bigger in scope than its predecessors, is expected to address this, with the ability not just to answer questions more accurately but also to field a wider range of questions that might include more controversial topics. Meta hopes this will make the product catch on with users.

“Our goal over time is to make a Llama-powered Meta AI be the most useful assistant in the world,” said Joelle Pineau, vice president of AI Research. “There’s quite a bit of work remaining to get there.” The company did not discuss the parameter counts it’s using in Llama 3, nor did it offer any demos of how it would work. Llama 3 is expected to have about 140 billion parameters, compared to 70 billion for the biggest Llama 2 model.

Most notably, Meta’s Llama family of models, built as open source products, represents a different philosophical approach to how AI should develop as a wider technology. With that openness, Meta is hoping to win wider favor with developers than more proprietary models can.

But Meta is also playing it more cautiously, it seems, especially when it comes to other generative AI beyond text generation. The company is not yet releasing Emu, its image generation tool, Pineau said.

“Latency matters a lot along with safety along with ease of use, to generate images that you’re proud of and that represent whatever your creative context is,” Cox said.

Ironically — or perhaps predictably (heh) — even as Meta works to launch Llama 3, it does have some significant generative AI skeptics in the house.

Yann LeCun, the celebrated AI academic who is also Meta’s chief AI scientist, took a swipe at the limitations of generative AI overall and said his bet is on what comes after it. He predicts that will be joint embedding predictive architecture (JEPA), a different approach both to training models and producing results, which Meta has been using to build more accurate predictive AI in the area of image generation.

“The future of AI is JEPA. It’s not generative AI,” he said. “We’re going to have to change the name of Chris’s product division.”
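The core idea behind a joint-embedding predictive architecture is that the model predicts the representation of a hidden or future part of the input from the representation of the visible context, rather than regenerating raw pixels or tokens. Below is a minimal, hypothetical sketch of that training objective in PyTorch; the layer sizes, EMA update and MSE loss are illustrative assumptions, not Meta's actual I-JEPA implementation.

```python
# Minimal joint-embedding predictive (JEPA-style) training step.
# Illustrative sketch only, not Meta's I-JEPA implementation. Requires PyTorch.
import torch
import torch.nn as nn

DIM = 64  # embedding size, an assumption for the sketch

context_encoder = nn.Sequential(nn.Linear(128, DIM), nn.ReLU(), nn.Linear(DIM, DIM))
target_encoder = nn.Sequential(nn.Linear(128, DIM), nn.ReLU(), nn.Linear(DIM, DIM))
predictor = nn.Sequential(nn.Linear(DIM, DIM), nn.ReLU(), nn.Linear(DIM, DIM))

# The target encoder is a slow-moving (EMA) copy of the context encoder and
# receives no gradients: the model learns to predict in representation space,
# never to reconstruct the raw input.
target_encoder.load_state_dict(context_encoder.state_dict())
for p in target_encoder.parameters():
    p.requires_grad = False

opt = torch.optim.AdamW(
    list(context_encoder.parameters()) + list(predictor.parameters()), lr=1e-3
)

def train_step(visible_patch, hidden_patch, ema=0.996):
    """One JEPA-style step: predict the embedding of the hidden patch."""
    pred = predictor(context_encoder(visible_patch))
    with torch.no_grad():
        target = target_encoder(hidden_patch)
    loss = nn.functional.mse_loss(pred, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():  # EMA update of the target encoder
        for pt, pc in zip(target_encoder.parameters(), context_encoder.parameters()):
            pt.mul_(ema).add_(pc, alpha=1 - ema)
    return loss.item()

# Toy usage: random tensors standing in for visible context and hidden target.
print(train_step(torch.randn(8, 128), torch.randn(8, 128)))
```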



Generative AI is coming for healthcare, and not everyone's thrilled | TechCrunch


Generative AI, which can create and analyze images, text, audio, videos and more, is increasingly making its way into healthcare, pushed by both Big Tech firms and startups alike.

Google Cloud, Google’s cloud services and products division, is collaborating with Highmark Health, a Pittsburgh-based nonprofit healthcare company, on generative AI tools designed to personalize the patient intake experience. Amazon’s AWS division says it’s working with unnamed customers on a way to use generative AI to analyze medical databases for “social determinants of health.” And Microsoft Azure is helping to build a generative AI system for Providence, the not-for-profit healthcare network, to automatically triage messages sent from patients to care providers.

Prominent generative AI startups in healthcare include Ambience Healthcare, which is developing a generative AI app for clinicians; Nabla, an ambient AI assistant for practitioners; and Abridge, which creates analytics tools for medical documentation.

The broad enthusiasm is reflected in the investments in generative AI efforts targeting healthcare. Collectively, healthcare-focused generative AI startups have raised tens of millions of dollars in venture capital to date, and the vast majority of health investors say that generative AI has significantly influenced their investment strategies.

But both professionals and patients are mixed as to whether healthcare-focused generative AI is ready for prime time.

Generative AI might not be what people want

In a recent Deloitte survey, only about half (53%) of U.S. consumers said that they thought generative AI could improve healthcare — for example, by making it more accessible or shortening appointment wait times. Fewer than half said they expected generative AI to make medical care more affordable.

Andrew Borkowski, chief AI officer at the VA Sunshine Healthcare Network, the U.S. Department of Veterans Affairs’ largest health system, doesn’t think that the cynicism is unwarranted. Borkowski warned that generative AI’s deployment could be premature due to its “significant” limitations — and the concerns around its efficacy.

“One of the key issues with generative AI is its inability to handle complex medical queries or emergencies,” he told TechCrunch. “Its finite knowledge base — that is, the absence of up-to-date clinical information — and lack of human expertise make it unsuitable for providing comprehensive medical advice or treatment recommendations.”

Several studies suggest there’s credence to those points.

In a paper in the journal JAMA Pediatrics, OpenAI’s generative AI chatbot, ChatGPT, which some healthcare organizations have piloted for limited use cases, was found to make errors diagnosing pediatric diseases 83% of the time. And in testing OpenAI’s GPT-4 as a diagnostic assistant, physicians at Beth Israel Deaconess Medical Center in Boston observed that the model ranked the wrong diagnosis as its top answer nearly two times out of three.

Today’s generative AI also struggles with the medical administrative tasks that are part and parcel of clinicians’ daily workflows. On MedAlign, a benchmark that evaluates how well generative AI can perform tasks like summarizing patient health records and searching across notes, GPT-4 failed in 35% of cases.

OpenAI and many other generative AI vendors warn against relying on their models for medical advice. But Borkowski and others say they could do more. “Relying solely on generative AI for healthcare could lead to misdiagnoses, inappropriate treatments or even life-threatening situations,” Borkowski said.

Jan Egger, who leads AI-guided therapies at the University of Duisburg-Essen’s Institute for AI in Medicine, which studies the applications of emerging technology for patient care, shares Borkowski’s concerns. He believes that the only safe way to use generative AI in healthcare currently is under the close, watchful eye of a physician.

“The results can be completely wrong, and it’s getting harder and harder to maintain awareness of this,” Egger said. “Sure, generative AI can be used, for example, for pre-writing discharge letters. But physicians have a responsibility to check it and make the final call.”

Generative AI can perpetuate stereotypes

One particularly harmful way generative AI in healthcare can get things wrong is by perpetuating stereotypes.

In a 2023 study out of Stanford Medicine, a team of researchers tested ChatGPT and other generative AI–powered chatbots on questions about kidney function, lung capacity and skin thickness. Not only were ChatGPT’s answers frequently wrong, the co-authors found, but the answers also reinforced several long-held, untrue beliefs that there are biological differences between Black and white people — untruths that are known to have led medical providers to misdiagnose health problems.

The irony is, the patients most likely to be discriminated against by generative AI for healthcare are also those most likely to use it.

People who lack healthcare coverage — people of color, by and large, according to a KFF study — are more willing to try generative AI for things like finding a doctor or mental health support, the Deloitte survey showed. If the AI’s recommendations are marred by bias, it could exacerbate inequalities in treatment.

However, some experts argue that generative AI is improving in this regard.

In a Microsoft study published in late 2023, researchers said they achieved 90.2% accuracy on four challenging medical benchmarks using GPT-4. Vanilla GPT-4 couldn’t reach this score. But, the researchers say, through prompt engineering — designing prompts for GPT-4 to produce certain outputs — they were able to boost the model’s score by up to 16.2 percentage points. (Microsoft, it’s worth noting, is a major investor in OpenAI.)
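In practice, prompt engineering of this kind mostly means wrapping the question in scaffolding, such as worked examples and an instruction to reason step by step, before the model sees it. The snippet below is a rough, hypothetical sketch of that prompt assembly; the exemplar, template and wording are invented for illustration, and the Microsoft work layers further techniques, such as example selection and ensembling, on top.

```python
# Illustrative few-shot, chain-of-thought prompt assembly for a medical
# multiple-choice question. The exemplar and wording are invented for this
# sketch; they are not the prompts used in the Microsoft study.
FEW_SHOT_EXEMPLARS = [
    {
        "question": "A patient presents with polyuria, polydipsia and weight loss. "
                    "Most likely diagnosis? (A) Hypothyroidism (B) Type 1 diabetes (C) Anemia",
        "reasoning": "Polyuria, polydipsia and weight loss are classic symptoms of hyperglycemia.",
        "answer": "B",
    },
]

def build_prompt(question: str, choices: list[str]) -> str:
    """Assemble a few-shot, step-by-step prompt for one new question."""
    parts = ["Answer the medical question. Think step by step, then give a single letter.\n"]
    for ex in FEW_SHOT_EXEMPLARS:
        parts.append(f"Q: {ex['question']}\nReasoning: {ex['reasoning']}\nAnswer: {ex['answer']}\n")
    labeled_choices = " ".join(f"({chr(65 + i)}) {c}" for i, c in enumerate(choices))
    parts.append(f"Q: {question} {labeled_choices}\nReasoning:")
    return "\n".join(parts)

# The assembled string would be sent to the model (e.g. GPT-4) as the user prompt.
print(build_prompt("Which organ is primarily affected in hepatitis?", ["Liver", "Kidney", "Heart"]))
```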

Beyond chatbots

But asking a chatbot a question isn’t the only thing generative AI is good for. Some researchers say that medical imaging could benefit greatly from the power of generative AI.

In July, a group of scientists unveiled a system called complementarity-driven deferral to clinical workflow (CoDoC), in a study published in Nature. The system is designed to figure out when medical imaging specialists should rely on AI for diagnoses versus traditional techniques. CoDoC did better than specialists while reducing clinical workflows by 66%, according to the co-authors. 
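The deferral idea itself is easy to picture: for each case, a small decision rule looks at the AI system's confidence and decides whether to accept the AI's read or hand the case to a human specialist. The toy sketch below illustrates that routing logic with fixed thresholds; the published CoDoC system learns the deferral rule from data, so the numbers and field names here are invented for the example.

```python
# Toy illustration of the deferral idea behind CoDoC: per case, decide whether
# to accept the AI's prediction or hand the case to a specialist. The real
# system learns this decision from data; the thresholds and field names below
# are invented for the sketch.
from dataclasses import dataclass

@dataclass
class Case:
    case_id: str
    ai_score: float  # model's confidence that the finding is present, 0..1

LOW, HIGH = 0.1, 0.9  # assumed cutoffs; CoDoC fits the deferral rule instead

def route(case: Case) -> str:
    """Accept confident AI calls, defer ambiguous ones to the specialist."""
    if case.ai_score >= HIGH:
        return "accept AI: flag as suspicious"
    if case.ai_score <= LOW:
        return "accept AI: clear"
    return "defer to specialist"

for c in [Case("a", 0.97), Case("b", 0.02), Case("c", 0.55)]:
    print(c.case_id, "->", route(c))
```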

In November, a Chinese research team demoed Panda, an AI model used to detect potential pancreatic lesions in X-rays. A study showed Panda to be highly accurate in classifying these lesions, which are often detected too late for surgical intervention. 

Indeed, Arun Thirunavukarasu, a clinical research fellow at the University of Oxford, said there’s “nothing unique” about generative AI precluding its deployment in healthcare settings.

“More mundane applications of generative AI technology are feasible in the short- and mid-term, and include text correction, automatic documentation of notes and letters and improved search features to optimize electronic patient records,” he said. “There’s no reason why generative AI technology — if effective — couldn’t be deployed in these sorts of roles immediately.”
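As a concrete illustration of the "improved search" use case, the snippet below sketches keyword-similarity retrieval over a handful of invented clinical notes. A real deployment would use learned embeddings from a language model plus de-identification and access controls; this is only a toy illustration of the retrieval idea.

```python
# Tiny retrieval sketch over invented clinical notes using bag-of-words cosine
# similarity. A production system would use learned embeddings plus strict
# privacy controls; this only illustrates the search idea.
import math
from collections import Counter

NOTES = {
    "note-001": "Patient reports chest pain on exertion relieved by rest",
    "note-002": "Follow-up for type 2 diabetes HbA1c improved on metformin",
    "note-003": "Chest x-ray clear persistent cough likely post-viral",
}

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def search(query: str, top_k: int = 2):
    q = vectorize(query)
    ranked = sorted(
        ((note_id, cosine(q, vectorize(text))) for note_id, text in NOTES.items()),
        key=lambda item: item[1],
        reverse=True,
    )
    return ranked[:top_k]

print(search("chest pain"))
```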

“Rigorous science”

But while generative AI shows promise in specific, narrow areas of medicine, experts like Borkowski point to the technical and compliance roadblocks that must be overcome before generative AI can be useful — and trusted — as an all-around assistive healthcare tool.

“Significant privacy and security concerns surround using generative AI in healthcare,” Borkowski said. “The sensitive nature of medical data and the potential for misuse or unauthorized access pose severe risks to patient confidentiality and trust in the healthcare system. Furthermore, the regulatory and legal landscape surrounding the use of generative AI in healthcare is still evolving, with questions regarding liability, data protection and the practice of medicine by non-human entities still needing to be solved.”

Even Thirunavukarasu, bullish as he is about generative AI in healthcare, says that there needs to be “rigorous science” behind tools that are patient-facing.

“Particularly without direct clinician oversight, there should be pragmatic randomized control trials demonstrating clinical benefit to justify deployment of patient-facing generative AI,” he said. “Proper governance going forward is essential to capture any unanticipated harms following deployment at scale.”

Recently, the World Health Organization released guidelines that advocate for this type of science and human oversight of generative AI in healthcare, as well as the introduction of auditing, transparency and impact assessments on this AI by independent third parties. The goal, the WHO spells out in its guidelines, would be to encourage participation from a diverse cohort of people in the development of generative AI for healthcare and to give them an opportunity to voice concerns and provide input throughout the process.

“Until the concerns are adequately addressed and appropriate safeguards are put in place,” Borkowski said, “the widespread implementation of medical generative AI may be … potentially harmful to patients and the healthcare industry as a whole.”



The AltStore, an alternative app store coming to the EU, will offer Patreon-backed apps | TechCrunch


Apple’s chokehold on the App Store ecosystem for iPhone apps stifles competition, according to the EU, whose Digital Markets Act (DMA) is now forcing the tech giant to open up to new rivals. As a result, we’re beginning to see what an app store ecosystem could look like when other developers are allowed to compete with the default iPhone App Store.

One notable case in point is the AltStore, an alternative app store that’s preparing to take advantage of the DMA to launch an updated version of its app marketplace in the EU, with plans to support Patreon-backed apps.

To comply with the new European law, Apple is introducing APIs and frameworks that allow developers to distribute apps independently of the App Store. The AltStore was quick to capitalize on this possibility, and last week, AltStore developer Riley Testut shared screenshots of the upcoming version of his app store that will be offered in the EU.

Instead of relying only on ads, paid downloads or in-app purchases to monetize, the AltStore will allow developers to use its custom Patreon integration to market their apps directly to consumers.

 

[Embedded Threads post by @rileytestut]

 

The store — which has offered sideloaded apps like the video game emulator Delta, also from Testut — will initially launch in the EU with just two apps, the developer says. Delta will be available for free, and the AltStore’s own clipboard manager Clip will require a pledge of $1 or more on the crowdfunding platform Patreon. The AltStore plans to add the beta versions of both Delta and Clip soon after, which will require a $3 per month Patreon pledge to use.

This unique business model for monetizing apps is similar in some ways to Apple’s in-app subscriptions but comes without the traditional 15% to 30% commission on sales that the tech giant currently takes. With Apple’s DMA rules, alternative app stores can opt to pay €0.50 for each first annual install per year over a 1 million threshold — a new scheme to tap into the revenue of larger apps, which Apple calls its Core Technology Fee. (Whether Apple’s fee will remain is uncertain, as the EU is investigating the tech giant for non-compliance with its competition law.)
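For a sense of how that fee scales, here is a back-of-the-envelope calculation using the €0.50-per-install and 1 million install figures above; the install count in the example is hypothetical.

```python
# Back-of-the-envelope Core Technology Fee arithmetic, using the figures above:
# EUR 0.50 per first annual install, per year, beyond the first 1 million.
FEE_PER_INSTALL_EUR = 0.50
FREE_THRESHOLD = 1_000_000

def core_technology_fee(first_annual_installs: int) -> float:
    billable = max(0, first_annual_installs - FREE_THRESHOLD)
    return billable * FEE_PER_INSTALL_EUR

# A hypothetical app with 3 million first annual installs in a year would owe
# (3,000,000 - 1,000,000) * EUR 0.50 = EUR 1,000,000 for that year.
print(core_technology_fee(3_000_000))
```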

As Testut explains, after the AltStore launches and is working properly, the plan is to then allow other developers to also distribute their apps through the storefront by establishing their own sources.

“They’ll also be able to use the same Patreon integration we use to distribute ‘paid’ apps,” Testut told TechCrunch. This integration will create a new business model for apps that wouldn’t be permitted without the DMA coming into effect.

“One thing @altstore does that should really get you thinking about alternative payment systems that Apple never would have considered: it has Patreon integration, and can tie access to apps to your Patreon pledge — which gives you an entirely different, personal relationship with your users, and lets you use the same reward system you use for videos, blog posts, merch, etc,” wrote iOS developer Steve Troughton-Smith in a post on Mastodon. “Alternative app stores don’t just have to recreate Apple’s model,” he added.
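To make the pledge-gating idea concrete, the sketch below checks a signed-in user's current Patreon pledge before unlocking a "paid" app. The endpoint and field names follow Patreon's public API v2 as best understood here and should be treated as assumptions; this is not AltStore's actual implementation.

```python
# Sketch of pledge-gated access: before unlocking a "paid" app, check the
# signed-in user's currently entitled Patreon pledge via Patreon's OAuth API.
# The endpoint and field names follow Patreon's public API v2 as best
# understood here and are assumptions, not AltStore's actual code.
import requests

PATREON_IDENTITY_URL = "https://www.patreon.com/api/oauth2/v2/identity"

def entitled_cents(user_access_token: str) -> int:
    """Return the user's currently entitled pledge amount, in cents."""
    resp = requests.get(
        PATREON_IDENTITY_URL,
        params={
            "include": "memberships",
            "fields[member]": "currently_entitled_amount_cents",
        },
        headers={"Authorization": f"Bearer {user_access_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    members = [i for i in resp.json().get("included", []) if i.get("type") == "member"]
    return max(
        ((m.get("attributes", {}).get("currently_entitled_amount_cents") or 0) for m in members),
        default=0,
    )

def can_install(user_access_token: str, required_cents: int = 100) -> bool:
    """Gate a $1-or-more app behind an active pledge of at least that amount."""
    return entitled_cents(user_access_token) >= required_cents
```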

Plus, he pointed out that the AltStore will provide users with a “granular view” of the entitlements — or extra permissions — that an app has before they install it.

Beyond offering developers a new way to make money, Testut claims that the EU version of the AltStore will be “dramatically simpler” to use compared with the current version.

Today, users who want to sideload apps via the AltStore without jailbreaking their iPhone have to use a Mac or PC, provide the AltStore with their Apple ID and password, and then refresh the apps every seven days. That process not only raises security concerns, but is also complex. However, the EU version of AltStore won’t require these steps.

“It all works virtually the same as the App Store now,” Testut says.

In the screenshots he shared, the AltStore looks much like a modern-day app store, with categories like Games, Lifestyle and Utilities, as well as buttons to download its free apps, as on Apple’s App Store. However, the user interface will be slightly different, as Apple requires developers to insert an additional confirmation screen after the user clicks to install an app. This screen warns consumers that updates and purchases will be managed by the AltStore, as opposed to Apple.

Testut also notes that the AltStore apps have to be notarized by Apple in order to be installed, so it won’t be able to install just any sideloaded app available as an .ipa file.

The new AltStore is ready to launch now, but Testut says he’s waiting on final approval from Apple.

