Israeli startup Panax raises a $10M Series A for its AI-driven cash flow management platform | TechCrunch


High interest rates and financial pressures make it more important than ever for finance teams to have a better handle on their cash flow, and several startups are hoping to help. Two-year-old Israeli startup Panax is one, and it just raised a $10 million Series A round of funding led by Team8, with participation from […]


YC-backed Recall.ai gets $10M Series A to help companies use virtual meeting data | TechCrunch


More money for the generative AI boom: Y Combinator-backed developer infrastructure startup Recall.ai announced Thursday it’s raised a $10 million Series A funding round, bringing its total raised to over $12M. The startup has built infrastructure and a unified API that enables companies to access raw data from virtual meeting platforms like Google Meet, Microsoft Teams, […]


Carv raises $10M Series A to help gamers monetize their data | TechCrunch


Carv, a data layer platform that lets web3 gaming and AI companies, as well as gamers, control and monetize their data, has raised a $10 million Series A round led by Tribe Capital and IOSG Ventures. 

Carv’s new round comes approximately five months after it received a strategic investment led by HashKey Capital. The startup did not disclose its valuation or the total funding it has raised so far. In 2022, Carv was valued at roughly $40 million when it raised a seed round led by Temasek’s VC arm, Vertex Ventures.

Carv’s initial focus is on two key industries, gaming and AI, where it sees the biggest opportunity to help users control their data and monetize it. Users can choose to provide their data to Carv’s corporate customers in a way that preserves their privacy and is compliant with regulations, so that companies can use it for training AI models, market research and more.

“While user data has powered tremendous economic growth, individuals don’t share the value created when their information is leveraged to build billion-dollar businesses,” Victor Yu, co-founder and COO of Carv, told TechCrunch. 

Carv offers three solutions: CARV Protocol, a modular data layer with cross-chain connectivity that connects web2 identities to web3 tokens; CARV Play, a cross-platform credentialing system and game distribution platform; and CARV’s AI Agent, CARA, a personalized gaming assistant that integrates with web3 wallets and can recommend games, activities and projects. 

“Carv differentiates itself by putting data ownership and monetization rights in the hands of users. Any revenue generated from leveraging users’ data gets shared back with the data creators themselves,” Yu said. “Additionally, we’ve created a unified user ID standard (ERC-7231) that bridges web2 and web3, enabling seamless data portability versus today’s siloed solutions.”
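
To make that concept a bit more concrete, here is a purely hypothetical sketch of the general idea behind such a bridge: hashes of a user’s web2 account identifiers are bound to their wallet address, so an application can verify the link without ever seeing the raw identifiers. It is not the actual ERC-7231 interface and not Carv’s implementation; all names and values are invented for illustration.

```python
# Purely hypothetical sketch of the idea behind a unified web2/web3 identity:
# hashes of a user's web2 account identifiers are bound to a wallet address so
# an app can verify the link without seeing the raw identifiers. This is NOT
# the actual ERC-7231 interface and not Carv's implementation.
import hashlib
import json

def identity_digest(web2_accounts: dict) -> str:
    """Hash a bundle of web2 identifiers, e.g. {'steam': ..., 'discord': ...}."""
    canonical = json.dumps(web2_accounts, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

class IdentityRegistry:
    """Toy registry mapping a wallet address to a hashed identity bundle."""

    def __init__(self):
        self._roots = {}

    def bind(self, wallet: str, web2_accounts: dict) -> None:
        self._roots[wallet] = identity_digest(web2_accounts)

    def verify(self, wallet: str, claimed_accounts: dict) -> bool:
        return self._roots.get(wallet) == identity_digest(claimed_accounts)

registry = IdentityRegistry()
registry.bind("0x1234abcd", {"steam": "gamer42", "discord": "gamer#0042"})
print(registry.verify("0x1234abcd", {"steam": "gamer42", "discord": "gamer#0042"}))  # True
```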

Carv has been profitable since December 2023, and generates monthly recurring revenue of more than $1 million, Yu said, adding that the company is also seeing significant month-over-month growth. 

The company now has 2.5 million registered users and over 350 integrated gaming and AI company partners. 

With the new capital, Carv plans to enhance the design of its CARV Protocol to ensure it is scalable and can support a broader range of use cases. It will also launch CARV Link to improve on-chain identity and data authentication, and CARV Database to manage various types of user data.

Arweave, Consensys (developer of MetaMask and Linea), Draper Dragon, Fenbushi Capital, LiquidX, MARBLEX (the web3 arm of Korean gaming company Netmarble), No Limit Holdings, and OKX Ventures also participated in the Series A round.



Clarity Pediatrics raises $10M for treating ADHD and other chronic childhood conditions | TechCrunch


Raising young kids who have been diagnosed with, or are suspected of having, ADHD can be challenging. Some children with this condition may have difficulties completing school work or grow easily frustrated and throw tantrums.

Parents who try to turn to professionals for help are often shocked to learn that, due to a nationwide shortage of psychologists, it can take nearly a year to get diagnosed and start seeing a therapist. And that’s not even mentioning the high cost of treatment, which can add up to thousands of dollars a year for out-of-network care.

Clarity Pediatrics, a chronic care startup founded in 2021, says it can reduce the wait time for receiving a diagnosis and beginning ADHD therapy from many months to a couple of days, for an average $15 co-pay per session.

The company’s secret sauce is that instead of providing individual therapy to children, the startup runs 8-week group therapy sessions for parents of newly or previously diagnosed kids.

Clarity chose to offer behavioral parent training (BPT) for one simple reason: the American Academy of Pediatrics recommends it for families of children ages five to 12 with mild-to-moderate ADHD. Since young kids are not mature enough to change on their own, BPT teaches parents strategies and skills to help their children focus in school and control emotional outbursts.

“There is no evidence that one-to-one therapy is effective for young kids with ADHD,” said Clarity’s CEO and co-founder Christina LaMontagne.

Over the last 18 months, Clarity has provided online care to thousands of families in California, and it plans to use the $10 million in seed funding it raised from Rethink Impact, with participation from Homebrew and Maverick Ventures, to expand its services to other states in 2024.

Clarity is certainly not alone in trying to address the shortage of therapists for children. Startups like Brightline, Little Otter and Bend Health offer online pediatric mental health services, including ADHD care.

For now, Clarity is solely focused on treating ADHD in kids ages five to 12 by providing diagnosis, therapy and prescriptions, but the company has plans to eventually offer healthcare for low-complexity pediatric chronic conditions like asthma, allergies and obesity.

Prior to founding Clarity, LaMontagne was the chief operating officer at Pill Club and a corporate development executive at Johnson & Johnson. The company’s co-founder, Dr. Alesandro Larrazbal, is a pediatrician who was trained at UCSF and Stanford and was in charge of specialty services at Kaiser Permanente.

Clarity’s seed round also included investments from January Ventures, Vamos Ventures, Alumni Ventures and Citylight VC.

Heidi Patel, a managing partner at Rethink Impact, said she invested in Clarity because the incidence rate of chronic disease in children has tripled over the last forty years, but the medical system doesn’t have enough specialists to treat these kids.

“There’s a really long wait time, and then even if you get a diagnosis, treatments are often not available, which is why 80% of kids are left completely untreated,” she said. “With Clarity, you’re getting a full basket of care.”



TransferGo raises $10M to expand its remittance business in Asia, doubling valuation | TechCrunch


TransferGo, the U.K.-based fintech best known as a consumer platform for global remittances, has raised a $10 million growth funding round from Taiwan-based investor Taiwania Capital, with a view to expanding in the Asia-Pacific region. It last raised a $50 million Series C funding round in 2021.

TransferGo claims its growth, combined with the new investment, doubles its valuation. In September 2021 Dealroom valued it at $200 million-$300 million, but TransferGo declined to go into specifics.

Daumantas Dvilinskas, TransferGo co-founder and CEO, told TechCrunch: “We have been profitable for the last year in and out, and the only burn was marketing, but the burn was very limited. We achieved sustainability of the business and became profitable and we still have proceeds from the last funding round. So we are profitable. We don’t need external capital to grow.”

However, he saw the opportunity to raise funding from Asia to expand there. “We raised money because we wanted to expand faster in Asia Pacific. So that’s the next frontier for us,” he said. “We are still taking customers from incumbents: 75% come from cash, banks and Western Union — that’s still the gorilla in the room.”

He puts TransferGo’s growth down to focusing on the consumer experience. “We’ve always been probably the most consumer-centric company in the space,” he said. “This is evident in our Trusted Reviews — still better than others. We really build out the product for our consumers. So that instant settlement of 90%, 24/7 instant, consumers love that. And it’s not easy to do. It takes time. You have to solve existing technology issues.”

Still, it hasn’t all been plain sailing. Last year, TransferGo was hit with a €310,000 fine from the Bank of Lithuania for AML (anti-money laundering) failings.

“We’ve been going through inspection and they found some procedural gaps that we closed by the end of the year,” Dvilinskas told me. “Regulation is getting stronger, but we’re happy that we closed the door on that, because we received successful feedback from them after closing the mediation.”

TransferGo largely competes with market dominator Western Union, but newer upstarts such as Remitly and Wise are also in the competitive mix.



US offers $10M to help catch Change Healthcare hackers | TechCrunch


The U.S. government said it is extending its reward for information on key leadership of the ALPHV/BlackCat cybercrime gang to its affiliate members, one of which last month took credit for a massive ransomware attack on a U.S. health tech giant.

In a statement Wednesday, the U.S. Department of State said it is offering a reward of up to $10 million for information that identifies or locates any person associated with ALPHV/BlackCat, including “their affiliates, activities, or links to a foreign government.”

The Russia-based ALPHV/BlackCat is a ransomware-as-a-service operation, which recruits affiliates — effectively contractors who earn a commission for launching ransomware attacks — and takes a cut of whatever ransom demand the victim pays. Although security researchers have not yet drawn a connection between ALPHV/BlackCat and a foreign government, the State Department implied in its statement that the gang may be “acting at the direction or under the control of a foreign government,” such as Russia.

The State Department blamed the prolific ransomware group for targeting U.S. critical infrastructure, including healthcare services.

Last month, an affiliate group of the ALPHV/BlackCat gang took credit for a cyberattack and weekslong outage at U.S. health tech giant Change Healthcare, which processes around one in three U.S. patient medical records. The cyberattack knocked out much of the U.S. healthcare system’s access to patient records and billing information, causing massive outages and delays in filling prescriptions and processing surgical authorizations for weeks.

The affiliate group went public after accusing the main ALPHV/BlackCat gang of swindling the contract hackers out of $22 million in ransom that Change Healthcare allegedly paid to prevent the mass leak of patient records.

The group said ALPHV/BlackCat carried out an “exit scam,” where the hackers run off with their fortune to avoid paying their affiliates and keep the stolen funds for themselves.

Despite having lost their cut of the ransom demand, the affiliate group claimed to still have access to a huge amount of stolen sensitive patient data.

Change Healthcare has since said that it ejected the hackers from its network and restored much of its systems. U.S. health insurance giant UnitedHealth Group, the parent company of Change Healthcare, has not yet confirmed if any patient data was stolen.



Databricks spent $10M on new DBRX generative AI model | TechCrunch


If you wanted to raise the profile of your major tech company and had $10 million to spend, how would you spend it? On a Super Bowl ad? An F1 sponsorship?

You could spend it training a generative AI model. While not marketing in the traditional sense, generative models are attention grabbers — and increasingly funnels to vendors’ bread-and-butter products and services.

Case in point: Databricks’ DBRX, a new generative AI model announced today that’s akin to OpenAI’s GPT series and Google’s Gemini. Available on GitHub and the AI dev platform Hugging Face for research as well as for commercial use, base (DBRX Base) and fine-tuned (DBRX Instruct) versions of DBRX can be run and tuned on public, custom or otherwise proprietary data.
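
For a sense of what running it looks like in practice, here is a minimal, hypothetical sketch using Hugging Face’s transformers library; the repository name, precision and device settings below are assumptions for illustration, not Databricks’ official quickstart, and as discussed below the hardware requirements are substantial.

```python
# A minimal, illustrative sketch of loading DBRX Instruct via Hugging Face's
# transformers library. The repo name, dtype and device settings are
# assumptions for illustration, not Databricks' documented quickstart.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "databricks/dbrx-instruct"  # assumed Hugging Face repository name

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # half precision; the model still needs roughly 320GB of GPU memory
    device_map="auto",           # shard the weights across all available GPUs
    trust_remote_code=True,
)

messages = [{"role": "user", "content": "Explain mixture-of-experts models in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```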

“DBRX was trained to be useful and provide information on a wide variety of topics,” Naveen Rao, VP of generative AI at Databricks, told TechCrunch in an interview. “DBRX has been optimized and tuned for English language usage, but is capable of conversing and translating into a wide variety of languages, such as French, Spanish and German.”

Databricks describes DBRX as “open source” in a similar vein as “open source” models like Meta’s Llama 2 and AI startup Mistral’s models. (It’s the subject of robust debate as to whether these models truly meet the definition of open source.)

Databricks says that it spent roughly $10 million and two months training DBRX, which it claims (quoting from a press release) “outperform[s] all existing open source models on standard benchmarks.”

But — and here’s the marketing rub — it’s exceptionally hard to use DBRX unless you’re a Databricks customer.

That’s because, in order to run DBRX in the standard configuration, you need a server or PC with at least four Nvidia H100 GPUs (or any other configuration of GPUs that add up to around 320GB of memory). A single H100 costs thousands of dollars — quite possibly more. That might be chump change to the average enterprise, but for many developers and solopreneurs, it’s well beyond reach.

It’s possible to run the model on a third-party cloud, but the hardware requirements are still pretty steep — for example, there’s only one instance type on the Google Cloud that incorporates H100 chips. Other clouds may cost less, but generally speaking running huge models like this is not cheap today.

And there’s fine print to boot. Databricks says that companies with more than 700 million active users will face “certain restrictions” comparable to Meta’s for Llama 2, and that all users will have to agree to terms ensuring that they use DBRX “responsibly.” (Databricks hadn’t volunteered those terms’ specifics as of publication time.)

Databricks presents its Mosaic AI Foundation Model product as the managed solution to these roadblocks, which in addition to running DBRX and other models provides a training stack for fine-tuning DBRX on custom data. Customers can privately host DBRX using Databricks’ Model Serving offering, Rao suggested, or they can work with Databricks to deploy DBRX on the hardware of their choosing.

Rao added:

“We’re focused on making the Databricks platform the best choice for customized model building, so ultimately the benefit to Databricks is more users on our platform. DBRX is a demonstration of our best-in-class pre-training and tuning platform, which customers can use to build their own models from scratch. It’s an easy way for customers to get started with the Databricks Mosaic AI generative AI tools. And DBRX is highly capable out-of-the-box and can be tuned for excellent performance on specific tasks at better economics than large, closed models.”

Databricks claims DBRX runs up to 2x faster than Llama 2, in part thanks to its mixture of experts (MoE) architecture. MoE — which DBRX shares in common with Mistral’s newer models and Google’s recently announced Gemini 1.5 Pro — basically breaks down data processing tasks into multiple subtasks and then delegates these subtasks to smaller, specialized “expert” models.

Most MoE models have eight experts. DBRX has 16, which Databricks says improves quality.
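
As a rough illustration of that routing idea (a toy sketch, not DBRX’s actual architecture or code), the layer below uses a small gating network to send each token to only a handful of its 16 expert feed-forward blocks; the dimensions and the number of active experts per token here are arbitrary choices.

```python
# Toy illustration of mixture-of-experts routing, not DBRX's actual code:
# a gating network scores each token, and only the top-k scoring "experts"
# (small feed-forward networks) are evaluated for that token.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    def __init__(self, d_model=64, d_hidden=256, n_experts=16, top_k=4):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        ])
        self.gate = nn.Linear(d_model, n_experts)  # router: token -> expert scores
        self.top_k = top_k

    def forward(self, x):  # x: (num_tokens, d_model)
        scores = F.softmax(self.gate(x), dim=-1)        # routing probabilities
        weights, idx = scores.topk(self.top_k, dim=-1)  # keep only the top-k experts per token
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

layer = ToyMoELayer()
print(layer(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```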

Quality is relative, however.

While Databricks claims that DBRX outperforms Llama 2 and Mistral’s models on certain language understanding, programming, math and logic benchmarks, DBRX falls short of arguably the leading generative AI model, OpenAI’s GPT-4, in most areas outside of niche use cases like database programming language generation.

Now, as some on social media have pointed out, DBRX and GPT-4, which cost significantly more to train, are very different — perhaps too different to warrant a direct comparison. It’s important that these large, enterprise-funded models get compared to the best of the field, but what distinguishes them should also be pointed out, like the fact that DBRX is “open source” and targeted at a distinctly enterprise audience.

At the same time, it can’t be ignored that DBRX is somewhat close to flagship models like GPT-4 in that it’s cost-prohibitive for the average person to run, its training data isn’t open and it isn’t open source in the strictest definition.

Rao admits that DBRX has other limitations as well, namely that it — like all other generative AI models — can fall victim to “hallucinating” answers to queries despite Databricks’ work in safety testing and red teaming. Because the model was simply trained to associate words or phrases with certain concepts, if those associations aren’t totally accurate, its responses won’t always be accurate.

Also, DBRX is not multimodal, unlike some more recent flagship generative AI models, including Gemini. (It can only process and generate text, not images.) And we don’t know exactly what sources of data were used to train it; Rao would only reveal that no Databricks customer data was used in training DBRX.

“We trained DBRX on a large set of data from a diverse range of sources,” he added. “We used open data sets that the community knows, loves and uses every day.”

I asked Rao if any of the DBRX training data sets were copyrighted or licensed, or show obvious signs of biases (e.g. racial biases), but he didn’t answer directly, saying only, “We’ve been careful about the data used, and conducted red teaming exercises to improve the model’s weaknesses.” Generative AI models have a tendency to regurgitate training data, a major concern for commercial users of models trained on unlicensed, copyrighted or very clearly biased data. In the worst-case scenario, a user could end up on the ethical and legal hooks for unwittingly incorporating IP-infringing or biased work from a model into their projects.

Some companies training and releasing generative AI models offer policies covering the legal fees arising from possible infringement. Databricks doesn’t at present — Rao says that the company’s “exploring scenarios” under which it might.

Given this and the other aspects in which DBRX misses the mark, the model seems like a tough sell to anyone but current or would-be Databricks customers. Databricks’ rivals in generative AI, including OpenAI, offer equally if not more compelling technologies at very competitive pricing. And plenty of generative AI models come closer to the commonly understood definition of open source than DBRX.

Rao promises that Databricks will continue to refine DBRX and release new versions as the company’s Mosaic Labs R&D team — the team behind DBRX — investigates new generative AI avenues.

“DBRX is pushing the open source model space forward and challenging future models to be built even more efficiently,” he said. “We’ll be releasing variants as we apply techniques to improve output quality in terms of reliability, safety and bias … We see the open model as a platform on which our customers can build custom capabilities with our tools.”

Judging by where DBRX now stands relative to its peers, it’s an exceptionally long road ahead.

This story was corrected to note that the model took two months to train, and removed an incorrect reference to Llama 2 in the fourteenth paragraph. We regret the errors.

