
Lumos helps companies manage their employees' identities — and access | TechCrunch


Andrej Safundzic, Alan Flores Lopez and Leo Mehr met in a class at Stanford focusing on ethics, public policy and technological change. Safundzic — speaking to TechCrunch — says that the class drove home the point that few people, particularly in the corporate sector, have control over their online identities. “The future of software is […]




Google’s budget Pixel 8a delivers updated silicon, Gemini access for $499 | TechCrunch


Google, it seems, couldn’t wait until I/O next week to show off the latest addition to the Pixel line. For the past several years, the company has used its annual developer conference to showcase an update to its budget line, but exactly a week out from the event, Google just announced the Pixel 8a. When […]




NYT Games launches a Wordle archive with access to more than 1,000 past puzzles | TechCrunch


The New York Times announced on Tuesday that it’s launching a Wordle archive, offering subscribers access to more than 1,000 past Wordle puzzles. The company has started rolling out the Wordle archive on mobile and desktop to “Games” and “All Access” subscribers, but notes that the rollout is expected to take place over “the next couple of months.” The Times plans to bring the archive to its Games app in the coming weeks.

The media company, which acquired the popular puzzle game back in 2022, says the archive will allow players to catch up on any puzzles they may have missed, while also enabling them to play the game at their own pace. Players can browse through the calendar of past Wordle puzzles, dating back to June 2021. Subscribers will be able to see and save their progress on past Wordle puzzles and share their results with others.

In addition, the Times is bringing WordleBot, its personalized companion that analyzes your completed Wordle, to the NYT Games app. WordleBot assesses players’ skills and strategies, challenging them to consider how they could have approached the puzzle differently.

“This expansion is not just about playing past puzzles; it’s about deepening the connection our community has with Wordle and with each other,” said Jonathan Knight, head of Games at The New York Times, in a press release. “We believe this will make the daily puzzle even more engaging and provide even more moments of surprise and delight for our subscribers to share with friends and family.”

In March, the NYT Games app debuted a redesign to help users discover games and track their progress more easily. The redesign, which featured new game card designs and streamlined navigation, was the company’s next step in building out its gaming hub. The change came nearly a year after the company renamed its games-focused app from “NYT Crosswords” to “NYT Games” to better represent its growing family of games.

The Times says its Games app was downloaded 10 million times in 2023 alone, and that its games were played more than eight billion times last year. Wordle alone accounted for 4.8 billion of those plays.



Alternative clouds are booming as companies seek cheaper access to GPUs | TechCrunch


The appetite for alternative clouds has never been bigger.

Case in point: CoreWeave, the GPU infrastructure provider that began life as a cryptocurrency mining operation, this week raised $1.1 billion in new funding from investors including Coatue, Fidelity and Altimeter Capital. The round brings its valuation to $19 billion post-money, and its total raised to $5 billion in debt and equity — a remarkable figure for a company that’s less than ten years old.

It’s not just CoreWeave.

Lambda Labs, which also offers an array of cloud-hosted GPU instances, in early April secured a “special purpose financing vehicle” of up to $500 million, months after closing a $320 million Series C round. The nonprofit Voltage Park, backed by crypto billionaire Jed McCaleb, last October announced that it’s investing $500 million in GPU-backed data centers. And Together AI, a cloud GPU host that also conducts generative AI research, in March landed $106 million in a Salesforce-led round.

So why all the enthusiasm for — and cash pouring into — the alternative cloud space?

The answer, as you might expect, is generative AI.

As the generative AI boom times continue, so does the demand for the hardware to run and train generative AI models at scale. GPUs, architecturally, are the logical choice for training, fine-tuning and running models because they contain thousands of cores that can work in parallel to perform the linear algebra equations that make up generative models.
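To make that concrete, here is a minimal sketch (assuming PyTorch, purely for illustration) of the kind of parallel matrix math a GPU accelerates relative to a CPU:

```python
# Minimal sketch: the matrix multiplications at the heart of generative models
# parallelize across a GPU's thousands of cores. PyTorch is assumed here purely
# for illustration; any array framework with GPU support behaves similarly.
import time
import torch

def time_matmul(device: str, n: int = 4096) -> float:
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # don't let async kernel launches skew the timing
    start = time.perf_counter()
    c = a @ b                     # one large linear-algebra op, spread over many cores
    if device == "cuda":
        torch.cuda.synchronize()
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"CPU: {time_matmul('cpu'):.3f}s")
    if torch.cuda.is_available():
        print(f"GPU: {time_matmul('cuda'):.3f}s")
```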

But installing GPUs is expensive. So most devs and organizations turn to the cloud instead.

Incumbents in the cloud computing space — Amazon Web Services (AWS), Google Cloud and Microsoft Azure — offer no shortage of GPU and specialty hardware instances optimized for generative AI workloads. But for at least some models and projects, alternative clouds can end up being cheaper — and delivering better availability.

On CoreWeave, renting an Nvidia A100 40GB — one popular choice for model training and inferencing — costs $2.39 per hour, which works out to $1,200 per month. On Azure, the same GPU costs $3.40 per hour, or $2,482 per month; on Google Cloud, it’s $3.67 per hour, or $2,682 per month.

Given that generative AI workloads are usually performed on clusters of GPUs rather than single cards, the cost deltas quickly grow.
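Using the monthly per-GPU figures quoted above, a quick back-of-the-envelope comparison for a hypothetical eight-GPU cluster shows how fast that gap widens:

```python
# Back-of-the-envelope monthly cost for a hypothetical 8x A100 40GB cluster,
# using the per-GPU monthly figures quoted above.
MONTHLY_RATE_PER_A100 = {"CoreWeave": 1_200, "Azure": 2_482, "Google Cloud": 2_682}
GPUS = 8

for provider, rate in MONTHLY_RATE_PER_A100.items():
    print(f"{provider:13s} ~${rate * GPUS:,}/month for {GPUS} GPUs")

# CoreWeave: ~$9,600; Azure: ~$19,856; Google Cloud: ~$21,456 -- the gap between
# the cheapest and priciest option is already five figures a month at this scale.
```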

“Companies like CoreWeave participate in a market we call specialty ‘GPU as a service’ cloud providers,” Sid Nag, VP of cloud services and technologies at Gartner, told TechCrunch. “Given the high demand for GPUs, they offer an alternative to the hyperscalers, where they’ve taken Nvidia GPUs and provided another route to market and access to those GPUs.”

Nag points out that even some big tech firms have begun to lean on alternative cloud providers as they run up against compute capacity challenges.

Last June, CNBC reported that Microsoft had signed a multi-billion-dollar deal with CoreWeave to ensure that OpenAI, the maker of ChatGPT and a close Microsoft partner, would have adequate compute power to train its generative AI models. Nvidia, the furnisher of the bulk of CoreWeave’s chips, sees this as a desirable trend, perhaps for leverage reasons; it’s said to have given some alternative cloud providers preferential access to its GPUs.

Lee Sustar, principal analyst at Forrester, sees cloud vendors like CoreWeave succeeding in part because they don’t have the infrastructure “baggage” that incumbent providers have to deal with.

“Given hyperscaler dominance of the overall public cloud market, which demands vast investments in infrastructure and a range of services that make little or no revenue, challengers like CoreWeave have an opportunity to succeed with a focus on premium AI services without the burden of hyperscaler-level investments overall,” he said.

But is this growth sustainable?

Sustar has his doubts. He believes that alternative cloud providers’ expansion will be conditioned by whether they can continue to bring GPUs online in high volume, and offer them at competitively low prices.

Competing on pricing might become challenging down the line as incumbents like Google, Microsoft and AWS ramp up investments in custom hardware to run and train models. Google offers its TPUs; Microsoft recently unveiled two custom chips, Azure Maia and Azure Cobalt; and AWS has Trainium, Inferentia and Graviton.

“Hyperscalers will leverage their custom silicon to mitigate their dependencies on Nvidia, while Nvidia will look to CoreWeave and other GPU-centric AI clouds,” Sustar said.

Then there’s the fact that, while many generative AI workloads run best on GPUs, not all workloads need them — particularly if they aren’t time-sensitive. CPUs can run the necessary calculations, though typically more slowly than GPUs and custom hardware.

More existentially, there’s a threat that the generative AI bubble will burst, which would leave providers with mounds of GPUs and not nearly enough customers demanding them. But the future looks rosy in the short term, say Sustar and Nag, both of whom are expecting a steady stream of upstart clouds.

“GPU-oriented cloud startups will give [incumbents] plenty of competition, especially among customers who are already multi-cloud and can handle the complexity of management, security, risk and compliance across multiple clouds,” Sustar said. “Those sorts of cloud customers are comfortable trying out a new AI cloud if it has credible leadership, solid financial backing and GPUs with no wait times.”



French startup FlexAI exits stealth with $30M to ease access to AI compute | TechCrunch


A French startup has raised a hefty seed investment to “rearchitect compute infrastructure” for developers wanting to build and train AI applications more efficiently.

FlexAI, as the company is called, has been operating in stealth since October 2023, but the Paris-based company is formally launching Wednesday with €28.5 million ($30 million) in funding, while teasing its first product: an on-demand cloud service for AI training.

This is a chunky bit of change for a seed round, which normally signals substantial founder pedigree — and that is the case here. FlexAI co-founder and CEO Brijesh Tripathi was previously a senior design engineer at GPU giant and now AI darling Nvidia, before landing in senior engineering and architecture roles at Apple; Tesla (working directly under Elon Musk); Zoox (before Amazon acquired the autonomous driving startup); and, most recently, Intel, where he was VP of its AI and supercompute platform offshoot, AXG.

FlexAI co-founder and CTO Dali Kilani has an impressive CV, too, serving in various technical roles at companies including Nvidia and Zynga, while most recently filling the CTO role at French startup Lifen, which develops digital infrastructure for the healthcare industry.

The seed round was led by Alpha Intelligence Capital (AIC), Elaia Partners and Heartcore Capital, with participation from Frst Capital, Motier Ventures, Partech and InstaDeep CEO Karim Beguir.

FlexAI team in Paris

The compute conundrum

To grasp what Tripathi and Kilani are attempting with FlexAI, it’s first worth understanding what developers and AI practitioners are up against in terms of accessing “compute”; this refers to the processing power, infrastructure and resources needed to carry out computational tasks such as processing data, running algorithms, and executing machine learning models.

“Using any infrastructure in the AI space is complex; it’s not for the faint-of-heart, and it’s not for the inexperienced,” Tripathi told TechCrunch. “It requires you to know too much about how to build infrastructure before you can use it.”

By contrast, the public cloud ecosystem that has evolved these past couple of decades serves as a fine example of how an industry has emerged from developers’ need to build applications without worrying too much about the back end.

“If you are a small developer and want to write an application, you don’t need to know where it’s being run, or what the back end is — you just need to spin up an EC2 (Amazon Elastic Compute cloud) instance and you’re done,” Tripathi said. “You can’t do that with AI compute today.”
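For comparison, that “spin up an instance and you’re done” experience looks roughly like the sketch below using AWS’s boto3 SDK; the AMI ID and instance type are placeholders, not anything FlexAI uses:

```python
# A rough sketch of the "just spin up an instance" workflow Tripathi describes,
# using AWS's boto3 SDK. The AMI ID and instance type are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print("Launched:", resp["Instances"][0]["InstanceId"])
```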

In the AI sphere, developers must figure out how many GPUs (graphics processing units) they need to interconnect over what type of network, managed through a software ecosystem that they are entirely responsible for setting up. If a GPU or network fails, or if anything in that chain goes awry, the onus is on the developer to sort it.

“We want to bring AI compute infrastructure to the same level of simplicity that the general purpose cloud has gotten to — after 20 years, yes, but there is no reason why AI compute can’t see the same benefits,” Tripathi said. “We want to get to a point where running AI workloads doesn’t require you to become data centre experts.”

With the current iteration of its product going through its paces with a handful of beta customers, FlexAI will launch its first commercial product later this year. It’s basically a cloud service that connects developers to “virtual heterogeneous compute,” meaning that they can run their workloads and deploy AI models across multiple architectures, paying on a usage basis rather than renting GPUs on a dollars-per-hour basis.

GPUs are vital cogs in AI development, serving to train and run large language models (LLMs), for example. Nvidia is one of the preeminent players in the GPU space, and one of the main beneficiaries of the AI revolution sparked by OpenAI and ChatGPT. In the 12 months since OpenAI launched an API for ChatGPT in March 2023, allowing developers to bake ChatGPT functionality into their own apps, Nvidia’s market capitalization ballooned from around $500 billion to more than $2 trillion.

LLMs are pouring out of the technology industry, with demand for GPUs skyrocketing in tandem. But GPUs are expensive to run, and renting them from a cloud provider for smaller jobs or ad-hoc use-cases doesn’t always make sense and can be prohibitively expensive; this is why AWS has been dabbling with time-limited rentals for smaller AI projects. But renting is still renting, which is why FlexAI wants to abstract away the underlying complexities and let customers access AI compute on an as-needed basis.

“Multicloud for AI”

FlexAI’s starting point is that most developers don’t really care whose GPUs or chips they use, whether it’s Nvidia, AMD, Intel, Graphcore or Cerebras. Their main concern is being able to develop their AI and build applications within their budgetary constraints.

This is where FlexAI’s concept of “universal AI compute” comes in: FlexAI takes the user’s requirements and allocates them to whatever architecture makes sense for that particular job, taking care of all the necessary conversions across the different platforms, whether that’s Intel’s Gaudi infrastructure, AMD’s ROCm or Nvidia’s CUDA.

“What this means is that the developer is only focused on building, training and using models,” Tripathi said. “We take care of everything underneath. The failures, recovery, reliability, are all managed by us, and you pay for what you use.”

In many ways, FlexAI is setting out to fast-track for AI what has already been happening in the cloud, meaning more than replicating the pay-per-usage model: It means the ability to go “multicloud” by leaning on the different benefits of different GPU and chip infrastructures.

For example, FlexAI will channel a customer’s specific workload depending on what their priorities are. If a company has a limited budget for training and fine-tuning its AI models, it can set that within the FlexAI platform to get the maximum amount of compute bang for its buck. This might mean going through Intel for cheaper (but slower) compute, but if a developer has a small run that requires the fastest possible output, then it can be channeled through Nvidia instead.
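A purely illustrative sketch of that kind of routing logic might look like the following; it is not FlexAI’s actual API, and the backends and prices are invented for the example:

```python
# Purely illustrative pseudologic for the budget-versus-speed routing described
# above. This is not FlexAI's actual API; backends and prices are made up.
from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    arch: str              # e.g. "cuda", "rocm", "gaudi"
    price_per_hour: float
    relative_speed: float  # higher = faster

BACKENDS = [
    Backend("nvidia-h100", "cuda", 4.50, 1.0),
    Backend("amd-mi300", "rocm", 3.20, 0.8),
    Backend("intel-gaudi2", "gaudi", 2.10, 0.6),
]

def pick_backend(priority: str) -> Backend:
    """Route a job to the cheapest backend ('budget') or the fastest ('latency')."""
    if priority == "budget":
        return min(BACKENDS, key=lambda b: b.price_per_hour)
    return max(BACKENDS, key=lambda b: b.relative_speed)

print(pick_backend("budget").name)   # intel-gaudi2
print(pick_backend("latency").name)  # nvidia-h100
```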

Under the hood, FlexAI is basically an “aggregator of demand,” renting the hardware itself through traditional means and, using its “strong connections” with the folks at Intel and AMD, securing preferential prices that it spreads across its own customer base. This doesn’t necessarily mean side-stepping the kingpin Nvidia, but it possibly does mean that to a large extent — with Intel and AMD fighting for GPU scraps left in Nvidia’s wake — there is a huge incentive for them to play ball with aggregators such as FlexAI.

“If I can make it work for customers and bring tens to hundreds of customers onto their infrastructure, they [Intel and AMD] will be very happy,” Tripathi said.

This sits in contrast to similar GPU cloud players in the space such as the well-funded CoreWeave and Lambda Labs, which are focused squarely on Nvidia hardware.

“I want to get AI compute to the point where the current general purpose cloud computing is,” Tripathi noted. “You can’t do multicloud on AI. You have to select specific hardware, number of GPUs, infrastructure, connectivity, and then maintain it yourself. Today, that’s the only way to actually get AI compute.”

When asked who the exact launch partners are, Tripathi said that he was unable to name all of them due to a lack of “formal commitments” from some of them.

“Intel is a strong partner, they are definitely providing infrastructure, and AMD is a partner that’s providing infrastructure,” he said. “But there is a second layer of partnerships that are happening with Nvidia and a couple of other silicon companies that we are not yet ready to share, but they are all in the mix and MOUs [memorandums of understanding] are being signed right now.”

The Elon effect

Tripathi is more than equipped to deal with the challenges ahead, having worked in some of the world’s largest tech companies.

“I know enough about GPUs; I used to build GPUs,” Tripathi said of his seven-year stint at Nvidia, ending in 2007 when he jumped ship for Apple as it was launching the first iPhone. “At Apple, I became focused on solving real customer problems. I was there when Apple started building their first SoCs [system on chips] for phones.”

Tripathi also spent two years at Tesla from 2016 to 2018 as hardware engineering lead, where he ended up working directly under Elon Musk for his last six months after two people above him abruptly left the company.

“At Tesla, the thing that I learned and I’m taking into my startup is that there are no constraints other than science and physics,” he said. “How things are done today is not how it should be or needs to be done. You should go after what the right thing to do is from first principles, and to do that, remove every black box.”

Tripathi was involved in Tesla’s transition to making its own chips, a move that has since been emulated by GM and Hyundai, among other automakers.

“One of the first things I did at Tesla was to figure out how many microcontrollers there are in a car, and to do that, we literally had to sort through a bunch of those big black boxes with metal shielding and casing around it, to find these really tiny small microcontrollers in there,” Tripathi said. “And we ended up putting that on a table, laid it out and said, ‘Elon, there are 50 microcontrollers in a car. And we pay sometimes 1,000 times margins on them because they are shielded and protected in a big metal casing.’ And he’s like, ‘let’s go make our own.’ And we did that.”

GPUs as collateral

Looking further into the future, FlexAI has aspirations to build out its own infrastructure, too, including data centers. This, Tripathi said, will be funded by debt financing, building on a recent trend that has seen rivals in the space including CoreWeave and Lambda Labs use Nvidia chips as collateral to secure loans — rather than giving more equity away.

“Bankers now know how to use GPUs as collaterals,” Tripathi said. “Why give away equity? Until we become a real compute provider, our company’s value is not enough to get us the hundreds of millions of dollars needed to invest in building data centres. If we did only equity, we disappear when the money is gone. But if we actually bank it on GPUs as collateral, they can take the GPUs away and put it in some other data center.”



Apple opens access to used iPhone components for repair | TechCrunch


On Thursday, Apple announced that it has opened its iPhone repair process to include used components. Starting this fall, customers and independent repair shops will be able to fix the handset using compatible components.

Components that don’t require configuration (such as volume buttons) were already capable of being harvested from used devices. Today’s news adds all components — including the battery, display and camera — which Apple requires to be configured for full functionality. Face ID will not be available when the feature first rolls out, but it is coming down the road.

At launch, the feature will be available solely for the iPhone 15 line on both the supply and receiving ends of the repair. That caveat is due, in part, to limited interoperability between the models. In many cases, parts from older phones simply won’t fit. The broader limitation that prohibited the use of components from used models comes down to a process commonly known as “parts pairing.”

Apple has defended the process, stating that using genuine components is an important aspect of maintaining user security and privacy. Historically, the company hasn’t used the term “parts pairing” to refer to its configuration process, but it acknowledges that phrase has been widely adopted externally. It’s also aware that the term is loaded in many circles.

“‘Parts pairing’ is used a lot outside and has this negative connotation,” Apple senior vice president of hardware engineering, John Ternus, tells TechCrunch. “I think it’s led people to believe that we somehow block third-party parts from working, which we don’t. The way we look at it is, we need to know what part is in the device, for a few reasons. One, we need to authenticate that it’s a real Apple biometric device and that it hasn’t been spoofed or something like that. … Calibration is the other one.”

Right-to-repair advocates have accused Apple of hiding behind parts pairing as an excuse to stifle user-repairability. In January, iFixit called the process the “biggest threat to repair.” The post paints a scenario wherein an iPhone user attempts to harvest a battery from a friend’s old device, only to be greeted with a pop-up notification stating, “Important Battery Message. Unable to verify this iPhone has a genuine Apple battery.”

It’s a real scenario and surely one that’s proven confusing for more than a few people. After all, a battery that was taken directly from another iPhone is clearly the real deal.

Today’s news is a step toward resolving the issue on newer iPhones, allowing the system to effectively verify that the battery being used is, in fact, genuine.

“Parts pairing, regardless of what you call it, is not evil,” says Ternus. “We’re basically saying, if we know what module’s in there, we can make sure that when you put our module in a new phone, you’re gonna get the best quality you can. Why’s that a bad thing?”

The practice took on added national notoriety when it was specifically targeted by Oregon’s recently passed right-to-repair bill. Apple, which has penned an open letter in support of a similar California bill, heavily criticized the bill’s parts pairing clause.

“Apple supports a consumer’s right to repair, and we’ve been vocal in our support for both state and federal legislation,” a spokesperson for the company noted in March. “We support the latest repair laws in California and New York because they increase consumer access to repair while keeping in place critical consumer protections. However, we’re concerned a small portion of the language in Oregon Senate Bill 1596 could seriously impact the critical and industry-leading privacy, safety and security protections that iPhone users around the world rely on every day.”

While aspects of today’s news will be viewed as a step in the right direction among some repair advocates, it seems unlikely that it will make the iPhone wholly compliant with the Oregon bill. Apple declined to offer further speculation on the matter.

Biometrics — including fingerprint and facial scans — continue to be a sticking point for the company.

“You think about Touch ID and Face ID and the criticality of their security because of how much of our information is on our phones,” says Ternus. “Our entire life is on our phones. We have no way of validating the performance of any third-party biometrics. That’s an area where we don’t enable the use of third-party modules for the key security functions. But in all other aspects, we do.”

It doesn’t seem coincidental that today’s news is being announced within weeks of the Oregon bill’s passage — particularly given that these changes are set to roll out in the fall. The move also appears to echo Apple’s decision to focus more on user-repairability with the iPhone 14, news that arrived amid a rising international call for right-to-repair laws.

Apple notes, however, that the processes behind this work were set in motion some time ago. Today’s announcement around device harvesting, for instance, has been in the works for two years.

For his part, Ternus suggests that his team has been focused on increasing user access to repairs independent of looming state and international legislation. “We want to make things more repairable, so we’re doing that work anyway,” he says. “To some extent, with my team, we block out the news of the world, because we know what we’re doing is right, and we focus on that.”

Overall, the executive preaches a kind of right-tool-for-the-right-job philosophy to product design and self-repair.

“Repairability in isolation is not always the best answer,” Ternus says. “One of the things that I worry about is that people get very focused as if repairability is the goal. The reality is repairability is a means to an end. The goal is to build products that last, and if you focus too much on [making every part repairable], you end up creating some unintended consequences that are worse for the consumer and worse for the planet.”

Also announced this morning is an enhancement to Activation Lock, which is designed to deter thieves from harvesting stolen phones for parts. “If a device under repair detects that a supported part was obtained from another device with Activation Lock or Lost Mode enabled,” the company notes, “calibration capabilities for that part will be restricted.”

Ternus adds that, in addition to harvesting used iPhones for parts, Apple “fundamentally support[s] the right for people to use third-party parts as well.” Part of that, however, is enabling transparency.

“We have hundreds of millions of iPhones in use that are second- or third-hand devices,” he explains. “They’re a great way for people to get into the iPhone experience at a lower price point. We think it’s important for them to have the transparency of: was a repair done on this device? What part was used? That sort of thing.”

When iOS 15.2 arrived in November 2021, it introduced a new feature called “iPhone parts and service history.” If your phone is new and has never been repaired, you simply won’t see it. If one of those two qualifications does apply to your device, however, the company surfaces a list of switched parts and repairs in Settings.

Ternus cites a recent UL Solutions study as evidence that third-party battery modules, in particular, can present a hazard to users.

“We don’t block the use of third-party batteries,” he says. “But we think it’s important to be able to notify the customer that this is or isn’t an authentic Apple battery, and hopefully that will motivate some of these third parties to improve the quality.”

While the fall update will open harvesting up to a good number of components, Apple has no plans to sell refurbished parts for user repairs.



Meta's X competitor Threads invites developers to sign up for API access, publishes docs | TechCrunch


After opening its developer API to select companies for testing in March, Meta’s Twitter/X competitor Threads is now introducing developer documentation and a sign-up sheet for interested parties ahead of the API’s public launch, planned for June.

The new documentation details the API’s current limitations and its endpoints, among other things, which could help developers get started on their Threads-connected apps and any other projects that integrate with the new social network.

For instance, those who want to track analytics around Threads posts can use an Insights API to retrieve things like views, likes, replies, reposts and quotes. There are also details on how to publish posts and media via the API and retrieve replies, along with a series of troubleshooting tips.

The documentation indicates that Threads accounts are limited to 250 API-published posts within a 24-hour period and 1,000 replies — a measure to counteract spam or other excessive use. It also offers the image and video specifications for media uploaded with users’ posts and notes that Threads’ text post character counts have a hard limit of 500 characters — longer than old Twitter’s 280 characters, but far less than the 25,000 characters X offers to paid subscribers or the now 100,000 characters it permits in articles posted directly to its platform.
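Based on the documented two-step container-then-publish flow, posting via the API looks roughly like the sketch below; endpoint paths and parameter names should be verified against Meta’s documentation, and the user ID and token are placeholders:

```python
# A minimal sketch of publishing a text post via the Threads API's two-step flow
# (create a media container, then publish it). Endpoint paths and parameter names
# follow Meta's public docs but should be verified there; the user ID and access
# token below are placeholders.
import requests

BASE = "https://graph.threads.net/v1.0"
USER_ID = "<threads-user-id>"
TOKEN = "<access-token>"

# Step 1: create a container for the post (the 500-character limit applies here).
container = requests.post(
    f"{BASE}/{USER_ID}/threads",
    data={"media_type": "TEXT", "text": "Hello from the API", "access_token": TOKEN},
).json()

# Step 2: publish the container. Accounts are capped at 250 API-published posts
# per 24-hour period.
published = requests.post(
    f"{BASE}/{USER_ID}/threads_publish",
    data={"creation_id": container["id"], "access_token": TOKEN},
).json()
print(published)
```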

Whether or not Meta will ultimately favor certain kinds of apps over others remains to be seen.

So far, Threads API beta testers have included social tool makers like Sprinklr, Sprout Social, Social News Desk, Hootsuite, and tech news board Techmeme.

Although Threads has begun its integration with the wider fediverse — the network of interconnected social networking services that includes Mastodon and others — it doesn’t appear that fediverse sharing can be enabled or disabled through the API itself. Instead, users still have to visit their settings in the Threads app to publish to the fediverse.

Meta says the new documentation will be updated over time as it gathers feedback from developers. In addition, anyone interested in building with the new API and providing feedback can now request access via a sign-up page — something that could also help Meta track the apps that are preparing to go live alongside the API’s public launch.



Finmid raises $24.7M to help SMBs access loans through platforms like Wolt | TechCrunch


Berlin-based finmid — one of the many startups building embedded fintech solutions, in its case targeting marketplaces that want to provide their own payment and financing options — has raised €23 million ($24.7 million) in a Series A round to further build out its product and enter new markets. The round values the company at €100 million ($107 million), post money.

Marketplaces — typically two-sided businesses that bring together retailers or other third-party providers with customers to buy their products or services — are classic targets for embedded finance companies, not least because they already host a lot of transaction activity, so it makes sense for them to build in more functionality around that to improve their own margins.

Players like Airwallex, Rapyd, Kriya, and many more are among those building for that opportunity. But finmid believes it has the potential to lock in more business specifically in its home region. Small and medium-sized businesses in Europe typically look to banks to borrow money. The rise of fintech has opened the door to SMBs accessing more, varied sources of financing than ever before, and an increasing number are doing so.

The startup believes that it makes more sense for SMBs to access capital via business partners than via a bank or neobank, and that, given the option, they will do so. “In an ideal scenario, you don’t have to get out of that context,” finmid’s co-founder, Max Schertel, told TechCrunch in an interview.

It also makes sense for marketplaces to offer these services themselves: with a captive audience of customers, and of their customers’ customers, they are sitting on a trove of data that can help produce, for example, more personalized financing offers.

As one example of how that works, Schertel said that food delivery brand Wolt uses finmid’s tech to offer cash advances to some of its restaurant partners directly inside its app. Unlike a bank, Wolt has access to the restaurants’ sales history, and finmid helps it leverage that data to decide who will see a pre-approved financing offer.


The working capital doesn’t come from Wolt, but from finmid’s financing partners. Both finmid and the platform earn a percentage of every transaction. “We have banking relationships with a lot of the large banks,” Schertel said.

For a platform like Wolt, embedding finmid is a way to make life easier for restaurants while generating additional revenue without much additional effort. That’s a fairly straightforward value proposition, as long as partners are willing to give the startup’s API a go.

In its early days, finmid’s pitch wasn’t an easy sell to VCs, Schertel said. Embedded finance may get a lot of hype, but it is still an approach that requires signing on partners to get any results. That takes patience that not all VCs will have.

However, finmid managed to find investors who have stuck around since it started during the pandemic, and have helped the company raise €35 million in equity funding to date. Before this new Series A, the company raised €2 million in pre-seed and €10 million in seed funding, finmid’s other co-founder, Alexander Talkanitsa, told TechCrunch.

That support seems to be paying off. According to Schertel, once you are running on a platform like Wolt, “success really compounds.”

“I like [my] job today a lot better than I did a year ago,” he joked.

Schertel and Talkanitsa met at challenger bank N26, whose founder, Max Tayenthal, is now one of their investors alongside VC firms Blossom Capital and Earlybird VC.

The co-founders learned a crucial lesson at N26: financial infrastructure leaves no space for mistakes. “You have to invest a lot in reliability,” Schertel said.

Finmid has an API that connects several data points from the platform, and can also plug in other sources of information on the prospective borrower, as a bank would.

To make the user experience more fluid, finmid can let its clients display pre-approved capital offers that end users can accept or decline.
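As a purely illustrative sketch (not finmid’s actual underwriting logic), the kind of sales-history screening a platform could run before surfacing such an offer might look like this:

```python
# Purely illustrative: the kind of sales-history screening a platform could run
# before surfacing a pre-approved cash-advance offer. This is not finmid's actual
# underwriting logic; the thresholds and fields are invented for the example.
from statistics import mean

def pre_approved_offer(monthly_sales: list[float],
                       months_on_platform: int) -> float | None:
    """Return an offer amount, or None if the merchant shouldn't see one."""
    if months_on_platform < 6 or len(monthly_sales) < 6:
        return None                      # not enough history to underwrite
    avg = mean(monthly_sales[-6:])       # average of the last six months of sales
    if avg < 5_000:
        return None                      # too little volume to underwrite
    return round(avg * 1.5, 2)           # offer ~1.5x average monthly sales

print(pre_approved_offer([8_000, 9_500, 7_200, 10_100, 9_900, 11_000], 14))
```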

The company also offers a product called B2B Payments that allows partners to finance trading between their users. Marketplaces such as Frupro (for fruits and vegetables), VonWood (for timber), and Vanilla Steel (for metal) use this product.

The new money will go towards hiring, and Schertel said the startup is looking for people with deep experience in specific areas, especially finance.

The company is also looking to expand into other countries. First on the list is Italy, but there are no plans to open an office there, Schertel said. Talkanitsa spends half his time in Vienna, and finmid has an office in Berlin.

