
French startup FlexAI exits stealth with $30M to ease access to AI compute | TechCrunch


A French startup has raised a hefty seed investment to “rearchitect compute infrastructure” for developers wanting to build and train AI applications more efficiently.

FlexAI, as the company is called, has been operating in stealth since October 2023, but the Paris-based company is formally launching Wednesday with €28.5 million ($30 million) in funding, while teasing its first product: an on-demand cloud service for AI training.

This is a chunky sum for a seed round, which usually signals substantial founder pedigree, and that is the case here. FlexAI co-founder and CEO Brijesh Tripathi was previously a senior design engineer at GPU giant and now AI darling Nvidia, before landing in senior engineering and architecture roles at Apple; Tesla (working directly under Elon Musk); Zoox (before Amazon acquired the autonomous driving startup); and, most recently, Intel, where he was VP of its AI and supercompute platform offshoot, AXG.

FlexAI co-founder and CTO Dali Kilani has an impressive CV, too, serving in various technical roles at companies including Nvidia and Zynga, while most recently filling the CTO role at French startup Lifen, which develops digital infrastructure for the healthcare industry.

The seed round was led by Alpha Intelligence Capital (AIC), Elaia Partners and Heartcore Capital, with participation from Frst Capital, Motier Ventures, Partech and InstaDeep CEO Karim Beguir.

FlexAI team in Paris

The compute conundrum

To grasp what Tripathi and Kilani are attempting with FlexAI, it’s first worth understanding what developers and AI practitioners are up against in terms of accessing “compute”; this refers to the processing power, infrastructure and resources needed to carry out computational tasks such as processing data, running algorithms, and executing machine learning models.

“Using any infrastructure in the AI space is complex; it’s not for the faint-of-heart, and it’s not for the inexperienced,” Tripathi told TechCrunch. “It requires you to know too much about how to build infrastructure before you can use it.”

By contrast, the public cloud ecosystem that has evolved these past couple of decades serves as a fine example of how an industry has emerged from developers’ need to build applications without worrying too much about the back end.

“If you are a small developer and want to write an application, you don’t need to know where it’s being run, or what the back end is — you just need to spin up an EC2 (Amazon Elastic Compute cloud) instance and you’re done,” Tripathi said. “You can’t do that with AI compute today.”

In the AI sphere, developers must figure out how many GPUs (graphics processing units) they need to interconnect over what type of network, managed through a software ecosystem that they are entirely responsible for setting up. If a GPU or network fails, or if anything in that chain goes awry, the onus is on the developer to sort it.

“We want to bring AI compute infrastructure to the same level of simplicity that the general purpose cloud has gotten to — after 20 years, yes, but there is no reason why AI compute can’t see the same benefits,” Tripathi said. “We want to get to a point where running AI workloads doesn’t require you to become data centre experts.”

With the current iteration of its product going through its paces with a handful of beta customers, FlexAI will launch its first commercial product later this year. It’s basically a cloud service that connects developers to “virtual heterogeneous compute,” meaning that they can run their workloads and deploy AI models across multiple architectures, paying on a usage basis rather than renting GPUs on a dollars-per-hour basis.

GPUs are vital cogs in AI development, serving to train and run large language models (LLMs), for example. Nvidia is one of the preeminent players in the GPU space, and one of the main beneficiaries of the AI revolution sparked by OpenAI and ChatGPT. In the 12 months since OpenAI launched an API for ChatGPT in March 2023, allowing developers to bake ChatGPT functionality into their own apps, Nvidia's market capitalization ballooned from around $500 billion to more than $2 trillion.

LLMs are pouring out of the technology industry, with demand for GPUs skyrocketing in tandem. But GPUs are expensive to run, and renting them from a cloud provider for smaller jobs or ad hoc use cases doesn't always make sense and can be prohibitively expensive; this is why AWS has been dabbling with time-limited rentals for smaller AI projects. But renting is still renting, which is why FlexAI wants to abstract away the underlying complexities and let customers access AI compute on an as-needed basis.

“Multicloud for AI”

FlexAI's starting point is that most developers don't much care whose GPUs or chips they use, whether they come from Nvidia, AMD, Intel, Graphcore or Cerebras. Their main concern is being able to develop their AI and build applications within their budgetary constraints.

This is where FlexAI's concept of "universal AI compute" comes in: FlexAI takes the user's requirements and allocates them to whatever architecture makes sense for that particular job, taking care of all the necessary conversions across the different platforms, whether that's Intel's Gaudi infrastructure, AMD's ROCm or Nvidia's CUDA.

“What this means is that the developer is only focused on building, training and using models,” Tripathi said. “We take care of everything underneath. The failures, recovery, reliability, are all managed by us, and you pay for what you use.”

In many ways, FlexAI is setting out to fast-track for AI what has already been happening in the cloud, meaning more than replicating the pay-per-usage model: It means the ability to go “multicloud” by leaning on the different benefits of different GPU and chip infrastructures.

For example, FlexAI will channel a customer's specific workload depending on their priorities. If a company has a limited budget for training and fine-tuning its AI models, it can set that within the FlexAI platform to get the maximum compute bang for its buck. This might mean going through Intel for cheaper (but slower) compute, but if a developer has a small run that requires the fastest possible output, it can be channeled through Nvidia instead.
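The routing described above can be sketched as a simple scheduler that picks a backend per job. This is a minimal illustration of the idea, not FlexAI's actual system; the backend names, prices and speed figures are all invented for the example.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical backend catalog; the cost/speed figures are illustrative
# assumptions, not FlexAI's actual offerings or prices.
BACKENDS = {
    "intel-gaudi": {"usd_per_unit": 1.0, "relative_speed": 0.6},
    "amd-rocm": {"usd_per_unit": 1.4, "relative_speed": 0.8},
    "nvidia-cuda": {"usd_per_unit": 2.5, "relative_speed": 1.0},
}

@dataclass
class Job:
    compute_units: float           # abstract amount of work the job needs
    priority: str                  # "cost" or "speed"
    budget_usd: Optional[float] = None

def route(job: Job) -> str:
    """Return the backend to run the job on: the fastest backend for
    speed-driven jobs, the cheapest one that fits the budget otherwise."""
    if job.priority == "speed":
        return max(BACKENDS, key=lambda b: BACKENDS[b]["relative_speed"])
    # Cost-driven: keep only backends whose total price fits the budget.
    affordable = [
        b for b in BACKENDS
        if job.budget_usd is None
        or BACKENDS[b]["usd_per_unit"] * job.compute_units <= job.budget_usd
    ]
    if not affordable:
        raise ValueError("no backend fits the budget")
    return min(affordable, key=lambda b: BACKENDS[b]["usd_per_unit"])
```

With these made-up numbers, a budget-capped training job of 100 units and a $150 cap routes to the cheap backend, while a latency-sensitive job routes to the fastest one.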

Under the hood, FlexAI is essentially an "aggregator of demand": it rents the hardware itself through traditional means and, using its "strong connections" with Intel and AMD, secures preferential prices that it spreads across its own customer base. This doesn't necessarily mean side-stepping the kingpin Nvidia, but it does mean that, with Intel and AMD fighting for the GPU scraps left in Nvidia's wake, there is a huge incentive for them to play ball with aggregators such as FlexAI.

“If I can make it work for customers and bring tens to hundreds of customers onto their infrastructure, they [Intel and AMD] will be very happy,” Tripathi said.

This sits in contrast to similar GPU cloud players in the space such as the well-funded CoreWeave and Lambda Labs, which are focused squarely on Nvidia hardware.

"I want to get AI compute to the point where the current general purpose cloud computing is," Tripathi noted. "You can't do multicloud on AI. You have to select specific hardware, number of GPUs, infrastructure, connectivity, and then maintain it yourself. Today, that's the only way to actually get AI compute."

When asked who the exact launch partners are, Tripathi said that he was unable to name all of them due to a lack of “formal commitments” from some of them.

“Intel is a strong partner, they are definitely providing infrastructure, and AMD is a partner that’s providing infrastructure,” he said. “But there is a second layer of partnerships that are happening with Nvidia and a couple of other silicon companies that we are not yet ready to share, but they are all in the mix and MOUs [memorandums of understanding] are being signed right now.”

The Elon effect

Tripathi is more than equipped to deal with the challenges ahead, having worked in some of the world’s largest tech companies.

“I know enough about GPUs; I used to build GPUs,” Tripathi said of his seven-year stint at Nvidia, ending in 2007 when he jumped ship for Apple as it was launching the first iPhone. “At Apple, I became focused on solving real customer problems. I was there when Apple started building their first SoCs [system on chips] for phones.”

Tripathi also spent two years at Tesla from 2016 to 2018 as hardware engineering lead, where he ended up working directly under Elon Musk for his last six months after two people above him abruptly left the company.

“At Tesla, the thing that I learned and I’m taking into my startup is that there are no constraints other than science and physics,” he said. “How things are done today is not how it should be or needs to be done. You should go after what the right thing to do is from first principles, and to do that, remove every black box.”

Tripathi was involved in Tesla’s transition to making its own chips, a move that has since been emulated by GM and Hyundai, among other automakers.

“One of the first things I did at Tesla was to figure out how many microcontrollers there are in a car, and to do that, we literally had to sort through a bunch of those big black boxes with metal shielding and casing around it, to find these really tiny small microcontrollers in there,” Tripathi said. “And we ended up putting that on a table, laid it out and said, ‘Elon, there are 50 microcontrollers in a car. And we pay sometimes 1,000 times margins on them because they are shielded and protected in a big metal casing.’ And he’s like, ‘let’s go make our own.’ And we did that.”

GPUs as collateral

Looking further into the future, FlexAI has aspirations to build out its own infrastructure, too, including data centers. This, Tripathi said, will be funded by debt financing, building on a recent trend that has seen rivals in the space including CoreWeave and Lambda Labs use Nvidia chips as collateral to secure loans — rather than giving more equity away.

“Bankers now know how to use GPUs as collaterals,” Tripathi said. “Why give away equity? Until we become a real compute provider, our company’s value is not enough to get us the hundreds of millions of dollars needed to invest in building data centres. If we did only equity, we disappear when the money is gone. But if we actually bank it on GPUs as collateral, they can take the GPUs away and put it in some other data center.”
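The collateral logic Tripathi describes reduces to simple loan-to-value arithmetic. The unit price and LTV ratio below are assumptions for illustration only; the article does not disclose actual lender terms.

```python
# Back-of-the-envelope arithmetic for a GPU-collateralized loan.
# The $25,000 unit price and 70% loan-to-value (LTV) ratio are assumed
# figures, not terms reported for FlexAI, CoreWeave or Lambda Labs.
def loan_capacity(num_gpus: int, unit_price_usd: float, loan_to_value: float) -> float:
    """Maximum loan a lender might extend against pledged GPUs."""
    collateral_value = num_gpus * unit_price_usd
    return collateral_value * loan_to_value

# Under these assumptions, 10,000 GPUs at $25,000 each with a lender
# advancing 70% of collateral value would support a $175 million loan.
```

The appeal over equity is visible in the numbers: the loan scales with the hardware's resale value, not the startup's valuation, and if the borrower defaults, the lender can repossess the GPUs and redeploy them elsewhere.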



Skyflow raises $30M more as AI spikes demand for its privacy business | TechCrunch


This morning, Skyflow announced that it has raised a $30 million Series B extension led by Khosla Ventures. The deal is interesting on a number of fronts, including the round’s structure and how Skyflow has been impacted by the growth of AI.

The new capital comes after Skyflow expanded its data privacy business to support new AI technologies last year. In an interview with TechCrunch, Skyflow co-founder and CEO Anshu Sharma said that its AI-related software offerings have rapidly become a material portion of its total business.

The startup recently saw revenue from large language model–related usage grow from 0% to around 30% of its total, implying that the company's growth rate was augmented by market need for data-management services stemming from the data voracity of LLMs.

Skyflow's business began life as an API that stores personally identifiable information (PII) on behalf of customers; AI has broadened the sort of data such a vault might contain. In the current era of data accretion (Databricks and Snowflake are not household names in tech today by accident) and the desire to put that data to work using AI models, ensuring that only approved data is seen by an LLM and the person prompting it is no small permissions and governance task.
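The vault pattern behind this kind of product can be sketched in a few lines: PII is swapped for opaque tokens before text ever reaches a model, and only an approved caller can swap the originals back in. This is an illustrative toy, not Skyflow's actual API, and it handles only email addresses for brevity.

```python
import re

class PIIVault:
    """Toy PII vault: replaces email addresses with opaque tokens before
    text reaches an LLM, and reveals them only to approved callers.
    An illustrative sketch, not Skyflow's actual API."""

    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

    def __init__(self):
        self._store = {}  # token -> original PII value

    def tokenize(self, text: str) -> str:
        """Swap each detected email for a token, keeping the original."""
        def swap(match):
            token = f"<pii:{len(self._store)}>"
            self._store[token] = match.group(0)
            return token
        return self.EMAIL.sub(swap, text)

    def detokenize(self, text: str, caller_approved: bool) -> str:
        """Restore originals, but only for a permission-cleared caller."""
        if not caller_approved:
            raise PermissionError("caller is not cleared to view PII")
        for token, value in self._store.items():
            text = text.replace(token, value)
        return text
```

In practice the model only ever sees the tokenized text, so a prompt, a log line or a model response can leak nothing more than an opaque placeholder; the governance decision is concentrated in the single `detokenize` gate.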

That the startup is raising more capital now is not a massive shock. After raising a $45 million Series B back in 2021, the company told TechCrunch that it deployed a chunk of that capital to build out its regional footprint to better support data residency rules that are increasingly important pieces of regulation for companies to get right. (In its latest news dump, Skyflow said that it expanded its support of China and that market’s particular data rules.) A few years down the road is never an odd time to raise more capital, but the fact that the round came in smaller, and in the form of an extension, did catch our eye.

When asked why Skyflow is calling the round an extension instead of a Series C when it was raised at a slightly different valuation, Sharma said that his firm and its customers don’t really care what a round is called. “Money is money to us,” he said. What matters more, in his words, is that his company saw “very low dilution and [was] able to raise capital to grow even faster.” Fair enough.

There’s a bit more to the round name that is worth our time, however. Sharma said that he learned from talking to venture capitalists for this latest round that late-stage investors are pulling back on how much they are investing, while investors that he put under an “early growth” tag were more active. So, by calling the round a Series B extension, he could better tune his fundraising process. Sharma also stressed that Khosla Ventures has made a number of AI investments, making the firm aware of the importance of data privacy and security inside corporate LLM usage.

In a canned statement, Vinod Khosla said that as “the need for trust and privacy infrastructure is key to protecting sensitive data,” making a tool to ensure that data doesn’t leak in any context is “vital for every enterprise business.” Hence the Skyflow deal.

In broader growth terms, Skyflow more than doubled in size last year, expanding its revenues by 110%. Sharma declined to get specific regarding what revenue type that figure was tied to, be it annual recurring revenue or trailing revenue or similar. But he did say that the firm is now in the double-digit million revenue realm.

This Skyflow round slots neatly into several trends we’ve observed recently. Startups that raised at the peak of the 2021 venture froth are now seeking more modest capital follow-on rounds. The explosive growth in AI is creating healthy businesses for LLM infrastructure and support companies. And, finally, companies that offer their tools via an API are still doing quite well, even if usage-based pricing has taken some bruising in recent years.

Given how quickly AI-related data privacy work has become a key revenue plank for Skyflow, its growth will provide us a window into how quickly enterprise demand for LLMs expands, and just how much money you can make selling digital picks and shovels in this, the latest software gold rush.



Buy now, pay later on a Porsche? Zaver now has $30M to make it a reality | TechCrunch


We last checked in on Zaver, a Swedish B2C buy-now-pay-later (BNPL) provider in Europe, when it raised a $5 million funding round in 2021. The company has now closed a $10 million extension to its Series A funding round, bringing its total Series A to $20 million. Total investment to date stands at $30 million.

In Europe, Zaver competes on BNPL with Klarna, PayPal, and incumbents such as Santander and BNP Paribas.

However, Zaver's pitch is that it can assess the risk on BNPL cart sizes of up to €200,000 in real time, thanks to its risk assessment algorithms. Other BNPL providers rarely fund anything beyond €3,000, at least in Europe.

Founded by Amir Marandi and Linus Malmén in mid-2016, while both were students at the KTH Royal Institute of Technology in Stockholm, the company has a strategic alliance with the Nissan Group for direct-to-consumer sales in the Nordics, and it has client relationships with Volkswagen and Porsche.

This allows customers to buy even a car on BNPL.

Marandi, CEO and founder, told me the company is able to offer size-agnostic payment solutions because it’s spent most of its product development not “on linear regression models (like the others) but on advanced risk assessment algorithms.”

“While our competitors have concentrated their efforts on marketing, our focus has been resolutely on the back-end engineering side of things,” he said.

He thinks declining acceptance rates for larger transactions in the payments industry present an opportunity for a "size-agnostic payment platform" going up to as much as €200,000.

This may be where the BNPL industry is heading.

Early innovators like Klarna, Trustly, Tink, and iZettle capitalized on this shift to online payments, but the expansion of e-commerce infrastructure has set the stage for an increase in the average online transaction value.

This shift first appeared in 2012 when Elon Musk proposed selling a Tesla online, and now today many OEMs are attempting to go “direct-to-consumer” using BNPL.

Investors in the Series A include FROS Ventures, Hållbar AB, Hobohm Brothers Equity, JOvB Investments, MAHR Projects, Skagerak Ventures, and the King.com founders, Sebastian Knutsson and Riccardo Zacconi.



SaaS startup SingleInterface raises $30M to help more businesses get online | TechCrunch


SingleInterface, a SaaS startup offering tools to offline businesses to grow their revenues by leveraging the web, has raised $30 million in its maiden external fundraising round as the Singaporean startup seeks to expand its footprint internationally and improve products to make them more relevant to global brands.

While offline operations remain prominent for enterprises across major markets, including the U.S., Asia and Europe, businesses have started embracing online marketing strategies to attract more customers and increase their revenues. The primary reason is the growing number of internet users across the globe: Nearly 67% of the world's population, or 5.4 billion people, is online, according to the International Telecommunication Union, a 4.7% increase since 2022. Conversely, the United Nations' agency said the global offline population steadily dipped to 2.6 billion in 2023.

Nonetheless, finding a one-stop solution to getting online is challenging. Some may help businesses build a website, whereas others may just be useful for getting listed on search engines. Similarly, some solutions are limited to a particular sector. SingleInterface addresses that problem by providing a suite of products to multi-location brands, whether in the food and beverage, retail, or automotive business.

The startup works with over 400 multi-location brands across India, Southeast Asia, and the Middle East to help them manage the digital presence of their physical stores and retail outlets. It provides tools to let businesses drive customer engagement online, enhance discoverability through search engine and maps listings, manage feedback and web reviews, and even build websites with SEO management, delivering insights for each business location. The startup also uses AI to ease businesses' journey to digitize thousands of stores in one go.

Tarun Sobhani, co-founder and CEO of SingleInterface, told TechCrunch that the startup helps businesses grow their revenues by 15% to 20% using its products.

“Enabling marketing strategies at a storefront level becomes a very tedious task for a brand because devising thousands of marketing strategies for thousands of stores is never easy. That’s where the whole AI automation piece comes in, which enables a better marketing ROI for each store,” he said in an interview.

Alongside letting businesses create detailed store-level websites of their local stores, SingleInterface allows them to run localized offers and events within a particular location and communicate two-way over WhatsApp, Facebook and Google Business Messages. The startup also helps multi-location brands understand why some stores have poor ratings while others have four- or five-star ratings. Further, it helps run online campaigns for different locations from a single source and optimize them based on their local competitors, market dynamics and distinct business hours.

SingleInterface already counts brands such as KFC, Pizza Hut, Nissan, Apollo Tyres, HDFC Bank and TVS Motor, as well as multiple group companies of large Indian conglomerates, including Tata Group, Reliance Group, Aditya Birla Group and Bajaj Group, among its customers. It is also scaling up in Southeast Asia and Australia and is looking to enter Japan and Korea soon and scale up in the Middle East.

Led by Singapore’s growth investment firm Asia Partners, the all-equity round saw the participation of PayPal Ventures. The startup plans to use the new funds to grow its geographical presence and to continue investing in its products and further enhance consumer experience, Sobhani told TechCrunch.

Before the fresh round, SingleInterface was bootstrapped. Sobhani and Harish Bahl, founder of consumer internet investor and venture-building firm Smile Group, co-founded the startup in 2015, though Sobhani said it only began offering its tools to customers in 2017.

SingleInterface currently has a team of about 235 people, most of whom are based in India, with its development center in New Delhi. Sobhani said the startup plans to add many people in the Asia-Pacific region to grow its presence.

“We are thrilled to be partnering with Tarun, Harish, and the SingleInterface team to support their growth ambitions in India and globally. SingleInterface has shown an exceptional track record of fostering customer engagement and commerce for large enterprises over the last several years and has firmly established itself as a prominent player in the region, successfully integrating offline and online customer journeys to drive growth for physical retail locations,” Oliver Rippel, co-founder of Asia Partners, said in a prepared statement.

