
Robotic Automations

Midi is building a digital platform for an oft-overlooked area of women's health | TechCrunch


When Joanna Strober was around 47, she stopped sleeping. While losing sleep is a common symptom of perimenopause, she first had to go to multiple providers, including driving 45 minutes out of San Francisco and paying $750 out of pocket, to get that diagnosis and proper treatment.

“That feeling of wow, I’ve really been suffering unnecessarily for the past year really stuck with me,” Strober said on a recent episode of TechCrunch’s Found podcast. “I started talking to all my friends and trying to understand what’s going on with them and what became clear is that perimenopause and menopause is this big thing. It kind of hits women like a pile of bricks. There are lots of different symptoms to it, and there are very few providers who are trained to take care of this population.”

That realization is what inspired Strober to launch Midi Health, a telehealth platform designed to serve women in midlife by connecting them with providers that are trained in perimenopause and menopause symptoms and treatments.

Despite her “aha” moment, Strober explained why she couldn’t launch the startup right away. She said that Midi couldn’t have existed had the U.S. government not changed its rules surrounding telehealth and where people could access care during the pandemic. Because of those changes to digital health rules, Strober said, the company was able to launch a platform that brings care to women, rather than women having to seek out in-person care.

“Understanding that this problem that had been around for a long time and could finally be addressed using telehealth was a very exciting revelation,” Strober said. “And that’s why I wanted to start this company.”

Midi operates a little bit differently than many of the other digital health companies started in the post-pandemic wave, Strober said. She said Midi isn’t set up to be a digital avenue for users to get one-off care or treatment as fast as possible like many other companies of the same era, but rather to be a platform where women build long-term relationships with providers that make them feel seen.

This approach is also why Strober thinks Midi has been able to keep growing and raising VC funds as VCs have become less interested in the category. The company recently raised a $60 million Series B round led by Emerson Collective with participation from Google Ventures, SteelSky Ventures, and Muse Capital, among others. This round brings the company’s total funding to $99 million.

Digital health startups raised $13.2 billion globally in 2023, according to CB Insights data. That marks a 48% decrease from the $25.5 billion raised in 2022 and a 75% decrease from 2021, when a record $52.7 billion was invested.

“I think too many telehealth companies didn’t think about that long-term customer relationship,” Strober said. “We view ourselves as building a trusted healthcare brand. So our brand is expert care for women. We need to give you that amazing care so you come back to us over and over and over again. That is what women are doing.”

Midi isn’t Strober’s first digital health startup and she talked about how her past experience building Kurbo Health, a startup focused on child obesity before digital health was even a thing, influenced her choices in building Midi. She also talked about how her past life as a venture capitalist also played a role in how she approached the business.

With this latest round of funding, Midi looks forward to expanding care in areas that fall under perimenopause and menopause, including sexual wellness, hair and skin care, and access to testosterone.

“People keep on asking, you know, when are you leaving perimenopause, and menopause?” Strober said. “But perimenopause and menopause is a big market. So we are working a lot on understanding what are the health needs of women during this period of their life and how do we appropriately rise to meet those concerns.”


Software Development in Sri Lanka


Anon is building an automated authentication layer for the GenAI age | TechCrunch


As the notion of the AI agent takes hold and more tasks are completed without a human involved, a new kind of authentication will be needed to ensure that only agents with the proper approval can access particular resources. Anon, an early-stage startup, is helping developers add automated authentication in a safe and secure way.

On Wednesday, the company announced a $6.5 million investment — and that the product is now generally available to all.

The founders came up with the idea for this company out of necessity. Their first idea was actually building an AI agent, but CEO Daniel Mason says they quickly came up against a problem around authentication — simply put, how to enter a username and password automatically and safely. “We kept running into this hard edge of customers wanting us to do X, but we couldn’t do X unless we had this delegated authentication system,” Mason told TechCrunch.

He began asking around about how other AI startups were handling authentication and found there weren’t really any good answers. “In fact, a lot of the solutions people were using were actually quite a bit less secure. They were mostly inheriting authentication credentials from a user’s local machine or browser-based permissions,” he said.

And as they explored this problem more in depth, they realized that this was in fact a better idea for a company than their original AI agent idea. At this point, they pivoted to becoming a developer tool for building an automated authentication layer designed for AI-driven applications and workflows. The solution is delivered in the form of a software development kit (SDK), and lets developers incorporate authentication for a specific service with a few lines of code. “We want to sit at that authentication level and really build access permissioning, and our customers are specifically the developers,” Mason said.

The company is addressing security concerns about an automated authentication tool by working toward building a zero trust architecture where they protect the credentials in a few key ways. For starters, they never control the credentials themselves; those are held by the end user. There is also an encryption layer, where half the key is held by the user and half by Anon, and it requires both to unlock the encryption. Finally, the user always has ultimate control.
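Anon hasn’t published the details of its encryption scheme, but the two-part key arrangement described above can be sketched with a simple two-share XOR secret split, where neither party’s share reveals anything about the key on its own. All names here are illustrative, not Anon’s API:

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes) -> tuple[bytes, bytes]:
    """Split a key into two shares; neither share alone reveals the key."""
    user_share = os.urandom(len(key))
    service_share = xor_bytes(key, user_share)
    return user_share, service_share

def reconstruct_key(user_share: bytes, service_share: bytes) -> bytes:
    """Both shares are required to recover the key."""
    return xor_bytes(user_share, service_share)

# The credential-encryption key never needs to exist in one place at rest:
# the user holds one share, the service holds the other.
key = os.urandom(32)
user_share, service_share = split_key(key)
assert reconstruct_key(user_share, service_share) == key
```

A real deployment would layer authenticated encryption (e.g. AES-GCM) beneath such a split and keep each share in a separate trust domain; the XOR split only illustrates why compromising either party alone doesn’t expose the credentials.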

“Our platform is such that as a user, when I grant access, I still maintain control of that session — I’m the ultimate holder of the password, username, 2FA — and so even in the event of our system, or even a customer system getting compromised, they do not have access to those root credentials,” company co-founder and CTO Kai Aichholz said.

The founders recognize that other companies, large and small, will probably take a stab at solving this problem, but they are banking on a head start and a broad vision to help them stave off eventual competitors. “We’re looking to basically become a one-stop integration platform where you can come and build these actions and build the automation and know that you’re doing it in a way that’s secure and your end user credentials are secure and the automations are going to happen,” Mason said.

The $6.5 million investment breaks down into two tranches: a pre-seed of around $2 million at launch and a seed that closed at the end of last year for around $4.5 million. Investors include Union Square Ventures and Abstract Ventures, which led the seed, and Impatient Ventures and ex/ante, which led the pre-seed, along with several industry angels.



With Easel, ex-Snap researchers are building the next-generation Bitmoji thanks to AI | TechCrunch


Easel is a new startup that sits at the intersection of the generative AI and social trends, founded by two former employees at Snap. The company has been working on an app that lets you create images of yourself and your friends doing cool things directly from your favorite iMessage conversations.

There’s a reason why I mentioned that the co-founders previously worked at Snap before founding Easel. While Snap may never reach the scale of Instagram or TikTok, it has arguably been the most innovative social company since social apps started taking over smartphone home screens.

Before Apple made augmented reality and virtual reality cool again, Snap blazed the AR trail with lenses. Even if you never really used Snapchat, chances are you’ve played around with goofy lenses on your phone or using someone else’s phone. The feature has had a massive cultural impact.

Similarly, before Meta tried to make virtual avatars cool again with massive investments in Horizon Worlds and the company’s Reality Labs division, Snap made a curious move when it acquired Bitmoji back in 2016. At the time, people thought the ability to create a virtual avatar and use it to communicate with your friends was just a fad. Now, with Memojis in iMessage and FaceTime, and Meta avatars also popping up in Meta’s apps, virtual avatars have become a fun, innovative way to express yourself.

“I was at Snap for five years. Before that, I was at Stanford. I moved down to LA to join Snap in Bobby Murphy’s research team, where we kind of worked on a range of futuristic things,” Easel co-founder and CEO Rajan Vaish told TechCrunch in an exclusive interview. He co-founded Easel with Sven Kratz, who was a senior research engineer at Snap.

But this team was dissolved in 2022 as part of Snap’s various rounds of layoffs. The duo used the opportunity to bounce back and keep innovating — but outside of Snap.

AI as a personal communication vector

Easel is using generative AI to let users create Bitmoji-style stickers of themselves drinking coffee, chilling at the beach, riding a bicycle — anything you want as long as it can be described and generated by an AI model.

When you first start using Easel, you capture a few seconds of your face so that the company can create a personal AI model and use it to generate stickers. Easel currently uses Stable Diffusion’s technology to create images. The fact that you can generate images with your own face in them is both a bit uncanny and much more engaging than an average AI-generated image.

“Once you give us your photos, we start training on our servers. And then we create an AI avatar model for you. We now know what your face looks like, what your hair looks like, etc.,” Vaish said.

But Easel isn’t just an image generation product. It’s a multiplayer experience that lives in your conversations. The startup has opted to integrate Easel into the native iOS Messages app so that you don’t have to move to a new platform, and create a new social graph, just to swap funny personal stickers.

Instead, sending an Easel sticker works just like sending an image via iMessage. On the receiving end, when you tap on the image, it opens up Easel on top of your conversation. This way, your friends can also install Easel and remix your stickers. This is one of the key features behind Bitmoji, too, as you can create scenes with both you and your friend in the stickers, amping up the virality.

Image Credits: Easel

Easel allows users to create more highly customized personal stickers than Bitmoji. Say, for example, you want a sticker that shows you’ll soon be drinking cocktails with your buddies in Paris. You could use a generic cocktail-drinking Bitmoji — but it won’t look like Paris. (And you’ve already seen this Bitmoji many times before.) Whereas, with Easel — and thanks to generative AI — you get to design the background scenes, locations and scenarios where your personal avatar appears.

Finally, Easel users can also share stickers to the app’s public feed to inspire others. This can create a sort of seasonality within the app as you might see a lot of firework stickers around July 4, for instance. It’s also a laid-back use case for Easel, as you can scroll until you find a sticker you like, tap “remix” and send a similar sticker (but with your own face) to your friends.

Easel has already secured $2.65 million in funding from Unusual Ventures, f7 Ventures and Corazon Capital, as well as various angel investors, including a few professors from Stanford University.

Now let’s see how well Easel blends into people’s conversations. “We have learned two very unique use cases. One is that there’s a big demographic that is not very comfortable sharing their faces,” said Vaish. “I’m not a selfie person and a lot of people are not. This is allowing them to share what they’re up to in a more visual format.”

“The second one is that Easel allows people to stay in the moment,” he added, pointing out that sometimes you just don’t want to take out your phone and capture the moment. But Easel still enables a form of visual communication after the fact.



Women in AI: Allison Cohen on building responsible AI projects | TechCrunch


To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Allison Cohen is the senior applied AI projects manager at Mila, a Quebec-based community of more than 1,200 researchers specializing in AI and machine learning. She works with researchers, social scientists and external partners to deploy socially beneficial AI projects. Cohen’s portfolio of work includes a tool that detects misogyny, an app to identify online activity from suspected human trafficking victims, and an agricultural app to recommend sustainable farming practices in Rwanda.

Previously, Cohen was a co-lead on AI drug discovery at the Global Partnership on Artificial Intelligence, an organization to guide the responsible development and use of AI. She’s also served as an AI strategy consultant at Deloitte and a project consultant at the Center for International Digital Policy, an independent Canadian think tank.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

The realization that we could mathematically model everything from recognizing faces to negotiating trade deals changed the way I saw the world, which is what made AI so compelling to me. Ironically, now that I work in AI, I see that we can’t — and in many cases shouldn’t — be capturing these kinds of phenomena with algorithms.

I was exposed to the field while I was completing a master’s in global affairs at the University of Toronto. The program was designed to teach students to navigate the systems affecting the world order — everything from macroeconomics to international law to human psychology. As I learned more about AI, though, I recognized how vital it would become to world politics, and how important it was to educate myself on the topic.

What allowed me to break into the field was an essay-writing competition. For the competition, I wrote a paper describing how psychedelic drugs would help humans stay competitive in a labor market riddled with AI, which qualified me to attend the St. Gallen Symposium in 2018 (it was a creative writing piece). My invitation, and subsequent participation in that event, gave me the confidence to continue pursuing my interest in the field.

What work are you most proud of in the AI field?

One of the projects I managed involved building a dataset containing instances of subtle and overt expressions of bias against women.

For this project, staffing and managing a multidisciplinary team of natural language processing experts, linguists and gender studies specialists throughout the entire project life cycle was crucial. It’s something that I’m quite proud of. I learned firsthand why this process is fundamental to building responsible applications, and also why it’s not done enough — it’s hard work! If you can support each of these stakeholders in communicating effectively across disciplines, you can facilitate work that blends decades-long traditions from the social sciences and cutting-edge developments in computer science.

I’m also proud that this project was well received by the community. One of our papers got a spotlight recognition in the socially responsible language modeling workshop at one of the leading AI conferences, NeurIPS. Also, this work inspired a similar interdisciplinary process that was managed by AI Sweden, which adapted the work to fit Swedish notions and expressions of misogyny.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

It’s unfortunate that in such a cutting-edge industry, we’re still seeing problematic gender dynamics. It’s not just adversely affecting women — all of us are losing. I’ve been quite inspired by a concept called “feminist standpoint theory” that I learned about in Sasha Costanza-Chock’s book, “Design Justice.”

The theory claims that marginalized communities, whose knowledge and experiences don’t benefit from the same privileges as others, have an awareness of the world that can bring about fair and inclusive change. Of course, not all marginalized communities are the same, and neither are the experiences of individuals within those communities.

That said, a variety of perspectives from those groups are critical in helping us navigate, challenge and dismantle all kinds of structural challenges and inequities. That’s why a failure to include women can keep the field of AI exclusionary for an even wider swath of the population, reinforcing power dynamics outside of the field as well.

In terms of how I’ve handled a male-dominated industry, I’ve found allies to be quite important. These allies are a product of strong and trusting relationships. For example, I’ve been very fortunate to have friends like Peter Kurzwelly, who’s shared his expertise in podcasting to support me in the creation of a female-led and -centered podcast called “The World We’re Building.” This podcast allows us to elevate the work of even more women and non-binary people in the field of AI.

What advice would you give to women seeking to enter the AI field?

Find an open door. It doesn’t have to be paid, it doesn’t have to be a career and it doesn’t even have to be aligned with your background or experience. If you can find an opening, you can use it to hone your voice in the space and build from there. If you’re volunteering, give it your all — it’ll allow you to stand out and hopefully get paid for your work as soon as possible.

Of course, there’s privilege in being able to volunteer, which I also want to acknowledge.

When I lost my job during the pandemic and unemployment was at an all-time high in Canada, very few companies were looking to hire AI talent, and those that were hiring weren’t looking for global affairs students with eight months’ experience in consulting. While applying for jobs, I began volunteering with an AI ethics organization.

One of the projects I worked on while volunteering was about whether there should be copyright protection for art produced by AI. I reached out to a lawyer at a Canadian AI law firm to better understand the space. She connected me with someone at CIFAR, who connected me with Benjamin Prud’homme, the executive director of Mila’s AI for Humanity Team. It’s amazing to think that through a series of exchanges about AI art, I learned about a career opportunity that has since transformed my life.

What are some of the most pressing issues facing AI as it evolves?

I have three answers to this question that are somewhat interconnected. I think we need to figure out:

  1. How to reconcile the fact that AI is built to be scaled while ensuring that the tools we’re building are adapted to fit local knowledge, experience and needs.
  2. If we’re to build tools that are adapted to the local context, we’re going to need to incorporate anthropologists and sociologists into the AI design process. But there are a plethora of incentive structures and other obstacles preventing meaningful interdisciplinary collaboration. How can we overcome this?
  3. How can we affect the design process even more profoundly than simply incorporating multidisciplinary expertise? Specifically, how can we alter the incentives such that we’re designing tools built for those who need it most urgently rather than those whose data or business is most profitable?

What are some issues AI users should be aware of?

Labor exploitation is one of the issues that I don’t think gets enough coverage. There are many AI models that learn from labeled data using supervised learning methods. When the model relies on labeled data, there are people who have to do this tagging (i.e., someone adds the label “cat” to an image of a cat). These people (annotators) are often the subjects of exploitative practices. For models that don’t require the data to be labeled during training (as is the case with some generative AI and other foundation models), datasets can still be built exploitatively in that the developers often don’t obtain consent from, or provide compensation or credit to, the data creators.

I would recommend checking out the work of Krystal Kauffman, who I was so glad to see featured in this TechCrunch series. She’s making headway in advocating for annotators’ labor rights, including a living wage, the end to “mass rejection” practices, and engagement practices that align with fundamental human rights (in response to developments like intrusive surveillance).

What is the best way to responsibly build AI?

Folks often look to ethical AI principles in order to claim that their technology is responsible. Unfortunately, ethical reflection can only begin after a number of decisions have already been made, including but not limited to:

  1. What are you building?
  2. How are you building it?
  3. How will it be deployed?

If you wait until after these decisions have been made, you’ll have missed countless opportunities to build responsible technology.

In my experience, the best way to build responsible AI is to be cognizant of — from the earliest stages of your process — how your problem is defined and whose interests it satisfies; how the orientation supports or challenges pre-existing power dynamics; and which communities will be empowered or disempowered through the AI’s use.

If you want to create meaningful solutions, you must navigate these systems of power thoughtfully.

How can investors better push for responsible AI?

Ask about the team’s values. If the values are defined, at least, in part, by the local community and there’s a degree of accountability to that community, it’s more likely that the team will incorporate responsible practices.



Exclusive: Building owners are often in the dark about their carbon pollution. A new algorithm could shed light on it


Starting this year, thousands of buildings in New York City will have to start reducing their carbon emissions. But before that happens, owners need to understand how much pollution they are generating.

Electricity alone makes up 60% of the total energy use in commercial buildings, according to the U.S. Energy Information Administration. There are plenty of tools out there that can convert an electric bill into estimated carbon emissions, but many are based on rough estimates. With the growth of intermittent wind and solar, knowing when you’re using electricity is almost as important as how much you’re using.

It’s why Nzero, a carbon-tracking startup, developed a new algorithm, giving building owners reports that estimate carbon pollution down to the hour.
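Nzero’s algorithm isn’t public, but the core idea of hourly carbon accounting is straightforward: multiply each hour’s electricity use by the grid’s carbon intensity for that hour, rather than applying one annual average factor. A minimal sketch with made-up numbers shows why timing matters even when total consumption is identical:

```python
def hourly_emissions_kg(hourly_kwh, hourly_intensity_g_per_kwh):
    """Estimate CO2 emissions (kg) per hour: usage times grid carbon intensity.

    hourly_kwh: electricity consumed in each hour (kWh)
    hourly_intensity_g_per_kwh: grid carbon intensity in those hours (gCO2/kWh)
    """
    return [kwh * g / 1000.0
            for kwh, g in zip(hourly_kwh, hourly_intensity_g_per_kwh)]

# Hypothetical grid intensity over four hours: cleaner early, dirtier later
# (e.g. solar rolling off and gas peakers ramping up).
intensity = [300, 300, 450, 600]

daytime = [10, 10, 5, 5]   # 30 kWh, weighted toward cleaner hours
evening = [5, 5, 10, 10]   # 30 kWh, weighted toward dirtier hours

print(sum(hourly_emissions_kg(daytime, intensity)))  # 11.25 kg
print(sum(hourly_emissions_kg(evening, intensity)))  # 13.5 kg
```

The same 30 kWh produces different emissions depending on when it was drawn, which is the gap a single annualized conversion factor can’t capture.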

Some owners whose buildings are equipped with advanced meters and sensors already have that data, but many do not. “Better data is going to give you better outcomes,” John Rula, Nzero’s CTO, told TechCrunch, “but it should not be a blocker.”

The problem can be especially vexing for triple net lease REITs, a class of real estate investment trust favored by investors. The REIT is responsible for a building’s emissions, but because the owner doesn’t pay the utilities, it has little insight into the pollution the building generates.

“They’re begging their customers to provide this data and with very little success,” Rula said.

Using the building’s address and any additional information the owner can provide, including square footage and the types of heating and cooling systems it uses, Nzero can generate estimates that it says are more accurate than the owners previously had.

From there, the company’s software helps building owners identify upgrades and retrofits that will reduce emissions while also being the most cost effective.

“There’s all these different steps and hurdles of which data collection is one, compliance reporting is another, but they’re not the end goal, right?” Rula said. “The end goal is to promote and accelerate decarbonization.”



Cambium is building a recycled wood supply chain | TechCrunch


The global demand for wood could grow by 54% between 2010 and 2050, according to a study by the World Resources Institute. While some building materials like steel get consistently recycled back into the supply chain, wood does not. Cambium hopes to fix that.

Cambium looks to build the supply chain that keeps wood from being wasted by connecting those with already-been-used wood to the businesses and folks that need it. Cambium co-founder and CEO Ben Christensen recently told TechCrunch’s Found podcast that only 5% to 10% of wood gets reused currently, with most ending up in landfills or turned into mulch.

“We’re building a better value chain where you can use local material, you can use salvaged material, and all of that is connected through our technology,” Christensen said. “So that’s what we do is we deliver carbon smart wood, locally salvaged wood, tracked on our technology, to large buyers to build buildings, to build furniture, to use any sort of thing that you use wood for. And we do that in a really efficient and cost-competitive way.”

Demand for more sustainable wood has been growing in recent years, Christensen said, but before Cambium there wasn’t a good system to find the recycled wood. Cambium fixes that and more, he said. The company goes to businesses with recycled wood to sell and shows them the demand for their products while also selling its software that helps with inventory management and point of sale to these suppliers.

Cambium also helps buyers get better visibility into where their wood is coming from and can further reduce their carbon footprint by selecting a local vendor, Christensen said.

“People like really, really want to buy this material. We’ve been really overwhelmed with demand there, and that helps us get sourcing and volume onto the platform in order to go and meet that demand,” Christensen said.

Christensen added that the company has benefited from a generational shift too as construction companies and people in wood-related trades retire and the next generation of folks in those fields look to adopt technology and be more environmentally friendly.

Cambium was founded in 2019 and is based in Washington, D.C. The startup has raised more than $8.5 million in funding from VCs including The Alumni Fund, Gaingels and MaC Venture Capital, among others.



NeuBird is building a generative AI solution for complex cloud-native environments | TechCrunch


NeuBird founders Goutham Rao and Vinod Jayaraman came from Portworx, a cloud-native storage solution they eventually sold to Pure Storage in 2019 for $370 million. It was their third successful exit.

When they went looking for their next startup challenge last year, they saw an opportunity to combine their cloud-native knowledge, especially around IT operations, with the burgeoning area of generative AI. 

Today NeuBird announced a $22 million investment from Mayfield to get the idea to market. It’s a hefty amount for an early-stage startup, but the firm is likely banking on the founders’ experience to build another successful company.

Rao, the CEO, says that while the cloud-native community has done a good job at solving a lot of difficult problems, it has created increasing levels of complexity along the way. 

“We’ve done an incredible job as a community over the past 10-plus years building cloud-native architectures with service-oriented designs. This added a lot of layers, which is good. That’s a proper way to design software, but this also came at a cost of increased telemetry. There’s just too many layers in the stack,” Rao told TechCrunch.

They concluded that this level of data was making it impossible for human engineers to find, diagnose and solve problems at scale inside large organizations. At the same time, large language models were beginning to mature, so the founders decided to put them to work on the problem.

“We’re leveraging large language models in a very unique way to be able to analyze thousands and thousands of metrics, alerts, logs, traces and application configuration information in a matter of seconds and be able to diagnose what the health of the environment is, detect if there’s a problem and come up with a solution,” he said.

The company is essentially building a trusted digital assistant to the engineering team. “So it’s a digital co-worker that works alongside SREs and ITOps engineers, and monitors all of the alerts and logs looking for issues,” he said. The goal is to reduce the amount of time it takes to respond to and solve an incident from hours to minutes, and they believe that by putting generative AI to work on the problem, they can help companies achieve that goal. 

The founders understand the limitations of large language models, and are looking to reduce hallucinated or incorrect responses by using a limited set of data to train the models, and by setting up other systems that help deliver more accurate responses.

“Because we’re using this in a very controlled manner for a very specific use case, for environments we know, we can cross-check the results that are coming out of the AI, again through a vector database, and see if it’s even making sense. If we’re not comfortable with it, we won’t recommend it to the user.”
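NeuBird hasn’t detailed its cross-checking mechanism; a production system would use learned embeddings and a real vector database. As a rough sketch of the idea, here is a gate that only surfaces a model-proposed fix if it resembles something in a corpus of vetted remediations, using a toy bag-of-words cosine similarity in place of embeddings (all names and thresholds are illustrative):

```python
import math
from collections import Counter

def cosine(a: str, b: str) -> float:
    """Cosine similarity between two texts using bag-of-words vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def vet_recommendation(proposed: str, vetted_corpus: list[str],
                       threshold: float = 0.5):
    """Surface a proposed fix only if it resembles a known-good remediation."""
    best = max((cosine(proposed, doc) for doc in vetted_corpus), default=0.0)
    return proposed if best >= threshold else None

vetted = [
    "restart the pod and increase the memory limit",
    "rotate the expired tls certificate on the ingress controller",
]
print(vet_recommendation("increase the pod memory limit and restart", vetted))
print(vet_recommendation("delete the production database", vetted))  # None
```

The gate errs toward silence: an off-corpus suggestion is withheld rather than recommended, matching the "if we're not comfortable with it, we won't recommend it" posture described above.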

Customers can connect directly to their various cloud systems by entering their credentials, and without moving data, NeuBird can use the access to cross-check against other available information to come up with a solution, reducing the overall difficulty associated with getting the company-specific data for the model to work with. 

NeuBird uses various models, including Llama 2 for analyzing logs and metrics. They are using Mistral for other types of analysis. The company actually turns every natural language interaction into a SQL query, essentially turning unstructured data into structured data. They believe this will result in greater accuracy. 
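The schema and query routing below are hypothetical; in NeuBird’s actual pipeline an LLM generates the SQL. This sketch substitutes a fixed template for the model so the flow, natural-language question in, structured query over telemetry out, stays deterministic and checkable:

```python
import sqlite3

# Hypothetical telemetry table; NeuBird's real schema is not public.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE alerts (service TEXT, severity TEXT, count INTEGER)")
conn.executemany("INSERT INTO alerts VALUES (?, ?, ?)", [
    ("checkout", "critical", 12),
    ("checkout", "warning", 40),
    ("search", "critical", 3),
])

def question_to_sql(question: str) -> str:
    """Stand-in for the LLM step: map a natural-language question to SQL.

    An LLM would generate this query in the real pipeline; a fixed
    template keeps the sketch deterministic.
    """
    if "critical" in question.lower():
        return ("SELECT service, count FROM alerts "
                "WHERE severity = 'critical' ORDER BY count DESC")
    raise ValueError("unrecognized question")

rows = conn.execute(question_to_sql("Which services have critical alerts?")).fetchall()
print(rows)  # [('checkout', 12), ('search', 3)]
```

Because the generated text is SQL rather than free-form prose, its answer is grounded in whatever rows actually exist, which is one way structuring unstructured interactions can improve accuracy.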

The early-stage startup is working with design and alpha partners right now, refining the idea as it works to bring the product to market later this year. Rao says they took a big chunk of money out of the gate because they wanted room to work on the problem without having to worry about raising more money too soon.



Metalab goes from quietly building the internet to investing in it | TechCrunch


Nearly 20 years after finding success in helping startups build products, Canadian interface design firm Metalab has launched Metalab Ventures to invest in many of those product-led startups.

Serial entrepreneur and investor Andrew Wilkinson founded Metalab in 2006; the company went on to support product innovations at Slack, Coinbase, Uber and Tumblr.

Metalab often works with startups, acting a bit like co-founders, to help them get a product off the ground. Then Metalab “lets them loose” to grow, CEO Luke Des Cotes told TechCrunch. Metalab had a record year in 2023 and was involved in the development of 40 products that went into the market last year.

Corporate venture capital has found its stride over the last decade, both as a stable source of capital and as a route for startups that have something Big Tech wants.

With Metalab Ventures, the venture arm will play the role of a long-term value investor, essentially “putting our money where our mouth is,” Des Cotes said.

“We want to go on a journey with them for the next 10 to 12 years,” he said. “We’ve been asked over and over again by founders when we will invest, and sometimes we have, but it’s been very ad hoc in the past. Today, we make that a formal process.”

Metalab Ventures has raised $15 million in capital commitments for its first fund to invest in product-led startups where strategy, design and technology are the key differentiators.

“Product-led” means the product itself is the key differentiator for the business, Des Cotes said. Most businesses have a major component of their success riding on how well a product is built and how well it connects with users. Metalab Ventures seeks out founders who “believe in the power of design as a tool to be able to connect with users in a way that’s different and special,” he said.

Des Cotes and David Tapp, head of partnerships at Metalab, are the general partners at Metalab Ventures and will invest in 25 to 35 startups at the pre-seed, seed and Series A stages. So far, the firm has made a handful of unannounced investments, Des Cotes said.

The limited partner makeup of the new fund includes institutional investors, funds of funds, angel investors and founders of companies Metalab has previously worked with. Metalab is also an LP in the fund.

The company performs diligence on thousands of founders each year to determine who it will help, and that same process now guides how Metalab Ventures evaluates investments, Des Cotes said.

When determining who to invest in, the process includes getting to know the founders and assessing whether the firm can add value. Metalab often taps its 160-person workforce for design, technology, product and research leadership.

“We’ve already operated very much like a venture fund,” Des Cotes said. “Now we are working through that process to understand what’s the product, what’s the opportunity, what’s the value that can be created here. When we believe in this business, we think of human capital as being our scarce resource that we can then deploy into those businesses.”

Have a juicy tip or lead about happenings in the venture world? Send tips to Christine Hall at [email protected] or via this Signal link. Anonymity requests will be respected. 



Dark Space is building a rocket-powered boxing glove to push debris out of orbit | TechCrunch


Paris-based Dark Space is taking on the dual problems of debris and conflict in orbit with its mobile platform designed to launch, attach to, and ultimately deorbit uncooperative objects in space.

Dark CEO Clyde Laheyne said the company is aiming to become the “S.W.A.T. team of space.”

The three-year-old startup is developing Interceptor, a spacecraft that is essentially a rocket-powered boxing glove that can be launched at short notice to gently punch a wayward object out of its orbit.

Interceptor is itself launched from a specially outfitted aircraft. Much like a Virgin Galactic launch, the aircraft will take the rocket above the tumultuous lower atmosphere, where it can be released and ignited. Once the rocket reaches the vicinity of the target object, the spacecraft detaches and uses onboard sensors and propulsion to find and approach it. When it’s lined up correctly, Interceptor pushes against the object with its cushioned “effector,” eventually deorbiting it.

“All the space sector is organized to do planned, long missions … but orbital defense is more about unplanned, short missions,” Laheyne said. In that sense, Interceptor “is more like an air defense missile,” he explained. “It has to be ready all the time. There is no excuse that is viable to not use it.”

Unlike an actual missile or anti-satellite weapon, however, the gentle strike of the Interceptor doesn’t produce a debris field or any other dangerous, unpredictable effects.

Dark Space was founded by Laheyne and CTO Guillaume Orvain, engineers who cut their teeth at multinational missile developer MBDA. That work experience shines through in the Interceptor concept, which is being designed to operate on call, similar to missile systems. That’s also why Dark is developing its own launching platform: to ensure readiness for defense, civil and commercial customers at a moment’s notice, Laheyne said.

Dark Space co-founders Clyde Laheyne and Guillaume Orvain. Image credit: Dark

Dark closed a $5 million funding round in 2021, with the cap table composed of European investors including lead investor Eurazeo. The team closed a $6 million extension yesterday, including participation from its first U.S.-based investor, Long Journey Ventures. (That fund is led by Arielle Zuckerberg, the younger sister of Meta founder Mark Zuckerberg.)

The company has a lot of work ahead before it comes close to removing something like a defunct rocket second stage from orbit. Dark has been focused on developing critical systems, like the cryogenic engine and software. Now the team is shifting its focus to developing the technologies needed for the type of unplanned, quick missions Interceptor will execute, like long distance detecting and tracking, autonomous flight algorithms, and a system for reliable controlled reentry.

The team must also retrofit an aircraft — which Laheyne estimated could cost $50 million, or around the price of building a new launch pad — and have the entire platform ready for a demonstration mission in 2026.

That mission would validate many of the core technologies of the full-scale platform, though it won’t actually aim to deorbit an object, just touch one. Even this is incredibly ambitious: no company has yet cracked so-called rendezvous and proximity operations, meaning moving close to another object in space and interacting with it.

The second demonstration mission, which is currently planned for 2027, will include a deorbit attempt. If all goes to plan, the company would start deorbiting objects for allied civil agencies. As far as defense customers go, “hopefully we don’t have to use it,” Laheyne said.

“I’ve been doing missiles for years, and it’s always the same topic: if you use it first, it’s an act of war. If you’re second, it’s an act of defense. If you can do it, and people know you can do it, it’s dissuasion,” he said. “The ideal is dissuasion, the system that makes the conflict unthinkable.”



Intel and others commit to building open generative AI tools for the enterprise | TechCrunch


Can generative AI designed for the enterprise (e.g. AI that autocompletes reports, spreadsheet formulas and so on) ever be interoperable? Along with a coterie of organizations including Cloudera and Intel, the Linux Foundation — the nonprofit organization that supports and maintains a growing number of open source efforts — aims to find out.

The Linux Foundation today announced the launch of the Open Platform for Enterprise AI (OPEA), a project to foster the development of open, multi-provider and composable (i.e. modular) generative AI systems. Under the purview of the Linux Foundation’s LF AI & Data org, which focuses on AI- and data-related platform initiatives, OPEA’s goal will be to pave the way for the release of “hardened,” “scalable” generative AI systems that “harness the best open source innovation from across the ecosystem,” LF AI & Data executive director Ibrahim Haddad said in a press release.

“OPEA will unlock new possibilities in AI by creating a detailed, composable framework that stands at the forefront of technology stacks,” Haddad said. “This initiative is a testament to our mission to drive open source innovation and collaboration within the AI and data communities under a neutral and open governance model.”

In addition to Cloudera and Intel, OPEA — one of the Linux Foundation’s Sandbox Projects, an incubator program of sorts — counts among its members enterprise heavyweights like IBM-owned Red Hat, Hugging Face, Domino Data Lab, MariaDB and VMWare.

So what might they build together exactly? Haddad hints at a few possibilities, such as “optimized” support for AI toolchains and compilers, which enable AI workloads to run across different hardware components, as well as “heterogeneous” pipelines for retrieval-augmented generation (RAG).

RAG is becoming increasingly popular in enterprise applications of generative AI, and it’s not difficult to see why. Most generative AI models’ answers and actions are limited to the data on which they’re trained. But with RAG, a model’s knowledge base can be extended to info outside the original training data. RAG models reference this outside info — which can take the form of proprietary company data, a public database or some combination of the two — before generating a response or performing a task.

A diagram explaining RAG models.
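The retrieval step at the heart of RAG can be illustrated in a few lines. Here keyword overlap stands in for the dense-vector search a production system would use, and the corpus and query are invented for the example:

```python
import re

def tokens(text: str) -> set[str]:
    # Lowercased word set; punctuation is stripped by the regex.
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query and keep the top k."""
    q = tokens(query)
    return sorted(corpus, key=lambda d: len(q & tokens(d)), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    # Prepend retrieved context so the model can draw on information
    # outside its original training data.
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Stand-in for proprietary company data.
corpus = [
    "Our refund policy allows returns within 30 days of purchase.",
    "The cafeteria opens at 8am on weekdays.",
    "Refunds are issued to the original payment method.",
]
prompt = build_prompt("What is the refund policy?", corpus)
```

The augmented prompt carries the relevant company documents alongside the question, which is what lets the model answer from information it was never trained on.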

Intel offered a few more details in its own press release:

Enterprises are challenged with a do-it-yourself approach [to RAG] because there are no de facto standards across components that allow enterprises to choose and deploy RAG solutions that are open and interoperable and that help them quickly get to market. OPEA intends to address these issues by collaborating with the industry to standardize components, including frameworks, architecture blueprints and reference solutions.

Evaluation will also be a key part of what OPEA tackles.

In its GitHub repository, OPEA proposes a rubric for grading generative AI systems along four axes: performance, features, trustworthiness and “enterprise-grade” readiness. Performance as OPEA defines it pertains to “black-box” benchmarks from real-world use cases. Features is an appraisal of a system’s interoperability, deployment choices and ease of use. Trustworthiness looks at an AI model’s ability to guarantee “robustness” and quality. And enterprise readiness focuses on the requirements to get a system up and running sans major issues.
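As a rough illustration, the four axes could be captured in a simple score card. The field values and the equal-weight aggregation below are invented for the example and are not part of OPEA's actual rubric:

```python
from dataclasses import dataclass

# The four axes named in OPEA's proposed rubric.
AXES = ("performance", "features", "trustworthiness", "enterprise_readiness")

@dataclass
class RubricScore:
    performance: float           # black-box benchmarks from real-world use cases
    features: float              # interoperability, deployment choices, ease of use
    trustworthiness: float       # robustness and quality guarantees
    enterprise_readiness: float  # runs without major issues

    def overall(self) -> float:
        # Unweighted mean, purely for illustration; OPEA does not
        # publish an aggregation formula here.
        vals = [getattr(self, axis) for axis in AXES]
        return sum(vals) / len(vals)

score = RubricScore(
    performance=0.8, features=0.7, trustworthiness=0.9, enterprise_readiness=0.6
)
print(round(score.overall(), 2))  # 0.75
```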

Rachel Roumeliotis, director of open source strategy at Intel, says that OPEA will work with the open source community to offer tests based on the rubric — and provide assessments and grading of generative AI deployments on request.

OPEA’s other endeavors are a bit up in the air at the moment. But Haddad floated the potential of open model development along the lines of Meta’s expanding Llama family and Databricks’ DBRX. Toward that end, in the OPEA repo, Intel has already contributed reference implementations for a generative-AI-powered chatbot, document summarizer and code generator optimized for its Xeon 6 and Gaudi 2 hardware.

Now, OPEA’s members are very clearly invested (and self-interested, for that matter) in building tooling for enterprise generative AI. Cloudera recently launched partnerships to create what it’s pitching as an “AI ecosystem” in the cloud. Domino offers a suite of apps for building and auditing business-forward generative AI. And VMWare — oriented toward the infrastructure side of enterprise AI — last August rolled out new “private AI” compute products.

The question is — under OPEA — will these vendors actually work together to build cross-compatible AI tools?

There’s an obvious benefit to doing so. Customers will happily draw on multiple vendors depending on their needs, resources and budgets. But history has shown that it’s all too easy to become inclined toward vendor lock-in. Let’s hope that’s not the ultimate outcome here.

