From Digital Age to Nano Age. WorldWide.

Tag: Vision

Robotic Automations

TechCrunch Minute: Where the Apple Vision Pro stands now the launch day hype has dropped off | TechCrunch


A few months after its launch, how is Apple’s Vision Pro faring? The company’s ambitious bet on computers that nestle on your face instead of sitting on your desk made a huge splash when it was announced and later released. Since then, however, the hype has seemingly come back down to Earth.

I am a long-term bull on augmented reality, virtual reality, and face-computers in general. I still recall my first session with what became the Microsoft HoloLens as one of the moments that most fueled my excitement about technology. So it is to my partial chagrin that the hype around the Apple Vision Pro has faded more rapidly than I anticipated.

Of course, with its Pro moniker, steep price and uneven developer support thus far, the new Apple device has a long road ahead of it. But I expected the Apple brand to keep the hardware in the news — and atop our collective minds — longer than it managed after its launch.

For now, we remain mostly in the dark about the device’s popularity. Sure, some buyers returned theirs, and TechCrunch’s own review was middling-to-positive, but that doesn’t mean most folks took their Apple Vision Pro back, or that no one is enjoying the gadget more than we did.

Here’s hoping that Apple and Meta, with its Quest line of VR headsets, do not give up until they crack this particular nut. I find it archaic that my monitors are akin to digital chalkboards when they should be built into my glasses. Hit play, and let’s have some fun.


Software Development in Sri Lanka


Belgian computer vision startup Robovision eyes US expansion to address labor shortages | TechCrunch


Faced with labor shortages, sectors such as manufacturing and agriculture are increasingly adopting AI-powered automation.

Computer vision startups are looking to jump on that opportunity with a range of point solutions for both industries. From data collection to crop monitoring and harvesting, robots with eyes are entering the fields.

One big challenge that remains, however, is implementation: If such solutions are not easy to use, they won’t be used.

Belgian startup Robovision believes it has found a way around that. The company wants to industrialize deep learning tools and make them more accessible to businesses that are not tech companies at their core. It has built a “no-code” computer vision AI platform that doesn’t require software developers or data scientists to be involved at every step of the process. Robovision doesn’t make robots, but as its name suggests, the company also targets robotics companies that want to develop new machines that support AI-enabled automation.

In practice, this means Robovision customers can use its platform to upload data, label it, test their model and deploy it in production. The company says its model can be useful for a variety of use cases such as recognizing fruit at supermarket scale, identifying faults in newly made electrical components and even cutting rose stems.
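The workflow described above can be sketched in code. Robovision has not published a public API, so every class and method name below is invented purely to illustrate the upload, label, test and deploy stages a no-code platform automates for its users:

```python
# Hypothetical sketch of an upload -> label -> test -> deploy workflow.
# All names here are invented for illustration; this is not Robovision's API.

from dataclasses import dataclass, field

@dataclass
class VisionProject:
    """Toy stand-in for a no-code computer vision project."""
    samples: list = field(default_factory=list)  # [image_id, label] pairs
    deployed: bool = False

    def upload(self, image_id):
        # Step 1: ingest raw, unlabeled data.
        self.samples.append([image_id, None])

    def label(self, image_id, label):
        # Step 2: annotate the uploaded sample.
        for sample in self.samples:
            if sample[0] == image_id:
                sample[1] = label

    def test(self):
        # Step 3: validate that the dataset is fully labeled before training.
        return all(lbl is not None for _, lbl in self.samples)

    def deploy(self):
        # Step 4: push the validated model/dataset to production.
        if not self.test():
            raise ValueError("unlabeled samples remain")
        self.deployed = True

project = VisionProject()
project.upload("rose_001.jpg")
project.label("rose_001.jpg", "stem_cut_point")
project.deploy()
print(project.deployed)  # True
```

The point of the abstraction is that each step is a button press for the customer rather than a task for an in-house data science team.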

Image Credits: Robovision

Out of its base in Belgium, Robovision already serves customers in 45 countries, CEO Thomas Van den Driessche told TechCrunch in an interview. Now, thanks to a recent sizable funding round, it’s expanding to the U.S., banking on interest from industrial and agribusiness customers in that gigantic market.

The $42 million Series A round is co-led by Belgian agtech investor Astanor Ventures and Target Global. The latter is a Berlin-based investor, and its participation in this fundraise marks a departure from some of the coverage it has drawn of late: controversy over its ties to Russian money. Red River West, a French VC that focuses on funding European startups looking to break into North America, also participated in the round.

With a post-money valuation of $180 million, this new round brings the total equity funding raised by Robovision to $65 million, including two converted notes. That still leaves the founders and staff together owning more than 50% of Robovision, chief growth officer Florian Hendrickx told TechCrunch via email.

What is the point?

One challenge that Robovision faces in its expansion is that working with different sectors complicates messaging and its go-to-market strategy. On the plus side, learnings and experiments in one application can be applied to another. Robovision, for example, was able to apply some of the 3D deep learning it had developed for disease detection in tulips to disease detection in human lungs during the COVID crisis.

“It’s a double-edged sword,” founder Jonathan Berte told TechCrunch. “It has been the DNA of Robovision of striking the delicate balance between diversity and focus.”

That DNA comes from Robovision’s history: It was founded in 2012 as a consultancy studio, and it was several years before it pivoted into the B2B platform approach that also made it more attractive to VCs.

The initial traction Robovision gained was in agtech, which represents 50% of its activities, Van den Driessche said. Agtech is also where its Series A’s co-lead investor, Astanor, comes from: That company focuses on what it describes as “impact agrifood.”

Agtech is a sizable opportunity because of labor shortages, and also due to Robovision’s track record — it helps its partner ISO Group plant a billion tulips annually. But other verticals are growing faster for Robovision, Van den Driessche said.

According to Van den Driessche, Robovision is seeing strong traction in life sciences and tech. For instance, Hitachi uses its platform to produce semiconductor wafers. “I don’t think agriculture is going to be the largest sector at scale,” said Bao-Y Van Cong, a partner at Target Global. “I think it’s going to be industrial manufacturing.”

Apple’s recent decision to acquire DarwinAI, an AI startup specializing in overseeing the manufacturing of components, shows rising interest in this space. For Robovision founder Jonathan Berte, it is also a sign that a toolbox that can support a wide variety of different industrialized applications makes more sense. “Apple would never [have bought that] company if it were only a point solution.”

From Ghent to the world

The convertible notes that Robovision raised in 2022 and 2023 following its pivot came mostly from Dutch and Belgian investors, but the company had to look further afield for this round: capital on this scale would have been harder to secure within Benelux, or would have required more dilution.

Robovision’s Belgian roots are paying off in other ways. “The whole early team was very smart people from Ghent University,” Berte said. Van den Driessche became Robovision’s CEO in 2022, while Berte shifted his focus to fundraising, partnerships and global expansion.

Robovision’s tech evolution has extended to rethinking the architecture of its computer vision tools in response to customer demand. Because low latency and delivery speed are requirements in certain environments, it launched Robovision Edge.

In today’s market, doing more with less has become key to competing globally. “I think the only way to do that is to innovate and to become more productive,” Van Cong said.



Orchard vision system turns farm equipment into AI-powered data collectors | TechCrunch


Agricultural robotics is not a new phenomenon. We’ve seen systems that pick apples and berries, kill weeds, plant trees, transport produce and more. But while these functions are understood to be the core features of automated systems, the same thing is true here as it is across technology: it’s all about the data. A huge piece of any of these products’ value proposition is the amount of actionable information their on-board sensors collect.

In a sense, Orchard Robotics’ system cuts out the middleman. That’s not to say there isn’t still a ton of potential value in automating these tasks during labor shortages, but the young startup is lowering the barrier to entry with a sensing module that attaches to existing hardware like tractors and other farm vehicles.

While plenty of farmers are happy to embrace technologies that can potentially increase their yield and fill roles that have been difficult to keep staffed, fully automated robotic systems can be too cost-prohibitive to warrant taking the first step.

As the name suggests, Orchard is starting with a focus on apple crops. The system’s cameras can capture up to 100 images a second, recording information about every tree they pass. Then the Orchard OS software utilizes AI to build maps with the data collected. That includes every bud/fruit spotted on every tree, their distribution and even the hue of the apple.

“Our cameras image trees from bud to bloom to harvest, and use advanced computer vision and machine learning models we’ve developed to collect precise data about hundreds of millions of fruit,” says founder and CEO Charlie Wu. “This is a monumental step forward from traditional methods, which rely on manually collected samples of maybe 100 fruits.”

Mapped out courtesy of on-board GPS, farmers get a fuller picture of their crops’ success rate, down to the location and size of the tree, within a couple of inches. The firm was founded at Cornell University in 2022. Despite its young age, it has already begun testing the technology with farmers. Last season’s field testing has apparently been successful enough to drum up real investor interest.
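The core of such a map is straightforward to sketch: snap each geotagged fruit detection to the nearest surveyed tree position and tally the results. The coordinates, tolerances and function names below are invented for illustration and are not Orchard's actual code:

```python
# Illustrative sketch (not Orchard Robotics' software) of turning geotagged
# fruit detections into a per-tree count map. Positions are in meters within
# a local orchard coordinate frame; all values are made up.

from collections import Counter
from math import hypot

# Surveyed tree positions, e.g. from a prior mapping pass.
TREES = {"tree_A": (0.0, 0.0), "tree_B": (0.0, 3.0)}

def nearest_tree(x, y):
    # Assign a detection to the closest known tree by Euclidean distance.
    return min(TREES, key=lambda t: hypot(x - TREES[t][0], y - TREES[t][1]))

def fruit_map(detections):
    """detections: (x, y) GPS-derived positions of individual fruit."""
    return dict(Counter(nearest_tree(x, y) for x, y in detections))

# Three fruit near tree_A, one near tree_B:
print(fruit_map([(0.1, -0.2), (0.0, 0.4), (-0.3, 0.1), (0.2, 2.8)]))
# {'tree_A': 3, 'tree_B': 1}
```

At 100 images a second across hundreds of millions of fruit, the real system's work lies in the detection models and deduplication across overlapping frames; the aggregation step itself stays this simple.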

This week, the Seattle-based firm is announcing a $3.2 million seed round, led by General Catalyst. Humba Ventures, Soma Capital, Correlation Ventures, VU Venture Partners and Genius Ventures also participated in the raise, which follows a previously unannounced pre-seed of $600,000.

Funding will go toward increasing headcount, R&D and accelerating Orchard’s go-to-market efforts.



Apple Vision Pro’s Persona feature gets collaborative | TechCrunch


Much like the headset for which they were designed, Apple’s Personas are very much a work in progress. The original versions of the beta avatars were — is “nightmarish” too strong a word? A subsequent update has made them more palatable and truer to life, and Apple says it’s continuing to work on the 3D captures.

The company on Tuesday debuted “spatial” Personas for Vision Pro headsets running visionOS 1.1 or later. Whereas the feature was previously limited to chat platforms like FaceTime and Zoom, the new version is designed to bring an added sense of collaboration to the headset.

The spatial aspect still starts with FaceTime, but Apple’s proprietary videoconferencing platform now serves as a gateway to other apps when combined with SharePlay. From there, users can select the spatial persona option, which utilizes the Vision Pro’s on-board sensors to place the Persona in the room with them.

Spatial audio, meanwhile, further places them at a specific point in space relative to the Vision Pro user.

In the video example shared by Apple, two Personas flank a window showcasing Freeform. Taken together, this approximates the sense of people collaborating on a project across an office conference table. In this case, however, that conference table is the user’s desk at home.

Is the effect neat? Yes. Is it creepy? Also, yes. Vision Pro users will continue doing business in the uncanny valley for the foreseeable future. That just comes with the territory of being an early adopter.

In addition to work collaborations, the feature can be used to watch movies and play games together. It supports up to five users at once.

