Tag: nvidia

Alphabet-owned Intrinsic incorporates Nvidia tech into robotics platform | TechCrunch


The first bit of news out of the Automate conference this year arrives by way of Alphabet X spinout Intrinsic. The firm announced at the Chicago event on Monday that it is incorporating a number of Nvidia offerings into its Flowstate robotic app platform.

That includes Isaac Manipulator, a collection of foundation models designed to create workflows for robot arms. The offering launched at GTC back in March, with some of the biggest names in industrial automation already on board. The list includes Yaskawa, Solomon, PickNik Robotics, Ready Robotics, Franka Robotics and Universal Robots.

The collaboration is focused specifically on grasping (grabbing and picking up objects) — one of the key modalities for both manufacturing and fulfillment automation. The systems are trained on large datasets, with the goal of executing tasks that work across hardware (i.e. hardware agnosticism) and with different objects.

That is to say, methods of picking can be transferred to different settings, rather than every system having to be trained for every scenario. Once we as humans have figured out how to pick things up, we can adapt that action to different objects in different settings. For the most part, robots can’t do that — not yet, at least.
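
Intrinsic and Nvidia haven’t published the programming interfaces behind these skills, so the sketch below is purely illustrative: the hardware-agnostic idea expressed as a common grasp interface that any vendor’s gripper driver can implement, so the same picking skill runs unchanged across hardware. Every class and method name here is hypothetical.

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class GraspPose:
    # A stand-in for whatever a real foundation model would emit:
    # a 3D position plus a simple yaw for the end effector.
    x: float
    y: float
    z: float
    yaw: float


class Gripper(Protocol):
    """Hypothetical hardware-agnostic interface: any vendor driver that
    implements move_to() and close() can consume the same grasp output."""

    def move_to(self, pose: GraspPose) -> None: ...
    def close(self) -> None: ...


def pick(gripper: Gripper, pose: GraspPose) -> None:
    # The same high-level picking skill runs unchanged on any compliant gripper.
    gripper.move_to(pose)
    gripper.close()


class LoggingGripper:
    """Toy driver, included only so the sketch runs end to end."""

    def move_to(self, pose: GraspPose) -> None:
        print(f"moving to ({pose.x:.2f}, {pose.y:.2f}, {pose.z:.2f}), yaw={pose.yaw:.2f}")

    def close(self) -> None:
        print("closing gripper")


if __name__ == "__main__":
    pick(LoggingGripper(), GraspPose(0.4, -0.1, 0.25, 1.57))
```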

Image Credits: Intrinsic

“In the future, developers will be able to use ready-made universal grasping skills like these to greatly accelerate their programming processes,” Intrinsic founder and CEO Wendy Tan White said in a post. “For the broader industry, this development shows how foundation models could have a profound impact, including making today’s robot-programming challenges easier to manage at scale, creating previously infeasible applications, reducing development costs, and increasing flexibility for end users.”

Early Flowstate testing occurred in Isaac Sim — Nvidia’s robotics simulation platform. Intrinsic customer Trumpf Machine Tools has been working with a prototype of the system.

“This universal grasping skill, trained with 100% synthetic data in Isaac Sim, can be used to build sophisticated solutions that can perform adaptive and versatile object grasping tasks in sim and real,” Tan White says of Trumpf’s work with the platform. “Instead of hard-coding specific grippers to grasp specific objects in a certain way, efficient code for a particular gripper and object is auto-generated to complete the task using the foundation model.”

Intrinsic is also working with fellow Alphabet-owned DeepMind to crack pose estimation and path planning — two other key aspects of automation. For the former, the system was trained on more than 130,000 objects. The company says the systems are able to determine the orientation of objects in “a few seconds” — an important part of being able to pick them up.

Another key piece of Intrinsic’s work with DeepMind is the ability to operate multiple robots in tandem. “Our teams have tested this 100% ML-generated solution to seamlessly orchestrate four separate robots working on a scaled-down car welding application simulation,” says Tan White. “The motion plans and trajectories for each robot are auto-generated, collision free, and surprisingly efficient – performing ~25% better than some traditional methods we’ve tested.”

The team is also working on systems that use two arms at once — a setup more in line with the emerging world of humanoid robots. It’s something we’re going to see a whole lot more of over the next couple of years, humanoid or not. Moving from one arm to two opens up a whole world of additional applications for these systems.


Nvidia acquires AI workload management startup Run:ai | TechCrunch


Nvidia is acquiring Run:ai, a Tel Aviv-based company that makes it easier for developers and operations teams to manage and optimize their AI hardware infrastructure, for an undisclosed sum.

CTech reported earlier this morning that the companies were in “advanced negotiations” that could see Nvidia pay upwards of $1 billion for Run:ai. Evidently, those negotiations went off without a hitch.

Nvidia says that it’ll continue to offer Run:ai’s products “under the same business model” for the immediate future, and invest in Run:ai’s product roadmap as part of Nvidia’s DGX Cloud AI platform.

“Run:ai has been a close collaborator with Nvidia since 2020 and we share a passion for helping our customers make the most of their infrastructure,” Omri Geller, Run:ai’s CEO, said in a statement. “We’re thrilled to join Nvidia and look forward to continuing our journey together.”

Geller co-founded Run:ai with Ronen Dar several years ago after the two studied together at Tel Aviv University under professor Meir Feder, Run:ai’s third co-founder. Geller, Dar and Feder sought to build a platform that could “break up” AI models into fragments that run in parallel across hardware, whether on-premises, on clouds or at the edge.
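
Run:ai’s internals aren’t public, but the general idea of breaking a model into pieces that run in parallel across hardware can be illustrated with plain PyTorch. The sketch below is naive pipeline parallelism across two devices, not Run:ai’s product; scheduling this kind of split dynamically across on-premises, cloud and edge hardware is, per the description above, the part Run:ai built a business around.

```python
import torch
import torch.nn as nn


class TwoStageModel(nn.Module):
    """Minimal sketch: one model split across two devices so the pieces can
    run on separate hardware (naive pipeline parallelism, not Run:ai's tech)."""

    def __init__(self, dev0: str, dev1: str):
        super().__init__()
        self.dev0, self.dev1 = dev0, dev1
        self.stage0 = nn.Sequential(nn.Linear(512, 512), nn.ReLU()).to(dev0)
        self.stage1 = nn.Linear(512, 10).to(dev1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.stage0(x.to(self.dev0))
        return self.stage1(x.to(self.dev1))  # activations hop to the second device


if __name__ == "__main__":
    # Use two GPUs when available; otherwise fall back to CPU so the sketch still runs.
    if torch.cuda.device_count() >= 2:
        dev0, dev1 = "cuda:0", "cuda:1"
    else:
        dev0 = dev1 = "cpu"
    out = TwoStageModel(dev0, dev1)(torch.randn(8, 512))
    print(out.shape)  # torch.Size([8, 10])
```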

While Run:ai has relatively few direct competitors, other startups are applying the concept of dynamic hardware allocation to AI workloads. For example, Grid.ai offers software that allows data scientists to train AI models across GPUs, processors and more in parallel.

But relatively early in its life, Run:ai managed to establish a large customer base of Fortune 500 companies — which in turn attracted VC investment. Prior to the acquisition, Run:ai had raised $118 million in capital from backers including Insight Partners, Tiger Global, S Capital and TLV Partners.


French startup FlexAI exits stealth with $30M to ease access to AI compute | TechCrunch


A French startup has raised a hefty seed investment to “rearchitect compute infrastructure” for developers wanting to build and train AI applications more efficiently.

FlexAI, as the company is called, has been operating in stealth since October 2023, but the Paris-based company is formally launching Wednesday with €28.5 million ($30 million) in funding, while teasing its first product: an on-demand cloud service for AI training.

This is a chunky bit of change for a seed round, which normally means real, substantial founder pedigree — and that is the case here. FlexAI co-founder and CEO Brijesh Tripathi was previously a senior design engineer at GPU giant and now AI darling Nvidia, before landing in various senior engineering and architecture roles at Apple; Tesla (working directly under Elon Musk); Zoox (before Amazon acquired the autonomous driving startup); and, most recently, Intel, where he was VP of the company’s AI and super compute platform offshoot, AXG.

FlexAI co-founder and CTO Dali Kilani has an impressive CV, too, serving in various technical roles at companies including Nvidia and Zynga, while most recently filling the CTO role at French startup Lifen, which develops digital infrastructure for the healthcare industry.

The seed round was led by Alpha Intelligence Capital (AIC), Elaia Partners and Heartcore Capital, with participation from Frst Capital, Motier Ventures, Partech and InstaDeep CEO Karim Beguir.

FlexAI team in Paris

The compute conundrum

To grasp what Tripathi and Kilani are attempting with FlexAI, it’s first worth understanding what developers and AI practitioners are up against in terms of accessing “compute”; this refers to the processing power, infrastructure and resources needed to carry out computational tasks such as processing data, running algorithms, and executing machine learning models.

“Using any infrastructure in the AI space is complex; it’s not for the faint-of-heart, and it’s not for the inexperienced,” Tripathi told TechCrunch. “It requires you to know too much about how to build infrastructure before you can use it.”

By contrast, the public cloud ecosystem that has evolved these past couple of decades serves as a fine example of how an industry has emerged from developers’ need to build applications without worrying too much about the back end.

“If you are a small developer and want to write an application, you don’t need to know where it’s being run, or what the back end is — you just need to spin up an EC2 (Amazon Elastic Compute Cloud) instance and you’re done,” Tripathi said. “You can’t do that with AI compute today.”
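
The EC2 comparison is easy to make concrete. Here is a minimal boto3 sketch of “spin up an instance and you’re done”; the AMI ID, key pair and region are placeholders, not recommendations.

```python
import boto3

# Minimal sketch of "just spin up an EC2 instance"; the AMI ID, key pair
# and region are placeholders you would substitute with your own.
ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    KeyName="my-key-pair",            # placeholder key pair
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```

The point of the contrast is that no equivalent one-call path exists today for provisioning, interconnecting and maintaining a multi-GPU training cluster, which is the gap described next.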

In the AI sphere, developers must figure out how many GPUs (graphics processing units) they need to interconnect over what type of network, managed through a software ecosystem that they are entirely responsible for setting up. If a GPU or network fails, or if anything in that chain goes awry, the onus is on the developer to sort it.

“We want to bring AI compute infrastructure to the same level of simplicity that the general purpose cloud has gotten to — after 20 years, yes, but there is no reason why AI compute can’t see the same benefits,” Tripathi said. “We want to get to a point where running AI workloads doesn’t require you to become data centre experts.”

With the current iteration of its product going through its paces with a handful of beta customers, FlexAI will launch its first commercial product later this year. It’s basically a cloud service that connects developers to “virtual heterogeneous compute,” meaning that they can run their workloads and deploy AI models across multiple architectures, paying on a usage basis rather than renting GPUs on a dollars-per-hour basis.

GPUs are vital cogs in AI development, serving to train and run large language models (LLMs), for example. Nvidia is one of the preeminent players in the GPU space, and one of the main beneficiaries of the AI revolution sparked by OpenAI and ChatGPT. In the 12 months since OpenAI launched an API for ChatGPT in March 2023, allowing developers to bake ChatGPT functionality into their own apps, Nvidia’s market capitalization ballooned from around $500 billion to more than $2 trillion.

LLMs are pouring out of the technology industry, with demand for GPUs skyrocketing in tandem. But GPUs are expensive to run, and renting them from a cloud provider for smaller jobs or ad-hoc use-cases doesn’t always make sense and can be prohibitively expensive; this is why AWS has been dabbling with time-limited rentals for smaller AI projects. But renting is still renting, which is why FlexAI wants to abstract away the underlying complexities and let customers access AI compute on an as-needed basis.

“Multicloud for AI”

FlexAI’s starting point is that most developers don’t really care whose GPUs or chips they use, whether it’s Nvidia, AMD, Intel, Graphcore or Cerebras. Their main concern is being able to develop their AI and build applications within their budgetary constraints.

This is where FlexAI’s concept of “universal AI compute” comes in: the company takes the user’s requirements and allocates them to whatever architecture makes sense for that particular job, taking care of all the necessary conversions across the different platforms, whether that’s Intel’s Gaudi infrastructure, AMD’s ROCm or Nvidia’s CUDA.

“What this means is that the developer is only focused on building, training and using models,” Tripathi said. “We take care of everything underneath. The failures, recovery, reliability, are all managed by us, and you pay for what you use.”

In many ways, FlexAI is setting out to fast-track for AI what has already been happening in the cloud, meaning more than replicating the pay-per-usage model: It means the ability to go “multicloud” by leaning on the different benefits of different GPU and chip infrastructures.

For example, FlexAI will channel a customer’s specific workload depending on what its priorities are. If a company has a limited budget for training and fine-tuning its AI models, it can set that within the FlexAI platform to get the maximum amount of compute bang for its buck. This might mean going through Intel for cheaper (but slower) compute, but if a developer has a small run that requires the fastest possible output, then it can be channeled through Nvidia instead.
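
FlexAI hasn’t said how its routing is implemented, so the following is a purely hypothetical sketch of the priority idea: the backend names, prices and speed figures are invented, and the point is only to show how a cost-versus-speed preference could drive which architecture a job lands on.

```python
from dataclasses import dataclass


@dataclass
class Backend:
    name: str
    cost_per_hour: float   # invented numbers, purely illustrative
    relative_speed: float  # higher is faster


# Hypothetical catalogue; not FlexAI's actual partners, prices or benchmarks.
BACKENDS = [
    Backend("vendor-a-accelerator", cost_per_hour=1.50, relative_speed=0.6),
    Backend("vendor-b-gpu", cost_per_hour=2.40, relative_speed=0.8),
    Backend("vendor-c-gpu", cost_per_hour=4.00, relative_speed=1.0),
]


def choose_backend(priority: str) -> Backend:
    """Pick the cheapest backend when budget matters most, the fastest when
    turnaround does; a toy stand-in for 'universal AI compute' routing."""
    if priority == "cost":
        return min(BACKENDS, key=lambda b: b.cost_per_hour)
    if priority == "speed":
        return max(BACKENDS, key=lambda b: b.relative_speed)
    # Otherwise balance the two: most speed per dollar.
    return max(BACKENDS, key=lambda b: b.relative_speed / b.cost_per_hour)


if __name__ == "__main__":
    for priority in ("cost", "speed", "balanced"):
        print(priority, "->", choose_backend(priority).name)
```

A real scheduler would also have to weigh availability, interconnect and the cost of converting a workload between platforms, which is where the conversion work described above comes in.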

Under the hood, FlexAI is basically an “aggregator of demand,” renting the hardware itself through traditional means and, using its “strong connections” with the folks at Intel and AMD, securing preferential prices that it spreads across its own customer base. This doesn’t necessarily mean side-stepping the kingpin Nvidia, but it possibly does mean that to a large extent — with Intel and AMD fighting for GPU scraps left in Nvidia’s wake — there is a huge incentive for them to play ball with aggregators such as FlexAI.

“If I can make it work for customers and bring tens to hundreds of customers onto their infrastructure, they [Intel and AMD] will be very happy,” Tripathi said.

This sits in contrast to similar GPU cloud players in the space such as the well-funded CoreWeave and Lambda Labs, which are focused squarely on Nvidia hardware.

“I want to get AI compute to the point where the current general purpose cloud computing is,” Tripathi noted. “You can’t do multicloud on AI. You have to select specific hardware, number of GPUs, infrastructure, connectivity, and then maintain it yourself. Today, that’s the only way to actually get AI compute.”

When asked who the exact launch partners are, Tripathi said that he was unable to name all of them due to a lack of “formal commitments” from some of them.

“Intel is a strong partner, they are definitely providing infrastructure, and AMD is a partner that’s providing infrastructure,” he said. “But there is a second layer of partnerships that are happening with Nvidia and a couple of other silicon companies that we are not yet ready to share, but they are all in the mix and MOUs [memorandums of understanding] are being signed right now.”

The Elon effect

Tripathi is more than equipped to deal with the challenges ahead, having worked in some of the world’s largest tech companies.

“I know enough about GPUs; I used to build GPUs,” Tripathi said of his seven-year stint at Nvidia, ending in 2007 when he jumped ship for Apple as it was launching the first iPhone. “At Apple, I became focused on solving real customer problems. I was there when Apple started building their first SoCs [system on chips] for phones.”

Tripathi also spent two years at Tesla from 2016 to 2018 as hardware engineering lead, where he ended up working directly under Elon Musk for his last six months after two people above him abruptly left the company.

“At Tesla, the thing that I learned and I’m taking into my startup is that there are no constraints other than science and physics,” he said. “How things are done today is not how it should be or needs to be done. You should go after what the right thing to do is from first principles, and to do that, remove every black box.”

Tripathi was involved in Tesla’s transition to making its own chips, a move that has since been emulated by GM and Hyundai, among other automakers.

“One of the first things I did at Tesla was to figure out how many microcontrollers there are in a car, and to do that, we literally had to sort through a bunch of those big black boxes with metal shielding and casing around it, to find these really tiny small microcontrollers in there,” Tripathi said. “And we ended up putting that on a table, laid it out and said, ‘Elon, there are 50 microcontrollers in a car. And we pay sometimes 1,000 times margins on them because they are shielded and protected in a big metal casing.’ And he’s like, ‘let’s go make our own.’ And we did that.”

GPUs as collateral

Looking further into the future, FlexAI has aspirations to build out its own infrastructure, too, including data centers. This, Tripathi said, will be funded by debt financing, building on a recent trend that has seen rivals in the space including CoreWeave and Lambda Labs use Nvidia chips as collateral to secure loans — rather than giving more equity away.

“Bankers now know how to use GPUs as collaterals,” Tripathi said. “Why give away equity? Until we become a real compute provider, our company’s value is not enough to get us the hundreds of millions of dollars needed to invest in building data centres. If we did only equity, we disappear when the money is gone. But if we actually bank it on GPUs as collateral, they can take the GPUs away and put it in some other data center.”


Nvidia's next-gen Blackwell platform will come to Google Cloud in early 2025 | TechCrunch


It’s Google Cloud Next in Las Vegas this week, and that means it’s time for a bunch of new instance types and accelerators to hit the Google Cloud Platform. In addition to the new custom Arm-based Axion chips, most of this year’s announcements are about AI accelerators, whether built by Google or from Nvidia.

Only a few weeks ago, Nvidia announced its Blackwell platform. But don’t expect Google to offer those machines anytime soon. Support for the high-performance Nvidia HGX B200 for AI and HPC workloads and GB200 NVL72 for large language model (LLM) training will arrive in early 2025. One interesting nugget from Google’s announcement: The GB200 servers will be liquid-cooled.

This may sound like a bit of a premature announcement, but Nvidia said that its Blackwell chips won’t be publicly available until the last quarter of this year.

Image Credits: Frederic Lardinois/TechCrunch

Before Blackwell

For developers who need more power to train LLMs today, Google also announced the A3 Mega instance. This instance, which the company developed together with Nvidia, features the industry-standard H100 GPUs but combines them with a new networking system that can deliver up to twice the bandwidth per GPU.

Another new A3 instance is A3 confidential, which Google described as enabling customers to “better protect the confidentiality and integrity of sensitive data and AI workloads during training and inferencing.” The company has long offered confidential computing services that encrypt data in use, and here, once enabled, confidential computing will encrypt data transfers between Intel’s CPU and the Nvidia H100 GPU via protected PCIe. No code changes required, Google says. 

As for Google’s own chips, the company on Tuesday launched its Cloud TPU v5p processors — the most powerful of its homegrown AI accelerators yet — into general availability. These chips feature a 2x improvement in floating point operations per second and a 3x improvement in memory bandwidth speed.

Image Credits: Frederic Lardinois/TechCrunch

All of those fast chips need an underlying architecture that can keep up with them. So in addition to the new chips, Google on Tuesday also announced new AI-optimized storage options. Hyperdisk ML, which is now in preview, is the company’s next-gen block storage service that can improve model load times by up to 3.7x, according to Google.

Google Cloud is also launching a number of more traditional instances, powered by Intel’s fourth- and fifth-generation Xeon processors. The new general-purpose C4 and N4 instances, for example, will feature the fifth-generation Emerald Rapids Xeons, with the C4 focused on performance and the N4 on price. The new C4 instances are now in private preview, and the N4 machines are generally available today.

Also new, but still in preview, are the C3 bare-metal machines, powered by older fourth-generation Intel Xeons, the X4 memory-optimized bare metal instances (also in preview) and the Z3, Google Cloud’s first storage-optimized virtual machine that promises to offer “the highest IOPS for storage optimized instances among leading clouds.”


Nvidia might be clouding the funding climate for AI chip startups, but Hailo is still fighting | TechCrunch


Hello, and welcome back to Equity, a podcast about the business of startups, where we unpack the numbers and nuance behind the headlines. This is our Wednesday show, when we take a moment to dig into a raft of startup and venture capital news. No Big Tech here!

Keep in mind that Y Combinator’s Demo Day kicks off today, so we’re going to be snowed under in startup news for the rest of the week. Consider today’s show the calm before the storm.

On the podcast this morning we have BlaBlaCar’s new credit facility and how it managed to land it, how PipeDreams could be onto a new model of startup construction, GoStudent’s rebound and profitability, Hailo’s chip business and massive new funding round, and the two new brands that GGV calls home as it divides up its operations on both sides of the Pacific Ocean.

Equity is TechCrunch’s flagship podcast and posts every Monday, Wednesday and Friday. You can subscribe to us on Apple Podcasts, Overcast, Spotify and all the casts.

You also can follow Equity on X and Threads, at @EquityPod.

For the full interview transcript, for those who prefer reading over listening, read on, or check out our full archive of episodes over at Simplecast.




Hailo lands $120 million to keep battling Nvidia as most AI chip startups struggle | TechCrunch


The funding climate for AI chip startups, once as sunny as a mid-July day, is beginning to cloud over as Nvidia asserts its dominance.

According to a recent report, U.S. chip firms raised just $881 million from January 2023 to September 2023 — down from $1.79 billion in the first three quarters of 2022. AI chip company Mythic ran out of cash in 2022 and was nearly forced to halt operations, while Graphcore, a once-well-capitalized rival, now faces mounting losses.

But one startup appears to have found success in the ultra-competitive — and increasingly crowded — AI chip space.

Hailo, co-founded in 2017 by Orr Danon and Avi Baum (previously CTO for wireless connectivity at the microprocessor outfit Texas Instruments), designs specialized chips to run AI workloads on edge devices. Hailo’s chips execute AI tasks with lower memory usage and power consumption than a typical processor, making them a strong candidate for compact, offline and battery-powered devices such as cars, smart cameras and robotics.

“I co-founded Hailo with the mission to make high-performance AI available at scale outside the realm of data centers,” Danon told TechCrunch. “Our processors are used for tasks such as object detection, semantic segmentation and so on, as well as for AI-powered image and video enhancement. More recently, they’ve been used to run large language models (LLMs) on edge devices including personal computers, infotainment electronic control units and more.”
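
Hailo ships its own compiler and runtime, which aren’t shown here. As a generic stand-in for what running a model locally on an edge device looks like, here is a minimal ONNX Runtime sketch; the model file and input shape are placeholders, and a real Hailo deployment would use the company’s own toolchain instead.

```python
import numpy as np
import onnxruntime as ort

# Generic local-inference sketch, not Hailo's SDK: load a model once, then run
# frames through it on-device with no round trip to the cloud.
session = ort.InferenceSession("detector.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

frame = np.random.rand(1, 3, 640, 640).astype(np.float32)  # stand-in camera frame
outputs = session.run(None, {input_name: frame})
print([o.shape for o in outputs])
```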

Many AI chip startups have yet to land one major contract, let alone dozens or hundreds. But Hailo has over 300 customers today, Danon claims, in industries such as automotive, security, retail, industrial automation, medical devices and defense.

In a bet on Hailo’s future prospects, a cohort of financial backers, including Israeli businessman Alfred Akirov, automotive importer Delek Motors and the VC platform OurCrowd invested $120 million in Hailo this week, an extension to the company’s Series C. Danon said that the new capital will “enable Hailo to leverage all opportunities in the pipeline” while “setting the stage for long-term growth.”

“We’re strategically positioned to bring AI to edge devices in ways that will significantly expand the reach and impact of this remarkable new technology,” Danon said.

Now, you might be wondering, does a startup like Hailo really stand a chance against chip giants like Nvidia, and to a lesser extent Arm, Intel and AMD? One expert, Christos Kozyrakis, Stanford professor of electrical engineering and computer science, thinks so — he believes accelerator chips like Hailo’s will become “absolutely necessary” as AI proliferates.

Image Credits: Hailo

“The energy efficiency gap between CPUs and accelerators is too large to ignore,” Kozyrakis told TechCrunch. “You use the accelerators for efficiency with key tasks (e.g. AI) and have a processor or two on the side for programmability.”
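
In framework terms, that division of labor is what most AI code already does: control flow and housekeeping stay on the CPU, while the tensor math is offloaded to an accelerator when one is present. A minimal PyTorch sketch of the pattern:

```python
import torch

# Keep orchestration on the CPU; offload the heavy math to an accelerator if available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(1024, 1024).to(device)
batch = torch.randn(64, 1024, device=device)

with torch.no_grad():
    out = model(batch)  # runs on the accelerator when one is present

print(out.shape, device)  # logging, branching and I/O stay on the CPU
```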

Kozyrakis does see longevity presenting a challenge to Hailo’s leadership — for example, if the AI model architectures its chips are designed to run efficiently fall out of vogue. Software support, too, could be an issue, Kozyrakis says, if a critical mass of developers aren’t willing to learn to use the tooling built around Hailo’s chips.

“Most of the challenges where it concerns custom chips are in the software ecosystem,” Kozyrakis said. “This is where Nvidia, for instance, has a huge advantage over other companies in AI, as they’ve been investing in software for their architectures for 15-plus years.”

But, with $340 million in the bank and a workforce numbering around 250, Danon’s feeling confident about Hailo’s path forward — at least in the short term. He sees the startup’s technology addressing many of the challenges companies encounter with cloud-based AI inference, particularly latency, cost and scalability.

“Traditional AI models rely on cloud-based infrastructure, often suffering from latency issues and other challenges,” Danon said. “They’re incapable of real-time insights and alerts, and their dependency on networks jeopardizes reliability and integration with the cloud, which poses data privacy concerns. Hailo is addressing these challenges by offering solutions that operate independently of the cloud, thus making them able to handle much higher amounts of AI processing.”

Curious about Danon’s perspective, I asked about generative AI and its heavy dependence on the cloud and remote data centers. Surely, Hailo sees the current top-down, cloud-centric model (e.g. OpenAI’s modus operandi) as an existential threat?

Danon said that, on the contrary, generative AI is driving new demand for Hailo’s hardware.

“In recent years, we’ve seen a surge in demand for edge AI applications in most industries ranging from airport security to food packaging,” he said. “The new surge in generative AI is further boosting this demand, as we’re seeing requests to process LLMs locally by customers not only in the compute and automotive industries, but also in industrial automation, security and others.”

How about that.


Understanding humanoid robots | TechCrunch


Robots made their stage debut the day after New Year’s 1921. More than half a century before the world caught its first glimpse of George Lucas’ droids, a small army of silvery humanoids took to the stages of the First Czechoslovak Republic. They were, for all intents and purposes, humanoids: two arms, two legs, a head — the whole shebang.

Karel Čapek’s play, R.U.R (Rossumovi Univerzální Roboti), was a hit. It was translated into dozens of languages and played across Europe and North America. The work’s lasting legacy, however, was its introduction of the word “robot.” The meaning of the term has evolved a good bit in the intervening century, as Čapek’s robots were more organic than machine.

Decades of science fiction have, however, ensured that the public image of robots hasn’t strayed too far from its origins. For many, the humanoid form is still the platonic robot ideal — it’s just that the state of technology hasn’t caught up to that vision. Earlier this week, Nvidia held its own on-stage robot parade at its GTC developer conference, as CEO Jensen Huang was flanked by images of a half-dozen humanoids.

While the concept of the general-purpose humanoid has, in essence, been around longer than the word “robot,” until recently its realization has seemed wholly out of grasp. We’re very much not there yet, but for the first time, the concept has appeared on the horizon.

What is a “general-purpose humanoid?”

Image Credits: Nvidia

Before we dive any deeper, let’s get two key definitions out of the way. When we talk about “general-purpose humanoids,” the fact is that both terms mean different things to different people. Most people take a Justice Potter Stewart “I know it when I see it” approach to both.

For the sake of this article, I’m going to define a general-purpose robot as one that can quickly pick up skills and essentially do any task a human can do. One of the big sticking points here is that multi-purpose robots don’t suddenly go general-purpose overnight.

Because it’s a gradual process, it’s difficult to say precisely when a system has crossed that threshold. There’s a temptation to go down a bit of a philosophical rabbit hole with that latter bit, but for the sake of keeping this article under book length, I’m going to go ahead and move on to the other term.

I received a bit of (largely good-natured) flak when I referred to Reflex Robotics’ system as a humanoid. People pointed out the plainly obvious fact that the robot doesn’t have legs. Putting aside for a moment that not all humans have legs, I’m fine calling the system a “humanoid” or more specifically a “wheeled humanoid.” In my estimation, it resembles the human form closely enough to fit the bill.

A while back, someone at Agility took issue when I called Digit “arguably a humanoid,” suggesting that there was nothing arguable about it. What’s clear is that the robot isn’t as faithful an attempt to recreate the human form as some of the competition. I will admit, however, that I may be somewhat biased, having tracked the robot’s evolution from its precursor Cassie, which more closely resembled a headless ostrich (listen, we all went through an awkward period).

Another element I tend to consider is the degree to which the humanlike form is used to perform humanlike tasks. This element isn’t absolutely necessary, but it’s an important part of the spirit of humanoid robots. After all, proponents of the form factor will quickly point out the fact that we’ve built our worlds around humans, so it makes sense to build humanlike robots to work in that world.

Adaptability is another key point used to defend the deployment of bipedal humanoids. Robots have had factory jobs for decades now, and the vast majority of them are single-purpose. That is to say, they were built to do a single thing very well a lot of times. This is why automation has been so well-suited for manufacturing — there’s a lot of uniformity and repetition, particularly in the world of assembly lines.

Brownfield vs. greenfield

Image Credits: Brian Heater

The terms “greenfield” and “brownfield” have been in common usage for several decades across various disciplines. The former is the older of the two, describing undeveloped land (quite literally, a green field). Coined in contrast to the earlier term, brownfield refers to development on existing sites.

There are pros and cons to both. Brownfields are generally more time- and cost-effective, as they don’t require starting from scratch, while greenfields afford the opportunity to build a site entirely to spec. Given infinite resources, most corporations will opt for a greenfield. Imagine the performance of a space built from the ground up with automated systems in mind. That’s a pipe dream for most organizations, so when it comes time to automate, a majority of companies seek out brownfield solutions — doubly so when they’re first dipping their toes into the robotic waters.

Given that most warehouses are brownfield, it ought to come as no surprise that the same can be said for the robots designed for these spaces. Humanoids fit neatly into this category — in fact, in a number of respects, they are among the brownest of brownfield solutions. This gets back to the earlier point about building humanoid robots for their environments. You can safely assume that most brownfield factories were designed with human workers in mind. That often comes with elements like stairs, which present an obstacle for wheeled robots. How large that obstacle ultimately is depends on a lot of factors, including layout and workflow.

Baby steps

Image Credits: Figure

Call me a wet blanket, but I’m a big fan of setting realistic expectations. I’ve been doing this job for a long time and have survived my share of hype cycles. There’s an extent to which they can be useful, in terms of building investor and customer interest, but it’s entirely too easy to fall prey to overpromises. This includes both stated promises around future functionality and demo videos.

I wrote about the latter last month in a post cheekily titled, “How to fake a robotics demo for fun and profit.” There are a number of ways to do this, including hidden teleoperation and creative editing. I’ve heard whispers that some firms are speeding up videos without disclosing as much. In fact, that’s the origin of humanoid firm 1X’s name — all of its demos are run at 1X speed.

Most in the space agree that disclosure is important — even necessary — on such products, but there aren’t strict standards in place. One could argue that you’re wading into a legal gray area if such videos play a role in convincing investors to plunk down large sums of money. At the very least, they set wildly unrealistic expectations among the public — particularly those who are inclined to take truth-stretching executives’ words as gospel.

That can only serve to harm those who are putting in the hard work while operating in reality with the rest of us. It’s easy to see how hope quickly diminishes when systems fail to live up to those expectations.

The timeline to real-world deployment contains two primary constraints. The first is mechatronic: i.e., what the hardware is capable of. The second is software and artificial intelligence. Without getting into a philosophical debate around what qualifies as artificial general intelligence (AGI) in robots, one thing we can certainly say is that progress has been — and will continue to be — gradual.

As Huang noted at GTC the other week, “If we specified AGI to be something very specific, a set of tests where a software program can do very well — or maybe 8% better than most people — I believe we will get there within five years.” That’s on the optimistic end of the timeline I’ve heard from most experts in the field. A range of five to 10 years seems common.

Before hitting anything resembling AGI, humanoids will start as single-purpose systems, much like their more traditional counterparts. Pilots are designed to prove out that these systems can do one thing well at scale before moving onto the next. Most people are looking at tote moving for that lowest-hanging fruit. Of course, your average Kiva/Locus AMR can move totes around all day, but those systems lack the mobile manipulators required to move payloads on and off themselves. That’s where robot arms and end effectors come in, whether or not they happen to be attached to something that looks human.

Speaking to me the other week at the Modex show in Atlanta, Dexterity founding engineer Robert Sun floated an interesting point: humanoids could provide a clever stopgap on the way to lights-out (fully automated) warehouses and factories. Once full automation is in place, you won’t necessarily require the flexibility of a humanoid. But can we reasonably expect these systems to be fully operational in time?

“Transitioning all logistics and warehousing work to roboticized work, I thought humanoids could be a good transition point,” Sun said. “Now we don’t have the human, so we’ll put the humanoid there. Eventually, we’ll move to this automated lights-out factory. Then the issue of humanoids being very difficult makes it hard to put them in the transition period.”

Take me to the pilot

Image Credits: Apptronik/Mercedes

The current state of humanoid robotics can be summed up in one word: pilot. It’s an important milestone, but one that doesn’t necessarily tell us everything. Pilot announcements arrive as press releases heralding the early stage of a potential partnership. Both parties love them.

For the startup, they represent real, provable interest. For the big corporation, they signal to shareholders that the firm is engaging with the state of the art. Rarely, however, are real figures mentioned. Those generally enter the picture when we start discussing purchase orders (and even then, often not).

The past year has seen a number of these announced. BMW is working with Figure, while Mercedes has enlisted Apptronik. Once again, Agility has a head start on the rest, having completed its pilots with Amazon — we are, however, still waiting for word on the next step. It’s particularly telling that, in spite of the long-term promise of general-purpose systems, just about everyone in the space is beginning with the same basic functionality.

Two legs to stand on

Image Credits: Brian Heater

At this point, the clearest path to AGI should look familiar to anyone with a smartphone. Boston Dynamics’ Spot deployment provides a clear real-world example of how the app store model can work with industrial robots. While there’s a lot of compelling work being done in the world of robot learning, we’re a ways off from systems that can figure out new tasks and correct mistakes on the fly at scale. If only robotics manufacturers could leverage third-party developers in a manner similar to phonemakers.

Interest in the category has increased substantially in recent months, but speaking personally, the needle hasn’t moved too much in either direction for me since late last year. We’ve seen some absolutely killer demos, and generative AI presents a promising future. OpenAI is certainly hedging its bets, first investing in 1X and — more recently — Figure.

A lot of smart people have faith in the form factor and plenty of others remain skeptical. One thing I’m confident saying, however, is that whether or not future factories will be populated with humanoid robots on a meaningful scale, all of this work will amount to something. Even the most skeptical roboticists I’ve spoken to on the subject have pointed to the NASA model, where the race to land humans on the moon led to the invention of products we use on Earth to this day.

We’re going to see continued breakthroughs in robotic learning, mobile manipulation and locomotion (among others) that will impact the role automation plays in our daily life one way or another.


CES 2024: How to watch as Nvidia, Samsung and more reveal hardware, AI updates | TechCrunch


CES 2024 will be here before we know it, taking over Las Vegas with throngs of crowds, booths full of products and a lot of companies making claims about how AI is improving their offerings. As noted in our CES preview, though the conference has had its ups and downs of late, it’s increasingly become an opportunity for startups to capture attention while all eyes are drawn to the bigger budget announcements from the likes of Samsung, Sony and Nvidia.

TechCrunch will be on the ground at CES 2024 throughout the event next week, with a particular focus on those startups that might just be headlining a big livestream of their own in a couple years. You can follow along with our team’s CES coverage across the site and social handles here, but let’s cut to the chase, since we all know those big-name events still matter.

Monday, January 8 will give consumer tech and transportation aficionados plenty to watch starting at 8 a.m. PT / 11 a.m. ET, with many of the highest-profile press conferences being livestreamed to the public as has become the norm. These events will set the stage for the public CES show floor opening January 9 and running through January 12.

As you’ll see in the rundown below, AI will be the big through-line running across almost all of the big events, as CES 2024 marks the first iteration of the event fully in the new AI-centric era.

We’ll keep this list updated as the big day draws closer and as schedules change, but for now, these are the big-ticket companies looking to make a big splash before the convention doors open and CES 2024 begins for in-person attendees Tuesday.

Nvidia

8 a.m. PT / 11 a.m. ET

CES is wasting no time in getting to one of the main events. Nvidia is coming into the event riding high on its recent AI-fueled growth. So it’s no surprise that Nvidia promises a focus on AI and content creation during its kickoff address at CES.

LG

8 a.m. PT / 11 a.m. ET

At the same time, LG will be showcasing its own updates, though it has already shown part of its hand by releasing the details of its new OLED TV lineup, featuring AI processors it claims will significantly improve visual and audio fidelity over prior models. LG will also feature updates on home, mobility and, you guessed it, AI in its CES event.

Asus

9 a.m. PT / noon ET

Asus takes the prize for most hyperbolic CES teaser, as it sets out to put viewers “in search of incredible transcendence.” That’s one way of framing the formal reveal of what Asus already showed to be a new dual-screen laptop design.

Panasonic

10 a.m. PT / 1 p.m. ET

Panasonic is leading with its energy and climate policies, in a break from the other companies’ big focus on AI reveals.

Honda

10:30 a.m. PT / 1:30 p.m. ET

Honda’s been pretty clear about what to expect from its CES event this year: the reveal of a new EV series, complete with a purple-tinted tease of its form factor. Will that coincide with more details on Honda’s partnership with Sony for the Afeela brand revealed at CES 2023? Time will tell.

Sennheiser

12:30 p.m. PT / 3:30 p.m. ET

Audio company Sennheiser will have its own CES showcase with a pretty clear focus, promising new headphone announcements during its livestream.

Hyundai

1 p.m. PT / 4 p.m. ET

Hyundai’s most attention-grabbing reveal looks to be an update on its Supernal eVTOL (electric vertical takeoff and landing vehicle), which was first showcased back at CES 2020. In addition to its CES kickoff, Hyundai is hosting separate events Tuesday focused solely on the eVTOL concept and its vision for mobility hubs where these flying vehicles can actually take off and land. Beyond its aerial ambitions, Hyundai will be talking about sustainability, software and, of course, AI in a stream that’s planned for its YouTube channel but not yet public.

Samsung

2 p.m. PT / 5 p.m. ET

If you’re looking for phone news from Samsung, you’ll have to wait until January 17, when its next Unpacked event kicks off. As has been the case for several years, Samsung will focus on the rest of its product lines at CES 2024.

And those products are about to get the AI push, if its press conference title “AI for All: Connectivity in the Age of AI” wasn’t enough of a signal. Samsung has already revealed some AI applications in the kitchen and in its updated robot vacuum lineup, with more expected from its CES event, which will be livestreamed via its newsroom site.

Samsung also put out some additional teases over the weekend for a “new generation of products that can be folded inward and outward,” which would include rollable and foldable displays building off its existing lines of foldable phones.

Sony

5 p.m. PT / 8 p.m. ET

Witnessing the Ghostbusters logo wearing a VR headset is just the kind of corporate synergy CES is made for. Sony has highlighted the use of its technology within its film and gaming efforts at past CES events, and by focusing on “Powering Creativity with Technology,” that looks to be the same at CES 2024.


