From Digital Age to Nano Age. WorldWide.


Robotic Automations

Exclusive: Wayve co-founder Alex Kendall on the autonomous future for cars and robots | TechCrunch


U.K.-based autonomous vehicle startup Wayve started life as a software platform loaded into a tiny electric “car,” a Renault Twizy. With the vehicle festooned with cameras, co-founders Alex Kendall and Amar Shah, both PhD graduates, tuned the deep-learning algorithms powering its autonomous systems until they got it to drive around a medieval city unaided. No […]

© 2024 TechCrunch. All rights reserved. For personal use only.


Software Development in Sri Lanka


Humanoid robots are learning to fall well | TechCrunch


The savvy marketers at Boston Dynamics produced two major robotics news cycles last week. The larger of the two was, naturally, the electric Atlas announcement. As I write this, the sub-40-second video is steadily approaching five million views. A day prior, the company tugged at the community’s heartstrings when it announced that the original hydraulic Atlas was being put out to pasture, a decade after its introduction.

The accompanying video was a celebration of the older Atlas’ journey from DARPA research project to an impressively nimble bipedal ’bot. A minute in, however, the tone shifts. Ultimately, “Farewell to Atlas” is as much a celebration as it is a blooper reel. It’s a welcome reminder that for every time the robot sticks the landing on video there are dozens of slips, falls and sputters.

Image Credits: Boston Dynamics

I’ve long championed this sort of transparency. It’s the sort of thing I would like to see more of from the robotics world. Simply showcasing the highlight reel does a disservice to the effort that went into getting those shots. In many cases, we’re talking about years of trial and error spent getting robots to look good on camera. When you only share the positive outcomes, you’re setting unrealistic expectations. Bipedal robots fall over. In that respect, at least, they’re just like us. As Agility put it recently, “Everyone falls sometimes, it’s how we get back up that defines us.” I would take that a step further, adding that learning how to fall well is equally important.

The company’s newly appointed CTO, Pras Velagapudi, recently told me that seeing robots fall on the job at this stage is actually a good thing. “When a robot is actually out in the world doing real things, unexpected things are going to happen,” he notes. “You’re going to see some falls, but that’s part of learning to run a really long time in real-world environments. It’s expected, and it’s a sign that you’re not staging things.”

A quick scan of Harvard’s rules for falling without injury reflects what we intuitively understand about falling as humans:

  1. Protect your head
  2. Use your weight to direct your fall
  3. Bend your knees
  4. Avoid taking other people with you

As for robots, this IEEE Spectrum piece from last year is a great place to start.

“We’re not afraid of a fall—we’re not treating the robots like they’re going to break all the time,” Boston Dynamics CTO Aaron Saunders told the publication last year. “Our robot falls a lot, and one of the things we decided a long time ago [is] that we needed to build robots that can fall without breaking. If you can go through that cycle of pushing your robot to failure, studying the failure, and fixing it, you can make progress to where it’s not falling. But if you build a machine or a control system or a culture around never falling, then you’ll never learn what you need to learn to make your robot not fall. We celebrate falls, even the falls that break the robot.”

Image Credits: Boston Dynamics

The subject of falling also came up when I spoke with Boston Dynamics CEO Robert Playter ahead of the electric Atlas’ launch. Notably, the short video begins with the robot in a prone position. The way the robot’s legs arc around is quite novel, allowing the system to stand up from a completely flat position. At first glance, it almost feels as though the company is showing off, using the flashy move simply as a method to showcase the extremely robust custom-built actuators.

“There will be very practical uses for that,” Playter told me. “Robots are going to fall. You’d better be able to get up from prone.” He adds that the ability to get up from a prone position may also be useful for charging purposes.

Much of Boston Dynamics’ learnings around falling came from Spot. While there’s generally more stability in the quadrupedal form factor (as evidenced by decades of trying and failing to kick the robots over in videos), there are simply way more hours of Spot robots working in real-world conditions.

Image Credits: Agility Robotics

“Spot’s walking something like 70,000 kms a year on factory floors, doing about 100,000 inspections per month,” adds Playter. “They do fall, eventually. You have to be able to get back up. Hopefully you get your fall rate down — we have. I think we’re falling once every 100-200 kms. The fall rate has really gotten small, but it does happen.”
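Taking Playter’s quoted figures at face value (70,000 km walked per year fleet-wide, one fall every 100 to 200 km), a quick back-of-the-envelope sketch gives the implied number of falls per year. The figures below are just the round numbers from the quote, not official data:

```python
# Back-of-the-envelope math on the quoted Spot fleet numbers.
# (Round figures from the quote, not official Boston Dynamics data.)
km_per_year = 70_000                          # distance walked per year, fleet-wide
km_per_fall_low, km_per_fall_high = 100, 200  # "falling once every 100-200 kms"

falls_high = km_per_year / km_per_fall_low    # worst case: one fall per 100 km
falls_low = km_per_year / km_per_fall_high    # best case: one fall per 200 km

print(f"Implied fleet-wide falls per year: {falls_low:.0f}-{falls_high:.0f}")
# -> Implied fleet-wide falls per year: 350-700
```

Spread across a fleet of thousands of robots, that works out to a fall rate any individual unit would rarely see, which is presumably why Playter calls it small.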

Playter adds that the company has a long history of being “rough” on its robots. “They fall, and they’ve got to be able to survive. Fingers can’t fall off.”

Watching the above Atlas outtakes, it’s hard not to project a bit of human empathy onto the ’bot. It really does appear to fall like a human, drawing its extremities as close to its body as possible, to protect them from further injury.

When Agility added arms to Digit, back in 2019, it discussed the role they play in falling. “For us, arms are simultaneously a tool for moving through the world — think getting up after a fall, waving your arms for balance, or pushing open a door — while also being useful for manipulating or carrying objects,” co-founder Jonathan Hurst noted at the time.

I spoke a bit to Agility about the topic at Modex earlier this year. Video of a Digit robot falling over on a convention floor a year prior had made the social media rounds. “With a 99% success rate over about 20 hours of live demos, Digit still took a couple of falls at ProMat,” Agility noted at the time. “We have no proof, but we think our sales team orchestrated it so they could talk about Digit’s quick-change limbs and durability.”

As with the Atlas video, the company told me that something akin to a fetal position is useful in terms of protecting the robot’s legs and arms.

The company has been using reinforcement learning to help fallen robots right themselves. Agility shut off Digit’s obstacle avoidance for the above video to force a fall. In the video, the robot uses its arms to mitigate the fall as much as possible. It then draws on its reinforcement learning to return to a familiar position from which it is capable of standing again with a robotic pushup.
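Agility hasn’t published the details of that training setup. Purely as an illustrative sketch, though, the “right yourself” problem can be framed as a tiny Markov decision process and solved with tabular Q-learning. Every state, action and reward below is hypothetical, and real systems instead learn continuous control policies in simulation:

```python
import random

random.seed(0)  # reproducible toy run

# Toy "get up after a fall" MDP (entirely hypothetical, for illustration):
# the robot must tuck its limbs into a familiar pose before pushing up.
# States: 0 sprawled, 1 tucked, 2 push-up, 3 standing (terminal)
# Actions: 0 "tuck limbs", 1 "push"
TRANSITIONS = {
    (0, 0): 1, (0, 1): 0,  # pushing from a sprawl just flails
    (1, 0): 1, (1, 1): 2,
    (2, 0): 1, (2, 1): 3,
}
STAND, STEP_COST = 3, -0.01

Q = {(s, a): 0.0 for s in range(3) for a in range(2)}
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for _ in range(2000):
    s = 0  # every episode starts sprawled on the floor
    while s != STAND:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda act: Q[(s, act)])
        s2 = TRANSITIONS[(s, a)]
        r = 1.0 if s2 == STAND else STEP_COST
        target = r + (0.0 if s2 == STAND else gamma * max(Q[(s2, 0)], Q[(s2, 1)]))
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

policy = {s: max((0, 1), key=lambda act: Q[(s, act)]) for s in range(3)}
print(policy)  # learned: tuck from a sprawl, then push -> {0: 0, 1: 1, 2: 1}
```

The learned policy mirrors what the video shows: get into the familiar tucked position first, then push up to standing.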

One of humanoid robots’ main selling points is their ability to slot into existing workflows — these factories and warehouses are known as “brownfield,” meaning they weren’t custom built for automation. In many existing cases of factory automation, errors mean the system effectively shuts down until a human intervenes.

“Rescuing a humanoid robot is not going to be trivial,” says Playter, noting that these systems are heavy and can be difficult to manually right. “How are you going to do that if it can’t get itself off the ground?”

If these systems are truly going to ensure uninterrupted automation, they’ll need to fall well and get right back up again.

“Every time Digit falls, we learn something new,” adds Velagapudi. “When it comes to bipedal robotics, falling is a wonderful teacher.”





Muscle tissue harvested from mice cells move ‘biohybrid’ robots | TechCrunch


Sometimes nature provides the best blueprints for building effective robots. It can also provide the best material. Billions of years of natural selection have built some pretty impressive machinery, so you can’t really blame engineers for borrowing a bit of inspiration from the world around them. In particular, the field of soft robotics — with its flexible and compliant components — owes a lot to animal biology.

While these systems have soft forms, however, many of their components are still rigid, like those of their more traditional counterparts. Researchers are working to incorporate flexible elements that produce locomotion in these soft robots. As MIT succinctly puts it, “our muscles are nature’s perfect actuators.”

The team is going beyond simply mimicking muscles here, however. Researchers at the school are using live muscle tissue in tandem with synthetic robot parts for a classification of robots known as “biohybrid.”

MIT Professor of Engineering Ritu Raman confirmed the process with TechCrunch, noting, “We build the muscle tissues from mouse cells, and then we put the muscle tissues on our robot’s skeleton. The muscles then function as actuators for the robot — every time the muscle contracts, the robot moves.”

The muscle fibers are attached to a “spring-like” device called a “flexure,” which serves as a kind of skeletal structure for the system. Biological muscle tissue can be difficult to work with and generally unpredictable. Left in a Petri dish, the tissue will expand and contract as hoped for, but not in a controlled manner.

In order to be deployed in robotic systems, the tissues have to be reliable, predictable and repeatable. In this instance, that requires the use of structures that are compliant in one direction and resistant in the other. Raman’s team found a solution in Professor Martin Culpepper’s MIT fabrication lab.

The flexures still needed to be tweaked to the specifications of the robot; the team ultimately opted for structures with 1/100th the stiffness of the muscle tissue. “When the muscle contracts, all the force is converted into movement in that direction,” Raman notes. “It’s a huge magnification.”
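To see why such a compliant flexure helps, consider the simplest series-spring model: under the same contractile force, each element deflects by x = F/k, so a flexure 100 times softer than the tissue takes up essentially all of the motion. The stiffness and force values below are arbitrary illustrative units; only the 1/100 ratio comes from the article:

```python
# Series-spring sketch of the muscle + flexure system.
# Stiffness and force values are arbitrary illustrative units;
# only the 1/100 stiffness ratio comes from the article.
k_muscle = 100.0              # stiffness of the muscle tissue
k_flexure = k_muscle / 100.0  # flexure built ~100x more compliant

force = 1.0  # one unit of contractile force from the tissue
# Hooke's law: deflection x = F / k for each element under the same force,
# so the soft flexure takes up nearly all of the motion.
x_muscle = force / k_muscle
x_flexure = force / k_flexure

print(round(x_flexure / x_muscle))  # the quoted "huge magnification": 100
```

In other words, almost none of the muscle’s contraction is wasted deforming stiff structure; it shows up as motion of the flexure instead.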

The muscle fiber/flexure system can be applied to various kinds of robots in different sizes, but Raman says the team is focused on creating extremely small robots that could one day operate inside the body to perform minimally invasive procedures.



Robots can make jobs less meaningful for human colleagues | TechCrunch


Much has been (and will continue to be) written about automation’s impact on the jobs market. In the short term, many employers have complained of an inability to fill roles and retain workers, further accelerating robotic adoption. The long-term impact these sorts of sweeping changes will have on the job market remains to be seen.

One aspect of the conversation that is oft neglected, however, is how human workers feel about their robotic colleagues. There’s a lot to be said for systems that augment or remove the more backbreaking aspects of blue-collar work. But could the technology also have a negative impact on worker morale? Both things can certainly be true at once.

The Brookings Institution this week issued results gleaned from several surveys conducted over the past decade and a half to evaluate the impact that robotics has on job “meaningfulness.” The think tank defines the admittedly abstract notion as follows:

“In exploring what makes work meaningful, we rely on self-determination theory. According to this theory, satisfying three innate psychological needs—competence, autonomy, and relatedness—is key for motivating workers and enabling them to experience purpose through their work.”

Data was culled from worker surveys carried out in 14 industries across 20 countries in Europe, cross-referenced with robot deployment data issued by the International Federation of Robotics. Industries surveyed included automotive, chemical products, food and beverage and metal production, among others.

The institute reports a negative impact to worker-perceived meaningfulness and autonomy levels.

“If robot adoption in the food and beverages industry were to increase to match that of the automotive industry,” Brookings notes, “we estimate a staggering 6.8% decrease in work meaningfulness and a 7.5% decrease in autonomy.” The autonomy aspect speaks to an ongoing concern over whether the implementation of robotics in industrial settings will make the roles carried out by their human counterparts more robotic as well. Of course, the counterpoint has often been made that these systems effectively remove many of the most repetitive aspects of these roles.

The institution goes on to suggest that these sorts of impacts are felt across roles and demographics. “We find that the negative consequences of robotization for work meaningfulness are the same, regardless of workers’ education level, skill level, or the tasks they perform,” the paper notes.

As for how to address this shift, the answer likely isn’t going to be simply saying no to automation. As long as robots have a positive impact on a corporation’s bottom line, adoption will continue at a rapidly increasing clip.

Brookings resident Milena Nikolova does offer a seemingly straightforward solution, writing, “If firms have mechanisms in place to ensure that humans and machines cooperate, rather than compete, for tasks, machines can help improve workers’ well-being.”

This is one of the defining pushes behind those automation firms touting collaborative robotics, rather than outright worker replacement. Pitting humans against their robotic counterparts will almost certainly be a losing battle.



Watch: New Atlas robot stuns experts in first reveal from Boston Dynamics


This week Boston Dynamics retired its well-known Atlas robot that was powered by hydraulics. Then today it unveiled its new Atlas robot, which is powered by electricity.

The change might not seem like much, but TechCrunch’s Brian Heater told the TechCrunch Minute that the now-deprecated hydraulics system was out of date. It’s not hard to spot why Boston Dynamics, owned by Hyundai, wanted to go electric. Its new Atlas robot is leaner and appears to have an improved range of motion. Size and the ability to contort and maneuver are not cosmetic elements of a humanoid robot — they can unlock new use cases and possible work environments.

The new Atlas is not especially well-defined today, which is not a massive surprise given that it’s still a work in progress. Still, we do know that it will first head to Hyundai factories before hitting the market more generally down the road.

Happily for those of us who want a domestic robot to handle household chores and hold our hands whilst we cry, there are other startups working on the humanoid robot project. Figure, Agility, Tesla — there are too many companies vying for the same prize to list in this short post. That has me incredibly excited — more people working on the problem means quicker progress, and hopefully faster completion of a general-purpose humanoid robot that can learn.

On that last bit, it’s worth keeping in mind that AI is set to play a large role in how robots go from being great at set, repetitive tasks to being able to learn, and do a great deal more without direct programming. While it will take time for LLMs’ ability to ingest language, write code, and the like to connect to robots under development today, you can spot the future if you squint far enough into the distance. To which all I can say is, faster, please!





Exclusive: How Found Energy went from ‘self-cannibalizing robots’ to cleaning up heavy industry


Found Energy doesn’t have the typical startup origin story: It began with a space robot that was supposed to eat itself. Now, the company is developing that same technology with an eye toward powering aluminum smelters and long-haul shipping.

Nearly a decade ago, Peter Godart, Found Energy’s co-founder and CEO, was a scientist at NASA’s Jet Propulsion Laboratory. He and some colleagues were brainstorming how to power a probe that might visit Jupiter’s moon, Europa. The team was debating the energy density of batteries that might be suitable when a stray thought landed in Godart’s head. The aluminum used to make the spacecraft held more than 10 times the energy of any cutting-edge battery. Why not use the spacecraft’s parts to power itself?

“They gave me a bunch of money to start a program that I lovingly called the ‘self-cannibalizing robot lab,’” Godart told TechCrunch. “We looked at giving robots the ability to consume their vestigial aluminum components for fuel.”

But as he continued his research, Godart had another thought. “I had a moment where I realized my time would be better spent solving Earth problems,” he said. His timing couldn’t have been better. Congress cut some of the funding for the Europa missions, and JPL let Godart take the intellectual property to MIT where he continued to work on the problem during his doctorate.

To Godart, aluminum had several obvious upsides: It’s the most abundant metal in the Earth’s crust, it can store twice as much energy per unit volume as diesel without being volatile, and it’s possible to recover as heat 70% of the original electrical energy used to smelt it. “I was like, oh my god, we got to do something with this,” he said.

To release the energy embodied in refined aluminum, Godart had to figure out how to get past the metal’s defenses, so to speak. “If you throw a chunk of aluminum in water and try to oxidize it using water, it would take thousands of years,” he said.

Godart’s process is much, much faster. Once water is dropped on aluminum coated in Found Energy’s catalyst, the metal’s surface quickly starts bubbling as the reaction releases heat and hydrogen gas. Within seconds, the aluminum starts expanding as the hydrogen bubbles force it to exfoliate. That allows water to penetrate further into the metal, repeating the process over and over again until all that’s left is a gray powder. “We actually call it fractal exfoliation,” Godart said.

Found Energy harvests the resulting steam and hydrogen, each of which can be used for a range of industrial processes. “One of the hardest elements of heavy industry to decarbonize is the heat,” Godart said. “And now here we have this really flexible way of providing heat across a very wide range of temperatures, all the way down from 80 to 100 degrees Celsius all the way up to 1,000 degrees Celsius.” In total, about 8.6 megawatt-hours of energy can be recovered per metric ton of aluminum.
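That quoted 8.6 megawatt-hours per metric ton can be sanity-checked with unit conversions alone; it works out to roughly 31 MJ/kg, close to the commonly cited energy released by fully oxidizing aluminum:

```python
# Unit sanity check on the quoted recoverable energy figure.
mwh_per_tonne = 8.6           # from the article
kwh_per_kg = mwh_per_tonne    # 1 MWh/t = 1,000 kWh / 1,000 kg = 1 kWh/kg
mj_per_kg = kwh_per_kg * 3.6  # 1 kWh = 3.6 MJ

print(f"{kwh_per_kg} kWh/kg = {mj_per_kg:.1f} MJ/kg")  # -> 8.6 kWh/kg = 31.0 MJ/kg
```

For context, diesel carries roughly 45 MJ/kg, so aluminum is heavier per unit of energy, as noted below. By volume the picture flips: multiplying by aluminum’s density of about 2.7 kg/L gives roughly 84 MJ/L, consistent with the earlier claim of about twice diesel’s volumetric energy density (diesel sits near 36 MJ/L).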

What’s left isn’t waste, either. The catalyst can be recovered, and the powder is aluminum trihydrate, which can be smelted once more to create metallic aluminum. Any contaminants, including food waste, plastic soda can liners and mixed alloys, remain larger than the aluminum trihydrate powder and can be easily filtered out.

“All of that stuff works in our process, because our catalyst just eats aluminum and basically leaves everything else untouched,” Godart said.

Found Energy recently raised an oversubscribed $12 million seed round, TechCrunch has exclusively learned. Investors in the round include the Autodesk Foundation, GiTV, Glenfield Partners, Good Growth Capital, J-Impact, Kompas VC, the Massachusetts Clean Energy Center and Munich Re Ventures.

When using scrap aluminum, which is Found Energy’s initial plan, the process is carbon negative. The startup is targeting industrial heat in its go-to-market strategy, but Godart also sees applications in marine shipping and long-haul trucking. Aluminum is slightly heavier than diesel or bunker fuel, but its energy density could be game changing for those industries.

One could imagine future ships powered by aluminum dropping their waste powder off at a smelter to be refueled for a return voyage. “Just sip a little bit of that energy as you go, and then you’ve essentially come up with a new maritime shipping fuel as well,” he said. “In a weird way, we’re sort of revamping the concept of a solid fuel.”



Understanding humanoid robots | TechCrunch


Robots made their stage debut the day after New Year’s 1921. More than half a century before the world caught its first glimpse of George Lucas’ droids, a small army of silvery humanoids took to the stages of the First Czechoslovak Republic. They were, for all intents and purposes, humanoids: two arms, two legs, a head — the whole shebang.

Karel Čapek’s play, R.U.R (Rossumovi Univerzální Roboti), was a hit. It was translated into dozens of languages and played across Europe and North America. The work’s lasting legacy, however, was its introduction of the word “robot.” The meaning of the term has evolved a good bit in the intervening century, as Čapek’s robots were more organic than machine.

Decades of science fiction have, however, ensured that the public image of robots hasn’t strayed too far from its origins. For many, the humanoid form is still the platonic robot ideal — it’s just that the state of technology hasn’t caught up to that vision. Earlier this week, Nvidia held its own on-stage robot parade at its GTC developer conference, as CEO Jensen Huang was flanked by images of a half-dozen humanoids.

While the concept of the general-purpose humanoid has, in essence, been around longer than the word “robot,” until recently its realization seemed wholly out of reach. We’re very much not there yet, but for the first time, the concept has appeared over the horizon.

What is a “general-purpose humanoid?”

Image Credits: Nvidia

Before we dive any deeper, let’s get two key definitions out of the way. When we talk about “general-purpose humanoids,” the fact is that both terms mean different things to different people. Most people take a Justice Potter Stewart “I know it when I see it” approach to both in conversation.

For the sake of this article, I’m going to define a general-purpose robot as one that can quickly pick up skills and essentially do any task a human can do. One of the big sticking points here is that multi-purpose robots don’t suddenly go general-purpose overnight.

Because it’s a gradual process, it’s difficult to say precisely when a system has crossed that threshold. There’s a temptation to go down a bit of a philosophical rabbit hole with that latter bit, but for the sake of keeping this article under book length, I’m going to go ahead and move on to the other term.

I received a bit of (largely good-natured) flack when I referred to Reflex Robotics’ system as a humanoid. People pointed out the plainly obvious fact that the robot doesn’t have legs. Putting aside for a moment that not all humans have legs, I’m fine calling the system a “humanoid” or more specifically a “wheeled humanoid.” In my estimation, it resembles the human form closely enough to fit the bill.

A while back, someone at Agility took issue when I called Digit “arguably a humanoid,” suggesting that there was nothing arguable about it. What’s clear is that the robot isn’t as faithful an attempt to recreate the human form as some of the competition. I will admit, however, that I may be somewhat biased, having tracked the robot’s evolution from its precursor Cassie, which more closely resembled a headless ostrich (listen, we all went through an awkward period).

Another element I tend to consider is the degree to which the humanlike form is used to perform humanlike tasks. This element isn’t absolutely necessary, but it’s an important part of the spirit of humanoid robots. After all, proponents of the form factor will quickly point out the fact that we’ve built our worlds around humans, so it makes sense to build humanlike robots to work in that world.

Adaptability is another key point used to defend the deployment of bipedal humanoids. Robots have had factory jobs for decades now, and the vast majority of them are single-purpose. That is to say, they were built to do a single thing very well a lot of times. This is why automation has been so well-suited for manufacturing — there’s a lot of uniformity and repetition, particularly in the world of assembly lines.

Brownfield vs. greenfield

Image Credits: Brian Heater

The terms “greenfield” and “brownfield” have been in common usage for several decades across various disciplines. The former is the older of the two, describing undeveloped land (quite literally, a green field). Coined to contrast with the earlier term, brownfield refers to development on existing sites. In the world of warehouses, it’s the difference between building something from scratch and working with something that’s already there.

There are pros and cons to both. Brownfields are generally more time- and cost-effective, as they don’t require starting from scratch, while greenfields afford the opportunity to build a site entirely to spec. Given infinite resources, most corporations would opt for a greenfield. Imagine the performance of a space built from the ground up with automated systems in mind. That’s a pipe dream for most organizations, so when it comes time to automate, a majority of companies seek out brownfield solutions — doubly so when they’re first dipping their toes into the robotic waters.

Given that most warehouses are brownfield, it ought to come as no surprise that the same can be said for the robots designed for these spaces. Humanoids fit neatly into this category — in fact, in a number of respects, they are among the brownest of brownfield solutions. This gets back to the earlier point about building humanoid robots for their environments. You can safely assume that most brownfield factories were designed with human workers in mind. That often comes with elements like stairs, which present an obstacle for wheeled robots. How large that obstacle ultimately is depends on a lot of factors, including layout and workflow.

Baby steps

Image Credits: Figure

Call me a wet blanket, but I’m a big fan of setting realistic expectations. I’ve been doing this job for a long time and have survived my share of hype cycles. There’s an extent to which they can be useful, in terms of building investor and customer interest, but it’s entirely too easy to fall prey to overpromises. This includes both stated promises around future functionality and demo videos.

I wrote about the latter last month in a post cheekily titled, “How to fake a robotics demo for fun and profit.” There are a number of ways to do this, including hidden teleoperation and creative editing. I’ve heard whispers that some firms are speeding up videos without disclosing it. In fact, that’s the origin of humanoid firm 1X’s name — all of its demos run at 1X speed.

Most in the space agree that disclosure is important — even necessary — on such products, but there aren’t strict standards in place. One could argue that you’re wading into a legal gray area if such videos play a role in convincing investors to plunk down large sums of money. At the very least, they set wildly unrealistic expectations among the public — particularly those who are inclined to take truth-stretching executives’ words as gospel.

That can only serve to harm those who are putting in the hard work while operating in reality with the rest of us. It’s easy to see how hope quickly diminishes when systems fail to live up to those expectations.

The timeline to real-world deployment contains two primary constraints. The first is mechatronic: what the hardware is capable of. The second is software and artificial intelligence. Without getting into a philosophical debate around what qualifies as artificial general intelligence (AGI) in robots, one thing we can certainly say is that progress has been, and will continue to be, gradual.

As Huang noted at GTC the other week, “If we specified AGI to be something very specific, a set of tests where a software program can do very well — or maybe 8% better than most people — I believe we will get there within five years.” That’s on the optimistic end of the timeline I’ve heard from most experts in the field. A range of five to 10 years seems common.

Before hitting anything resembling AGI, humanoids will start as single-purpose systems, much like their more traditional counterparts. Pilots are designed to prove out that these systems can do one thing well at scale before moving onto the next. Most people are looking at tote moving for that lowest-hanging fruit. Of course, your average Kiva/Locus AMR can move totes around all day, but those systems lack the mobile manipulators required to move payloads on and off themselves. That’s where robot arms and end effectors come in, whether or not they happen to be attached to something that looks human.

Speaking to me the other week at the Modex show in Atlanta, Dexterity founding engineer Robert Sun floated an interesting point: humanoids could provide a clever stopgap on the way to lights out (fully automated) warehouses and factories. Once full automation is in place, you won’t necessarily require the flexibility of a humanoid. But can we reasonably expect these systems to be fully operational in time?

“Transitioning all logistics and warehousing work to roboticized work, I thought humanoids could be a good transition point,” Sun said. “Now we don’t have the human, so we’ll put the humanoid there. Eventually, we’ll move to this automated lights-out factory. Then the issue of humanoids being very difficult makes it hard to put them in the transition period.”

Take me to the pilot

Image Credits: Apptronik/Mercedes

The current state of humanoid robotics can be summed up in one word: pilot. It’s an important milestone, but one that doesn’t necessarily tell us everything. Pilot announcements arrive as press releases touting the early stage of a potential partnership. Both parties love them.

For the startup, they represent real, provable interest. For the big corporation, they signal to shareholders that the firm is engaging with the state of the art. Rarely, however, are real figures mentioned. Those generally enter the picture when we start discussing purchase orders (and even then, often not).

The past year has seen a number of these announced. BMW is working with Figure, while Mercedes has enlisted Apptronik. Once again, Agility has a head start on the rest, having completed its pilots with Amazon — we are, however, still waiting for word on the next step. It’s particularly telling that, in spite of the long-term promise of general-purpose systems, just about everyone in the space is beginning with the same basic functionality.

Two legs to stand on

Image Credits: Brian Heater

At this point, the clearest path to AGI should look familiar to anyone with a smartphone. Boston Dynamics’ Spot deployment provides a clear real-world example of how the app store model can work with industrial robots. While there’s a lot of compelling work being done in the world of robot learning, we’re a ways off from systems that can figure out new tasks and correct mistakes on the fly at scale. If only robotics manufacturers could leverage third-party developers in a manner similar to phonemakers.

Interest in the category has increased substantially in recent months, but speaking personally, the needle hasn’t moved too much in either direction for me since late last year. We’ve seen some absolutely killer demos, and generative AI presents a promising future. OpenAI is certainly hedging its bets, first investing in 1X and — more recently — Figure.

A lot of smart people have faith in the form factor and plenty of others remain skeptical. One thing I’m confident saying, however, is that whether or not future factories will be populated with humanoid robots on a meaningful scale, all of this work will amount to something. Even the most skeptical roboticists I’ve spoken to on the subject have pointed to the NASA model, where the race to land humans on the moon led to the invention of products we use on Earth to this day.

We’re going to see continued breakthroughs in robotic learning, mobile manipulation and locomotion (among others) that will impact the role automation plays in our daily life one way or another.

