Dell’s most forward-looking people spoke about the future at Dell World a few weeks ago. One of the sessions I attended dovetailed with something that appears to be glaringly obvious — to me, anyway — which is that robots likely will be the next big technology wave.
I then wandered around to find out what Dell was doing in robotics, and I couldn’t find anything. Dell is not alone, as I’m not aware of any of the current leading technology firms doing anything in robotics with one exception: Nvidia.
Nvidia also figured out early that autonomous cars were going to be a thing and largely pivoted from the mobile device efforts that were not going much of anyplace to self-driving cars. It now dominates the important part of that trend — the brain.
Well, last week Nvidia announced Isaac, which is based on its Jetson platform and targets robotics. Once again, Nvidia has anticipated the future and, in its segment, is largely going it alone.
Applying what it learned developing autonomous vehicles gave the company a huge jump on this segment, and its initial offering looks surprisingly mature as a result. I’ll share some observations about what Nvidia’s Isaac is going to enable and close with my product of the week: Cinego, a movie-watching solution that provides a big screen experience on your head and actually is damn comfortable.
The Elements of Success
Self-driving cars are basically robots that carry people. They are very advanced, because these robots must be able to deal with a massive variety of changing conditions in real time. Using a blend of cameras and technologies like LIDAR, they must look for and anticipate problems, respond to them in milliseconds, and ensure the safety of the vehicle, passengers, and anyone near the vehicle.
They are far more advanced and faster, in terms of being able to think and form decisions, than most defense systems, most computer systems, and most traffic control systems. They have to be — otherwise, they wouldn’t be safe on the road.
One of the elements Nvidia realized it needed late in the process was the ability to create electronic simulations of various traffic, road and weather conditions, and train the autonomous driving computers at computer speed.
Previously, training had been done at human speed on real roads, which significantly limited the system’s learning speed and created potential life-threatening risks. Training on a virtual system entails little or no risk, so the result of the pivot to simulation was a massive increase in system capabilities.
Nvidia has applied these same tools to Isaac, and the result is that its robotic solution starts out years ahead of where it otherwise might be.
So, the end result is a robotic intelligence system with much of the power of Nvidia’s Autonomous Vehicle system, giving it the ability to navigate, see and make decisions. Even voice command is built in, given that you largely will interface with an autonomous vehicle with your voice. Autonomous cars can read signs, so the robots based on this technology should be able to read as well.
The Robotic Future of Isaac
Using this system, developers should be able to give the robot the ability to respond to commands, read labels on food packaging and medicine bottles, and perform many of the same tasks as a caregiver over time.
Unlike with a monkey or a dog, should something happen to the robot requiring a replacement, the specific training could be passed on, so that the new robot wouldn’t need to be retrained. Able to come when called, recognize danger, and automatically call for help, this emerging generation of care robots could massively reduce the cost of caring for those who have limited mobility.
Applied to a class of cleaning robots, this technology could make the Roombas of today look positively ancient. They would be able to dust, vacuum, mop, clean windows, and potentially even cook food — initially basic meals like TV dinners. Eventually, they could evolve into full home care providers.
Outside, the robotic lawnmowers of today are very limited, requiring electronic borders and generally bouncing around the lawn like the first-generation Roombas. With this advanced ability to make decisions, the robot could not only make the lawn look better, but also issue alerts about problems, make recommendations about how to fix them, and, as its capabilities grow, begin to execute the fixes itself.
Trimming hedges — and, depending on the model, trees — as well as doing menial labor like pulling weeds would be well within the platform’s capabilities, once trained. I’m thinking shoveling snow or running the snow blower on those frigid winter days could be the killer app in colder climates.
Wrapping Up: Nvidia Is Right Again
It amazes me that Nvidia has been able to do this twice. It anticipated the technology need for autonomous cars and the far larger coming wave of robotics.
I don’t think we yet realize how the coming wave of robots will change our lives, hopefully for the better. It certainly will be amazing, and with the boost Nvidia got from autonomous cars, the result will come far faster than I think any of us realize.
I just wonder how long it will be before the other tech companies catch on. I’m just looking forward to being able to sleep in and let something else do my winter morning chores.
Cinego: Product of the Week

I buy one or two things a quarter on Indiegogo, and the concept of Cinego caught my eye. The idea was to put a high-resolution screen focused not on virtual reality but on watching video, though it also will play some games.
At a cost of about US$500 for the full kit, it isn’t a cheap date, but having received it last week, I don’t think it is a bad deal either. The resolution is impressive (4K), and you can adjust the focus for each eye (an important feature for me because I’ve had the surgery that allows one of my eyes to see distance and the other to see close up).
The device will both hold movies and stream them over WiFi, and it takes a micro-SD card for storage. It has both a touchpad and circular controls for navigation (like the remote on an Amazon Fire TV).
You use your own headphones, which is nice, because you can pick the type you like — but it does make putting the thing on a bit more awkward. Once on, though, watching an entire movie isn’t a problem and the picture is pretty amazing.
I had two issues. One was a slight amount of light bleed on the side of my head. The other was that I couldn’t seem to get Amazon Prime Video to allow me to upgrade the app, because I couldn’t log into the Google store (this was weird because I could log into Google on the included browser).
Netflix worked just fine, though, and it also supports Hulu, YouTube and a variety of other video apps. You can connect the device to DVD players, smartphones or PCs, but its greatest strength is as a standalone device (it has its own battery).
An ideal use for this is in bed when you don’t want to keep your spouse awake, or anyplace where you want to be isolated from what is going on around you. I’m thinking this would be great in a dentist’s office during teeth cleaning (when I’m often bored).
I have a feeling there is motion sickness risk in cars and planes, as moving vehicles have been a problem for other head-mounted movie solutions in the past.
If it weren’t so expensive, I’d suggest it as a tool to quiet your kid, but kids are pretty hard on stuff like this, and $500 is a lot to lose if they break the device.
Overall, I’m really impressed with this thing, and I am looking for other places to use it. I just wish it had come with a travel case. I don’t impress easily, so the Cinego personal cinema is my product of the week.