Sajid Sadi, VP of Research and Head of the Think Tank Team, Samsung – Interview Series
We recently had the pleasure of speaking with Sajid Sadi, Vice President of Research and Head of Samsung’s Think Tank Team, about some of the projects the team has been working on. Sajid and Samsung’s Think Tank Team of experts (researchers, scientists, designers, and engineers) work together to invent real-world future products and technologies that focus on user experience. The Think Tank Team turns today’s ideas into tomorrow’s reality.
Project Ballie is a personal home robot companion that can interact with pets and digital devices. What was the mindset behind the shape of the robot being a ball versus a more traditional robotic assistant?
When we started thinking about home robots, we thought not just about the device but about how a robot will interact with a human being. So far, most robots in the home fall into one of three categories: they ignore people physically (like robot vacuums), they can see people but do not really move spatially (e.g., social robots), or they are effectively toys. We wanted to build a robot that comes to you and interacts with you in a spatial as well as informational sense. It is meant to be as much friend as mobile sensor. As you can imagine, that can be a little intimidating, so we designed Ballie to be minimal, friendly, and small, so that users would not be alarmed by the autonomy of the robot. It’s that perfect gerbil size that feels like a pet, both on the ground and in your hand.
What type of sensors, and machine learning is used in the Ballie robot?
Ballie is still a pre-production prototype, so we aren’t quite ready to talk about specific sensors. As you could see from the demo, it is slated to have a direction-sensing microphone array, a camera, and some additional internal sensors to help it manage its motion. Beyond that, we are still considering what might be needed. In terms of machine learning, Ballie relies on on-device machine learning to handle its motion control and navigation. As President HS Kim mentioned, we at Samsung are fully committed to the security and privacy of your data, and we believe that the considerable experience we have in on-device deep learning and machine learning will allow us to handle Ballie’s core functionality in a self-contained way. After all, the most secure data is the kind that never leaves your device!
PROJECT: BOT CHEF
Kitchens are often cluttered and vary in dimensions. What type of challenge does this pose for the Samsung Bot Chef?
Samsung Bot Chef is still early in its journey to product, but that is something that we are thinking about a lot. We want Bot Chef to function in normal kitchens, and we are doing a lot of work to allow it to handle messy and cluttered situations. For one, Bot Chef uses its perception systems in conjunction with its motion control to ensure safe maneuvering. Additionally, it is also aware of when it cannot do something safely, or when it cannot be sure of what it has found. Because it is meant to be collaborative with the human being in charge, in those cases it knows to ask for help to ensure nothing goes wrong. And finally, we are taking steps to make the mechanical design capable of handling challenging motion scenarios. One part of this is the specially designed gripper, which is able to carry large loads at difficult angles. This can allow the robot to pick up an awkwardly placed item and then place it somewhere else to get a better grip, just as we do as human beings when we pick a bottle out of a crowded cabinet. It is definitely not a simple challenge to overcome, and we expect that we will need human help for some time, but we also believe that there are many cases where Bot Chef can help in meaningful and time-saving ways, and we are always expanding the horizons of those capabilities.
Is the Samsung Bot Chef able to use machine learning to locate objects in the kitchen, or will a user need to instruct the Bot Chef as to the location of various ingredients and cookware?
Samsung Bot Chef uses a combination of machine vision and machine learning approaches to locate and calculate grasping solutions for physical items around the kitchen. Of course, that is not to say that it knows everything at first sight, just as you may not know the cayenne from the paprika in a new kitchen. One area we are working on is how the robot can tell when it’s unsure. In those cases, we can ask a human being to disambiguate or help out. Likewise, there are some tools that are specifically difficult to pick out, and in those cases, we may have a specific tool holder to assist with the task. However, the goal is to eventually know how to use everything in the kitchen as naturally as possible.
One of the projects I am most excited about is Project Green, which enables users to grow vegetables in their kitchen in a climate-controlled mini-greenhouse. Can you discuss how this will reduce the carbon footprint of shipping vegetables, cut down on wastage, and so on?
Project Green is a technology demonstrator to showcase how we can bring gardening indoors, which matters because across the vast majority of the planet there is neither the space nor an accommodating climate to support such activities. On that front, perhaps the most difficult logistical problem in the food supply chain is leafy greens: they are fragile, sensitive to picking time and growing conditions, short-lived on the shelf, and labor-intensive to plant and harvest. Because they cannot be packed closely, they are also the most inefficient to move about. After all, the energy use of any form of transport is affected far less by the weight carried than by the baseline cost of miles traveled. Carrying mostly-empty clamshells of lettuce or parsley (which, by the way, are a terrible environmental hazard in their own right) is far less efficient than filling the same truck with watermelons. And in terms of food waste, how often has any of us bought a head of lettuce, only to have it turn purple with age? How often have we thrown out the outer leaves because they are soft and soggy? Green was designed to tackle exactly those sorts of produce by allowing fast, reliable growth in the home. By combining degradable nutrient+seed pods with fogponic growth technology and precise control of growing temperature, light, and moisture, Green grows perfect vegetables without any need for a green thumb. Additionally, the internal monitoring systems let you know in advance when you can expect to enjoy the produce and control the conditions to ensure that the living plant is maintained in peak edible condition for as long as possible.
I see that Project Green also provides recipes to users based on what is ready to use in the greenhouse and the items in the fridge. Can you discuss the technology behind the fridge identifying the available ingredients?
Project Green is designed to work with our Family Hub and AI technologies such as Whisk (a Samsung Next company) to help identify recipes for items already in the fridge, in alignment with your preferences. As we demonstrated at CES 2020, Family Hub’s ViewInside cameras have been upgraded with brand-new AI image recognition technology, which automatically scans the products inside your fridge, identifies them, and sends you updates on items your family has added or depleted. Family Hub’s Meal Planner is also smarter, with the Quick Plan feature now offering a week’s worth of recommended recipes with just one click. Whisk technology can help you plan an entire meal—or even a week’s worth of meals—by adjusting recipes based on the number of guests you expect and building a smart shopping list that consolidates ingredients from several recipes, and of course Green integrates with that capability to dovetail your produce perfectly across an entire meal or set of meals.
Project Spot is a commercial product that employs a sensor to track user actions and generate an associated response, creating an interactive user experience. Can you tell us what kinds of businesses can benefit from this type of technology?
Often, manufacturers want to provide deeper information on each item they sell. However, the physical space in stores typically allows only one display for many items. Spot overcomes this issue by making the entire space in front of the display sensate, so you get information on the items you interact with. As a side effect, Spot also provides insight into one of the most difficult issues of brick-and-mortar stores: space allocation. While we know that some spaces in the store attract more attention (i.e., endcaps), it is often difficult to know what will catch people’s eyes. Now, we can directly detect and track what is interacted with, and how much. This allows managers to allocate and manage space more intelligently than ever before. Another area where it has come into use is cafes and smaller food shops, where a person can find out details of the meals and explore the menu in depth, while seeing a sample of the real item right in front of them. Since Spot supports in-air gestures, the user need only point at something to see the details or place an order. Spot is a completely open platform, and users are constantly surprising us with the uses they find for the technology.
Project Spot collects data about user behavior. Can you tell us what type of data is collected and how this benefits the user and the commercial entity?
One of the privacy advantages of Spot is that it doesn’t look at the user, only at the surface on which its target items are placed. As a result, we only gain aggregated and intrinsically anonymous data. As mentioned above, this gives commercial entities much better insight into the interests of their users, which in turn helps companies like Samsung make more of what you love. At the same time, it allows space-constrained store managers to put more of the items that you love up front, and to share much more information about those items than would fit on the traditional (and minuscule) product placards. This allows users to make more informed and interactive decisions than would be possible without Spot.
Project Beyond is the world’s first 3D omniview camera, which will enable the generation of Virtual Reality (VR) content. Can you discuss how users will be able to transfer the generated content into the Gear VR, and will users of other VR platforms be able to interact with or watch this content?
While we do produce Gear VR as well as Odyssey Windows MR devices, the goal of Project Beyond (and the ensuing product, Samsung 360 Round) was to open up creation of real-time 3D 360 content for all VR platforms. The content produced is fully compatible with all platforms including Samsung XR, YouTube, and anything else capable of accepting commonly formatted 3D or 2D 360 videos. In addition, Samsung XR supports rich editing capabilities that can be used to add additional interactive capabilities and transitions to bring full VR experiences based on true reality capture to life.
Is there anything else that you would like to share about Samsung’s Think Tank Team?
The Think Tank Team at Samsung Research has been at the forefront of creating new technology solutions that bring unique takes for Samsung’s forays into new business areas. We bring together people from science, design, engineering, strategy, and more to consider problems holistically, and work to develop the most salient and elegant solutions to difficult user and technology challenges. In a nutshell, we dream about the future, and then we discover and build our way to that reality.
You can learn more about Samsung’s Think Tank Team by visiting their website.