What will it look like when the “smart home” no longer means notifications and actually means the house doing the work? CES has toyed with that future for years, with robot vacuums, app-connected appliances, and dozens of demonstrations that seemed to work better under trade-show lighting than in a living room. LG’s latest offering, at CES 2026, is more direct: a home robot aimed at the last annoying mile of household automation, the task that even the most organized households cannot keep tidy: laundry.

LG’s CLOiD is introduced as an AI-powered home robot built around two flexible arms, a stylized head unit, and a wheeled base that provides stability and maneuverability. In LG’s own demonstrations, CLOiD walks through kitchen-and-laundry steps: retrieving items from a fridge, putting food in an oven, starting a load of laundry, and finally folding and stacking clothes after drying. The company positions CLOiD as an extension of its “Zero Labor Home” vision, in which the robot is less a standalone device than a mobile controller for other appliances via ThinQ. The head unit doubles as a mobile interface, housing a display, speakers, cameras, and sensors to support voice interaction and visual understanding of the home. This combination matters because it turns the pitch from a robot that performs a single task into one that can switch tasks without being reprogrammed for every new object and every slightly different kitchen layout.
It is no accident that laundry carries the bulk of this story. Washing and drying have long been automated; folding has not, because folding is a dexterity problem masquerading as a lifestyle problem. Clothing is limp, randomly crumpled, and can rarely be presented to a machine in a clean, standardized form. Other efforts at the same pain point also appeared at CES 2026, including Tenet’s AI laundry robot prototypes: a smaller, egg-shaped device designed to wash and hang-dry clothes inside the machine, and a larger front-loader for bigger loads. The smaller unit is reported to fold as well, but no folding has been demonstrated. That gap, between what is promised and what is shown, has defined a decade of laundry robotics, and it is also what LG is navigating in positioning CLOiD as a general home manipulator rather than a purpose-built folding box.
Hardware is only half the bet. CLOiD’s arms are said to have seven degrees of freedom each, and each hand has five independently actuated fingers, intended to handle everyday items rather than merely grip pre-staged props. LG also notes a practical limitation: the arms can pick up objects at knee level or higher, a quiet concession to how hard floor-level manipulation remains for current humanoid form factors. That is not a cosmetic limitation; it determines which chores are realistic without the robot becoming a nuisance. A home robot that can fold towels but cannot pick up a dropped sock still changes the workflow of a household, though in a particular way: the humans end up doing the stooping, grubby work while the robot handles the clean, well-lit tasks.
What is different at CES 2026 is how far the “robot brain” is being standardized at the top of the stack. Nvidia has been bundling robotics development into a stack of models, simulation tools, and edge compute aimed at shrinking the gap between lab demonstration and reproducible behavior. The company outlined new open physical-AI resources, including Cosmos Transfer 2.5 and Cosmos Predict 2.5 for world modeling and synthetic data generation, and Cosmos Reason 2, a reasoning vision-language model meant to help machines make sense of messy real-world scenes. For humanoid-like systems, Nvidia also points to an open model for full-body control, Isaac GR00T N1.6. This matters to appliance brands because the hardest part of home robotics is not building one impressive demo but building a system that survives endless variation: changing lighting, cluttered counters, pets wandering through, humans interrupting a task midway, and the long tail of household object shapes that never appear in training data.
Meanwhile, Nvidia has been promoting “reasoning” as the missing link between perception and motion: models that do not merely recognize, but can plan actions and explain why. In its autonomous-vehicle tooling, Nvidia presents Alpamayo 1, a 10-billion-parameter vision-language-action model, tied to an open dataset of over 1,700 hours of driving data covering rare scenarios and edge cases. Those figures are for cars, not kitchens, but the approach transfers: simulate, generate synthetic data, train at scale, and validate behaviors before a physical machine touches the real world. Home robotics needs the same rigor, because a laundry room has just as many edge cases (wet clothes balling up, a shirt snagged on a drum lip, towels knotting), none of which have the standardized sensor suites and decades of validation culture the automotive industry has accumulated.
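The simulate, synthesize, train, validate loop described above can be caricatured in a few lines of Python. Everything here is a toy stand-in with hypothetical function names, not any real Nvidia tooling: the “scenes” are one-dimensional features, the “model” is a per-class mean, and the validation gate is a simple accuracy threshold.

```python
import random

random.seed(0)

# Toy stand-ins for the pipeline stages named in the text; none of these
# correspond to a real API.

def simulate_scene():
    """Simulate one 'scene': a feature value plus a ground-truth label."""
    label = random.choice(["shirt", "towel"])
    base = 1.0 if label == "shirt" else 3.0
    return base + random.gauss(0, 0.3), label

def generate_synthetic_data(n):
    """Synthetic data generation: many randomly perturbed simulated scenes."""
    return [simulate_scene() for _ in range(n)]

def train(data):
    """'Train at scale' reduced to its simplest form: per-class feature means."""
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(model, x):
    """Classify a scene by the nearest class mean."""
    return min(model, key=lambda y: abs(model[y] - x))

def validate(model, data):
    """Check behavior on held-out scenes before 'deployment'."""
    correct = sum(predict(model, x) == y for x, y in data)
    return correct / len(data)

model = train(generate_synthetic_data(500))
accuracy = validate(model, generate_synthetic_data(100))
assert accuracy > 0.9  # gate: only deploy if validation passes
```

The point of the sketch is the shape of the loop, not the model: no physical system runs until behavior has been checked against data the trainer did not hand-pick.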
LG is also treating robotics not just as a showroom feature but as a components business. Alongside CLOiD, it launched AXIUM, a new brand of actuators, describing actuators as among the most expensive parts of a robot and pitching modular, lightweight, high-torque joints as a competitive edge. That kind of supply-chain focus is hard to see in a demo video, and it is what separates a one-off prototype from a product line that can be manufactured, serviced, and refined.
The clearest lesson from CES 2026’s laundry-folding moment is that consumer robotics is being rebuilt on a new foundation: models, simulation, training data, and standardized hardware blocks. The folding itself is the headline, but the real change is subtler: household robots are starting to look less like novelty appliances and more like platforms that can gain capability as the underlying physical AI matures.

