
[GlT] MIT Develops a New Spatial Perception Model for Robots


Which emerging technologies will dramatically change the world? We bring you exclusive coverage of remarkable innovations from the world's leading research laboratories.

 

Since the earliest days of computing, people have imagined robotic servants able to follow high-level, Alexa-type commands such as "Go to the kitchen and fetch me a coffee cup." But as MIT engineers explain, carrying out such high-level tasks means that robots must be able to perceive their physical environment much as humans do.

 

To function in the real world, you need a mental model of the environment around you. This is effortless for humans, but for robots it is a painfully hard problem: the pixel values a robot sees through its camera must be transformed into an understanding of the world.

Fortunately, these MIT engineers have developed a representation of spatial perception for robots that is modeled on the way humans perceive and navigate the world. The new model, called 3D Dynamic Scene Graphs, enables a robot to quickly generate a 3D map of its surroundings that also includes objects and their semantic labels, such as people, rooms, walls, tables, and chairs, along with other structures the robot is likely to see in its environment. The model also allows the robot to extract relevant information from the 3D map and to query the location of objects, rooms, or moving people along its path.
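To make the idea concrete, the following Python sketch shows one minimal way such a semantically labeled 3D map could store entities and answer a location query. The class and field names are illustrative assumptions, not the data structures actually used by the MIT team.

```python
from dataclasses import dataclass, field

@dataclass
class SceneNode:
    """One entity in a (hypothetical) semantic 3D map."""
    node_id: int
    label: str          # semantic label, e.g. "chair", "person"
    position: tuple     # (x, y, z) centroid in meters
    dynamic: bool = False  # True for moving entities such as people

@dataclass
class SemanticMap3D:
    """Toy container for labeled entities; not the MIT/Kimera data structure."""
    nodes: list = field(default_factory=list)

    def add(self, node: SceneNode) -> None:
        self.nodes.append(node)

    def query_by_label(self, label: str) -> list:
        """Return positions of all entities carrying the given semantic label."""
        return [n.position for n in self.nodes if n.label == label]

# Usage: a robot asking "where are the chairs?" against its current map.
scene = SemanticMap3D()
scene.add(SceneNode(1, "chair", (2.0, 1.5, 0.0)))
scene.add(SceneNode(2, "table", (2.2, 1.0, 0.0)))
scene.add(SceneNode(3, "person", (4.0, 3.0, 0.0), dynamic=True))
print(scene.query_by_label("chair"))   # [(2.0, 1.5, 0.0)]
```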

 

This compressed representation of the environment is useful because it allows the robot to make decisions and plan its path quickly. It is also not far from what we do as humans: when planning a route from home to work, we don't consider every single position along the way. We think only at the level of streets and landmarks, which helps us plan a faster route.
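As a rough illustration of why a compressed, landmark-level map makes planning fast, the toy example below searches a graph of a handful of named places rather than millions of raw 3D points. The place graph is invented for the example.

```python
from collections import deque

# A coarse "places" graph: a few landmark nodes instead of a dense 3D mesh.
PLACES = {
    "bedroom":    ["hallway"],
    "hallway":    ["bedroom", "kitchen", "front_door"],
    "kitchen":    ["hallway"],
    "front_door": ["hallway", "street"],
    "street":     ["front_door"],
}

def plan_route(start: str, goal: str) -> list:
    """Breadth-first search over the place graph; returns a list of places."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbor in PLACES[path[-1]]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return []

# Planning over five places is trivial compared with searching raw geometry.
print(plan_route("bedroom", "street"))
# ['bedroom', 'hallway', 'front_door', 'street']
```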

 

Beyond domestic helpers, the researchers say robots that adopt this new kind of mental model of the environment may also be suited to other high-level jobs, such as working side by side with people on a factory floor or searching a disaster site for survivors.

 

The research was recently presented at the 2020 Robotics: Science and Systems virtual conference.

 

Why is this research important? Until now, robotic vision and navigation have advanced mainly along two routes. The first is 3D mapping, which enables robots to reconstruct their environment in three dimensions as they explore in real time. The second is semantic segmentation, which helps a robot classify features in its environment as semantic objects, such as a car versus a bicycle, and which so far has mostly been done with 2D images. The new MIT model of spatial perception is the first to generate a 3D map of the environment in real time while also labeling objects, people, and structures within that 3D map.

 



 

The key component of the team's new model is Kimera, an open-source library that the team previously developed to simultaneously construct a 3D geometric model of an environment, while encoding the likelihood that an object is, say, a chair versus a desk. Like the mythical creature that is a mix of different animals, the team wanted Kimera to be a mix of mapping and semantic understanding in 3D.
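"Encoding the likelihood that an object is, say, a chair versus a desk" suggests that each piece of geometry carries a probability over semantic classes that is refined as more camera frames arrive. The snippet below is a minimal sketch of that idea, assuming a simple multiplicative update with renormalization; it is not Kimera's actual API or fusion rule.

```python
CLASSES = ["chair", "desk", "wall"]

def update_face_belief(belief: dict, observation: dict) -> dict:
    """Fuse one per-frame class likelihood into a face's running belief.

    belief, observation: class name -> probability. Assumes a naive
    multiplicative update followed by renormalization.
    """
    fused = {c: belief[c] * observation[c] for c in CLASSES}
    total = sum(fused.values()) or 1.0
    return {c: p / total for c, p in fused.items()}

# Start from a uniform belief for one mesh face.
face_belief = {c: 1.0 / len(CLASSES) for c in CLASSES}

# Two noisy per-frame predictions from a 2D segmentation network (made up).
for frame_likelihood in (
    {"chair": 0.6, "desk": 0.3, "wall": 0.1},
    {"chair": 0.7, "desk": 0.2, "wall": 0.1},
):
    face_belief = update_face_belief(face_belief, frame_likelihood)

print(max(face_belief, key=face_belief.get), face_belief)
# "chair" dominates after two consistent observations
```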

 

Kimera works by taking in streams of images from a robot's camera, as well as inertial measurements from onboard sensors, to estimate the trajectory of the robot or camera and to reconstruct the scene as a 3D mesh, all in real-time.
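At a high level, such a pipeline interleaves two jobs per frame: update the pose estimate from images and inertial data, then extend the 3D reconstruction from the newly posed view. The loop below is only a structural sketch with stand-in functions, not Kimera's real visual-inertial odometry or meshing modules.

```python
# Structural sketch of an images + IMU -> trajectory + 3D mesh loop.
# Function bodies are illustrative stand-ins, not Kimera's real modules.

def estimate_pose(prev_pose, image, imu):
    """Stand-in for visual-inertial odometry: dead-reckon from a
    (vx, vy, dt) IMU tuple; a real front end also tracks image features."""
    x, y = prev_pose
    vx, vy, dt = imu
    return (x + vx * dt, y + vy * dt)

def update_mesh(mesh, pose, image):
    """Stand-in for meshing: record that this frame contributed geometry
    observed from `pose`; a real system fuses triangles into the mesh."""
    mesh.append({"pose": pose, "frame": image})
    return mesh

def run_pipeline(frames):
    pose, trajectory, mesh = (0.0, 0.0), [], []
    for image, imu in frames:
        pose = estimate_pose(pose, image, imu)   # 1) where is the camera now?
        trajectory.append(pose)
        mesh = update_mesh(mesh, pose, image)    # 2) extend the reconstruction
    return trajectory, mesh

# Fake input: three frames, each moving 1 m along x per 1 s step.
frames = [("img0", (1.0, 0.0, 1.0)), ("img1", (1.0, 0.0, 1.0)), ("img2", (1.0, 0.0, 1.0))]
trajectory, mesh = run_pipeline(frames)
print(trajectory[-1], len(mesh))   # (3.0, 0.0) 3
```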


To generate a semantic 3D mesh, Kimera uses an existing neural network trained on millions of real-world images, to predict the label of each set of pixels, and then projects these labels in 3D using a technique known as ray-casting, commonly used in computer graphics for real-time rendering.
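The geometry behind that projection step can be sketched as follows: cast a ray from the camera center through a labeled pixel and attach the label to the first surface the ray hits. The pinhole intrinsics and the flat plane standing in for the reconstructed mesh are simplifying assumptions for the example, not the actual Kimera implementation.

```python
import numpy as np

# Simple pinhole intrinsics (assumed values for the example).
FX = FY = 500.0        # focal lengths in pixels
CX, CY = 320.0, 240.0  # principal point

def pixel_to_ray(u, v):
    """Direction of the ray leaving the camera center through pixel (u, v)."""
    d = np.array([(u - CX) / FX, (v - CY) / FY, 1.0])
    return d / np.linalg.norm(d)

def label_surface_point(u, v, label, plane_z=3.0):
    """Cast the pixel ray until it hits the plane z = plane_z (a stand-in
    for the reconstructed mesh) and return the labeled 3D point."""
    direction = pixel_to_ray(u, v)
    t = plane_z / direction[2]     # ray parameter at the intersection
    point = t * direction          # camera sits at the origin
    return {"xyz": point.round(3).tolist(), "label": label}

# A 2D network (not shown here) says pixel (400, 260) belongs to class "chair".
print(label_surface_point(400, 260, "chair"))
# {'xyz': [0.48, 0.12, 3.0], 'label': 'chair'}
```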

 

The result is a map of a robot's environment that resembles a dense, three-dimensional mesh, where each face is color-coded as part of the objects, structures, and people within the environment.

 

If a robot were to rely on this mesh alone to navigate through its environment, the task would be computationally expensive and time-consuming. So the researchers developed algorithms to construct 3D dynamic "scene graphs" from Kimera's initial, highly dense, 3D semantic mesh. In the case of the 3D dynamic scene graphs, the associated algorithms abstract, or break down, Kimera's detailed 3D semantic mesh into distinct semantic layers, such that a robot can "see" a scene through a particular layer, or lens. This layered representation avoids a robot having to make sense of billions of points and faces in the original 3D mesh. Within the layer of objects and people, the researchers have also been able to develop algorithms that track the movement and the shape of humans in the environment in real time.
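One minimal way to picture the layering is a dictionary of layers in which coarser entities point down to finer ones, so a planner can reason at the "rooms" layer without touching the raw mesh. The layer names below follow the general idea described here (building, rooms, places, objects and agents, with the dense mesh underneath), but the structure is an illustrative toy, not the authors' implementation.

```python
# Toy layered scene graph: each layer is a dict of node -> attributes,
# and "children" edges point one layer down toward finer detail.
scene_graph = {
    "building": {"B1": {"children": ["kitchen", "living_room"]}},
    "rooms": {
        "kitchen":     {"children": ["place_1"], "building": "B1"},
        "living_room": {"children": ["place_2"], "building": "B1"},
    },
    "places": {
        "place_1": {"children": ["table_1", "person_1"], "room": "kitchen"},
        "place_2": {"children": ["chair_1"], "room": "living_room"},
    },
    "objects_and_agents": {
        "table_1":  {"label": "table",  "dynamic": False},
        "chair_1":  {"label": "chair",  "dynamic": False},
        "person_1": {"label": "person", "dynamic": True},   # tracked over time
    },
    # The dense 3D mesh would sit below this layer; omitted here.
}

def view_layer(graph, layer):
    """'Look' at the scene through a single layer, ignoring finer detail."""
    return list(graph[layer].keys())

def room_of(graph, entity):
    """Walk one level up: which room contains this object or person?"""
    for place, attrs in graph["places"].items():
        if entity in attrs["children"]:
            return attrs["room"]
    return None

print(view_layer(scene_graph, "rooms"))   # ['kitchen', 'living_room']
print(room_of(scene_graph, "person_1"))   # kitchen
```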

 

This essentially enables robots to have mental models similar to the ones humans use. And it is expected to impact many applications, including self-driving cars, search and rescue, collaborative manufacturing, and domestic robots.

 

References
Robotics: Science and Systems, July 12-16, 2020, "3D Dynamic Scene Graphs: Actionable Spatial Perception with Places, Objects, and Humans," by Antoni Rosinol, et al. © 2020 RSS. All rights reserved.

 

To view or purchase this article, please visit:
https://www.researchgate.net/publication/342881852_3D_Dynamic_Scene_Graphs_Actionable_Spatial_Perception_with_Places_Objects_and_Humans

 

To view a related video from the Massachusetts Institute of Technology, please visit:
https://www.youtube.com/watch?v=SWbofjhyPzI