Using deep imaging for higher resolution

Automation has existed since ancient Greece. Its form changes, but the intent of technology to take on repetitive tasks remains the same, and a fundamental element of its success has been the ability to image. The latest iteration is robots, and the problem with most of them in industrial automation is that they work in environments designed specifically for them. That works well as long as nothing changes, but change is inevitable. What robots should be capable of, but currently are not, is adapting quickly: seeing objects clearly and placing them in the correct orientation, enabling operations such as autonomous assembly and packaging.

Akasha Imaging is trying to change that. The California-based startup uses passive imaging across different modalities and spectra, combined with deep learning, to deliver more efficient and cost-effective feature detection, tracking, and pose estimation. Robotics is the main application and the current focus; in the future, the technology could serve packaging and navigation systems. Those are secondary, says Kartik Venkataraman, Akasha's CEO, but because adapting the technology to them would take minimal effort, they speak to the overall potential of what the company is developing. “It’s an interesting part of what this technology is capable of,” he says.
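To make the approach concrete, the sketch below shows the general shape of deep-learning pose estimation from a passive camera image: a small network regresses an object's position and orientation directly from pixels, with no projected light. This is a minimal illustration in Python with PyTorch; the architecture, layer sizes, and names are assumptions for exposition, not Akasha's actual system.

import torch
import torch.nn as nn

class PoseNet(nn.Module):
    # Toy CNN that maps an RGB frame to a 6-DoF pose estimate:
    # a 3-vector translation plus a unit quaternion for rotation.
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 7)  # 3 translation + 4 quaternion components

    def forward(self, x):
        z = self.features(x).flatten(1)
        out = self.head(z)
        t, q = out[:, :3], out[:, 3:]
        q = q / q.norm(dim=1, keepdim=True)  # normalize to a valid rotation
        return t, q

model = PoseNet()
frame = torch.rand(1, 3, 224, 224)        # stand-in for one passive camera frame
translation, rotation = model(frame)
print(translation.shape, rotation.shape)  # (1, 3) and (1, 4)

In a real system, a network like this would be trained on labeled images of the parts to be handled; the point here is only that pose comes from interpreting ordinary images rather than from timing projected light.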

From the lab

Venkataraman founded the company in 2019 along with MIT associate professor Ramesh Raskar and Achuta Kadambi. Raskar is an associate professor of media arts and sciences at MIT, and Kadambi is a former graduate student of the Media Lab whose doctoral research formed the basis for Akasha's technology.

The partners saw an opportunity in industrial automation, which in turn helped name the company. Akasha means “the foundation and essence of all things in the material world,” and it is that breadth that inspires the company's new type of imaging and deep learning, says Venkataraman. In particular, it applies to estimating the orientation and location of objects. Traditional lidar and laser-based vision systems project light of various wavelengths onto a surface and measure how long the light takes to reach the surface and bounce back, using that round-trip time to determine the surface's location.

These methods have limitations. The farther away the system needs to see, the more power its illumination requires; and the higher the desired resolution, the more light must be projected. In addition, the accuracy with which the elapsed time is measured depends on the speed of the electronic circuits, and physics sets a limit on that speed. Companies are constantly forced to decide what matters most among resolution, cost, and power. “It’s always a compromise,” Venkataraman says.
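The physics behind that compromise is simple to state. A time-of-flight system infers distance from the round trip of a light pulse, d = ct/2, so its depth resolution is tied directly to how finely its electronics can resolve time. The short Python sketch below works through the arithmetic; the numbers are illustrative only.

C = 299_792_458.0  # speed of light, m/s

def distance_m(round_trip_s):
    # Distance implied by a measured round-trip time: d = c * t / 2.
    return C * round_trip_s / 2

def depth_resolution_m(timing_resolution_s):
    # Smallest depth step resolvable at a given timing resolution.
    return C * timing_resolution_s / 2

print(f"{distance_m(66.7e-9):.2f} m")              # a ~66.7 ns round trip puts the surface ~10 m away
print(f"{depth_resolution_m(1e-9) * 100:.0f} cm")  # 1 ns of timing uncertainty -> ~15 cm of depth uncertainty

Resolving depth at the millimeter scale would require timing electronics fine to a few picoseconds, which is why circuit speed, illumination power, and cost pull against one another.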

Projected light also creates problems of its own. With shiny plastic or metal objects, the light bounces straight back, and those reflections interfere with the illumination and the accuracy of the readings. With transparent items and clear packaging, the light passes through, and the system returns an image of whatever is behind the intended target. And dark objects reflect almost no light at all, which complicates detecting them, let alone resolving their details.

Putting it into use

One of the company's aims is to improve robotics. Robots already assist in production in warehouses, but the materials they handle create the optical problems described above. The objects can also be small: a robot may, for example, need to pick up a spring 5 to 6 millimeters long and thread it onto a shaft 2 millimeters wide. Human operators can compensate for inaccuracies because they can feel what they touch, but robots have no tactile feedback, so their vision has to be accurate. If it is not, even a slight deviation can cause a jam that a person must step in to clear. And if a vision system is not reliable and accurate more than 90 percent of the time, a company creates more problems than it solves and loses money, Venkataraman says.
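A hedged sketch of the go/no-go arithmetic implicit in that task: a pick is only worth attempting if the vision system's estimated position error fits within the mechanical slack of the fit. The dimensions below (a 2.5 mm spring bore over the 2 mm shaft) are illustrative assumptions, not Akasha specifications.

def insertion_feasible(bore_mm, shaft_mm, position_error_mm):
    # Slack per side between the part's bore and the shaft it slides onto.
    clearance_mm = (bore_mm - shaft_mm) / 2
    return position_error_mm <= clearance_mm

# 0.25 mm of slack per side: sub-millimeter vision is the entry requirement.
print(insertion_feasible(2.5, 2.0, 0.2))  # True  -- attempt the insertion
print(insertion_feasible(2.5, 2.0, 0.4))  # False -- jam risk, escalate to a human

With only a quarter of a millimeter of slack per side, a vision error of 0.4 millimeters already exceeds the tolerance and produces exactly the kind of jam described above.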

Another potential application is improving automotive navigation systems. Lidar, the current state of the art, can determine that there is an object on the road, but it cannot necessarily say what kind of object it is, and that information is often useful, “in some cases vital,” Venkataraman says.

In both areas, Akasha's technology provides more. On a road or highway, it can capture the texture of a material and determine whether an obstacle is an oncoming vehicle, a pothole, an animal, or a road-work barrier. In the unstructured environment of a factory or warehouse, it can help a robot pick up that spring and place it on its shaft, or move objects from one transparent bin to another. Ultimately, that means making robots more autonomous.

With robots in factory automation, one of the major obstacles has been that most of them have no vision system at all. They can find an object only because it sits in a fixed location and they are programmed to go there. “It works, but it’s very inflexible,” Venkataraman says. If new products are introduced or a process changes, the equipment has to change as well. That takes time, money, and human intervention, and it leads to an overall loss of productivity.

Along with their limited ability to see and understand, robots lack the innate hand-eye coordination that humans have. “They can’t understand the day-to-day chaos of the world,” Venkataraman says, but he adds that with the company's technology, “I think it’s going to happen.”

As with most young companies, the next step is to prove robustness and reliability in real-world conditions, down to a “sub-millimeter level” of accuracy, Venkataraman says. After that, the next five years should bring expansion into a variety of industries. It is virtually impossible to predict which ones, but it is easier to see the universal benefit. “In the long run, we will see this improved vision driving improvements in intelligence and learning,” says Venkataraman. “In turn, that will automate more complex tasks than have been possible so far.”
