Issue link: https://iconnect007.uberflip.com/i/1502623
34 SMT007 MAGAZINE I JULY 2023 We bought four robots to make a robotic cell: a SCARA-type robot, an anthropomor- phic robot, and two 6-axis cobots. I hired some undergraduate students and told them to make it all work together. at's all the information I gave them, and we worked hand-in-hand to see what would work. e students wanted a task for the robots, so we've settled on putting together little DUPLO trees. It's something that toddlers can do very well, but machinery struggles with. Even on this contrived task, we have failed early and oen. We struggled to do easy things like setting IP addresses on some of the robots so we could communicate with them, because we didn't understand masks and subnets, and how those all things work together. e failure has been so beneficial because it's synonymous to the problems with the digital twin. If we can't even communicate on the first level, we can't get to this other data. Because we failed and learned from our failure, I understand how to add more sensors and equipment with their own IP addresses, which gets me closer to the digital twin. We just dive in, so we can see where and how we fail. What's great about Omron is that they've allowed me to do it this way. is is not a grant; this is a gi they gave us. ey said, "Just go play." We've had some companies already ask to play in our sandbox, hoping to explore some of these questions: "How do you do this? Can you help us?" We don't promise anything except we will muddle through it like a cus- tomer would and learn from it. Nolan Johnson: The operational goal is to put these DUPLOs together into a tree. In getting there, are you documenting where the process is weak? Is the goal to identify where it will be difficult to program the robots to perform the motion necessary to build the tree? at's exactly what we're trying to do. We first tried to do it without vision assistance, to see how far we could go—only to find we have to fixture everything. en we asked, "What if we do this with vision?" We find it slows down certain things, but it's more robust. Now we're seeing how those limitations from the differ- ent approaches can be exploited. If we use vision, how quickly can we get the information we need and act on it? If I don't use vision, it's quicker from this other aspect, but maybe not robust when there's something that's not sup- posed to be there or has shied in some way. We then keep throwing other sensors at the problem to understand those limitations. We looked at force feedback and how that could be done, for example. We're always looking at those constraints on how these things work together. But, even big- ger than that, consider whether I have the sen- sor and how I talk to it. How do I get that infor- mation quicker, and how do I get the data that the sensor needs loaded in? is is almost like Industry 2.0 Lean manu- facturing, except we're applying Industry 4.0- style technology, then leveraging that feed- back we didn't have before so that we can do it right. at's the type of continuous improve- ment we're doing. Dr. Phil Voglewede