Recap of IDETC 2015

IDETC 2015 concluded yesterday and there were a bunch of great talks and events I wanted to recap.

Conference Overall

This year, ASME held IDETC in Boston's beautiful Back Bay neighborhood, with great nearby restaurants and perfect weather. The conference has grown this year with the co-location of the expanded AM3D industry conference. This comes with the additional logistical complexity of having more people, but it was also nice to run into industry friends of mine who I might not have seen otherwise. I'm looking forward to the more intimate Design Computing and Cognition conference in 2016, which is a nice counterpoint to the behemoth IDETC has become. Overall though, the conference was nicely done, and I'm looking forward to IDETC next year in Charlotte.

NSF Broadening Participation Committee Workshop: Managing your Career Like an Entrepreneur

This Pre-conference Sunday workshop was the first NSF BP workshop I've attended, but it certainly won't be the last. Great community and diversity of ideas, with a fun atmosphere. We discussed the connection between Entrepreneurship and Academia, and how to identify strong/weak entrepreneurial skills within ourselves that we could work on. I had some great conversations, particularly with Li Shu of U-Toronto on how to build high functioning research teams.

The best sign of the quality of the workshop: it raised more questions than it answered. Great job to the Broadening Participation committee, particularly co-chairs Susan Finger and Kate Fu.

Design Theory and Methodology (DTM)

I spent most of my in-session time at the Design Theory and Methodology sub-conference this year. The room is always packed, the topics are diverse, and the quality tends to be very high overall. Specifically, here were some talks/papers that I thought raised some great questions:

DTM Creativity and Ideation

The Creativity and Ideation sessions were split into two sections this year (a sign of the topic's growing popularity), and they covered a wide gamut:

  • Bradley Camburn looked at the prototyping behaviors of participants on Instructables.com, and collected some interesting principles on how DIY designers and makers create ideas. It would be interesting to see how members of that community interact with each other to promote ideas, a la Liz Gerber's work on crowd-funding platforms and my work on Open-Source design communities.
  • Mahmoud Dinar looked at how to capture and understand problem formulation in a data-driven fashion. It would be fascinating to combine some of these ideas with unsupervised learning techniques as they collect more information from their web-tool. I'm definitely going to keep my eye on this!
  • Diana Moreno presented some great work on the application of creativity within the context of Service Design. I've been following her work for some time and it is great to see the expansion of DTM's efforts beyond traditional mechanical design. Dave Brown and Christine Toh sparked a great discussion during the Q/A around the role of Fun and Novelty.
  • Ryan Arlitt gave an informative and entertaining talk that showed some of the power of combining automated text analysis with traditional Design by Analogy. The part I found the most intriguing was the delineation between compound and single analogies, and how to capture those connections in description text.
  • Bryan Levy gave an interesting talk on the open problem of identifying the fundamental characteristics of design problems; the central idea being that we need a minimal set of maximally diverse benchmark problems to consistently test our design methods. This is far from a solved problem, but he brought up some great points and the Q/A discussion shined.
  • Caitlin Dippo presented some follow-up work to her IDETC 2013 paper using the Alternative Uses Test (which I remember enjoying back then, so I'm glad they came back to DTM this year). This time, she focused on the role of concept elaboration, or how well-described a concept is. This reminded me of the concept of minimum information content/minimum description length in Computer Science, and how that might connect to design representations.
  • There were also some great talks by Christine Toh on Ownership Bias, Tahira Reid on reducing sketch inhibition, and by Tyler Johnson on linking ideation/creativity tests across Psychology and Engineering.

Really impressed with the diversity of topics this year that kept the session interesting. I gave the last talk in the session on the role of Statistical Tests in Design Cognition. It was primarily a review paper on the debate about Null Hypothesis Statistical Testing and how DTM has evolved over the past decade. We had an interesting discussion both after the talk and beyond the session, and I'm looking forward to exploring some of my ideas, particularly around review checklists, with the community in future years.

DTM User Preferences

The user preferences session this year was a brilliant display of diverse techniques, and also (personally interesting to me) the role that large-scale data can play in uncovering new questions:

  • Erin McDonald used eye-tracking techniques to debunk the assumption that shared features cancel each other out when users are considering alternatives. I love this kind of "things aren't as simple as you think they are" paper, since such papers always make me revisit my assumptions; I'm always better off as a result.
  • Alex Burnap presented this year's DTM Best Paper, which brought together a bunch of nice techniques under one umbrella. Specifically, he looked at the trade-offs car designers face between designs that are novel and those that preserve brand identity; a hard problem! They used a combination of crowd-sourced evaluations with a home-grown distance function to define a set of linear trade-offs for designers to explore. It reminded me of Pareto Frontiers, and could probably be expanded to non-linear convex frontiers in the future. A fascinating challenge they had to face is in defining reasonable distance metrics; this brought to mind techniques from the metric-learning community, where you can use human evaluations to actually learn the distance metric. So many neat directions they could take it! No wonder it was selected for the best paper award.
  • Cory Schaffhausen gave a talk near-and-dear to my own interests in identifying collections of user needs. His approach was interesting in that he studied the convergence and concentration of user needs collected during an Amazon Mechanical Turk project. The fascinating thing about the work was that unique user needs did not seem to saturate as the sample size increased; this is odd, since you'd imagine that a given design prompt would have a finite number of applicable user needs. The talk highlighted the power of combining user needs with crowd-sourcing and automated text analysis, and I'm looking forward to following where this work goes from here.
  • Jessica Armstrong gave a nice counterpoint to the more data-centric talks with her work on designing suits for empathic design for disabilities. This highlights the diversity of how DTM papers approach user preference problems.
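The metric-learning idea from the Best Paper discussion is worth a quick illustration. Below is a minimal sketch of my own (a toy construction, not anything from the paper itself): given human triplet judgments of the form "design a is closer to b than to c," we fit non-negative per-feature weights for a weighted Euclidean distance using a hinge loss and subgradient descent. The `learn_metric` helper and the feature data are hypothetical.

```python
import numpy as np

def learn_metric(X, triplets, lr=0.05, epochs=300):
    """Learn non-negative per-feature weights w for the distance
    d_w(x, y)^2 = sum_i w_i * (x_i - y_i)^2, so that each human
    judgment (a, b, c) -- "a is closer to b than to c" -- is
    satisfied with a unit margin (hinge loss, subgradient descent)."""
    w = np.ones(X.shape[1])
    for _ in range(epochs):
        grad = np.zeros_like(w)
        for a, b, c in triplets:
            d_ab = (X[a] - X[b]) ** 2
            d_ac = (X[a] - X[c]) ** 2
            if w @ d_ab - w @ d_ac + 1.0 > 0:  # margin violated
                grad += d_ab - d_ac
        w = np.clip(w - lr * grad / max(len(triplets), 1), 0.0, None)
    return w

# Hypothetical data: two design features, but the human judgment
# (design 0 is closer to design 1 than to design 2) depends only
# on feature 0, so its weight should dominate after training.
X = np.array([[0.0, 0.0], [0.2, 9.0], [5.0, 0.1]])
triplets = [(0, 1, 2)]
w = learn_metric(X, triplets)
```

Even in this tiny example, the weight on the feature the "human" attends to grows while the irrelevant feature's weight is driven toward zero, which is the basic appeal of learning the metric rather than hand-picking it.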

Design Computing

I've been fascinated by Design Computing, particularly Computational Design Synthesis, since following the great work of Matt Campbell and Kristi Shea over the years. I missed the beginning and end of this session, but the talks I did catch were great:

  • I only caught the tail-end of Andy Dong's talk on graph transformations of function structures. In his usual clarity, Andy connected the worlds of graph theory to real-world properties of physical and functional relationships. Given the post-talk Q/A that I did see, I anticipate that I'll enjoy reading through the whole paper.
  • Joran Booth talked about the relationship between function trees and the final design. He brought up some neat ideas around the quality of function decompositions which I'm still mulling over.
  • Clemens Münzer presented some new work in a line of functional synthesis that brought together prior work on the connection between configuration design and the Boolean Satisfiability Problem, and how to conduct joint optimization using simulation models. I love this kind of work because it marries the pragmatic choices one needs to make when adapting optimization to real engineering problems, with the elegance of finding unique connections to well-studied problems in computer science (I remember reading their first paper on connecting design synthesis to the SAT problem---brilliant!).
  • Fritz Stöckli presented work on optimizing Brachiating (i.e., swinging) passive robots using graph grammars. As expected, the objective space is highly non-linear, and so they turned to techniques like simulated annealing and showed some well-presented results. This got me thinking about relaxations within complex design configuration problems, and how we might approximate the solution space to simplify search.
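Since simulated annealing came up, here is a minimal sketch of how it searches a discrete configuration space by occasionally accepting worse moves. This is the textbook algorithm, not the robot-grammar setup from the talk; the bit-string encoding and `cost` function are hypothetical stand-ins for a rugged configuration objective.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995,
                        steps=5000, seed=0):
    """Generic simulated annealing over a discrete configuration space:
    accept worse neighbors with probability exp(-delta/T), so the search
    can climb out of local optima as the temperature T cools."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best_x, best_c = x, c
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        cy = cost(y)
        if cy - c <= 0 or rng.random() < math.exp(-(cy - c) / t):
            x, c = y, cy
            if c < best_c:
                best_x, best_c = x, c
        t *= cooling
    return best_x, best_c

# Hypothetical configuration problem: choose components (bits) whose
# adjacent-pair couplings reward contiguous runs, making the landscape
# deceptive for purely greedy search (turning on the first bit costs you).
def cost(bits):
    ones = sum(bits)
    pairs = sum(bits[i] * bits[(i + 1) % len(bits)]
                for i in range(len(bits)))
    return ones - 3 * pairs

def flip_one(bits, rng):
    i = rng.randrange(len(bits))
    return bits[:i] + (1 - bits[i],) + bits[i + 1:]

best, val = simulated_annealing(cost, flip_one, (0,) * 12)
```

The early high-temperature phase is what lets the search pay the up-front cost of the first component before the pair rewards kick in.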

This led to a great conversation I had with Warren Seering and Chad Foster about how Design Computing could benefit from bridging to some of the theoretical work in computational complexity and lower-/upper-bounds on performance (e.g., Branch-and-Bound and Upper Confidence Bound/Minimal Regret algorithms) that are common in Computer Science. Some of the work by Kristi's group would be the most state-of-the-art that I can think of; I think the field is ripe for more efforts in this area. I anticipate recent work in Multi-Armed and Infinite-Armed Bandits will be relevant as we broach Design Computing topics that combine discrete and continuous parameter spaces.
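To make the bandit connection concrete, here is a small sketch of the classic UCB1 rule (again, the textbook algorithm, not any specific design-computing system). The noisy "design candidates" below are hypothetical: each pull is one evaluation of a candidate, and the confidence bonus steers evaluations toward under-explored designs while regret grows only logarithmically.

```python
import math
import random

def ucb1(budget, arms, seed=0):
    """UCB1: pull the arm maximizing (mean reward) + sqrt(2 ln t / n).
    The bonus shrinks as an arm is tried more, so the policy explores
    uncertain designs while exploiting the best-looking one."""
    rng = random.Random(seed)
    n = [0] * len(arms)        # pulls per arm
    total = [0.0] * len(arms)  # summed rewards per arm
    for i, arm in enumerate(arms):  # try each arm once to initialize
        total[i] += arm(rng)
        n[i] = 1
    for t in range(len(arms) + 1, budget + 1):
        scores = [total[i] / n[i] + math.sqrt(2.0 * math.log(t) / n[i])
                  for i in range(len(arms))]
        i = max(range(len(arms)), key=scores.__getitem__)
        total[i] += arms[i](rng)
        n[i] += 1
    return n, [total[i] / n[i] for i in range(len(arms))]

# Hypothetical design candidates with noisy performance scores;
# candidate 1 is truly the best (mean 0.7).
arms = [lambda rng: rng.gauss(0.3, 0.1),
        lambda rng: rng.gauss(0.7, 0.1),
        lambda rng: rng.gauss(0.5, 0.1)]
counts, means = ucb1(500, arms)
```

After 500 evaluations the pull counts concentrate on the best candidate, which is exactly the exploit/explore behavior that makes these methods attractive for expensive design evaluations.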

DTM Trends and Technologies Impacting the Design Process

This was a neat session that brought together a set of topics that the chairs thought might represent future directions for DTM. Two papers were highly related to the intersection of design and data:

  • Chaoyang Song and Jianxi Luo summarized products that resulted from Crowd-funding, specifically on platforms such as Kickstarter and Indiegogo. Again this reminded me of Liz Gerber's work on crowd-funding and my work on online design communities. There does seem to be a growing interest in how to leverage "the crowd" in various capacities to aid product development, whether through funding or design itself. This was exemplified through the next DTM Trends talk:
  • Dev Ramanujan presented a Crowd Co-Creation study that used professional designers to seed a crowd-interface where users could parametrically alter the 3D models to produce new designs. Again, we see the use of crowd services like MTurk, as well as adoption of ML techniques such as k-means to help interpret the designs that get produced.
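Since k-means keeps appearing as the interpretation step for crowd-generated designs, here is a minimal sketch of how it could group parametric submissions into a few representative "styles." The `kmeans` helper and the synthetic parameter vectors are hypothetical, not from the study itself.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: assign each parameter vector to its nearest
    centroid, then move each centroid to the mean of its cluster."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # squared distances, shape (n_points, k)
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Hypothetical crowd submissions: 40 parameter vectors drawn around two
# distinct design "styles"; k-means should recover the two groups.
rng = np.random.default_rng(1)
style_a = rng.normal([1.0, 0.2], 0.05, size=(20, 2))
style_b = rng.normal([0.2, 1.0], 0.05, size=(20, 2))
X = np.vstack([style_a, style_b])
labels, centroids = kmeans(X, k=2)
```

The recovered centroids then act as interpretable summaries ("here are the two kinds of design the crowd produced") without anyone inspecting all 40 models by hand.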

DTM Overall

There were several growing themes at DTM this year, some of which included:

  • The use of crowd platforms (MTurk, Instructables, etc.) for the purpose of conducting empirical work.
  • The combination of advanced statistical and computational techniques with human-generated data, whether through databases, crowd-sourcing, or otherwise.
  • The expansion of the scope that DTM typically covers; for example, product-service systems.

I'm excited to see where our community takes these at next year's conference.

Design Automation Conference: Data-Driven Design

Two years ago the Design Automation Conference had "Data-Driven Design" as an emerging research topic, and now the Data-Driven Design session at DAC is in full swing. I was really fascinated by this set of talks because of their obvious connection to my area of applying machine learning techniques to design data. Although I usually participate in the DTM sessions, this DAC session stole me away due to the sheer concentration of peer researchers who are doing great work.

  • Yi (Max) Ren kicked things off with his paper, which won this year's DAC Best Paper award. He explored how humans select designs and control policies through an online crowd-sourced game called "Eco-Racer". He then compared human performance to that of a global optimizer (specifically, EGO) and looked at the differences in convergence performance to the optimal policy. He highlighted the challenge of finding the right crowd to provide reasonable initial data for policy exploration. This reminded me a lot of parallels in robotics with apprenticeship learning and off-policy learning, though those have slightly different goals than what Max was trying to accomplish. As we start to explore how humans and computers can design in semi-supervised fashion, these connections with optimal control exploration and trust-regions from the ML community will become increasingly important.
  • Harrison Kim then presented work on predictive modeling of product returns. The key to the paper was in combining two disparate types of prior product return models: one that models the sales data and another that models product return distributions. By combining the two you, in essence, capture the relationship between these two linked distributions and can build a joint model that uses both datasets.
  • Hyunmin Cheong gave two back-to-back presentations on some of the work he is spearheading at Autodesk Research. The first detailed a crowd-sourcing game called "find the pretender" based on a popular Chinese game show. The brilliance behind this approach is how he takes the fairly simple game mechanic (providing text information for objects), and leverages it to get functional data at the right level of abstraction to be useful in functional design. The solution looks both fun and elegant.
  • His second paper extracted functional relationships as what he called "Artifact-Function-Flow" triplets; for example, "Battery-Stores-Electrical Energy". He does this by selecting a sub-corpus of Wikipedia and then using a combination of sentence parsers and word-similarity measures to collect the triplets. This was a great example of how domain knowledge regarding functions can combine with modern text-processing and crowd-sourcing techniques. Both this paper and his first show great creativity in combining the best parts of DAC and DTM together. I also had no idea Autodesk was doing this kind of work, which was great news!
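To give a flavor of triplet extraction, here is a toy, regex-level sketch. The actual pipeline from the talk used real sentence parsers and word-similarity measures over a Wikipedia sub-corpus, so treat this as illustrative only; the verb lexicon, `extract_triplets`, and the two-sentence corpus are all hypothetical.

```python
import re

# Toy pattern-based extractor: scan simple sentences for an
# "<artifact> <function-verb> <flow>" pattern using a small,
# hand-made lexicon of function verbs.
FUNCTION_VERBS = {"stores": "store", "converts": "convert",
                  "transmits": "transmit", "regulates": "regulate"}

def extract_triplets(text):
    """Return (artifact, function, flow) triplets from simple sentences."""
    triplets = []
    for sentence in re.split(r"[.!?]", text):
        words = sentence.split()
        for i, word in enumerate(words):
            verb = FUNCTION_VERBS.get(word.lower())
            if verb and 0 < i < len(words) - 1:
                artifact = words[i - 1].lower()
                flow = " ".join(words[i + 1:]).lower()
                triplets.append((artifact, verb, flow))
    return triplets

corpus = ("A battery stores electrical energy. "
          "The gearbox converts rotational speed.")
triplets = extract_triplets(corpus)
```

Running this on the two-sentence corpus yields ("battery", "store", "electrical energy") and ("gearbox", "convert", "rotational speed"), which shows why the hard part of the real system is robustness: free text rarely cooperates with patterns this simple.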

DFMLC: Panel on Additive Manufacturing's Impact on Design for Manufacturing, Assembly, and Life Cycle

Kazuhiro Saitou organized a great panel of academic and industry experts to discuss what role Additive Manufacturing is playing in DFM, DFA, and DFMLC. After the panelists gave an overview of their opinions of the field, he posed the question: "Is Complexity Free?"

The discussion took a series of interesting turns, with several people starting off agreeing that manufacturing complexity has essentially become free, pointing to the GE LEAP engine fuel injectors as one example. However, Erhan Arishoy of Siemens Corporate Research noted that while manufacturing might be free, there is no such thing as a free lunch: the costs of AM now come on the design and software side. For example, even if one could create a complex, light-weight lattice model, how would one store or transmit that data to machines in today's file formats? This then opened up additional concerns about other aspects of complexity, such as surface finish and precision, which are far from "free" in AM.