Tag Archives: Dragonfly


Michael Braund, Chris Walsh, and I just presented a paper at this year's ACADIA, a conference focused on digital tools and techniques in architecture. The paper features a human-space interaction analysis tool called, for lack of a better or even relevant name, "Dragonfly". I will be posting some videos, and hopefully a live demo, over the next few days. We received some great feedback (mainly urging us to differentiate ourselves from existing tools and to do more testing), and we will certainly take those suggestions to heart.

Some of the standout papers for me were Achim Menges and company's work with wood, Maciej Kaczynski et al.'s work on digital fabrication (i.e. robots!) of thin masonry vaults, Skylar Tibbits' work on large-scale self-assembly, and of course Mark Foster Gage's keynote telling us all to shut up about computation. Actually, I think he was telling Patrik Schumacher to shut up about computation… but perhaps it goes for the rest of us as well.


Filed under Architecture, Launch

Dragonfly: New AI Agent engine based only on Perception

Here's a preview of the completely new AI agent engine in Dragonfly. We're really excited about it, for a couple of reasons…

The first is that this AI engine is based only on perception, i.e. what the agent "sees" in a given frame. No navigation meshes, no looking around corners – just raycasting. This means that you don't need to preprocess your geometry at all: simply drop in the agents and let them go…
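The perception-only idea can be sketched roughly like this. Everything here is illustrative, not Dragonfly's actual code: the world is reduced to 2D wall segments, the "raycast" is a ray–segment intersection, and the steering rule (head toward the most open ray) is the simplest possible stand-in for whatever the real engine does each frame.

```python
import math

def ray_segment_distance(origin, angle, seg, max_dist=10.0):
    """Distance along a ray from `origin` at `angle` to wall segment `seg`,
    or `max_dist` if the ray misses it -- a 2D stand-in for a raycast."""
    ox, oy = origin
    dx, dy = math.cos(angle), math.sin(angle)
    (x1, y1), (x2, y2) = seg
    sx, sy = x2 - x1, y2 - y1
    denom = dx * sy - dy * sx
    if abs(denom) < 1e-9:            # ray parallel to the segment
        return max_dist
    t = ((x1 - ox) * sy - (y1 - oy) * sx) / denom   # distance along the ray
    u = ((x1 - ox) * dy - (y1 - oy) * dx) / denom   # parameter along the segment
    if t >= 0 and 0 <= u <= 1:
        return min(t, max_dist)
    return max_dist

def steer(origin, heading, walls, n_rays=9, fov=math.pi / 2):
    """Cast a fan of rays within the field of view and return the ray
    direction with the greatest clearance -- the agent's next heading."""
    best_angle, best_clear = heading, -1.0
    for i in range(n_rays):
        a = heading - fov / 2 + fov * i / (n_rays - 1)
        clear = min(ray_segment_distance(origin, a, w) for w in walls)
        if clear > best_clear:
            best_clear, best_angle = clear, a
    return best_angle
```

With a wall directly ahead (`walls = [((2, -1), (2, 1))]`, agent at the origin heading east), the straight-ahead ray reports a hit at distance 2, so `steer` swings the heading toward an edge of the fan where the rays clear the wall. No preprocessing of the geometry is needed: the walls are queried as-is, every frame.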

The second is that they go up (and down…) ramps! This may seem fairly trivial, but we actually found it to be quite tricky without nav meshes or preprocessing. How do you know what a ramp is? When do you go up one? And once you're on one, how do you know to keep going?
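One way to frame those questions: without a nav mesh, "is this a ramp?" reduces to sampling the ground height a short distance ahead (a downward raycast in the engine) and checking the slope. The sketch below is a 1D illustration with a height function standing in for the raycast; the slope threshold and step size are hypothetical, not Dragonfly's actual values.

```python
import math

MAX_WALKABLE_SLOPE = math.radians(30)   # steeper than this reads as a wall or drop
STEP = 0.5                              # look-ahead distance per frame

def walkable(ground_height, x, heading=1.0):
    """True if the ground just ahead rises or falls gently enough to keep going."""
    here = ground_height(x)
    ahead = ground_height(x + heading * STEP)
    slope = math.atan2(ahead - here, STEP)
    return abs(slope) <= MAX_WALKABLE_SLOPE

# A toy 1D world: flat floor, then a gentle 1:4 ramp from x = 2, then an
# abrupt 5-unit rise at x = 6 that should read as a wall.
def ground(x):
    if x < 2:
        return 0.0
    if x < 6:
        return (x - 2) * 0.25   # ~14 degree ramp: walkable
    return 5.0                  # near-vertical rise: not walkable
```

An agent walking right keeps going on the flat floor and up the ramp, because each frame the sampled slope stays under the threshold; approaching the wall, the slope check fails and it must turn. That one repeated test answers all three questions at once: a ramp is never identified as a thing, it just keeps passing the per-frame walkability check.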




Filed under Architecture, Architecture in Combination, Dragonfly

Buildable vs. Usable

A few years ago, a major driver for research in architectural geometry was the growing gap between what was imaginable and what was buildable. With big advances in the design and construction of freeform surfaces, parametric models, and live physics engines, that gap has narrowed considerably. These days, however, I see the same kind of gap growing between what is buildable and what is usable.

Our methods of evaluating the usability of designs, even when the design is the result of real-time-sensory-input-fed-into-a-genetic-algorithm-that-optimizes-some-structural-property, are still very much based on human intuition and previous experience. Now, I am very much in favour of preserving and developing the role of intuition and experience in this digital design age, but I can't help noticing the similarity to the problems engineers faced when confronted with a doubly curved surface not five years ago.

Motivated by this, Mike Braund, a Ph.D. candidate at York University, and I have been exploring the potential of using ecological psychology to quantify, in some way, the usability of digital designs. So far, we have implemented a rudimentary version of ecological psychology in the Grasshopper environment as a proof of concept, but we need to seriously rethink the code in order to develop the idea further. Ultimately, we are trying to create a simulation environment that can quantify and inform the design of architectural spaces based not only on the intentions of the designer, but also on the intentions of the user.
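To give a flavour of what "quantifying usability" via ecological psychology can mean: affordances are body-scaled, so a feature is measured relative to the user's dimensions, not in absolute units. Classic aperture studies in the field (Warren and Whang) found people judge an opening passable without turning when its width exceeds roughly 1.3 times their shoulder width. The sketch below applies that single ratio to a design's doorways; it is an illustrative toy, not our actual Grasshopper component.

```python
# Body-scaled passability threshold from the ecological psychology
# literature (aperture width / shoulder width); treating a whole design's
# doorways this way is our hypothetical example.
PASSABLE_RATIO = 1.3

def affords_passage(aperture_width, shoulder_width):
    """Body-scaled test: can this user walk straight through this opening?"""
    return aperture_width / shoulder_width >= PASSABLE_RATIO

def usability_score(aperture_widths, shoulder_width):
    """Fraction of a design's apertures that afford straight-through
    passage to a user with the given shoulder width."""
    if not aperture_widths:
        return 0.0
    passable = sum(affords_passage(w, shoulder_width) for w in aperture_widths)
    return passable / len(aperture_widths)
```

The point of the body scaling is that the same plan scores differently for different users: a 0.9 m doorway affords passage to someone with 0.45 m shoulders, while a 0.5 m gap does not, so the design's score reflects the user's intentions and body, not just the designer's dimensions.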


Filed under Architecture, Grasshopper