Katie Resnick

Journal Entry For
Module 8 - Gen Des and ML

Module 8 Questions:

  1. How does defining the design objectives influence the outcomes in a parametric design study using Generative Design in Revit? Provide examples of how different objectives might result in varied design options.

In a parametric design study, defining the design objectives sets the tone for everything that comes after. It tells the program what to prioritize, and that directly shapes the design options you get. For example, if your objective is to minimize construction cost, you’ll end up with more compact forms because they reduce gross floor area (GFA). If your goal is to maximize daylight, you’ll probably see wider footprints and more glazing. Because the design options shift depending on what you're optimizing for, defining those objectives early makes sure the generated outputs are actually relevant to your goals.
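To make that concrete, here's a minimal Python sketch (the option data and the daylight proxy are hypothetical, not from the lecture) showing how the same set of design options ranks differently depending on which objective you score them against:

```python
# Minimal sketch (hypothetical numbers): scoring the same design options
# against two different objectives to show how the "best" option changes.

options = [
    {"name": "compact tower",   "gfa": 50000, "facade_area": 18000},
    {"name": "wide bar",        "gfa": 50000, "facade_area": 30000},
    {"name": "small footprint", "gfa": 35000, "facade_area": 15000},
]

def construction_cost(opt, cost_per_sf=200):
    # Objective 1: minimize cost, driven directly by gross floor area.
    return opt["gfa"] * cost_per_sf

def daylight_proxy(opt):
    # Objective 2: maximize daylight, approximated here by envelope per floor area.
    return opt["facade_area"] / opt["gfa"]

best_for_cost = min(options, key=construction_cost)
best_for_daylight = max(options, key=daylight_proxy)

print(best_for_cost["name"])      # the smallest-GFA option wins on cost
print(best_for_daylight["name"])  # the option with the most envelope wins on daylight
```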

  1. Discuss the importance of identifying target taxonomies when generating synthetic datasets in architectural design. How can this help in managing large datasets and ensuring diversity and accuracy in your designs?

Identifying target taxonomies helps create a synthetic dataset that captures a broad range of possible designs. It forces you to be intentional about the kinds of geometry you're generating, so the dataset doesn’t end up skewed or repetitive. When you’re dealing with large datasets, having these categories also helps you stay organized and keeps you from generating a pile of near-identical designs and losing design diversity.
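As a rough illustration, a taxonomy can be as simple as a list of footprint categories that every generated sample gets tagged with; counting samples per category quickly exposes gaps or skew. The category names and the classify helper below are assumptions for the sake of the sketch:

```python
# Sketch (assumed categories): tagging each generated design with a taxonomy
# label, then counting per category to spot a skewed or repetitive dataset.
from collections import Counter

TAXONOMY = ["bar", "L-shape", "courtyard", "star-polygon"]

def classify(design):
    # Placeholder: in a real Dynamo workflow this would inspect the geometry
    # or read the parameters that generated it.
    return design["type"]

dataset = [
    {"type": "bar"}, {"type": "bar"}, {"type": "courtyard"},
    {"type": "star-polygon"}, {"type": "bar"},
]

counts = Counter(classify(d) for d in dataset)
for category in TAXONOMY:
    print(category, counts.get(category, 0))  # "L-shape": 0 flags a diversity gap
```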

  1. What are the potential benefits and challenges of using automated workflows for generating diverse synthetic datasets in parametric design? How can modularity and scalability be achieved in such workflows?

Automated workflows for generating synthetic datasets are really powerful, especially when you're trying to test a lot of options quickly, because they save time and make it easier to explore more combinations than you could manually. That said, it's also easy to introduce bias, like over-representing one building type, which makes the dataset less useful. To make the workflow modular and scalable, you need to design it so you can swap out inputs or geometry categories without rebuilding the whole thing from scratch. In Dynamo, that means building the graph from flexible, reusable nodes (or custom nodes) that can take new inputs as the study scales.
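A hedged sketch of what that modularity could look like inside a Python script node: each geometry category is its own generator function in a registry, so adding or swapping a category doesn't require rebuilding the rest of the workflow (the function names and parameter ranges here are invented for illustration):

```python
# Sketch of a modular generator registry (all names hypothetical): each
# footprint type is its own function, so new categories can be added or
# swapped without touching the rest of the workflow.
import random

def rectangle_footprint(width, depth):
    return {"type": "rectangle", "area": width * depth}

def l_shape_footprint(width, depth, notch=0.4):
    # L-shape modeled as a rectangle with a notch (notch * width by notch * depth) removed.
    return {"type": "L-shape", "area": width * depth * (1 - notch * notch)}

GENERATORS = {
    "rectangle": rectangle_footprint,
    "L-shape": l_shape_footprint,
}

def generate_samples(n, kinds=GENERATORS):
    samples = []
    for _ in range(n):
        kind = random.choice(list(kinds))
        width, depth = random.uniform(20, 60), random.uniform(20, 60)
        samples.append(kinds[kind](width, depth))
    return samples

print(len(generate_samples(100)))  # scale up by changing n; extend by adding to GENERATORS
```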

  1. Explore the role of iterative processes in optimizing design options within the generative design framework. Why might it be necessary to tweak and repeat studies, and how can this approach lead to better design outcomes?

Iteration is an important part of the generative design process because the first run of a study rarely gives you exactly what you want. You might need to tweak your parameters, adjust constraints, or even redefine your objectives once you see how the initial outputs perform. It's a cycle of test, evaluate, and refine, and going through that process usually leads to better results in the long run: you end up with a broader range of options to pull from, and it becomes more obvious which parameters are the most influential.

  1. Describe the general workflow of creating a generative design study in Revit, as presented in the lecture. What are the key steps involved, and how do they contribute to the generation of optimized design options?

The general workflow for setting up a generative design study in Revit starts with defining your design objective as something measurable, like minimizing cost. For example, you might set a cost function of $200 per square foot times the gross floor area. Then you move into Dynamo to establish the parameters you want to explore, like different footprint shapes or building heights. After that, you choose a solver depending on your goal (Randomize is good for quick samples, Cross-Product gives thorough coverage, Optimize targets a specific goal). Once that’s set up, you run the study and generate design options, each labeled with its performance metric. From there, you can start identifying trends and selecting promising solutions.
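As a quick worked example of that first step, the cost objective described above ($200 per square foot times GFA) can be written as a small function; the footprint area and floor count below are just illustrative numbers:

```python
# Worked example of the measurable objective described above (numbers are
# illustrative): cost = $200/SF * gross floor area, where GFA is footprint
# area times floor count.

COST_PER_SF = 200  # dollars per square foot

def gross_floor_area(footprint_area_sf, num_floors):
    return footprint_area_sf * num_floors

def cost_objective(footprint_area_sf, num_floors):
    return COST_PER_SF * gross_floor_area(footprint_area_sf, num_floors)

# A 10,000 SF footprint on 5 floors -> 50,000 SF GFA -> $10,000,000
print(cost_objective(10_000, 5))
```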

  1. Given the properties of a good synthetic dataset outlined in the class, such as being large, labeled, parametric, expandable, diverse, and balanced, how would you apply these principles to create a dataset for a specific building morphology study using Dynamo? Outline your approach.

If I were creating a dataset for a study focused on a certain building morphology (like star polygons), I’d start by setting up parametric controls in Dynamo for the building’s footprint, depth, and number of points. I’d make sure I could generate a large number of variations and label each one with an outcome, like cost. The dataset would need to be big enough to train a model, and I’d make sure each data point had consistent labeling. I’d also set things up so the study could be expanded later by adding new shapes or adjusting the cost formula. To keep it balanced, I’d generate an equal number of samples from different footprint categories so that one type didn’t dominate the data.
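A minimal sketch of that approach, assuming star-polygon footprints parameterized by point count and radii (the parameter ranges and cost formula here are illustrative assumptions): each sample is labeled with its cost, the categories get equal sample counts so the set stays balanced, and expanding it later just means adding point counts or swapping the label function:

```python
# Sketch of a small parametric dataset generator for star-polygon footprints:
# large, labeled with a cost outcome, balanced across point counts, and easy
# to expand by adding new point counts or changing the cost function.
import math

COST_PER_SF = 200
FLOORS = 5

def star_area(num_points, outer_r, inner_r):
    # Area of a star polygon with 2 * num_points vertices alternating between
    # the outer and inner radius.
    return num_points * outer_r * inner_r * math.sin(math.pi / num_points)

def build_dataset(point_counts=(5, 6, 7, 8), samples_per_category=50):
    dataset = []
    for n in point_counts:                     # balanced: same count per category
        for i in range(samples_per_category):
            outer_r = 60 + i                   # simple parametric sweep of radii
            inner_r = 0.5 * outer_r
            area = star_area(n, outer_r, inner_r)
            dataset.append({
                "points": n,
                "outer_r": outer_r,
                "inner_r": inner_r,
                "gfa": area * FLOORS,
                "cost": COST_PER_SF * area * FLOORS,  # the label
            })
    return dataset

data = build_dataset()
print(len(data))  # 4 categories * 50 samples = 200 labeled rows
```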

  1. Identify and discuss the four different solvers mentioned in the lecture that can be used to generate building masses in the Generative Design tool. How do these solvers impact the sample space and variety of design options produced?

The Randomize solver spits out random combinations of input parameters, which is good if you want to explore a broad sample quickly. The Cross-Product solver runs every possible combination, which gives you full coverage but takes more time. The Like-This solver generates new options that are similar to a selected base design, which is helpful when you're refining. The Optimize solver is the most targeted, as it uses algorithms to home in on the best-performing solutions based on your objective. Because each solver affects the sample space differently, picking the right one depends on whether you’re trying to explore options broadly or dial in on a specific variable.
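A small sketch of how the first two solvers differ in coverage, using a made-up parameter space: Cross-Product enumerates every combination, while Randomize only draws a limited sample from the same space:

```python
# Sketch comparing how two solver strategies cover the same parameter space
# (parameter values are hypothetical). Cross-product enumerates every
# combination; randomize draws a limited sample from the same space.
import itertools, random

widths  = [20, 30, 40, 50]
depths  = [20, 30, 40, 50]
heights = [3, 6, 9, 12]

cross_product = list(itertools.product(widths, depths, heights))
randomized = [(random.choice(widths), random.choice(depths), random.choice(heights))
              for _ in range(10)]

print(len(cross_product))  # 64 combinations: full coverage, more runtime
print(len(randomized))     # 10 combinations: quick, scattered sample
```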

  1. Reflect on the examples of building masses generated with different solvers from the class handout. What insights can you gain about the relationship between solver choice and design diversity? How would you leverage this understanding in a practical parametric design project using Dynamo?

The Randomize solver gave a scattered mix of results, while the Cross-Product solver covered the full design space. The Like-This solver gave variations that stayed close to the original, and the Optimize solver zeroed in on the lowest-cost options. In a real project, I’d probably start with Cross-Product to map the full space, then use Optimize once I had a clearer idea of what my goals were. That way I’d get both breadth and precision without wasting time or resources on designs that don’t meet my goals.