Module 8 Questions:
- How does defining the design objectives influence the outcomes in a parametric design study using Generative Design in Revit? Provide examples of how different objectives might result in varied design options.
The design objectives guide the selection of an appropriate generative design solver: they determine both the type of design results produced and the criteria for evaluating them. In the Autodesk class, one design objective is to minimize construction cost (Construction cost = 200 ($/SF) × GFA). Different values of Base Star Radius and Number Star Mountains produce different total construction costs, and the option with the lowest cost is selected. In my Module 7 assignment, I set four design objectives in Generative Design: minimize Material Cost, minimize Energy Cost, maximize Solar Value, and minimize Embodied Carbon. These objectives reflect my intention to balance environmental performance against cost control. The design tool automatically generates alternative solutions from the input volume parameters and cost factors, and visualizes the trade-offs between metrics using Parallel Coordinates Graphs and Scatterplots. For example, in the trade-off between "Embodied Carbon vs. Energy Cost," certain low-carbon materials may have higher upfront costs but lower energy consumption during operation, making them potentially more advantageous over the whole lifecycle.
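As a minimal sketch of how a cost objective drives option selection, the class formula (Construction cost = 200 $/SF × GFA) can be applied to a set of candidate options and the cheapest one chosen. The option tuples and their GFA values here are hypothetical, not from the class dataset:

```python
# Minimal sketch: scoring design options against the class's cost objective,
# Construction cost = 200 ($/SF) * GFA. Option values are made up.

COST_PER_SF = 200  # $/SF, from the Autodesk class example

def construction_cost(gfa_sf):
    """Construction cost in dollars for a given gross floor area (SF)."""
    return COST_PER_SF * gfa_sf

# Hypothetical options: (base_star_radius, num_star_mountains, GFA in SF)
options = [
    (30, 5, 12_000),
    (40, 6, 15_500),
    (25, 4, 9_800),
]

# The solver keeps the option with the lowest construction cost
best = min(options, key=lambda opt: construction_cost(opt[2]))
```

With a different objective (e.g., maximize Solar Value), the same loop would use `max` with a different scoring function, which is why different objectives yield different "best" options.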
- Discuss the importance of identifying target taxonomies when generating synthetic datasets in architectural design. How can this help in managing large datasets and ensuring diversity and accuracy in your designs?
When generating synthetic datasets in architectural design, target taxonomies are essential for managing large datasets and ensuring design diversity and accuracy. A taxonomy breaks the dataset into subsets, so the algorithmic logic can be handled in small chunks and dataset diversity can be controlled explicitly. In a good classification, each category is independently differentiable and uniquely identifiable, with distinguishable geometric and parametric features, and each category can be represented algorithmically through a divide-and-conquer approach, enabling independent parameterized modeling. This makes components scalable and reusable, in parts or as a whole. Eliminating overlaps between categories and repetition within categories also helps avoid bias during training and prevents a machine learning model from overfitting to specific data points. For example, when modeling L-shaped geometries, it is necessary to identify the parameter variables for that category: height, length, width, indentation in the horizontal dimension, and indentation in the vertical dimension. Finally, by constructing a separate algorithm and generative workflow for each category and integrating them through a category selection variable, a unified data generation system can be achieved while maintaining a modular structure. Target taxonomies therefore help manage large datasets and ensure diversity and accuracy in your designs.
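The category-selection idea can be sketched as a dispatch table: one generator per taxonomy category, unified behind a single entry point. The function and parameter names here (including the L-shape parameters) are illustrative assumptions, not a real Dynamo API:

```python
# Sketch (hypothetical names): per-category generators unified through
# a category selection variable, mirroring the taxonomy described above.

def generate_l_shape(height, length, width, indent_x, indent_y):
    # Placeholder: return the parameter record an L-shaped mass would use
    return {"category": "L", "height": height, "length": length,
            "width": width, "indent_x": indent_x, "indent_y": indent_y}

def generate_box(height, length, width):
    return {"category": "box", "height": height, "length": length, "width": width}

# The category selection variable routes inputs to the matching generator
GENERATORS = {"L": generate_l_shape, "box": generate_box}

def generate(category, **params):
    return GENERATORS[category](**params)

sample = generate("L", height=30, length=20, width=15, indent_x=8, indent_y=6)
```

Because each generator is independent, categories cannot overlap by construction, which is exactly the property that guards against bias and overfitting.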
- What are the potential benefits and challenges of using automated workflows for generating diverse synthetic datasets in parametric design? How can modularity and scalability be achieved in such workflows?
Benefits:
Automated workflows for generating diverse synthetic datasets in parametric design improve the efficiency and consistency of data generation. Because the process is automated, modular, scalable, and reproducible, designers can handle complex parameter combinations at scale and iterate designs quickly. The workflow steps, such as "implement the generative algorithm," "define inputs and outputs," "generate models," and "run simulations," are all structured, which helps standardize the data and visualize the output. In addition, by connecting directly to the simulation engine, the automated workflow can label data with simulated performance results, establishing the input-output pairs required for training while generating design samples. The final dataset is therefore geometrically varied and unbiased, supporting the training of high-quality surrogate models.
Challenges:
The first challenge is that parameters and targets must be defined at the very beginning of modeling, which means framing the problem through a machine learning lens; only then can the input features stay synchronized with the performance results. In addition, different design categories require different workflows and algorithms. These workflows are tied together by category selection variables, and keeping these data structures error-free is the second challenge. The final challenge is to avoid overlaps, repetition, and imbalance between categories; otherwise data bias is likely to occur.
How can modularity and scalability be achieved in such workflows:
Modularity and scalability are achieved through structured design and the allocation of parametric variables. Each geometry category is driven by its own generative algorithm and corresponding input variables, which feed into the generative workflow to form category-specific design pipelines. A category selection variable then integrates the different categories into a single runnable system.
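One way to picture this modular-but-scalable structure: register each category's pipeline (generator plus input ranges) in one table, and let a single loop run them all. The categories and ranges below are hypothetical; the point is that adding a category means adding one registry entry without touching the rest of the system:

```python
# Sketch (hypothetical categories/ranges): category-specific pipelines
# registered in one system; adding a category adds one entry only.
import itertools

PIPELINES = {
    # category: (generator function, per-parameter value lists)
    "tower": (lambda h, w: {"cat": "tower", "h": h, "w": w},
              {"h": [50, 100], "w": [20, 30]}),
    "slab":  (lambda h, w: {"cat": "slab", "h": h, "w": w},
              {"h": [20, 30], "w": [60, 80]}),
}

def run_all():
    dataset = []
    for cat, (gen, ranges) in PIPELINES.items():
        keys = list(ranges)
        # Enumerate every combination of this category's input values
        for combo in itertools.product(*(ranges[k] for k in keys)):
            dataset.append(gen(**dict(zip(keys, combo))))
    return dataset

dataset = run_all()  # 4 tower + 4 slab samples
```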
- Explore the role of iterative processes in optimizing design options within the generative design framework. Why might it be necessary to tweak and repeat studies, and how can this approach lead to better design outcomes?
In the generative design framework, the iterative process is a cycle in the early design stage: come up with a design → run performance simulations → evaluate results → make design decisions → change the initial design. This cycle enables trial and error with rapid feedback. In the workflow, users first establish design variables and design objectives, and then generate design options through the solver. If the results are not as expected, the study can be tweaked and improved solutions generated again. Additionally, according to the Autodesk handout, the model is validated by calculating its accuracy. Through an iterative approach, data samples are updated or new ones added to the dataset based on the surrogate predictions and simulation results. The data itself can thus be continuously refined and expanded, gradually enhancing the model's predictive capability and bringing the surrogate model closer to real simulation results.
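The refine-validate-expand loop can be sketched in miniature. Here `simulate()` is a stand-in for the real simulation engine and the "surrogate" is a trivial nearest-neighbour lookup; both are placeholders chosen only to make the iteration logic concrete:

```python
# Sketch (stand-in functions): iteratively adding samples where the
# surrogate disagrees most with the simulation, until accuracy is acceptable.

def simulate(x):
    return x * x  # stand-in "ground truth" simulation

def make_surrogate(data):
    # Trivial nearest-neighbour "surrogate" as a placeholder model
    def predict(x):
        return min(data, key=lambda d: abs(d[0] - x))[1]
    return predict

def accuracy(model, test_points):
    errs = [abs(model(x) - simulate(x)) for x in test_points]
    return sum(errs) / len(errs)  # mean absolute error (lower is better)

data = [(x, simulate(x)) for x in [0.0, 1.0]]  # small initial dataset
test_points = [0.25, 0.5, 0.75]

for _ in range(3):  # tweak-and-repeat iterations
    model = make_surrogate(data)
    if accuracy(model, test_points) < 0.05:
        break
    # Add a new simulated sample where the surrogate is weakest
    worst = max(test_points, key=lambda x: abs(model(x) - simulate(x)))
    data.append((worst, simulate(worst)))
```

Each pass validates the model, then spends simulation budget exactly where the prediction error is largest, which is the sense in which iteration "gradually enhances predictive capability."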
Questions Related to the Autodesk Class:
- Describe the general workflow of creating a generative design study in Revit, as presented in the lecture. What are the key steps involved, and how do they contribute to the generation of optimized design options?
Creating a generative design study in Revit typically involves nine key steps. Here are these steps and how they contribute to the generation of optimized design options:
1) Define the design variables — Determine the key parameters that will control the subsequent automatic generation.
2) Define the design objectives — Provide evaluation criteria and basis for the results.
3) Assign variables as inputs — Enable solvers to sample within these parameter ranges.
4) Assign objectives as outputs — Enable each generated option to calculate corresponding results for subsequent evaluation and optimization.
5) Choose an appropriate solver, which is an algorithm that automates the sampling of the input design variables — Apply algorithms and input variables to optimize the objectives.
6) Generate design options — Generate multiple design options driven by the solver and input variables for designers to select.
7) Explore generated options — Analyze and compare the generated options to identify design directions for optimization.
8) Export the desired option — Lay the foundation for optimized design.
9) Tweak the study and repeat this process iteratively — Fine-tune parameter settings or objectives and repeat the above process to iteratively improve design results, supporting rapid feedback and continuous optimization in early-stage design.
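The core of the steps above (variables in, solver sampling, objectives out) can be sketched as a plain-Python skeleton. This is not the real GDiR interface; the function names, the simple Randomize-style solver, and the cost formula are assumptions for illustration:

```python
# Sketch (hypothetical API): variables -> solver sampling -> objective
# outputs -> option exploration, mirroring the nine-step workflow.
import random

def run_study(variables, objectives, solver, n_options=20):
    """Sample inputs with the solver, compute objective outputs per option."""
    options = []
    for inputs in solver(variables, n_options):   # solver samples the inputs
        outputs = {name: fn(inputs) for name, fn in objectives.items()}
        options.append((inputs, outputs))         # one generated design option
    return options

def randomize(variables, n):
    """A simple Randomize-style solver over (low, high) ranges."""
    for _ in range(n):
        yield {k: random.uniform(lo, hi) for k, (lo, hi) in variables.items()}

random.seed(1)
variables = {"height": (10, 100), "width": (10, 50)}              # step 1
objectives = {"cost": lambda i: 200 * i["height"] * i["width"]}   # step 2
options = run_study(variables, objectives, randomize)             # steps 3-6
best = min(options, key=lambda o: o[1]["cost"])                   # steps 7-8
```

Step 9 would wrap this in a loop, adjusting `variables` or `objectives` between runs.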
- Given the properties of a good synthetic dataset outlined in the class, such as being large, labeled, parametric, expandable, diverse, and balanced, how would you apply these principles to create a dataset for a specific building morphology study using Dynamo? Outline your approach.
Large size of data points – The dataset must be large enough for deep learning models to identify statistically significant patterns. In my Generative Design study, I set value ranges for design variables such as Building Height and Building Width (e.g., 10 m to 500 m), so thousands of different volume combinations can be generated, forming a large sample space.
Ground truth labels – Each sample must have a true label, i.e., the output value corresponding to the performance metric. In my design, each generated geometry is calculated for corresponding outputs, such as Material Cost, Energy Cost, Solar Value, and Embodied Carbon. These metrics serve as training targets and can be directly used for supervised learning models.
Parametric representation – All design data is generated from parameters, ensuring that the design geometry is reusable, scalable, and resamplable. For example, I use Dynamo scripts to modularize the geometric logic so that any set of parameter inputs can be changed and all performance labels regenerated.
Expandable – The data structure should allow for subsequent expansion. By modifying the input range or adding new variables (such as window-to-wall ratio, orientation, etc.), I can expand the dataset to accommodate more complex learning tasks or higher-precision model requirements.
Variation – The dataset should cover sufficient geometric diversity. I generate buildings of various scales, and these different typologies ensure the model's applicability across different building forms.
Balanced – Each building type should have sufficient samples to avoid data bias. In my setup, I uniformly sample all height and width combinations to ensure that the data is evenly distributed across the entire input space.
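Combining the "large," "labeled," and "balanced" principles, a uniform grid over the Height × Width ranges yields an evenly distributed, labeled dataset. The `label()` function below is a stand-in for the real simulation outputs, and its formula is a hypothetical proxy, not my actual Dynamo graph:

```python
# Sketch (assumed ranges, stand-in label function): a large, labeled,
# balanced dataset via uniform grid sampling of Height x Width.
import itertools

heights = range(10, 501, 10)   # 10 m to 500 m, as in the study
widths  = range(10, 501, 10)

def label(h, w):
    # Stand-in for the simulated performance outputs (hypothetical proxy)
    gfa = h / 3 * w * w / 100
    return {"material_cost": 200 * gfa, "embodied_carbon": 0.5 * gfa}

dataset = [((h, w), label(h, w)) for h, w in itertools.product(heights, widths)]
# 50 x 50 = 2500 uniformly distributed, labeled samples
```

Expandability then means adding new variables (e.g., window-to-wall ratio) as extra axes of the same grid.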
- Identify and discuss the four different solvers mentioned in the lecture that can be used to generate building masses in the Generative Design tool. How do these solvers impact the sample space and variety of design options produced?
Generative Design in Revit (GDiR) provides four solvers for generating building masses, and each shapes the sample space and the variety of design options differently:
Randomize – Assign random values to each input parameter to generate a set of design options, enabling rapid construction of diverse data samples. The building masses generated by the Randomize Solver exhibit significant freedom and variability in shape, with sample points distributed relatively sparsely, which helps form a diverse dataset.
Cross Product – Enumerates all parameter combinations in each step to construct combinatorial datasets. The Cross Product Solver generates neatly arranged mass samples, which are convenient for feature search and combination analysis, and is suitable for comprehensive scanning of the entire parameter space.
Like-this – Generates a series of slightly modified design variants based on the current input configuration, suitable for sensitivity analysis. The sample points generated by the Like-this Solver are clustered in the vicinity of a selected design, making it suitable for minor improvements to existing designs.
Optimize – Iteratively optimizes based on target output values, using previous input configurations as the basis for the next step; suitable for datasets with clear performance targets. The Optimize solver supports design-optimization and performance-oriented form-finding datasets, and works best when the objective function can be quantified and defined in Dynamo logic, so that the performance of design solutions improves step by step.
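The differences in sample-space coverage can be illustrated with simplified stand-ins for three of the solvers (the value lists and perturbation steps are arbitrary examples, not the GDiR implementations):

```python
# Sketch: how solver choice shapes the sample space (simplified stand-ins).
import itertools
import random

heights = [10, 20, 30]
widths = [5, 10]

# Cross Product: every combination, exhaustive and neatly arranged
cross = list(itertools.product(heights, widths))  # 3 x 2 = 6 options

# Randomize: independent random draws, sparse but diverse coverage
random.seed(0)
rand = [(random.choice(heights), random.choice(widths)) for _ in range(4)]

# Like-this: small perturbations clustered around a chosen design
base = (20, 10)
like_this = [(base[0] + d, base[1]) for d in (-2, -1, 1, 2)]
```

Optimize would instead re-sample near whichever points scored best on the objective, trading coverage for convergence.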
In my Module 7 project, I used the Randomize solver to generate building volumes by randomly sampling Building Height and Width, producing a large number of design options with different performance characteristics. I analyzed the trade-offs between Material Cost, Energy Cost, Solar Value, and Embodied Carbon using Scatterplots and Parallel Coordinates Graphs. The diversity of this solver allowed me to comprehensively compare the environmental benefits and economic viability of different design options, and thereby identify the solution with the best overall performance.
- Reflect on the examples of building masses generated with different solvers from the class handout. What insights can you gain about the relationship between solver choice and design diversity? How would you leverage this understanding in a practical parametric design project using Dynamo?
Randomize – Generates a wide range of design options through random sampling, improving design diversity.
Cross Product – Systematically traverses all combinations in the parameter space for comprehensive coverage.
Like-this – Performs local optimization on an existing solution, resulting in limited diversity.
Optimize – Continues to converge toward optimal performance, sacrificing diversity.
In a practical parametric design project, I would build a Generative Design Tool to help users balance embodied carbon, material cost, and solar value for mid-rise residential buildings, selecting solvers according to the design phase and objective: Randomize for initial exploration of form and cost-carbon relationships, Cross Product to map the full combinations of key geometric variables, Like-this to refine alternatives with strong sustainability scores, and Optimize to converge toward high-performance solutions based on quantitative targets defined within the tool.