Novel Design Criteria

Novelty in Design Evaluation Metrics 

When evaluating an engineering design concept, novel solutions should be weighed more favorably than existing solutions. Constraining design concepts strictly to existing solutions results in a product that offers little in the way of innovation to the market. Without innovation, products are less competitive and may be subject to rapid depreciation relative to competitors.

With regard to our proposed experiment on critical examination during brainstorming, the experiment can readily be adapted to incorporate this metric. Brainstorming activities generate a large quantity of concepts or solutions in a short span of time. Each concept will be graded on how far it departs from the typical solution presented for that particular problem.

In a hypothetical brainstorming session centered on a proposed design for a tool fixture, for example, atypical responses may be favored more heavily due to their novel approach to solving the problem. Responses that utilize uncommon features such as magnetics, gyroscopes, or photonics may allow for a unique solution to a problem that could otherwise go unsolved. These responses should be prioritized to ensure they receive due consideration.

Shah writes that the scoring of novel designs should be done using a weighted summation system [1]. In the example used in Shah's study, certain attributes related to the propulsion of a vehicle were prioritized relative to others: the medium the vehicle was designed to operate in was given a higher priority than the motive force moving the vehicle through that medium. The result is that the features most prized for developing novel solutions are given priority over less relevant features [1].
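As a rough illustration of such a weighted summation, the Python sketch below rates each attribute's novelty by how rarely that type of solution appears in the idea pool and then combines the ratings with attribute weights. The attribute names, weights, and 0–10 rarity scale are illustrative assumptions, not values taken from Shah's study.

    # Minimal sketch of a weighted-summation novelty score in the spirit of
    # Shah's metric [1]. Attribute names, weights, and the 0-10 rarity scale
    # are illustrative assumptions, not values from the study.

    def attribute_novelty(solution, pool):
        """Rate a solution for one attribute by how rarely it appears in the idea pool."""
        matches = sum(1 for other in pool if other == solution)
        return 10.0 * (len(pool) - matches) / len(pool)

    def novelty_score(concept, pool_by_attribute, weights):
        """Weighted sum of per-attribute novelty ratings for one concept."""
        return sum(weights[attr] * attribute_novelty(concept[attr], pool_by_attribute[attr])
                   for attr in weights)

    # Hypothetical vehicle example: the operating medium is weighted above the motive force.
    weights = {"medium": 0.6, "motive_force": 0.4}
    pool = {"medium":       ["ground", "ground", "water", "air", "ground"],
            "motive_force": ["wheels", "wheels", "propeller", "jet", "wheels"]}
    concept = {"medium": "air", "motive_force": "jet"}

    print(novelty_score(concept, pool, weights))   # air and jet are rare -> high novelty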

In the Shah study, the part count received the lowest novelty scoring [1]. This is to be expected, as a novel propulsion method has a greater chance of lowering the cost of the product than a higher part count does; if anything, a higher part count demonstrably increases cost. Shah's study placed the highest prioritization on thrust and the medium of transport, which shows a strong preference toward non-ground-based vehicles, since any watercraft or aircraft would require a non-standard propulsion technique.

In terms of scoring novelty in the proposed experiment, each solution provided by the control group should be categorized based on common attributes. These attribute categories could then be weighted by the experiment staff based on desired outcomes for the product. Next, the control group's responses could be scored for novelty. Finally, the test group's responses could be scored for novelty and the results compared to support or refute the hypothesis.
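As a sketch of that comparison step, the snippet below scores hypothetical control-group and test-group responses with the same kind of weighted summation and compares the mean novelty of each group; the response data, weights, and the use of a simple mean are assumptions standing in for the scoring the experiment staff would actually perform.

    # Sketch of the control-vs-test comparison; data, weights, and the simple
    # mean are hypothetical stand-ins for the staff's actual scoring procedure.
    from statistics import mean

    weights = {"medium": 0.6, "motive_force": 0.4}

    def novelty(ratings):
        """Weighted sum of staff-assigned 0-10 novelty ratings for one response."""
        return sum(weights[a] * r for a, r in ratings.items())

    control = [{"medium": 2, "motive_force": 3}, {"medium": 4, "motive_force": 2}]
    test    = [{"medium": 7, "motive_force": 6}, {"medium": 9, "motive_force": 4}]

    print(mean(novelty(r) for r in control))   # baseline novelty of the control group
    print(mean(novelty(r) for r in test))      # test-group novelty, compared against it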

Variety in Design Evaluation Metrics 

Variety is another metric by which the proposed experiment's success may be measured. With improved variety comes a greater number of divergent ideas in the solution space, which allows a greater chance of arriving at an ideal solution from a collaborative design effort. Shah characterizes variety as a measure of how different the ideas within a set are from one another across the total set of ideas generated [1]. Concepts that utilize different physical principles have a large variety between them, whereas concepts that differ only in quantifiable measurements of physical properties do not.

Further elaboration of variety may be accomplished through functional design trees. As a proposed system diverges from the norm, its embodiment details become more specialized; the visual manifestation of this is an increased branch density in a top-down functional diagram. These genealogy diagrams may be scored non-subjectively by weighting higher-level branches more heavily than lower-level branches, and variety may be determined from the divergence from the norm at each branch level.

For the purposes of the proposed experiment, variety may be measured using the Shah approach as discussed. Brainstorming will be performed by both the control and test groups with the results recorded and scored. Weighting and scoring will be accomplished by the experiment staff.  

Scoring for the experiment will follow Shah's system. Shah's method for calculating the variety metric uses a weighted summation over all functions being evaluated: each branch receives a weighted score based on its position in the genealogical tree, and the scores of all branches are added together for each idea under consideration.
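A minimal sketch of that branch-level summation follows. The four tree levels, their weights, the branch counts, and the normalization by idea count are all illustrative assumptions rather than values prescribed by the study.

    # Sketch of a genealogy-tree variety score: branches at higher levels of the
    # tree (e.g., physical principle) are weighted more heavily than lower-level
    # branches (e.g., embodiment detail). Levels, weights, branch counts, and the
    # normalization by idea count are illustrative assumptions.

    LEVEL_WEIGHTS = {"physical_principle": 10, "working_principle": 6,
                     "embodiment": 3, "detail": 1}

    def variety_score(branches_per_level, n_ideas):
        """Weighted summation of branch counts across tree levels, per idea."""
        total = sum(LEVEL_WEIGHTS[level] * count
                    for level, count in branches_per_level.items())
        return total / n_ideas

    # Hypothetical session: 10 ideas spread across 3 physical principles,
    # 4 working principles, 6 embodiments, and 8 detail-level variants.
    print(variety_score({"physical_principle": 3, "working_principle": 4,
                         "embodiment": 6, "detail": 8}, n_ideas=10))   # -> 8.0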

Quality in Design Evaluation Metrics 

Quality is a difficult characteristic of an idea to measure. Because some measures of quality are subjective to the scorer, a better method is needed to ensure an objective score is produced. Since early design decisions are the strongest drivers of the future state of a design, it is imperative that high-quality ideas are scored appropriately.

Quality metrics in design using Shah's method of evaluation utilize a scored binary system [1]. Questions are posed about designs, and points are awarded, not awarded, or subtracted based on how the assigned criteria are met. For example, Design for Manufacturing calls for length restrictions on machined parts based on the radius of the initial barstock. If the design exceeds these criteria with margin to spare, it could be awarded a point; if it meets the criteria but lacks a safety factor, it could receive no points; and if it fails to meet the criteria, it could lose a point. In this way, subjective bias is removed from quality measurements.
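A small sketch of this award / no-award / subtract pattern is shown below. The machined-part length rule, the length-to-radius limit, and the safety factor are hypothetical stand-ins for whatever Design for Manufacturing criteria the experiment staff would actually apply.

    # Sketch of the award / no-award / subtract scoring described above. The
    # length-to-radius limit and safety factor are assumed example values.

    def length_check(part, safety_factor=1.25):
        """Hypothetical DFM check: machined length vs. a limit tied to barstock radius."""
        limit = 8 * part["barstock_radius"]           # assumed length-to-radius rule
        if part["length"] <= limit / safety_factor:
            return +1                                  # meets criterion with margin
        if part["length"] <= limit:
            return 0                                   # meets criterion, no safety factor
        return -1                                      # criterion not met

    def quality_score(design, checks):
        """Sum the +1 / 0 / -1 results of every criterion check for one design."""
        return sum(check(design) for check in checks)

    part = {"length": 120.0, "barstock_radius": 20.0}
    print(quality_score(part, [length_check]))         # -> 1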

The quality of proposed solutions may also be evaluated using the Quality Function Deployment House of Quality system [2]. This system uses a matrix to score engineering requirements based on how well they address marketing-driven customer requirements. By ensuring that engineering solutions are closely aligned with customer requirements, the relative value of each proposed solution may be judged according to its individual ranking.
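The sketch below shows a stripped-down House of Quality relationship matrix of the kind described in [2]. The customer requirements, importance weights, engineering characteristics, and the 9/3/1 relationship scale are assumed values for a hypothetical tool fixture.

    # Sketch of a House of Quality relationship matrix [2]: customer requirements
    # carry importance weights, each engineering requirement is rated for how
    # strongly it addresses each customer requirement (a 9/3/1 scale is assumed),
    # and the weighted column sums rank the engineering requirements.

    customer_weights = {"easy to set up": 5, "holds part rigidly": 4, "low cost": 3}

    # relationship[eng_req][customer_req] = strength of the relationship
    relationship = {
        "magnetic base":  {"easy to set up": 9, "holds part rigidly": 3, "low cost": 1},
        "machined clamp": {"easy to set up": 1, "holds part rigidly": 9, "low cost": 3},
    }

    def hoq_scores(weights, matrix):
        """Weighted sum of relationship strengths for each engineering requirement."""
        return {eng: sum(weights[c] * strength for c, strength in row.items())
                for eng, row in matrix.items()}

    print(hoq_scores(customer_weights, relationship))
    # {'magnetic base': 60, 'machined clamp': 50}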

For the purposes of the proposed experiment, quality rankings may be accomplished using a combination of the House of Quality system and the binary scoring system proposed by Shah. Both the control group and the test group will be scored using these two systems, with scoring performed by experiment staff.

Quantity in Design Evaluation Metrics 

In brainstorming, it is desirable to create a large number of potential solutions in a short amount of time, which allows a greater chance of an ideal solution being presented during the session. The proposed experiment will introduce debate into the brainstorming sessions, which puts the quantity metric at risk because individuals may be reluctant to propose ideas for fear of ridicule. As a result, the quantity metric for this experiment may be one of the most telling metrics overall.

Quantification of design results may be easily accomplished by counting the number of ideas proposed. Ideas may be counted on a per-person basis as well as on a per-session basis in order to baseline the control group results against the test group results. This will allow for ready identification of the impact of debate on the quantity of ideas posed during the brainstorming session.  
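A tally of this kind is simple enough to sketch directly. The session labels, participant IDs, and ideas below are hypothetical; Counter is used only to roll the records up per session and per participant.

    # Sketch of the quantity tally: ideas recorded as (session, participant, idea)
    # tuples, counted per person and per session so the control group can be
    # baselined against the test group. The data shown are hypothetical.
    from collections import Counter

    ideas = [("control-1", "P1", "magnetic base"),
             ("control-1", "P2", "vacuum chuck"),
             ("test-1",    "P1", "gyro-stabilized jig"),
             ("test-1",    "P1", "photonic sensing stop")]

    per_session = Counter(session for session, _, _ in ideas)
    per_person  = Counter((session, person) for session, person, _ in ideas)

    print(per_session)   # ideas per brainstorming session
    print(per_person)    # ideas per participant within each session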

References: 

[1] Shah, J. J., Smith, S. M., and Vargas-Hernandez, N., 2003, “Metrics for measuring ideation effectiveness,” Design Studies, 24(2) pp. 111-134. 

[2] Pahl, G., Beitz, W., Feldhusen, J., and Grote, K.-H., 2007, Engineering Design: A Systematic Approach, 3rd ed., Springer Science & Business Media, London.
